CN117058183A - Image processing method and apparatus based on dual cameras, electronic device, and storage medium (Google Patents)


Info

Publication number: CN117058183A
Application number: CN202311084881.5A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, processed, depth, depth image, binocular
Inventor: 黄�俊
Assignee (current and original): Wingtech Communication Co Ltd
Legal status: Pending

Classifications

    • G06T 7/194: Physics; Computing; Image data processing or generation; Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/55: Physics; Computing; Image data processing or generation; Image analysis; Depth or shape recovery from multiple images

Abstract

The embodiment of the present application discloses an image processing method and apparatus based on dual cameras, an electronic device, and a storage medium. The method comprises the following steps: acquiring a first image to be processed captured by a main camera, and acquiring a second image to be processed captured by a secondary camera, wherein the field angle of the main camera is larger than that of the secondary camera; obtaining a binocular depth image from the first image to be processed and the second image to be processed; obtaining a monocular depth image from the first image to be processed; performing depth information compensation on the binocular depth image using the depth information of the monocular depth image to obtain a target depth image; and blurring the first image to be processed according to the target depth image to obtain a target image. With this method, a target depth image whose quality meets the blurring requirement can be obtained even though the field angle of the secondary camera is smaller than that of the main camera, thereby improving the blurring effect of the background-blurred image.

Description

Image processing method and apparatus based on dual cameras, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method and apparatus based on dual cameras, an electronic device, and a storage medium.
Background
In the field of image processing, background blurring means separating the subject of an image from its background by computing the parallax of the same object as imaged by two lenses at different positions in space, keeping the subject sharp while blurring the background.
In existing background blurring methods that generate a background-blurred image with a main camera and a secondary camera, when the parameters of the secondary camera are poor, the quality of the image captured by the secondary camera is low and differs greatly from that of the image captured by the main camera, so the subsequently generated depth map contains excessive noise or inaccurate depth values. These problems lead to an unsatisfactory blurring effect in the background portion of the resulting background-blurred image and affect the overall effect of the image.
Disclosure of Invention
The embodiment of the present application discloses an image processing method and apparatus based on dual cameras, an electronic device, and a storage medium, which can generate a depth image meeting the blurring requirement even when the secondary camera has poor parameters and a field angle smaller than that of the main camera, so as to obtain a background-blurred image.
Poor camera parameters include a small field angle, low resolution, excessive noise, and inaccurate light-dark contrast in the captured image. For example, a Video Graphics Array (VGA) camera, which typically has a resolution of 640 x 480, has poorer parameters and lower imaging quality than the commonly used 1920 x 1080 (1080p) or higher-resolution cameras.
In order to achieve the above object, a first aspect of an embodiment of the present application provides an image processing method based on dual cameras, including:
acquiring a first image to be processed captured by a main camera of the dual cameras, and acquiring a second image to be processed captured by a secondary camera of the dual cameras, wherein the field angle of the main camera is larger than that of the secondary camera;
obtaining a binocular depth image according to the first to-be-processed image and the second to-be-processed image, wherein an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second to-be-processed image, and the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first to-be-processed image;
obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed; the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed;
Performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image, wherein an image acquisition area corresponding to the target depth image is the same as an image acquisition area corresponding to the first image to be processed;
and according to the target depth image, blurring the first image to be processed to obtain a target image.
In some embodiments, using the depth information of the monocular depth image, performing depth information compensation on the binocular depth image to obtain a target depth image, including:
performing image region matching on an image acquisition region corresponding to the monocular depth image and an image acquisition region corresponding to the binocular depth image, and determining an overlapping region;
setting the depth information corresponding to the overlapping region in the monocular depth image to null, and retaining the depth information outside the overlapping region in the monocular depth image to obtain a difference region image; the difference region image contains the depth information that the binocular depth image lacks compared with the monocular depth image;
and performing depth information compensation on the binocular depth image according to the difference region image to obtain the target depth image.
In some embodiments, before obtaining the binocular depth image from the first image to be processed and the second image to be processed, the method further includes:
based on imaging differences caused by parameter differences of the main camera and the auxiliary camera, consistency adjustment is carried out on the first image to be processed and the second image to be processed; the consistency adjustment includes sharpening adjustment, blurring adjustment, and brightness adjustment.
In some embodiments, the first image to be processed and the second image to be processed are rotationally adjusted by a binocular parallelism correction process, such that the rotationally adjusted first image to be processed is aligned in parallel with the second image to be processed.
In some embodiments, the obtaining a binocular depth image according to the first to-be-processed image and the second to-be-processed image includes:
performing stereo matching processing on the first to-be-processed image and the second to-be-processed image to obtain a sparse depth image;
processing the first image to be processed, the second image to be processed and the sparse depth image to obtain a dense depth image;
and performing foreground segmentation processing on the first image to be processed to obtain a foreground mask image, and performing sharpness optimization on the foreground object edges of the dense depth image according to the foreground mask image to obtain the binocular depth image.
In some embodiments, the processing the first to-be-processed image, the second to-be-processed image, and the sparse depth image to obtain a dense depth image includes:
calculating the position transformation condition of pixel points between the first image to be processed and the second image to be processed by using an optical flow method to obtain an optical flow field;
and carrying out interpolation processing on unknown pixel points of the missing area of the sparse depth image according to the position transformation condition of the pixel points of the optical flow field, so as to obtain the dense depth image.
In some embodiments, performing foreground segmentation processing on the first image to be processed to obtain a foreground mask image, and performing sharpness optimization on the foreground edges of the dense depth image according to the foreground mask image to obtain the binocular depth image, includes:
performing foreground segmentation processing on the first image to be processed according to a deep learning algorithm, and extracting edge information of the foreground object to obtain the foreground mask image; the foreground mask image retains only the foreground object region, and the other regions are set to null;
and performing enhancement processing on the foreground object edges of the dense depth image in combination with the foreground mask image to highlight the foreground object contour, thereby obtaining the binocular depth image.
A second aspect of an embodiment of the present application provides an image processing apparatus based on a dual camera, including:
the acquisition module is used for acquiring a first image to be processed captured by the main camera of the dual cameras and acquiring a second image to be processed captured by the secondary camera of the dual cameras, wherein the field angle of the main camera is larger than that of the secondary camera;
the processing module is used for obtaining a binocular depth image according to the first image to be processed and the second image to be processed, an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second image to be processed, and the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first image to be processed; obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed; the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed; performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image, wherein an image acquisition area corresponding to the target depth image is the same as an image acquisition area corresponding to the first image to be processed; and according to the target depth image, blurring the first image to be processed to obtain a target image.
A third aspect of an embodiment of the present application provides an electronic device, including:
a memory storing executable program code;
and a processor coupled to the memory;
the processor invokes the executable program code stored in the memory, which when executed by the processor, causes the processor to implement the method according to the embodiment of the present application.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium having executable program code stored thereon, which when executed by a processor, implements a method according to the embodiments of the present application.
According to the dual-camera-based image processing method and apparatus, electronic device, and storage medium provided by the embodiments of the present application, a first image to be processed captured by the main camera of the dual cameras is acquired, and a second image to be processed captured by a secondary camera whose field angle is smaller than that of the main camera is acquired, wherein the image acquisition area corresponding to the second image to be processed is smaller than the image acquisition area corresponding to the first image to be processed. The first and second images to be processed are processed to obtain a binocular depth image, whose corresponding image acquisition area is smaller than that of the first image to be processed and the same size as that of the second image to be processed. The first image to be processed is processed to obtain a corresponding monocular depth image, whose image acquisition area is the same size as that of the first image to be processed and larger than that of the binocular depth image. Using the depth information of the monocular depth image, depth information compensation is performed on the image acquisition area that the binocular depth image lacks relative to the monocular depth image, so as to obtain a target depth image, and the first image to be processed is blurred according to the target depth image to obtain the target image. In summary, the method provided by the embodiment of the present application can obtain a target depth image whose quality meets the blurring requirement even when the field angle of the secondary camera is smaller than that of the main camera, thereby improving the blurring effect of the background-blurred image.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a background blurring effect diagram of an image processing method based on dual cameras according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method based on dual cameras according to an embodiment of the present application;
fig. 3 is an exemplary diagram of performing correction processing on a first to-be-processed image and a second to-be-processed image in the dual-camera-based image processing method according to the embodiment of the present application;
fig. 4 is another flowchart of an image processing method based on dual cameras according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing device based on dual cameras according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the application and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application are used to distinguish similar or different objects and do not denote a particular ordering; where permitted, "first/second/third" may be interchanged in a particular order or sequence so that the embodiments described herein can be implemented in an order other than that illustrated or described.
When a photo is taken, the background blurring function of the shooting device helps the user blur the background while keeping the foreground subject sharp, making the subject more prominent and giving the photo a stronger artistic and aesthetic quality. Background blurring can be achieved by adjusting the aperture size, changing the focal length, processing according to depth information, and so on. Unlike aperture size and focal length, depth information requires additional sensors or algorithmic processing to obtain. Using depth information to achieve background blurring can identify the foreground subject and background more accurately and produce a more natural effect. On some shooting devices, focal length and aperture adjustment are implemented in software and are therefore limited; in that case, better results can be obtained by using the acquired depth information to implement background blurring.
When the parameters of the secondary camera are poor, the quality of the image captured by the secondary camera is low and differs greatly from that of the image captured by the main camera, so the subsequently generated depth map contains excessive noise or inaccurate depth values. These problems lead to an unsatisfactory blurring effect in the background portion of the resulting background-blurred image and affect the overall effect of the image.
In view of this, the embodiment of the application provides an image processing method based on dual cameras, which obtains binocular depth images according to a first to-be-processed image acquired by a main camera and a second to-be-processed image acquired by a secondary camera in the implementation process. And obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed. And performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image. And according to the target depth image, blurring the first image to be processed to obtain the target image. By implementing the method, the target depth image with quality meeting the blurring requirement can be obtained under the condition that the field angle of the auxiliary camera is smaller than that of the main camera, and further the blurring effect of the background blurring image is improved.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a background blurring effect diagram of an image processing method based on dual cameras according to an embodiment of the present application is shown.
As shown in fig. 1, the dual-camera-based image processing method provided by the embodiment of the present application performs a series of processing steps on a first image to be processed captured by the main camera of the dual cameras and a second image to be processed captured by the secondary camera, to obtain a target background-blurred image. Compared with the first image to be processed captured by the main camera, the target background-blurred image keeps the foreground subject sharp while blurring the background, giving it a stronger aesthetic quality with respect to the subject.
Here, the foreground subject is typically the object, person, or element that is the primary focus of the photo, the highlighted portion. The background is the area around or behind the foreground subject, providing environment or context. The background is usually visually secondary to the foreground subject, but it still affects the aesthetics and story of the overall photo. The background may include other objects, scenery, or environmental elements; in some embodiments, the areas of the captured image other than the foreground subject may be marked as background. In background blurring, the foreground subject is generally the part that, in the real-world coordinate system, lies closer to the image capture device.
In the embodiment of the application, a first image to be processed acquired by a main camera of the double cameras is acquired, and a second image to be processed acquired by a secondary camera of the double cameras is acquired, wherein the field angle of the main camera is larger than that of the secondary camera.
As shown in fig. 1, the first image to be processed captured by the main camera and the second image to be processed captured by the secondary camera each contain the foreground subject, background object 1, and background object 2. Because the field angle of the main camera is larger than that of the secondary camera, the image acquisition area corresponding to the first image to be processed is larger than the image acquisition area corresponding to the second image to be processed. It should be noted that the field angle, also called the field of view (FOV) in optical engineering, is generally used to describe the wide angle or focal length of a lens. The field angle determines the field of view of the optical instrument: shooting the same object from the same position, a device with a larger field angle captures more picture content. As shown in fig. 1, in one embodiment, the first image to be processed contains all the information of the foreground subject, background object 1, and background object 2, while in the second image to be processed, the image content of background object 1 and background object 2 is incomplete due to the small field angle of the secondary camera. In the related art, complete depth information cannot be obtained when binocular depth estimation is performed with such a first and second image to be processed; the dual-camera-based image processing method provided by the embodiment of the present application, however, can obtain, through corresponding processing, a target depth image whose image acquisition area is the same size as that of the first image to be processed, and thereby obtain the target background-blurred image.
It should be noted that fig. 1 is only an example of a background blurring effect of the dual-camera-based image processing method provided by the embodiment of the present application, and is not limited to the method provided by the embodiment of the present application. The background objects 1 and 2 shown in fig. 1 may be buildings, landscapes, persons, objects or other elements, and are not limited herein. In addition to blurring the background object 1 and the background object 2, the target background blurring image in fig. 1 also blurring the background of the region other than the foreground main body.
In some embodiments, blurring the background after determining the foreground subject means blurring all background regions of the image, i.e., the regions other than the foreground subject, to the same extent.
In some embodiments, blurring the background after determining the foreground subject means blurring the background regions of the image to different extents according to the depth information. By analyzing the depth information in the image, the depth difference between the foreground subject and the background can be determined. These depth differences can be used to blur the background to different degrees, simulating the human eye's ability to focus on objects at different distances; according to the depth information, more distant background areas can be blurred more strongly, so that the blurred image presents a better effect.
Next, an image processing method based on a dual camera provided in an embodiment of the present application will be described.
Referring to fig. 2, a flowchart of a dual camera-based image processing method according to an embodiment of the present application includes the following steps:
step 201, a first image to be processed acquired by a main camera is acquired, and a second image to be processed acquired by a sub camera is acquired.
In the embodiment of the application, a first image to be processed acquired by a main camera of the double cameras is acquired, and a second image to be processed acquired by a secondary camera of the double cameras is acquired, wherein the field angle of the main camera is larger than that of the secondary camera.
Step 202, obtaining a binocular depth image according to the first image to be processed and the second image to be processed.
In the embodiment of the application, a binocular depth image is obtained according to the first to-be-processed image and the second to-be-processed image, an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second to-be-processed image, and the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first to-be-processed image.
In the embodiment of the application, a stereo matching algorithm is used for processing the first to-be-processed image and the second to-be-processed image to obtain the binocular depth image.
In some embodiments, in order to ensure the accuracy of the depth information in the obtained binocular depth image, correction processing is performed on the first image to be processed and the second image to be processed, yielding corrected images for the main camera and the secondary camera. Correction processing means aligning, in the horizontal direction, the pixels that represent the same entity or feature in the first image to be processed captured by the main camera and the second image to be processed captured by the secondary camera. Parallel correction of the two images eliminates image distortion caused by the different positions of the main and secondary cameras; at the same time, because the corrected images are row-aligned for pixels of the same entity or feature, the complexity and amount of computation in parallax calculation are reduced and performance is improved.
The above correction processing of the first image to be processed and the second image to be processed may be divided into the following steps (a code sketch follows the list):
calibrating the cameras to obtain the internal and external parameters of the main camera and the secondary camera;
analyzing the first image to be processed and the second image to be processed, detecting and matching features in the images, finding corresponding points, and recording their pixel coordinates;
using the pixel coordinates of the corresponding points, computing the mapping relationship between the corrected images so that the pixel positions of the same object in the two images are aligned in the horizontal direction;
applying the correction mapping to the first image to be processed and the second image to be processed so that the corrected images are aligned in the horizontal direction.
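The following is a minimal sketch of this parallel-correction step using OpenCV's stereo rectification routines. It assumes the calibration step has already produced the intrinsics K1/K2, distortion coefficients D1/D2, and the rotation R and translation T between the two cameras; all names here are illustrative, not taken from the patent.

```python
import cv2

def rectify_pair(img_main, img_sub, K1, D1, K2, D2, R, T):
    """Row-align the main-camera and secondary-camera images."""
    size = (img_sub.shape[1], img_sub.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_main = cv2.remap(img_main, m1x, m1y, cv2.INTER_LINEAR)
    rect_sub = cv2.remap(img_sub, m2x, m2y, cv2.INTER_LINEAR)
    return rect_main, rect_sub, Q  # Q reprojects disparity to depth
```

Rectifying both views to the secondary camera's image size is a simplification here; in practice the overlap of the two fields of view determines the usable area.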
After the parallel correction of the first image to be processed and the second image to be processed, parallax calculation is performed on the two corrected images to obtain a depth information image. Parallax calculation estimates the depth of different points of an object or scene by analyzing pixel differences between the corrected images of the main and secondary cameras, and may be divided into the following steps (a code sketch follows the list):
corresponding point matching: finding corresponding points in the corrected images, i.e., the pixel positions that represent the same entity in both images;
parallax calculation: computing the pixel displacement between the two images to obtain the parallax, i.e., the horizontal pixel distance between corresponding points; within the same pair of images, a larger parallax value means the point is closer to the imaging device;
depth estimation: computing the depth of each corresponding point by triangulation from the internal and external parameters of the main and secondary cameras and the parallax value, where depth is the distance of the object relative to the imaging device;
and depth information image generation: computing the depth of all corresponding points to obtain the depth information image.
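As a hedged sketch of these steps, a semi-global block matcher stands in for whichever matching algorithm is actually used; the focal length in pixels and the camera baseline are assumed known from calibration.

```python
import cv2
import numpy as np

def depth_from_pair(rect_main, rect_sub, f_px, baseline_m):
    """f_px: focal length in pixels; baseline_m: camera spacing in metres."""
    left = cv2.cvtColor(rect_main, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(rect_sub, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # SGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left, right).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    # Triangulation: depth = f * B / disparity, so a larger disparity
    # means the point is closer to the imaging device.
    depth[valid] = f_px * baseline_m / disp[valid]
    return depth, valid
```

Pixels with no reliable match keep a zero depth here; they form the sparse regions that the optical-flow interpolation described below fills in.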
In some embodiments, the above-described depth information image is used as a binocular depth image for the subsequent step.
In some embodiments, because the parameters of the secondary camera are poor, the generated depth information image lacks depth information in some regions, and the obtained depth information image is further processed to ensure the effect of the generated target background-blurred image.
The depth information image is optimized by the optical flow method to make up for the missing depth information. In some embodiments, processing the first image to be processed, the second image to be processed, and the sparse depth image to obtain a dense depth image includes:
calculating the position transformation condition of pixel points between the first image to be processed and the second image to be processed by using an optical flow method to obtain an optical flow field;
and carrying out interpolation processing on unknown pixel points of the missing area of the sparse depth image according to the position transformation condition of the pixel points of the optical flow field, so as to obtain the dense depth image.
Optical flow uses the positional change of pixels in an image sequence over time to estimate their motion in space. Here, the optical flow field is obtained by computing the positional changes of pixels between the first image to be processed and the second image to be processed using the optical flow method.
By analyzing the position transformation condition of the pixel points on the optical flow field, the depth information of the pixel points covered by the missing part of the depth image can be deduced. The optical flow method assumes that pixels in adjacent frame images move on the same object or scene, and thus depth information can be estimated using positional transformation of the pixels.
When interpolation processing is carried out on the missing region of the sparse depth image, the position transformation condition of the pixel points provided by the optical flow method, namely the optical flow field, is used, and the depth values of the unknown pixel points covered by the missing region are interpolated according to a certain rule, so that a dense depth image is obtained. By the method, the depth information of the missing part area of the depth information image can be made up, and the effect of generating the target background blurring image is improved.
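A minimal sketch of this flow-guided interpolation follows. The dense Farneback flow and the particular weighting rule (valid neighbours weighted by how similar their flow magnitude is to the hole's) are illustrative assumptions, not prescribed by the patent.

```python
import cv2
import numpy as np

def densify_depth(main_gray, sub_gray, sparse_depth, valid_mask,
                  win=10, sigma_flow=2.0):
    # Dense flow between the two views; pixels that move coherently are
    # assumed to lie on the same surface and hence at a similar depth.
    flow = cv2.calcOpticalFlowFarneback(main_gray, sub_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    dense = sparse_depth.astype(np.float32).copy()
    h, w = dense.shape
    for y, x in zip(*np.where(~valid_mask)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        # Weight each valid neighbour by how similar its motion is to ours.
        wgt = np.exp(-(mag[y0:y1, x0:x1] - mag[y, x]) ** 2
                     / (2 * sigma_flow ** 2)) * valid_mask[y0:y1, x0:x1]
        s = wgt.sum()
        if s > 0:
            dense[y, x] = (dense[y0:y1, x0:x1] * wgt).sum() / s
    return dense
```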
In some embodiments, to obtain more accurate depth information, the foreground subject is segmented from the captured image using a deep learning approach so as to obtain a sharper foreground subject edge contour. The implementation steps of performing foreground segmentation processing on the first image to be processed to obtain a foreground mask image, and performing sharpness optimization on the foreground edges of the dense depth image according to the foreground mask image to obtain the binocular depth image, include:
performing foreground segmentation processing on the first image to be processed according to a deep learning algorithm, and extracting edge information of the foreground object to obtain the foreground mask image; the foreground mask image retains only the foreground object region, and the other regions are set to null;
and performing enhancement processing on the foreground object edges of the dense depth image in combination with the foreground mask image to highlight the foreground object contour, thereby obtaining the binocular depth image.
The foreground subject may be a portrait. After foreground segmentation, a sharp portrait edge is obtained, making up for depth information that the depth information image misses, such as the tiny hollowed-out gaps between a portrait's fingers. Foreground segmentation separates the portrait from the background and yields the precise position of the portrait edge, further improving the effect of the generated target background-blurred image and making the final image more realistic and sharp. The foreground subject may also be another entity, which is not limited here; the edge of the corresponding entity is obtained through foreground segmentation processing.
Because the image acquisition area corresponding to the foreground mask image obtained from the first image to be processed is larger than the image acquisition area corresponding to the dense depth image, the two images also need to be matched according to the position of the foreground subject in each, so as to obtain a binocular depth image with a highlighted foreground subject contour.
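An illustrative sketch of this mask-guided edge refinement follows. The segmentation network itself is treated as a black box (any portrait or foreground segmentation model can stand in), and snapping a narrow contour band to a representative subject depth is one simple realization of the enhancement the text describes, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def refine_depth_edges(dense_depth, fg_mask, percentile=10):
    """Sharpen depth along the foreground contour using the mask."""
    mask = (fg_mask > 0).astype(np.uint8)
    kernel = np.ones((7, 7), np.uint8)
    # Narrow band around the contour, where binocular depth is noisiest.
    band = cv2.dilate(mask, kernel) - cv2.erode(mask, kernel)
    # A representative subject depth (a near percentile of foreground depths).
    subject_depth = np.percentile(dense_depth[mask == 1], percentile)
    refined = dense_depth.copy()
    # Foreground pixels in the band take the subject depth; background
    # pixels in the band keep their (larger) estimated depth.
    refined[(band == 1) & (mask == 1)] = subject_depth
    return refined
```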
Step 203, obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed.
According to the embodiment of the application, a monocular depth image corresponding to the first image to be processed is obtained according to the first image to be processed; and the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed.
In some embodiments, a deep learning model, such as a convolutional neural network or an autoencoder, is trained to predict the depth of each pixel in the image. These models can learn the relationship between depth and image appearance from large-scale annotated depth image data and estimate depth for new images based on this relationship.
In some embodiments, the depth information is inferred using geometric constraints in the image and camera parameters.
In some embodiments, depth is estimated using prior knowledge or statistical information about the scene, for example from the size or shape distribution of objects. This typically requires modeling the particular scene or application and estimating depth based on the prior knowledge.
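A hedged sketch of the learned variant follows; the model and its pre- and post-processing are placeholders for whatever pretrained monocular depth network is actually used.

```python
import numpy as np
import torch

def estimate_mono_depth(model: torch.nn.Module, rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 main-camera image -> HxW depth map."""
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        pred = model(x)  # assumed to return a 1x1xHxW depth tensor
    # Monocular networks often predict relative depth; it is rescaled to
    # the binocular depth range over the overlap region in step 204.
    return pred.squeeze().cpu().numpy()
```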
It should be noted that, since the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first to-be-processed image, the depth information of the monocular depth image needs to be used for compensating the missing area so as to ensure the blurring effect of the target background blurring image.
And 204, performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image.
In the embodiment of the application, the depth information of the monocular depth image is used for carrying out depth information compensation on the binocular depth image to obtain a target depth image, and an image acquisition area corresponding to the target depth image is the same as an image acquisition area corresponding to the first image to be processed.
The image acquisition area corresponding to the binocular depth image obtained in step 202 is smaller than the image acquisition area corresponding to the monocular depth image obtained in step 203, and the missing part of the binocular depth image is filled with the depth information that exists only in the monocular depth image, so as to obtain the target depth image.
In some embodiments, using the depth information of the monocular depth image, performing depth information compensation on the binocular depth image to obtain a target depth image, including:
Performing image region matching on an image acquisition region corresponding to the monocular depth image and an image acquisition region corresponding to the binocular depth image, and determining an overlapping region;
setting the depth information corresponding to the overlapping region in the monocular depth image to null, and retaining the depth information outside the overlapping region in the monocular depth image to obtain a difference region image; the difference region image contains the depth information that the binocular depth image lacks compared with the monocular depth image;
and performing depth information compensation on the binocular depth image according to the difference region image to obtain the target depth image.
It should be noted that the above overlapping region refers to the image area covering the same entities in both the monocular depth image and the binocular depth image; it does not mean that the image content of the two depth images is completely identical within the overlapping region. The difference region is the image region of the monocular depth image outside the overlapping region; the binocular depth image lacks depth information for this region.
In some embodiments, performing depth information compensation on the binocular depth image according to the difference region image to obtain the target depth image includes: performing fusion processing of the difference region image and the binocular depth image, and smoothing the fusion boundary, so as to obtain a target depth image whose image acquisition area is the same as that of the first image to be processed.
When compensating the depth information of the binocular depth image, because the binocular and monocular depth images are obtained in different ways, the depth information at adjacent positions of the overlapping region and the difference region may change sharply, making the compensated image incoherent or abrupt. Smoothing makes the transition of the depth information at the boundary more natural and improves the overall consistency of the image. The smoothing may be implemented with filters, interpolation, weighted averaging, and so on, which is not limited here.
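A minimal sketch of this compensation and smoothing follows, under two stated assumptions: the binocular depth has already been placed on a full-frame canvas (valid where the overlap mask is 1), and the monocular depth is aligned to the binocular scale by a linear fit over the overlap, a common trick that the patent itself does not prescribe.

```python
import cv2
import numpy as np

def compensate_depth(mono_depth, bino_depth, overlap_mask, feather=15):
    """mono_depth, bino_depth: HxW float32; overlap_mask: HxW in {0, 1}."""
    ov = overlap_mask.astype(bool)
    # Linear scale/shift alignment of monocular depth over the overlap.
    a, b = np.polyfit(mono_depth[ov], bino_depth[ov], 1)
    mono_aligned = a * mono_depth + b
    # Binocular depth inside the overlap, monocular depth in the
    # difference region that the binocular image lacks.
    fused = np.where(ov, bino_depth, mono_aligned)
    # Feather the seam so depth transitions at the boundary stay smooth.
    wgt = cv2.GaussianBlur(ov.astype(np.float32),
                           (2 * feather + 1, 2 * feather + 1), 0)
    return wgt * fused + (1.0 - wgt) * mono_aligned
```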
And 205, blurring the first image to be processed according to the target depth image to obtain a target image.
In the embodiment of the application, blurring processing is performed on the first image to be processed according to the target depth image, so as to obtain a target image.
In the embodiment of the present application, blurring processing is performed on a first image to be processed according to a target depth image, including the following steps:
calculating a circle-of-confusion radius image according to the target depth image, comprising: converting the depth information of the pixels outside the foreground subject region in the target depth image into corresponding circle-of-confusion radius values according to a conversion rule, thereby obtaining the circle-of-confusion radius image. The conversion rule may be a linear or exponential mapping, which is not limited here.
performing discrete derivation on the blur kernel, comprising: to achieve the background blurring effect, the blur kernel is discretely differentiated, which is done by computing the derivatives of the blur kernel in the horizontal and vertical directions. Discrete derivation strengthens the edge information of the blur kernel, making the blurring effect in the background more natural. The blur kernel is a filtering operator used in image processing to blur or smooth an image.
performing weighted filtering on the first image to be processed with the discretely differentiated blur kernel according to the circle-of-confusion radius image, comprising: in the weighted filtering, the blur weight of each pixel is adjusted using the circle-of-confusion radius image; relatively distant pixels, i.e., pixels with a large circle-of-confusion radius, receive a large weight, so the background area is blurred more strongly.
generating the target background-blurred image, comprising: producing the background blurring effect from the weighted-filtered first image to be processed to obtain the target background-blurred image (an illustrative sketch of this pipeline follows).
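The sketch below illustrates the overall depth-driven blurring with a linear depth-to-radius rule (one of the mappings the text permits) and Gaussian kernels; quantizing the radii into a few discrete blur levels is an efficiency shortcut assumed here, not part of the described method.

```python
import cv2
import numpy as np

def blur_by_depth(img, depth, fg_mask, focus_depth, k=4.0, max_r=21, step=4):
    """img: HxWx3 uint8; depth: HxW; fg_mask: nonzero on the subject."""
    # Linear depth -> circle-of-confusion radius; the subject keeps a
    # zero radius and therefore stays sharp.
    coc = np.clip(k * np.abs(depth - focus_depth), 0, max_r)
    coc[fg_mask > 0] = 0
    out = img.copy()
    for r in range(1, max_r + 1, step):  # a few discrete blur levels
        blurred = cv2.GaussianBlur(img, (2 * r + 1, 2 * r + 1), 0)
        sel = (coc >= r) & (coc < r + step)
        out[sel] = blurred[sel]
    return out
```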
According to the technical scheme of the embodiment of the present application, the monocular depth image obtained from the first image to be processed is used to compensate for the depth information missing from the binocular depth image obtained from the first and second images to be processed, yielding the target depth image, and blurring is then performed according to the target depth image to obtain the target image. With this method, a target depth image whose quality meets the blurring requirement can be obtained even when the field angle of the secondary camera is smaller than that of the main camera, thereby improving the blurring effect of the background-blurred image.
The correction processing of the first to-be-processed image and the second to-be-processed image in step 202 will be further described with reference to the accompanying drawings.
Referring to fig. 3, an exemplary diagram of performing correction processing on a first to-be-processed image and a second to-be-processed image in the dual-camera-based image processing method according to the embodiment of the application is shown.
Because the difference between the quality of the images captured by the main camera and that of the images captured by the secondary camera is large, in some embodiments the acquired first image to be processed and second image to be processed are adjusted for consistency to ensure that accurate depth information can be obtained in subsequent processing.
In some embodiments, the first image to be processed and the second image to be processed are rotationally adjusted by a binocular parallelism correction process, such that the rotationally adjusted first image to be processed is aligned in parallel with the second image to be processed.
As shown in fig. 3, in the first image to be processed and the second image to be processed that have not undergone binocular parallel correction, the feature pixels of the foreground subject are not row-aligned in the horizontal direction. After binocular parallel correction, the feature pixels of the foreground subject are aligned in the horizontal direction, which reduces the complexity and amount of computation in parallax calculation and improves performance.
The two images to be processed contain a background portion in addition to the foreground subject; fig. 3 focuses on illustrating the effect of the correction processing, so the background portion is not shown.
In some embodiments, because the difference in image quality between the first image to be processed captured by the main camera and the second image to be processed captured by the secondary camera is large, the corresponding points detected in the parallax calculation step may be inaccurate when the binocular depth image is obtained from the two images, which affects the accuracy and precision of the resulting binocular depth image. Consistency adjustment of the first and second images to be processed can optimize the image quality and thereby improve the accuracy of the binocular depth image obtained from them.
Referring to fig. 4, another flowchart of a dual camera-based image processing method according to an embodiment of the present application includes the following steps:
step 401, acquiring a first image to be processed acquired by a main camera and acquiring a second image to be processed acquired by a sub camera.
Step 402, performing consistency adjustment on the first to-be-processed image and the second to-be-processed image.
In some embodiments, before obtaining the binocular depth image from the first image to be processed and the second image to be processed, the method further includes:
based on imaging differences caused by parameter differences of the main camera and the auxiliary camera, consistency adjustment is carried out on the first image to be processed and the second image to be processed; the consistency adjustment includes sharpening adjustment, blurring adjustment, and brightness adjustment.
In some embodiments, the first image to be processed and the second image to be processed may be adjusted for consistency based on the imaging differences caused by the differing parameters of the main and secondary cameras. The aim of this consistency adjustment is to bring the two images closer together in quality and visual characteristics through sharpening, blurring, and brightness adjustment. Sharpening enhances the edges and clarity of an image; brightness adjustment mainly tunes the brightness and contrast of the images so that the two are more consistent in overall brightness. It should be noted that one image is not simply adjusted to the level of the other; rather, both images are adjusted at the same time so that they become closer and more consistent in quality and visual characteristics.
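A hedged sketch of such an adjustment follows; the concrete operators (mean-luma brightness matching toward a shared target, plus unsharp masking) are illustrative choices, not prescribed by the method.

```python
import cv2
import numpy as np

def adjust_consistency(img_main, img_sub):
    def mean_luma(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)[..., 0].mean()

    def shift_brightness(img, target):
        ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb).astype(np.float32)
        ycc[..., 0] += target - ycc[..., 0].mean()
        return cv2.cvtColor(np.clip(ycc, 0, 255).astype(np.uint8),
                            cv2.COLOR_YCrCb2BGR)

    def unsharp(img, amount):
        blur = cv2.GaussianBlur(img, (0, 0), 3)
        return cv2.addWeighted(img, 1 + amount, blur, -amount, 0)

    # Both images move toward a shared brightness target, rather than one
    # being forced to match the other.
    target = (mean_luma(img_main) + mean_luma(img_sub)) / 2.0
    out_main = unsharp(shift_brightness(img_main, target), 0.3)
    out_sub = unsharp(shift_brightness(img_sub, target), 0.5)
    return out_main, out_sub
```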
Step 403, obtaining a binocular depth image according to the first to-be-processed image and the second to-be-processed image after the consistency adjustment.
Step 404, obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed.
And step 405, performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image.
And step 406, blurring the first image to be processed according to the target depth image to obtain a target image.
It should be understood that although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which need not be executed at the same time but may be executed at different times, and their order of execution is not necessarily sequential; they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
Based on the foregoing embodiments, an embodiment of the present application provides a dual-camera-based image processing apparatus. Each module of the apparatus, and each unit within each module, may be implemented by a processor or, of course, by a specific logic circuit. In implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Referring to fig. 5, a schematic structural diagram of a dual-camera-based image processing apparatus according to an embodiment of the present application is shown. As shown in fig. 5, the apparatus includes an acquisition module 501 and a processing module 502, described as follows:
the acquisition module 501 is configured to acquire a first image to be processed acquired by a main camera of the dual cameras and acquire a second image to be processed acquired by a sub camera of the dual cameras, where a field angle of the main camera is greater than a field angle of the sub camera.
The processing module 502 is configured to obtain a binocular depth image according to the first to-be-processed image and the second to-be-processed image, where an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second to-be-processed image, and the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first to-be-processed image; obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed; the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed; performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image, wherein an image acquisition area corresponding to the target depth image is the same as an image acquisition area corresponding to the first image to be processed; and according to the target depth image, blurring the first image to be processed to obtain a target image.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, please refer to the description of the embodiments of the method of the present application.
It should be noted that the division of the dual-camera-based image processing apparatus shown in fig. 5 into modules is schematic and is merely a logical function division; another division may be adopted in actual implementation. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware, in software functional units, or in a combination of software and hardware.
It should be noted that, in the embodiments of the present application, if the method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application may, in essence or in the part contributing to the related art, be embodied in the form of a software product stored in a storage medium, including several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or other media capable of storing program code. Thus, the embodiments of the application are not limited to any specific combination of hardware and software.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the application. The electronic device 600 as shown in fig. 6 may include a memory 601, and a processor 602 coupled to the memory 601.
Wherein the processor 602 of the electronic device 600 is configured to provide computing and control capabilities. The memory 601 of the electronic device 600 includes a nonvolatile storage medium and a random access memory.
The embodiment of the application discloses a computer readable storage medium, on which executable program code is stored, which when being executed by a processor, realizes the method according to the embodiment of the application.
It should be noted here that the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments, with similar beneficial effects. For technical details not disclosed in the storage medium and apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment", "an embodiment", or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment", "in an embodiment", or "in some embodiments" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments. The foregoing embodiment numbers are merely for description and do not represent the relative merits of the embodiments. The description of each embodiment emphasizes its differences from the others; for the same or similar parts, the embodiments may refer to one another, which is not repeated here for brevity.
The term "and/or" is herein merely an association relation describing associated objects, meaning that there may be three relations, e.g. object a and/or object B, may represent: there are three cases where object a alone exists, object a and object B together, and object B alone exists.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments are merely illustrative; for example, the division of the modules is merely a logical function division, and other divisions may be used in practice, such as: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
The modules described above as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each module may be separately used as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a read-only memory (Read-Only Memory, ROM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, if the above-described integrated units of the present application are implemented in the form of software functional modules and sold or used as separate products, they may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the part contributing to the related art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic disk, an optical disk, or other media capable of storing program code.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes and substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method based on dual cameras, characterized by comprising:
acquiring a first image to be processed acquired by a main camera of the dual cameras, and acquiring a second image to be processed acquired by a secondary camera of the dual cameras, wherein the field angle of the main camera is larger than that of the secondary camera;
obtaining a binocular depth image according to the first image to be processed and the second image to be processed, wherein an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second image to be processed, and the image acquisition area corresponding to the binocular depth image is smaller than an image acquisition area corresponding to the first image to be processed;
obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed; the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed;
performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image, wherein an image acquisition area corresponding to the target depth image is the same as the image acquisition area corresponding to the first image to be processed;
and according to the target depth image, blurring the first image to be processed to obtain a target image.
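By way of illustration only, the following is a minimal, runnable sketch of the data flow recited in claim 1, on synthetic data. It is not the patented implementation: the image sizes, the centered overlap, the foreground cutoff, and the random stand-ins for the binocular and monocular depth estimators are all assumptions made for this sketch.

```python
import cv2
import numpy as np

main_img = np.full((480, 640, 3), 128, np.uint8)   # wide-FOV main-camera frame
sub_img = np.full((360, 480, 3), 128, np.uint8)    # narrower-FOV secondary frame
                                                   # (consumed by a real binocular step)

# Binocular depth is valid only over the secondary camera's smaller FOV;
# claims 4-6 describe how it would actually be computed. Random stand-in here.
binocular = np.random.rand(360, 480).astype(np.float32)

# Monocular depth covers the main camera's full FOV; the claim names no model,
# so a random stand-in represents any single-image depth estimator.
monocular = np.random.rand(480, 640).astype(np.float32)

# Depth information compensation (detailed in claim 2), assuming a centered overlap.
y0, x0 = (480 - 360) // 2, (640 - 480) // 2
target = monocular.copy()
target[y0:y0 + 360, x0:x0 + 480] = binocular

# Depth-guided blurring: blur the whole frame, keep near pixels sharp.
blurred = cv2.GaussianBlur(main_img, (21, 21), 0)
near = (target < 0.5)[..., None]                   # assumed foreground cutoff
result = np.where(near, main_img, blurred)
```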
2. The method of claim 1, wherein performing depth information compensation on the binocular depth image using the depth information of the monocular depth image to obtain a target depth image comprises:
performing image region matching on an image acquisition region corresponding to the monocular depth image and an image acquisition region corresponding to the binocular depth image, and determining an overlapping region;
setting the depth information corresponding to the overlapping region in the monocular depth image to null, and retaining the depth information outside the overlapping region in the monocular depth image to obtain a phase difference region image; the phase difference region image contains the depth information that the binocular depth image is missing compared with the monocular depth image;
and carrying out depth information compensation on the binocular depth image according to the phase difference region image to obtain the target depth image.
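A sketch of this compensation under one simplifying assumption: the binocular depth map sits centered inside the monocular map's larger field of view, and the "null" of the claim is represented by NaN. The helper name compensate_depth is a placeholder introduced here, not a name from the patent.

```python
import numpy as np

def compensate_depth(binocular: np.ndarray, monocular: np.ndarray) -> np.ndarray:
    bh, bw = binocular.shape
    mh, mw = monocular.shape
    y0, x0 = (mh - bh) // 2, (mw - bw) // 2        # assumed overlap placement

    # Phase difference region image: monocular depth with the overlap set to
    # null (NaN), so only the ring of depth the binocular map lacks survives.
    diff = monocular.astype(np.float32).copy()
    diff[y0:y0 + bh, x0:x0 + bw] = np.nan

    # Compensation: binocular depth fills the overlap, the ring fills the rest.
    target = diff
    target[y0:y0 + bh, x0:x0 + bw] = binocular
    return target

target = compensate_depth(np.ones((360, 480), np.float32),
                          np.zeros((480, 640), np.float32))
```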
3. The method according to claim 1, wherein before obtaining the binocular depth image according to the first image to be processed and the second image to be processed, the method further comprises:
performing consistency adjustment on the first image to be processed and the second image to be processed based on imaging differences caused by parameter differences between the main camera and the secondary camera; the consistency adjustment includes sharpening adjustment, blurring adjustment, and brightness adjustment.
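As one example of the brightness part of this consistency adjustment, the sketch below matches the secondary frame's per-channel mean and standard deviation to the main frame's. This is a common technique chosen here for illustration; the claim does not specify the adjustment method.

```python
import numpy as np

def match_brightness(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Shift and scale src so its per-channel mean/std match ref's."""
    src_f, ref_f = src.astype(np.float32), ref.astype(np.float32)
    out = ((src_f - src_f.mean(axis=(0, 1)))
           * (ref_f.std(axis=(0, 1)) / (src_f.std(axis=(0, 1)) + 1e-6))
           + ref_f.mean(axis=(0, 1)))
    return np.clip(out, 0, 255).astype(np.uint8)

main = np.random.randint(0, 255, (480, 640, 3), np.uint8)   # stand-in frames
sub = np.random.randint(0, 255, (360, 480, 3), np.uint8)
sub_adjusted = match_brightness(sub, main)
```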
4. The method according to claim 1, wherein the first image to be processed and the second image to be processed are rotationally adjusted by a binocular parallelism correction process, so that the rotationally adjusted first image to be processed and second image to be processed are aligned in parallel.
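Binocular parallelism correction is conventionally realized with stereo rectification; the sketch below uses OpenCV with assumed calibration values (intrinsics, zero distortion, a 5 cm baseline), since the claim discloses none.

```python
import cv2
import numpy as np

left_img = np.zeros((480, 640, 3), np.uint8)    # stand-in camera frames
right_img = np.zeros((480, 640, 3), np.uint8)

size = (640, 480)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])  # assumed intrinsics
D = np.zeros(5)               # assumed: no lens distortion
R = np.eye(3)                 # assumed relative rotation between the cameras
T = np.array([0.05, 0., 0.])  # assumed 5 cm horizontal baseline

# Compute rectifying rotations/projections so epipolar lines become image rows.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, D, K, D, size, R, T)
map1 = cv2.initUndistortRectifyMap(K, D, R1, P1, size, cv2.CV_32FC1)
map2 = cv2.initUndistortRectifyMap(K, D, R2, P2, size, cv2.CV_32FC1)
left_rect = cv2.remap(left_img, *map1, cv2.INTER_LINEAR)    # row-aligned pair
right_rect = cv2.remap(right_img, *map2, cv2.INTER_LINEAR)
```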
5. The method according to any one of claims 1-4, wherein obtaining a binocular depth image from the first image to be processed and the second image to be processed comprises:
performing stereo matching processing on the first image to be processed and the second image to be processed to obtain a sparse depth image;
processing the first image to be processed, the second image to be processed and the sparse depth image to obtain a dense depth image;
and carrying out foreground segmentation processing on the first image to be processed to obtain a foreground mask image, and carrying out sharpness optimization on the foreground object edges of the dense depth image according to the foreground mask image to obtain the binocular depth image.
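The following sketch illustrates these stages with conventional stand-ins: semi-global block matching (SGBM) for the stereo matching step, a median-based hole fill in place of the optical-flow densification of claim 6, and a disparity threshold in place of the deep-learning segmentation of claim 7. All parameter values are assumptions.

```python
import cv2
import numpy as np

left = np.random.randint(0, 255, (480, 640), np.uint8)    # rectified grayscale pair
right = np.random.randint(0, 255, (480, 640), np.uint8)

# Stereo matching: SGBM yields a disparity map with unmatched pixels <= 0.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
sparse = np.where(disp > 0, disp, np.nan).astype(np.float32)

# Densification stand-in: fill holes from a median-filtered copy.
filled = cv2.medianBlur(np.nan_to_num(sparse), 5)
dense = np.where(np.isnan(sparse), filled, sparse)

# Foreground mask stand-in: threshold at the median disparity, used afterwards
# to keep the foreground's depth edges crisp.
mask = (dense > np.median(dense)).astype(np.uint8)
```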
6. The method of claim 5, wherein the processing of the first image to be processed, the second image to be processed, and the sparse depth image to obtain the dense depth image comprises:
calculating the positional displacement of pixels between the first image to be processed and the second image to be processed by using an optical flow method to obtain an optical flow field;
and carrying out interpolation processing on the unknown pixels in the missing regions of the sparse depth image according to the pixel displacements of the optical flow field to obtain the dense depth image.
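A sketch of one way to read this claim: dense optical flow between the two views gives a per-pixel displacement field, and for a horizontally aligned pair the horizontal component behaves like disparity, so holes in the sparse depth map can be filled from it. The inverse-disparity depth model and all Farneback parameters are assumptions of this sketch, not disclosures of the patent.

```python
import cv2
import numpy as np

first = np.random.randint(0, 255, (480, 640), np.uint8)   # stand-in view pair
second = np.random.randint(0, 255, (480, 640), np.uint8)

# Dense optical flow: flow[..., 0] is the horizontal pixel displacement.
flow = cv2.calcOpticalFlowFarneback(first, second, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
disparity = np.abs(flow[..., 0])

sparse = np.full((480, 640), np.nan, np.float32)   # stand-in sparse depth map
sparse[::4, ::4] = 1.0                             # pretend a coarse grid of matches

# Fill only the unknown pixels, assuming depth ~ 1 / disparity.
holes = np.isnan(sparse)
dense = sparse.copy()
dense[holes] = 1.0 / (disparity[holes] + 1e-3)
```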
7. The method of claim 6, wherein performing foreground segmentation processing on the first image to be processed to obtain the foreground mask image, and performing sharpness optimization on the foreground object edges of the dense depth image according to the foreground mask image to obtain the binocular depth image, comprises:
performing foreground segmentation processing on the first image to be processed according to a deep learning algorithm, and extracting edge information of the foreground object to obtain the foreground mask image; the foreground mask image retains only the foreground object regions, and the other regions are set to null;
and carrying out reinforcement processing on the foreground object edges of the dense depth image in combination with the foreground mask image to highlight the outline of the foreground object, thereby obtaining the binocular depth image.
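As one illustrative reading of this edge reinforcement, the sketch below snaps the depth values in a thin band around the mask boundary to a single representative foreground depth, so the silhouette stays crisp after blurring. A simple threshold stands in for the deep-learning segmentation, and the band width and the median statistic are assumptions.

```python
import cv2
import numpy as np

dense = np.random.rand(480, 640).astype(np.float32)    # stand-in dense depth
mask = (dense < 0.4).astype(np.uint8)                  # stand-in foreground mask

# A thin band straddling the mask boundary (dilation minus erosion).
kernel = np.ones((5, 5), np.uint8)
edge_band = cv2.dilate(mask, kernel) - cv2.erode(mask, kernel)

# Harden the contour: inside the band, force depth to the foreground's median.
fg_depth = float(np.median(dense[mask == 1]))
binocular = dense.copy()
binocular[(edge_band == 1) & (mask == 1)] = fg_depth
```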
8. An image processing apparatus based on dual cameras, comprising:
the acquisition module is used for acquiring a first image to be processed acquired by a main camera of the dual cameras and acquiring a second image to be processed acquired by a secondary camera of the dual cameras, wherein the field angle of the main camera is larger than that of the secondary camera;
the processing module is used for obtaining a binocular depth image according to the first image to be processed and the second image to be processed, an image acquisition area corresponding to the binocular depth image is the same as an image acquisition area corresponding to the second image to be processed, and the image acquisition area corresponding to the binocular depth image is smaller than the image acquisition area corresponding to the first image to be processed; obtaining a monocular depth image corresponding to the first image to be processed according to the first image to be processed; the image acquisition area corresponding to the monocular depth image is the same as the image acquisition area corresponding to the first image to be processed; performing depth information compensation on the binocular depth image by using the depth information of the monocular depth image to obtain a target depth image, wherein an image acquisition area corresponding to the target depth image is the same as an image acquisition area corresponding to the first image to be processed; and according to the target depth image, blurring the first image to be processed to obtain a target image.
9. An electronic device, comprising:
a memory storing executable program code;
and a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory, and the executable program code, when executed by the processor, causes the processor to implement the method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon executable program code, which when executed by a processor, implements the method of any of claims 1-7.
CN202311084881.5A 2023-08-25 2023-08-25 Image processing method and device based on double cameras, electronic equipment and storage medium Pending CN117058183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311084881.5A CN117058183A (en) 2023-08-25 2023-08-25 Image processing method and device based on double cameras, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311084881.5A CN117058183A (en) 2023-08-25 2023-08-25 Image processing method and device based on double cameras, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117058183A true CN117058183A (en) 2023-11-14

Family

ID=88656984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311084881.5A Pending CN117058183A (en) 2023-08-25 2023-08-25 Image processing method and device based on double cameras, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117058183A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117560480A (en) * 2024-01-09 2024-02-13 荣耀终端有限公司 Image depth estimation method and electronic equipment

Similar Documents

Publication Publication Date Title
JP7003238B2 (en) Image processing methods, devices, and devices
US9361680B2 (en) Image processing apparatus, image processing method, and imaging apparatus
CN107705333B (en) Space positioning method and device based on binocular camera
KR101643607B1 (en) Method and apparatus for generating of image data
CN107945105B (en) Background blurring processing method, device and equipment
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
EP3480784B1 (en) Image processing method, and device
WO2019105261A1 (en) Background blurring method and apparatus, and device
JP7123736B2 (en) Image processing device, image processing method, and program
JP6577703B2 (en) Image processing apparatus, image processing method, program, and storage medium
US9619886B2 (en) Image processing apparatus, imaging apparatus, image processing method and program
CN109640066B (en) Method and device for generating high-precision dense depth image
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
JP7378219B2 (en) Imaging device, image processing device, control method, and program
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN113313661A (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN114693760A (en) Image correction method, device and system and electronic equipment
JP7156624B2 (en) Depth map filtering device, depth map filtering method and program
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
US20170289516A1 (en) Depth map based perspective correction in digital photos
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN105335959B (en) Imaging device quick focusing method and its equipment
JP6395429B2 (en) Image processing apparatus, control method thereof, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination