CN115731296A - Image processing method, device, terminal and storage medium - Google Patents

Image processing method, device, terminal and storage medium

Info

Publication number
CN115731296A
CN115731296A (application CN202110982759.4A)
Authority
CN
China
Prior art keywords
information
calibration
source image
image
target
Prior art date
Legal status
Pending
Application number
CN202110982759.4A
Other languages
Chinese (zh)
Inventor
张超 (Zhang Chao)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110982759.4A
Publication of CN115731296A

Classifications

  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, a terminal, and a storage medium. The image processing method includes: acquiring a first source image, a second source image and a third source image; and correcting and aligning the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information and the third source image, to obtain a first target image and a second target image. In the method, information related to a depth camera, focusing information of the color cameras and the like are introduced to correct and align the color images shot by the color cameras, so as to obtain registered target images. The method does not need a neural network and therefore places a low performance requirement on the terminal; and because the depth-camera information and the color-camera focusing information are introduced, the image correction and alignment effect is improved and better-registered target images are obtained.

Description

Image processing method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to an image processing method and apparatus, a terminal, and a storage medium.
Background
In dual-camera application scenarios, the two images obtained by shooting with the two cameras generally need to be corrected and aligned to obtain two registered images.
For example, background blurring, 3D modeling and the like all use disparity information between the two cameras, and all applications based on disparity calculation use registered images. In addition, registered images are also used in scene cuts or image fusion.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, terminal, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method applied to a terminal, the method including:
acquiring a first source image, a second source image and a third source image, wherein the first source image is obtained by shooting with a first color camera, the second source image is obtained by shooting with a second color camera, and the third source image is obtained by shooting with a depth camera;
correcting and aligning the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information and the third source image to obtain a first target image and a second target image, wherein the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, the first calibration focusing information is focusing information of the first color camera when the second calibration information is determined, the second calibration focusing information is focusing information of the second color camera when the second calibration information is determined, the first shooting focusing information is focusing information of the first color camera when the first source image is shot, and the second shooting focusing information is focusing information of the second color camera when the second source image is shot.
Optionally, the performing, according to the first calibration information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, the second shooting focusing information, and the third source image, a correction alignment process on the first source image and the second source image to obtain a first target image and a second target image includes:
determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
and correcting and aligning the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information to obtain a first target image and a second target image.
Optionally, the performing, according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, and the second shooting focusing information, a correction alignment process on the first source image and the second source image to obtain the first target image and the second target image includes:
correcting and aligning the first source image according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information to obtain a first target image;
and correcting and aligning the second source image according to the target depth information, the second calibration focusing information and the second shooting focusing information to obtain a second target image.
Optionally, the performing, according to the target depth information, the second calibration information, the first calibration focusing information, and the first shooting focusing information, a correction alignment process on the first source image to obtain the first target image includes:
determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
and carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain the first target image.
Optionally, the performing, according to the target depth information, the second calibration focusing information, and the second shooting focusing information, a correction alignment process on the second source image to obtain the second target image includes:
determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain the second target image.
Optionally, the determining, according to the first calibration information, the first source image, and the third source image, target depth information of the set target includes:
correcting and aligning the first source image according to the first calibration information to obtain a first corrected image;
correcting and aligning the third source image according to the first calibration information to obtain a third corrected image;
performing target selection processing of the set target on the first source image to obtain a target source image;
determining a target corrected image according to the target source image and the first corrected image;
and determining the target depth information according to the target corrected image and the third corrected image.
Optionally, the method comprises:
when shooting, controlling the first color camera, the second color camera and the depth camera to be exposed simultaneously, and controlling the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera to be the same; and/or,
and when shooting, simultaneously acquiring the first shooting focusing information and the second shooting focusing information.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus applied to a terminal, the apparatus including:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first source image, a second source image and a third source image, the first source image is shot by a first color camera, the second source image is shot by a second color camera, and the third source image is shot by a depth camera;
a processing module, configured to perform correction and alignment processing on the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information, and the third source image to obtain a first target image and a second target image, where the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, the first calibration focusing information is focusing information of the first color camera when the second calibration information is determined, the second calibration focusing information is focusing information of the second color camera when the second calibration information is determined, the first shooting focusing information is focusing information of the first color camera when the first source image is shot, and the second shooting focusing information is focusing information of the second color camera when the second source image is shot.
Optionally, the processing module is configured to:
determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
and correcting and aligning the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information to obtain a first target image and a second target image.
Optionally, the processing module is configured to:
correcting and aligning the first source image according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information to obtain a first target image;
and correcting and aligning the second source image according to the target depth information, the second calibration focusing information and the second shooting focusing information to obtain a second target image.
Optionally, the processing module is configured to:
determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
and carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain the first target image.
Optionally, the processing module is configured to:
determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain the second target image.
Optionally, the processing module is configured to:
correcting and aligning the first source image according to the first calibration information to obtain a first corrected image;
correcting and aligning the third source image according to the first calibration information to obtain a third corrected image;
performing target selection processing of the set target on the first source image to obtain a target source image;
determining a target corrected image according to the target source image and the first corrected image;
and determining the target depth information according to the target corrected image and the third corrected image.
Optionally, the apparatus comprises a control module configured to:
when shooting, controlling the first color camera, the second color camera and the depth camera to be exposed simultaneously, and controlling the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera to be the same; and/or,
and during shooting, simultaneously acquiring the first shooting focusing information and the second shooting focusing information.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions which, when executed by a processor of a terminal, enable the terminal to perform the method according to the first aspect.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: information related to the depth camera, focusing information of the color cameras and the like are introduced to correct and align the color images shot by the color cameras, so as to obtain registered target images. The method does not need a neural network and therefore places a low performance requirement on the terminal; and because the depth-camera information and the color-camera focusing information are introduced, the image correction and alignment effect is improved and better-registered target images are obtained.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1a is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 1b is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 1c is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 1d is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 1e is a flowchart illustrating an image processing method according to an exemplary embodiment.
FIG. 2a is a schematic diagram illustrating an image processing procedure according to an exemplary embodiment.
FIG. 2b is a schematic diagram illustrating an image processing procedure according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 4 is a block diagram of a terminal shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
In the related art, registered images are generally obtained in one of two ways. In the first, the images shot by two cameras are corrected and aligned according to the calibration information of the two cameras to obtain two registered images. In the second, the two images are registered based on deep learning.
However, the first method gives a poor correction and alignment effect, while the second method requires a neural network model and therefore high computing power, placing a high demand on terminal performance; both methods thus give a poor user experience.
The present disclosure provides an image processing method applied to a terminal. In the method, information related to a depth camera, focusing information of the color cameras and the like are introduced to correct and align the color images shot by the color cameras, so as to obtain registered target images. The method does not need a neural network and therefore places a low performance requirement on the terminal; and because this additional information is introduced, the image correction and alignment effect is improved and better-registered target images are obtained.
In one exemplary embodiment, an image processing method is provided and applied to a terminal. Referring to fig. 1a, the method comprises:
s110, acquiring a first source image, a second source image and a third source image;
s120, according to the first calibration information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, the second shooting focusing information and the third source image, the first source image and the second source image are corrected and aligned to obtain a first target image and a second target image.
In step S110, a first source image may be captured by a first color (RGB) camera, a second source image may be captured by a second color (RGB) camera, and a third source image may be captured by a Depth (Depth) camera.
During shooting, the frame synchronization of the first color camera, the second color camera and the depth camera can be controlled, that is, the first color camera, the second color camera and the depth camera can be controlled to be exposed simultaneously, and the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera are controlled to be the same, so that the first source image, the second source image and the third source image in each group of images are shot at the same time, and the effect of subsequent image registration is improved.
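The frame synchronization described above can be sketched roughly as follows. This is a minimal illustration with a hypothetical `Camera` interface (on a real terminal this is done in the camera driver/HAL); none of these names come from the patent:

```python
import time

class Camera:
    """Hypothetical camera handle; a real driver would expose similar controls."""
    def __init__(self, name):
        self.name = name
        self.frame_rate = None
        self.last_trigger = None

    def set_frame_rate(self, fps):
        self.frame_rate = fps

    def expose(self, trigger_ts):
        # In hardware this would start the sensor exposure at the trigger time.
        self.last_trigger = trigger_ts

def capture_synchronized(cameras, fps=30.0):
    """Lock all cameras to one frame rate and expose them on a single shared
    trigger, so each group of source images is captured at the same moment."""
    for cam in cameras:
        cam.set_frame_rate(fps)
    trigger_ts = time.monotonic()  # one shared trigger for all three cameras
    for cam in cameras:
        cam.expose(trigger_ts)
    return trigger_ts

first_rgb = Camera("first color camera")
second_rgb = Camera("second color camera")
depth_cam = Camera("depth camera")
ts = capture_synchronized([first_rgb, second_rgb, depth_cam])
```

The single shared trigger timestamp is what guarantees that the first, second and third source images of one group belong to the same instant.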
The first color camera, the second color camera and the depth camera may belong to a terminal to which the method is applied, or may belong to other terminals.
When the first color camera, the second color camera and the depth camera belong to the terminal to which the method is applied, after the first color camera shoots the first source image, the first source image can be transmitted to the processor, so that the processor acquires the first source image; after the second color camera shoots the second source image, the second source image can be transmitted to the processor, so that the processor acquires the second source image; and after the depth camera shoots the third source image, the third source image can be transmitted to the processor, so that the processor acquires the third source image.
It should be noted that the terminal may include a plurality of color cameras and a plurality of depth cameras, and when image registration is performed, images captured by two color cameras and one depth camera may be acquired and respectively recorded as a first source image, a second source image and a third source image.
The terminal to which the method is applied is recorded as a first terminal, and any other terminal as a second terminal. When the first color camera, the second color camera and the depth camera belong to a second terminal, after the first color camera shoots the first source image, the first source image can be transmitted to the first terminal, so that a processor of the first terminal acquires the first source image; after the second color camera shoots the second source image, the second source image can be transmitted to the first terminal, so that the processor of the first terminal acquires the second source image; and after the depth camera shoots the third source image, the third source image can be transmitted to the first terminal, so that the processor of the first terminal acquires the third source image.
It should be noted that, after the first source image, the second source image and the third source image are obtained by shooting, the three images may be stored first and then acquired when the first source image and the second source image need to be registered.
In step S120, the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, and both the first calibration information and the second calibration information are determined by using a factory offline calibration method.
The first calibration focusing information is the focusing information of the first color camera when the second calibration information is determined, and the second calibration focusing information is the focusing information of the second color camera when the second calibration information is determined.
During the calibration of the first color camera and the second color camera, in addition to the second calibration information formed by the "internal reference" and the "external reference" to be determined, focusing information (such as DAC value of AF) during calibration may be recorded, wherein the focusing information of the first color camera may be recorded as the first calibration focusing information, and the focusing information of the second color camera may be recorded as the second calibration focusing information.
In addition, since the depth camera is usually a Fixed Focus (FF) camera, during the calibration of the first color camera and the depth camera, only the first calibration information needs to be determined, and the focus information (e.g., AF information) of the depth camera does not need to be determined. The focus information of the first color camera when determining the first calibration information is generally not used in subsequent image registration.
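For concreteness, the quantities introduced so far can be grouped as below. The field names are our own, not the patent's; AF positions are represented as DAC codes as mentioned above:

```python
from dataclasses import dataclass

@dataclass
class CalibrationBundle:
    """Factory (offline) calibration data. Field names are illustrative only."""
    first_calibration: dict      # intrinsics/extrinsics: first color camera vs. depth camera
    second_calibration: dict     # intrinsics/extrinsics: first vs. second color camera
    first_calib_focus_dac: int   # AF DAC of first color camera when second_calibration was determined
    second_calib_focus_dac: int  # AF DAC of second color camera when second_calibration was determined
    # The depth camera is fixed-focus (FF), so no focus value is recorded for it.

@dataclass
class ShotFocus:
    """Focus state read out at shooting time, acquired for both color cameras together."""
    first_shoot_focus_dac: int   # AF DAC of first color camera when the first source image was shot
    second_shoot_focus_dac: int  # AF DAC of second color camera when the second source image was shot
```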
In this step, the first photographing focusing information is focusing information of the first color camera when the first source image is photographed, and the second photographing focusing information is focusing information of the second color camera when the second source image is photographed.
When shooting, the first shooting focusing information and the second shooting focusing information can be acquired simultaneously so as to ensure the reliability of the first shooting focusing information and the second shooting focusing information, and thus, the image correction and alignment effect is improved.
The focusing information is determined according to the focusing type of the camera, and may be Autofocus (AF) information or Manual Focusing (MF) information. In addition, the method not only can realize the correction and alignment of two images, but also can realize the correction and alignment of more images through the correction and alignment of two images, thereby obtaining a plurality of registered target images.
According to the method, the first source image and the second source image are corrected and aligned according to the plurality of pieces of focusing information and the depth information of the third source image. The requirement on terminal performance is low, the correction and alignment effect is improved, two or more better-registered target images are obtained, and the user experience is improved.
In one exemplary embodiment, an image processing method is provided, which is applied to a terminal. Referring to fig. 1b, in the method, performing correction and alignment processing on the first source image and the second source image according to the first calibration information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, the second shooting focusing information, and the third source image to obtain the first target image and the second target image, may include:
s210, determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
s220, according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information, the first source image and the second source image are corrected and aligned to obtain a first target image and a second target image.
In step S210, referring to fig. 2a, according to the first calibration information, the first source image and the third source image may be corrected and aligned to obtain a first corrected image after the first source image is corrected and a third corrected image after the third source image is corrected, so that the first corrected image and the third corrected image are aligned with each other.
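As a minimal numpy stand-in for this correction and alignment step: a real implementation would apply remap tables derived from the first calibration information (e.g. via stereo rectification); the homography below simply represents such a per-camera warp, and is not the patent's formulation:

```python
import numpy as np

def rectify(image, H):
    """Inverse-warp `image` by homography H (a stand-in for the remap derived
    from calibration info); nearest-neighbour sampling, zeros outside."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    dst = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T  # destination pixel grid
    src = np.linalg.inv(H) @ dst                              # map back into the source
    src = (src[:2] / src[2]).round().astype(int)
    sx, sy = src[0].reshape(h, w), src[1].reshape(h, w)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[valid] = image[sy[valid], sx[valid]]
    return out
```

Applying the first camera's warp to the first source image and the depth camera's warp to the third source image yields the mutually aligned first and third corrected images.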
In general, the resolution of the first source image is higher than that of the first corrected image, and in this step, a target selection process for setting a target may be performed on the first source image to obtain a target source image in the first source image. Wherein the set target may be determined based on a user's selection.
For example, the target is set as a portrait, and through target selection processing, a portrait image in the first source image can be obtained, where the portrait image includes image information of the portrait and area information of the image information in the first source image. And then obtaining a target corrected image based on the target source image and the first corrected image.
It should be noted that the target correction image may be obtained by directly performing target selection processing for setting a target on the first corrected image. The target selection processing is carried out on the first source image, so that the reliability of target selection can be improved, and the accuracy of the target correction image is improved. And the target selection processing is directly carried out on the first corrected image, so that the speed of determining the target corrected image can be increased, and the time for correcting and aligning the images is saved.
The target corrected image may be recorded as a first target corrected image. After the first target corrected image is determined, a second target corrected image corresponding to the first target corrected image is determined from the third corrected image, and then the depth information of the second target corrected image is determined as target depth information.
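A sketch of determining the target depth information from the two mutually aligned corrected images, assuming the set target is given as a boolean mask and treating zero depth as invalid (the median is our choice of robust estimate, not something the patent specifies):

```python
import numpy as np

def target_depth(depth_corrected, target_mask):
    """Median depth of the valid samples inside the target region of the third
    corrected image, which is pixel-aligned with the first corrected image."""
    samples = depth_corrected[target_mask & (depth_corrected > 0)]
    if samples.size == 0:
        raise ValueError("no valid depth samples inside the target region")
    return float(np.median(samples))
```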
In step S220, referring to fig. 2b, the first source image may be corrected and aligned according to the target depth information, the second calibration information, the first calibration focusing information, and the first shooting focusing information, so as to obtain a first target image. Because the target depth information, the first calibration focusing information and the first shooting focusing information are introduced, the reliability of correction alignment processing can be improved, and a more accurate first target image can be obtained.
In this step, the second source image may be corrected and aligned according to the target depth information, the second calibration focusing information, and the second photographing focusing information, so as to obtain a second target image. Similarly, as the target depth information, the second calibration focusing information and the second shooting focusing information are introduced, the reliability of the correction alignment processing can be improved, and a more accurate second target image can be obtained.
According to the method, the first source image and the second source image are subjected to image registration based on the depth information of the set target, so that more accurate registration can be performed, and the first target image and the second target image can be obtained more accurately.
In one exemplary embodiment, an image processing method is provided, which is applied to a terminal. Referring to fig. 1c and fig. 2b, in the method, performing correction and alignment processing on the first source image according to the target depth information, the second calibration information, the first calibration focusing information, and the first shooting focusing information to obtain the first target image, may include:
s310, determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
s320, carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain a first target image.
In step S310, the target depth information may represent object distance information of the set target, the first calibration focusing information may represent image distance information of the first color camera during the offline calibration, and the first photographing focusing information may represent image distance information of the first color camera during the photographing of the first source image. In this step, the remapping matrix of the first color camera is adjusted based on the object distance information, the two image distance information and the second calibration information, so as to obtain a more accurate first remapping adjustment parameter.
In step S320, the first source image may be remapped according to the above-mentioned more accurate first remapping adjustment parameter, so as to obtain a more accurate first target image.
In the method, the first remapping adjustment parameter is determined based on the object distance information of the set target, the image distance information during off-line calibration, the image distance information during shooting of the first source image, and the second calibration information, so that the reliability of the finally determined first target image can be improved, the image correction and alignment effect is improved, and two or more target images that are more registered are obtained.
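The patent does not disclose the formula behind step S310, so the following is only a plausible sketch under a thin-lens/pinhole assumption: auto-focus changes the effective image distance, so the calibrated focal length is rescaled by the ratio of the shooting-time to calibration-time focusing information, and the target depth (object distance) fixes the disparity to compensate at the set target. All function and parameter names (`adjust_remap_focal`, `baseline_mm`, and the numeric values) are hypothetical.

```python
# Illustrative sketch only: the patent does not disclose the formula for the
# first remapping adjustment parameter. Under a thin-lens/pinhole assumption,
# auto-focus changes the image distance, so the calibrated focal length is
# rescaled, and the target object distance fixes the disparity to compensate.
# All names and values here are hypothetical.

def adjust_remap_focal(f_calib_px, af_calib, af_shoot, baseline_mm, depth_mm):
    """Rescale the calibrated focal length by the ratio of the shooting-time
    image distance to the calibration-time image distance, and estimate the
    disparity at the set target's depth."""
    f_new = f_calib_px * (af_shoot / af_calib)      # focus compensation
    disparity_px = f_new * baseline_mm / depth_mm   # shift at the set target
    return f_new, disparity_px

f_new, disp = adjust_remap_focal(
    f_calib_px=1500.0,   # focal length from the second calibration information
    af_calib=4.00,       # first calibration focusing information (AF_main)
    af_shoot=4.04,       # first shooting focusing information (AF_main_1)
    baseline_mm=12.0,    # inter-camera baseline from calibration
    depth_mm=1200.0,     # target depth information (object distance)
)
print(round(f_new, 1), round(disp, 2))  # 1515.0 15.15
```

The same form of compensation would apply symmetrically to the second color camera in step S410.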
In one exemplary embodiment, an image processing method is provided, which is applied to a terminal. Referring to fig. 1d and fig. 2b, in the method, according to the target depth information, the second calibration focusing information, and the second shooting focusing information, performing a correction alignment process on the second source image to obtain a second target image, including:
s410, determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and S420, carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain a second target image.
In step S410, the target depth information may represent object distance information of the set target, the second calibration focusing information may represent image distance information of the second color camera during the off-line calibration, and the second photographing focusing information may represent image distance information of the second color camera during the photographing of the second source image. In this step, the remapping matrix of the second color camera is adjusted based on the object distance information, the two pieces of image distance information and the second calibration information, so as to obtain a more accurate second remapping adjustment parameter.
In step S420, the second source image may be remapped according to the above-mentioned more accurate second remapping adjustment parameter, so as to obtain a more accurate second target image.
In the method, the second remapping adjustment parameter is determined based on the object distance information of the set target, the image distance information during off-line calibration, the image distance information during shooting of the second source image, and the second calibration information, so that the reliability of the finally determined second target image can be improved, and the image correction and alignment effect is improved.
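The "remapping processing" of steps S320 and S420 can be illustrated with a minimal nearest-neighbour remap in NumPy (OpenCV's `cv2.remap` performs the same per-pixel lookup with interpolation); the remapping adjustment parameter would be baked into the `map_x`/`map_y` lookup tables. This is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

def remap_nearest(src, map_x, map_y):
    """Each output pixel (y, x) samples src at (map_y[y, x], map_x[y, x])."""
    ys = np.clip(np.rint(map_y).astype(int), 0, src.shape[0] - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, src.shape[1] - 1)
    return src[ys, xs]

src = np.arange(16, dtype=np.uint8).reshape(4, 4)
# Shift the image one pixel left: output (y, x) reads source (y, x + 1).
xs, ys = np.meshgrid(np.arange(4, dtype=float), np.arange(4, dtype=float))
out = remap_nearest(src, xs + 1.0, ys)
print(out[0].tolist())  # [1, 2, 3, 3] (right edge clamped)
```

In practice the maps would encode the rotation, focal-length compensation and disparity shift derived from the adjustment parameters, rather than a pure translation.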
In one exemplary embodiment, an image processing method is provided, which is applied to a terminal. Referring to fig. 1e, 2a and 2b, the method may comprise:
s510, acquiring a first source image, a second source image and a third source image;
s520, determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
s530, determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
s540, remapping the first source image according to the first remapping adjustment parameter to obtain a first target image;
s550, determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and S560, carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain a second target image.
The terminal may be a mobile phone, and the color cameras in a mobile phone are generally auto-focus (AF) cameras.
Before the mobile phone leaves a factory, offline calibration can be performed on the first color camera and the depth camera, and first calibration information (for example, denoted as Res_main&depth) is determined; and offline calibration is performed on the first color camera and the second color camera, second calibration information (for example, denoted as Res_main&slave) is determined, and first calibration focusing information (for example, denoted as AF_main) of the first color camera and second calibration focusing information (for example, denoted as AF_slave) of the second color camera are determined at the same time. It should be noted that the second calibration information, the first calibration focusing information and the second calibration focusing information may be saved in the same file.
When shooting, the frame synchronization of the first color camera, the second color camera and the depth camera may be controlled, that is, the first color camera, the second color camera and the depth camera may be controlled to be exposed simultaneously, and the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera are controlled to be the same, so that the first source image (e.g., denoted as Rgb_main), the second source image (e.g., denoted as Rgb_slave) and the third source image (e.g., denoted as Image_depth) in each set of images are shot at the same time.
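As a hedged illustration of the frame-synchronization constraint just described — simultaneous exposure at an identical frame rate — one might verify that the three per-set frame timestamps agree within a small tolerance. The helper name and the millisecond values below are invented for illustration.

```python
# Invented helper: check that one set of frames (main, slave, depth) was
# captured within the same exposure window, as required for registration.

def frames_synchronized(ts_main, ts_slave, ts_depth, tol_ms=1.0):
    stamps = (ts_main, ts_slave, ts_depth)
    return max(stamps) - min(stamps) <= tol_ms

assert frames_synchronized(100.0, 100.3, 99.8)       # same exposure window
assert not frames_synchronized(100.0, 133.3, 100.1)  # slave dropped a frame
print("sync check ok")
```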
In addition, at the time of shooting, first shooting focusing information (for example, denoted as AF_main_1) of the first color camera and second shooting focusing information (for example, denoted as AF_slave_1) of the second color camera may be acquired at the same time.
Then, the first source image and the third source image are subjected to rectification processing based on the first calibration information to obtain a first rectified image (for example, denoted as Rgb_main_rectification_1) and a third rectified image (for example, denoted as Image_depth_rectification). Meanwhile, target selection processing for the set target is performed based on the first source image to determine a target source image. For example, in portrait blurring processing, the set target selected as the focus area may be a portrait.
Then, a target corrected image corresponding to the corrected target source image can be determined from the first corrected image by means of a mathematical mapping.
After the target rectified image is determined, target depth information (for example, denoted as Depth_Object) corresponding to the target rectified image is determined from the third rectified image.
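The patent does not specify how Depth_Object is aggregated from the third corrected image; one common, robust choice is the median over the valid depth samples inside the target region, sketched below with invented values (`target_depth` and the bounding box are hypothetical names).

```python
# Sketch of determining Depth_Object: once the target corrected image (e.g. a
# portrait region) is located in the third corrected image, a robust statistic
# such as the median over valid (non-zero) depth pixels can serve as the
# object distance of the set target. Region and values are invented.
import numpy as np

def target_depth(depth_map, box):
    x0, y0, x1, y1 = box
    region = depth_map[y0:y1, x0:x1].astype(float)
    valid = region[region > 0]           # 0 marks invalid depth samples
    return float(np.median(valid)) if valid.size else None

depth = np.zeros((6, 6), dtype=np.uint16)
depth[2:5, 2:5] = [[1180, 1200, 1210],
                   [1190, 1205, 0],      # one invalid sample
                   [1195, 1220, 1200]]
print(target_depth(depth, (2, 2, 5, 5)))  # 1200.0
```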
Then, a remapping matrix of the first color camera is adjusted according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information, so as to obtain a first remapping adjustment parameter (for example, denoted as Reprojective_main_new). Then, the first source image is remapped based on the first remapping adjustment parameter to obtain a corrected and aligned first target image (for example, denoted as Rgb_main_rectify).
Furthermore, the remapping matrix of the second color camera may be adjusted according to the target depth information, the second calibration focusing information, and the second photographing focusing information, so as to obtain a second remapping adjustment parameter (for example, denoted as Reprojective_slave_new). Then, the second source image is remapped based on the second remapping adjustment parameter to obtain a corrected and aligned second target image (for example, denoted as Rgb_slave_rectify).
Compared with a processing method based on a neural network model, the method involves a small amount of computation and can reduce the requirement on terminal performance. In addition, because the method combines the related information of the depth camera with the focusing information, the influence of automatic focusing on rectification can be avoided, and a better registration effect is obtained.
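The overall data flow of steps S510–S560 can be summarized as follows. Every helper body below is a stub — the patent describes the order of operations, not the formulas — so only the sequencing reflects the source; the variable names mirror the example labels above (Rgb_main, AF_main, etc.) and are otherwise hypothetical.

```python
# End-to-end sketch of S510-S560 with stub helpers. Only the data flow
# (which inputs feed which step) is taken from the source; the function
# bodies are placeholders, not the patent's computations.

def determine_target_depth(res_main_depth, rgb_main, image_depth):
    return 1200.0                                     # Depth_Object (stub)

def remap_adjust(depth, calib, af_calib, af_shoot):
    return ("reproject", depth, af_shoot - af_calib)  # stub parameter

def remap(image, param):
    return (image, param)                             # stub remapping

def process(rgb_main, rgb_slave, image_depth, res_main_depth, res_main_slave,
            af_main, af_slave, af_main_1, af_slave_1):
    depth = determine_target_depth(res_main_depth, rgb_main, image_depth)  # S520
    p1 = remap_adjust(depth, res_main_slave, af_main, af_main_1)           # S530
    rgb_main_rectify = remap(rgb_main, p1)                                 # S540
    p2 = remap_adjust(depth, None, af_slave, af_slave_1)                   # S550
    rgb_slave_rectify = remap(rgb_slave, p2)                               # S560
    return rgb_main_rectify, rgb_slave_rectify

main_t, slave_t = process("Rgb_main", "Rgb_slave", "Image_depth",
                          "Res_main&depth", "Res_main&slave",
                          4.00, 4.00, 4.04, 4.03)
print(main_t[0], slave_t[0])  # Rgb_main Rgb_slave
```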
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. The apparatus is configured to implement the above method. Referring to fig. 3, the apparatus includes an acquisition module 101 and a processing module 102, wherein,
the acquisition module 101 is configured to acquire a first source image, a second source image and a third source image, wherein the first source image is obtained by shooting with a first color camera, the second source image is obtained by shooting with a second color camera, and the third source image is obtained by shooting with a depth camera;
the processing module 102 is configured to perform correction and alignment processing on the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information, and the third source image to obtain a first target image and a second target image, where the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, the first calibration focusing information is focusing information of the first color camera when the second calibration information is determined, the second calibration focusing information is focusing information of the second color camera when the second calibration information is determined, the first shooting focusing information is focusing information of the first color camera when the first source image is shot, and the second shooting focusing information is focusing information of the second color camera when the second source image is shot.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, in the apparatus, the processing module 102 is configured to:
determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
and correcting and aligning the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information to obtain a first target image and a second target image.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, in the apparatus, the processing module 102 is configured to:
according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information, carrying out correction alignment processing on the first source image to obtain a first target image;
and correcting and aligning the second source image according to the target depth information, the second calibration focusing information and the second shooting focusing information to obtain a second target image.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, in the apparatus, the processing module 102 is configured to:
determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
and carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain a first target image.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, in the apparatus, the processing module 102 is configured to:
determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain a second target image.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, in the apparatus, the processing module 102 is configured to:
correcting and aligning the first source image according to the first calibration information to obtain a first corrected image;
correcting and aligning the third source image according to the first calibration information to obtain a third corrected image;
performing target selection processing of a set target on the first source image to obtain a target source image;
determining a target correction image according to the target source image and the first correction image;
and determining target depth information according to the target corrected image and the third corrected image.
In one exemplary embodiment, an image processing apparatus is provided for application to a terminal. Referring to fig. 3, the apparatus includes a control module 103, configured to:
when shooting, controlling the first color camera, the second color camera and the depth camera to be exposed simultaneously, and controlling the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera to be the same; and/or,
and when shooting, simultaneously acquiring the first shooting focusing information and the second shooting focusing information.
In one exemplary embodiment, a terminal provided with at least one depth camera and at least two color cameras is provided, such as a mobile phone, a laptop, a tablet, or a wearable device.
Referring to fig. 4, terminal 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the terminal 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the terminal 400. Examples of such data include instructions for any application or method operating on the terminal 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power components 406 provide power to the various components of the terminal 400. The power components 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 400.
The multimedia component 408 includes a screen providing an output interface between the terminal 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front camera module and/or a rear camera module. When the terminal 400 is in an operating mode, such as a shooting mode or a video mode, the front camera module and/or the rear camera module can receive external multimedia data. Each front camera module and rear camera module may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the terminal 400 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the terminal 400. For example, the sensor assembly 414 can detect an open/closed state of the terminal 400, relative positioning of components, such as a display and keypad of the terminal 400, the sensor assembly 414 can also detect a change in position of the terminal 400 or a component of the terminal 400, the presence or absence of user contact with the terminal 400, orientation or acceleration/deceleration of the terminal 400, and a change in temperature of the terminal 400. The sensor assembly 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate communications between the terminal 400 and other terminals in a wired or wireless manner. The terminal 400 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the terminal 400 to perform the above-described method is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The instructions in the storage medium, when executed by a processor of the terminal, enable the terminal to perform the methods shown in the above-described embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. An image processing method applied to a terminal is characterized by comprising the following steps:
acquiring a first source image, a second source image and a third source image, wherein the first source image is obtained by shooting with a first color camera, the second source image is obtained by shooting with a second color camera, and the third source image is obtained by shooting with a depth camera;
correcting and aligning the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information and the third source image to obtain a first target image and a second target image, wherein the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, the first calibration focusing information is focusing information of the first color camera when the second calibration information is determined, the second calibration focusing information is focusing information of the second color camera when the second calibration information is determined, the first shooting focusing information is focusing information of the first color camera when the first source image is shot, and the second shooting focusing information is focusing information of the second color camera when the second source image is shot.
2. The method according to claim 1, wherein the performing a rectification alignment process on the first source image and the second source image according to the first calibration information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, the second shooting focusing information, and the third source image to obtain a first target image and a second target image comprises:
determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
and correcting and aligning the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information to obtain a first target image and a second target image.
3. The method according to claim 2, wherein the performing a correction alignment process on the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information, and the second shooting focusing information to obtain the first target image and the second target image comprises:
correcting and aligning the first source image according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information to obtain a first target image;
and correcting and aligning the second source image according to the target depth information, the second calibration focusing information and the second shooting focusing information to obtain a second target image.
4. The method of claim 3, wherein the performing the rectification alignment processing on the first source image according to the target depth information, the second calibration information, the first calibration focusing information, and the first photographing focusing information to obtain the first target image comprises:
determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
and carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain the first target image.
5. The method according to claim 3, wherein the performing the rectification and alignment process on the second source image according to the target depth information, the second calibration focusing information, and the second photographing focusing information to obtain the second target image comprises:
determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
and carrying out remapping processing on the second source image according to the second remapping adjustment parameter to obtain a second target image.
6. The method according to claim 2, wherein determining target depth information of the set target according to the first calibration information, the first source image and the third source image comprises:
correcting and aligning the first source image according to the first calibration information to obtain a first corrected image;
correcting and aligning the third source image according to the first calibration information to obtain a third corrected image;
performing target selection processing of the set target on the first source image to obtain a target source image;
determining a target corrected image according to the target source image and the first corrected image;
and determining the target depth information according to the target corrected image and the third corrected image.
7. The method according to any one of claims 1-6, characterized in that the method comprises:
when shooting, controlling the first color camera, the second color camera and the depth camera to be exposed simultaneously, and controlling the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera to be the same; and/or,
and during shooting, simultaneously acquiring the first shooting focusing information and the second shooting focusing information.
8. An image processing apparatus applied to a terminal, the apparatus comprising:
an acquisition module, configured to acquire a first source image, a second source image and a third source image, wherein the first source image is obtained by shooting with a first color camera, the second source image is obtained by shooting with a second color camera, and the third source image is obtained by shooting with a depth camera;
a processing module, configured to perform correction and alignment processing on the first source image and the second source image according to first calibration information, second calibration information, first calibration focusing information, second calibration focusing information, first shooting focusing information, second shooting focusing information, and the third source image to obtain a first target image and a second target image, where the first calibration information is calibration information of the first color camera and the depth camera, the second calibration information is calibration information of the first color camera and the second color camera, the first calibration focusing information is focusing information of the first color camera when the second calibration information is determined, the second calibration focusing information is focusing information of the second color camera when the second calibration information is determined, the first shooting focusing information is focusing information of the first color camera when the first source image is shot, and the second shooting focusing information is focusing information of the second color camera when the second source image is shot.
9. The apparatus of claim 8, wherein the processing module is configured to:
determining target depth information of a set target according to the first calibration information, the first source image and the third source image;
and correcting and aligning the first source image and the second source image according to the target depth information, the second calibration information, the first calibration focusing information, the second calibration focusing information, the first shooting focusing information and the second shooting focusing information to obtain a first target image and a second target image.
10. The apparatus of claim 9, wherein the processing module is configured to:
correcting and aligning the first source image according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information to obtain a first target image;
and correcting and aligning the second source image according to the target depth information, the second calibration focusing information and the second shooting focusing information to obtain a second target image.
11. The apparatus of claim 10, wherein the processing module is configured to:
determining a first remapping adjustment parameter according to the target depth information, the second calibration information, the first calibration focusing information and the first shooting focusing information;
and carrying out remapping processing on the first source image according to the first remapping adjustment parameter to obtain the first target image.
12. The apparatus of claim 10, wherein the processing module is configured to:
determining a second remapping adjustment parameter according to the target depth information, the second calibration focusing information and the second shooting focusing information;
performing remapping processing on the second source image according to the second remapping adjustment parameter to obtain the second target image.
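Claims 11 and 12 both reduce to the same operation: derive a remapping adjustment parameter and resample the source image through it. As an illustrative sketch only (the function names and the shift model are assumptions for this example, not the claimed method; a real adjustment parameter would vary per pixel with depth and focusing state), remapping can be expressed as a floating-point coordinate lookup with bilinear interpolation:

```python
import numpy as np

def remap_bilinear(src, map_x, map_y):
    """Resample a grayscale image src at floating-point coordinates
    (map_x, map_y), clamping lookups to the image border."""
    h, w = src.shape[:2]
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    top = src[y0, x0] * (1 - fx) + src[y0, x0 + 1] * fx
    bot = src[y0 + 1, x0] * (1 - fx) + src[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def remap_with_shift(src, shift_xy):
    """Apply a remapping whose adjustment parameter is modeled, purely
    for illustration, as one uniform sub-pixel shift (dx, dy)."""
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    return remap_bilinear(src, xs + shift_xy[0], ys + shift_xy[1])
```

In practice the per-pixel maps would be built once from the calibration and focusing information and reused for every frame.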
13. The apparatus of claim 9, wherein the processing module is configured to:
correcting and aligning the first source image according to the first calibration information to obtain a first corrected image;
correcting and aligning the third source image according to the first calibration information to obtain a third corrected image;
performing target selection processing of the set target on the first source image to obtain a target source image;
determining a target corrected image according to the target source image and the first corrected image;
determining the target depth information according to the target corrected image and the third corrected image.
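Claim 13 ends by extracting depth information for the set target from the rectified depth image. As a minimal sketch under stated assumptions (the function name, the boolean-mask representation of the target region, and the choice of the median as the aggregate are all illustrative, not the claimed method), this step can be modeled as a robust aggregation of valid depth samples inside the target region:

```python
import numpy as np

def target_depth(depth_map, target_mask, min_valid=1e-3):
    """Summarize depth over a target region.

    depth_map   -- rectified depth image (H, W); 0 or NaN marks invalid pixels
    target_mask -- boolean mask of the set target in the same frame
    Returns the median depth of valid target pixels; the median is used
    here as a robust single-value summary (an assumption of this sketch).
    """
    vals = depth_map[target_mask]
    vals = vals[np.isfinite(vals) & (vals > min_valid)]
    if vals.size == 0:
        raise ValueError("no valid depth samples inside target region")
    return float(np.median(vals))
```

A median tolerates depth-sensor dropouts (zeros/NaNs) at the target boundary better than a mean would.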
14. The apparatus of any one of claims 8-13, further comprising a control module configured to:
when shooting, controlling the first color camera, the second color camera and the depth camera to be exposed simultaneously, and controlling the frame rate of the first color camera, the frame rate of the second color camera and the frame rate of the depth camera to be the same; and/or
when shooting, simultaneously acquiring the first shooting focusing information and the second shooting focusing information.
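The frame-rate precondition in claim 14 is easy to state as a check. The sketch below is illustrative only (the `CameraConfig` type and `synchronized` helper are hypothetical names, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    name: str
    frame_rate: float  # frames per second

def synchronized(configs):
    """Return True when every camera shares one frame rate, the
    precondition for the simultaneous-exposure control of claim 14."""
    return len({c.frame_rate for c in configs}) == 1
```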
15. A terminal, characterized in that the terminal comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 9.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the method of any one of claims 1 to 9.
CN202110982759.4A 2021-08-25 2021-08-25 Image processing method, device, terminal and storage medium Pending CN115731296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110982759.4A CN115731296A (en) 2021-08-25 2021-08-25 Image processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110982759.4A CN115731296A (en) 2021-08-25 2021-08-25 Image processing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115731296A true CN115731296A (en) 2023-03-03

Family

ID=85289670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110982759.4A Pending CN115731296A (en) 2021-08-25 2021-08-25 Image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115731296A (en)

Similar Documents

Publication Publication Date Title
CN110557547B (en) Lens position adjusting method and device
EP3544286B1 (en) Focusing method, device and storage medium
CN112114765A (en) Screen projection method and device and storage medium
CN110769147B (en) Shooting method and electronic equipment
CN111294511B (en) Focusing method and device of camera module and storage medium
CN110876014B (en) Image processing method and device, electronic device and storage medium
CN110620871B (en) Video shooting method and electronic equipment
CN115134505B (en) Preview picture generation method and device, electronic equipment and storage medium
CN106973275A (en) The control method and device of projector equipment
CN112188096A (en) Photographing method and device, terminal and storage medium
CN114339022B (en) Camera shooting parameter determining method and neural network model training method
CN112235509B (en) Focal length adjusting method and device, mobile terminal and storage medium
CN115731296A (en) Image processing method, device, terminal and storage medium
CN107682623B (en) Photographing method and device
CN110874829B (en) Image processing method and device, electronic device and storage medium
CN108769513B (en) Camera photographing method and device
CN114244999A (en) Automatic focusing method and device, camera equipment and storage medium
CN114339018B (en) Method and device for switching lenses and storage medium
CN114339017B (en) Distant view focusing method, device and storage medium
CN115222818A (en) Calibration verification method, calibration verification device and storage medium
CN115144870A (en) Image shooting method, device, terminal and storage medium
CN118190352A (en) Method, device, terminal and storage medium for determining lens structure information
CN117522942A (en) Depth distance measuring method, depth distance measuring device, electronic equipment and readable storage medium
CN115909472A (en) Gesture recognition method, device, terminal and storage medium
CN115731276A (en) Image processing method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination