CN115278071A - Image processing method, image processing device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN115278071A
Authority
CN
China
Prior art keywords
image
area
target
camera
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210874779.4A
Other languages
Chinese (zh)
Inventor
王大成
王玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210874779.4A
Publication of CN115278071A

Abstract

The application discloses an image processing method, an image processing device, electronic equipment and a readable storage medium, and belongs to the field of image processing. The image processing method comprises the following steps: under the condition that a first camera shoots a target object to obtain a first image and a second camera shoots the target object to obtain a second image, correcting the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image; determining parallax offset of the first image and the second image according to a first area where the target object in the first image is located and a second area where the target object in the second image is located; and carrying out alignment processing on the first target image and the second target image according to the parallax offset.

Description

Image processing method, image processing device, electronic equipment and readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background
As electronic devices are updated, more and more of them are equipped with multiple cameras to meet people's growing photography demands. To improve the stability of captured pictures, the imaging quality in low-brightness environments, and so on, schemes such as pan-tilt (gimbal) anti-shake and Optical Image Stabilization (OIS) anti-shake are used in cooperation when shooting with a camera.
During shooting with such an anti-shake scheme, the pose of the camera or of its lens changes. Existing image anti-shake processing schemes have difficulty overcoming the influence of these camera pose changes on the captured images, so images after anti-shake processing suffer from poor alignment accuracy.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method and apparatus, an electronic device, and a readable storage medium, which can improve accuracy of image alignment processing for images captured by different cameras.
In a first aspect, an embodiment of the present application provides an image processing method, including:
under the condition that a first camera shoots a target object to obtain a first image and a second camera shoots the target object to obtain a second image, correcting the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image;
determining the parallax offset of the first image and the second image according to a first area where the target object in the first image is located and a second area where the target object in the second image is located;
and carrying out alignment processing on the first target image and the second target image according to the parallax offset.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the correction module is used for correcting the first image and the second image according to the calibration information of the first camera and the second camera to obtain a first target image and a second target image under the condition that the first camera shoots the target object to obtain the first image and the second camera shoots the target object to obtain the second image;
the processing module is used for determining the parallax offset of the first image and the second image according to a first area where the target object in the first image is located and a second area where the target object in the second image is located;
and the processing module is also used for carrying out alignment processing on the first target image and the second target image according to the parallax offset.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, in the case that the first camera shoots the target object to obtain the first image and the second camera shoots the target object to obtain the second image, the first image and the second image can be corrected according to the calibration information of the first camera and the second camera, so that a preliminary online correction of the first image and the second image is realized. Then, the parallax offset between the first image and the second image is determined according to the first area where the target object is located in the first image and the second area where the target object is located in the second image, and the first target image and the second target image are aligned according to the parallax offset, so that the influence of the camera's anti-shake motion on the image alignment effect is effectively reduced, and a corrected image pair with a small alignment error and accurate parallax is obtained.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic view of a parallax offset according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an epipolar geometry constraint provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular order or sequence. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
As electronic devices are updated, more and more of them are equipped with multiple cameras to meet people's growing photography demands. After a plurality of images are captured by the plurality of cameras, the images are often aligned so that other image processing, for example fusion of the plurality of images, can be performed.
However, in order to improve the stability of the captured picture, improve the imaging quality in a low-brightness environment, and so on, an anti-shake technology is often configured for the cameras in the electronic device. After the anti-shake technology is enabled, the pose of the camera or of its lens changes, so that when images shot by different cameras are aligned, the alignment result is easily unsatisfactory.
In order to solve the problems in the background art, embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer storage medium. Under the condition that the first camera shoots the target object to obtain the first image and the second camera shoots the target object to obtain the second image, the first image and the second image are corrected according to the calibration information of the first camera and the second camera, so that the first image and the second image can be preliminarily corrected on line to obtain the first target image and the second target image. And then, determining the parallax offset of the first image and the second image according to the first area where the target object is located in the first image and the second area where the target object is located in the second image, and aligning the first target image and the second target image according to the parallax offset, so that the influence of a camera on the image alignment effect in the anti-shake process is effectively reduced, and the first target image and the second target image which are small in alignment error and accurate in parallax are obtained.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application, including steps 110 to 130.
And step 110, under the condition that the first camera shoots the target object to obtain a first image and the second camera shoots the target object to obtain a second image, correcting the first image and the second image according to the calibration information of the first camera and the second camera to obtain a first target image and a second target image.
For example, the electronic device may carry a plurality of cameras, and each camera can capture an image. After the shooting function is enabled, the electronic device shoots the target object through the plurality of cameras mounted on it, so that a plurality of images can be obtained.
As a specific example, the electronic device may be configured with an anti-shake technology such as a micro gimbal (micro pan-tilt) or an OIS anti-shake scheme. Taking micro-gimbal anti-shake as an example, the camera can be rotated by the micro gimbal to compensate for shake. Taking OIS anti-shake as an example, shake can be compensated by moving corrective lenses in the lens group. Therefore, when the electronic device shoots images in combination with an anti-shake technology, the images shot by each camera need to be corrected using the calibration information between the different cameras.
As a specific example, the target object may be a shooting object such as a person, an animal, a plant, or the like, and may also be an object selected by a user in the shooting preview interface. The first camera shoots a target object to obtain a first image, and the second camera shoots the target object to obtain a second image.
The calibration information can represent the respective positions and postures of the first camera and the second camera in the process of shooting the target object and the relative positions and postures of the first camera and the second camera, so that the first image and the second image can be corrected on line according to the calibration information, and the first target image and the second target image are obtained.
The calibration information may include intrinsic (internal reference) calibration information and extrinsic (external reference) calibration information, where the intrinsic calibration information represents the position and attitude of each camera itself, and the extrinsic calibration information represents the relative position and attitude between the two cameras. Optionally, the calibration information between the first camera and the second camera may be obtained by solving with epipolar geometric constraints.
After the first image and the second image of the target object are obtained, the first image and the second image can be preliminarily corrected on line according to the calibration information of the first camera and the second camera, and the first target image and the second target image are obtained.
Step 120, determining a parallax offset between the first image and the second image according to a first region where the target object in the first image is located and a second region where the target object in the second image is located.
When two cameras respectively shoot the target object, the two cameras have parallax because their positions differ. Correction based on the calibration information can therefore only guarantee that corresponding points (points of the same name) in the first target image and the second target image are aligned on the same horizontal line, while the imaging of the target object in the two images is still offset in the horizontal direction. A point of the same name is a pair of corresponding imaging points in the first target image and the second target image.
As a specific example, when a first imaging point of the first target image and a second imaging point of the second target image are compared in the same rectangular coordinate system, the vertical coordinates of the two points are the same, but their horizontal coordinates differ. For clarity, fig. 2 is an exemplary view of a parallax offset provided by an embodiment of the present application. Fig. 2 shows a target object 203 with a background image outside the target object 203; because of the parallax between the first camera and the second camera, after the first image and the second image are corrected according to the calibration parameters, a first imaging point 201 is shown in fig. 2 (a) and a second imaging point 202 is shown in fig. 2 (b). It can be seen that the ordinate of the first imaging point is the same as that of the second imaging point, while their abscissas differ. That is, the first imaging point and the second imaging point are aligned in the vertical direction but offset in the horizontal direction.
In some embodiments of the present application, there is a correspondence between an occupied area of a region in which a target object is located in a captured image and depth information of the target object. Therefore, the depth information of the first image and the depth information of the second image can be respectively determined by combining the first area where the target object is located in the first image and the second area where the target object is located in the second image, and then the parallax offset of the first image and the parallax offset of the second image can be determined by combining the depth information of the first image and the depth information of the second image.
Step 130, performing alignment processing on the first target image and the second target image according to the parallax offset.
After the parallax offset is obtained, the first target image or the second target image is moved by the parallax offset to obtain a maximally aligned first target image and second target image, so that the influence of the parallax between the cameras on the image alignment accuracy can be reduced.
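As an illustrative sketch (not the patent's stated implementation), the horizontal move described above can be realized with plain array slicing; the zero fill value and the integer pixel offset `shift_px` are assumptions for illustration:

```python
import numpy as np

def shift_horizontal(img: np.ndarray, shift_px: int) -> np.ndarray:
    """Shift an H x W image right by shift_px pixels (left if negative),
    filling the vacated columns with zeros."""
    out = np.zeros_like(img)
    if shift_px > 0:
        out[:, shift_px:] = img[:, :-shift_px]
    elif shift_px < 0:
        out[:, :shift_px] = img[:, -shift_px:]
    else:
        out[:] = img
    return out
```

In practice a sub-pixel offset would require interpolation rather than integer slicing; the integer case is kept here for brevity.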
According to the embodiment of the application, the first target image and the second target image can be obtained through a preliminary online correction of the first image and the second image; the first target image and the second target image are then aligned according to the parallax offset between the first image and the second image, so that the influence of the camera's anti-shake motion on the image alignment effect is effectively reduced, and a first target image and second target image with a small alignment error and accurate parallax are obtained. In addition, no additional calibration device is needed when aligning the images, so on one hand the hardware cost can be reduced, and on the other hand, since the calculation process is simple, the software overhead is reduced and the image processing speed can be improved.
In some embodiments, in order to accurately determine the parallax offset, referring to the step 120, specifically, the area ratio of the first region to the first image may be obtained first to obtain first area information; acquiring the area ratio of the second area to the second image to obtain second area information; and then, determining the parallax offset between the first target image and the second target image according to the first area information, the second area information and the corresponding relation between the area and the depth.
Illustratively, the closer the target object is to the camera, the larger the area its imaging occupies in the captured image. There is therefore a correspondence between the region occupied by the target object's imaging in the captured image and the depth of the target object, so the correspondence between area and depth can be set in advance. From the region occupied by the target object in a captured image, the depth of the target object can thus be determined, and that depth can then be converted into a parallax.
In some embodiments, according to the first area information, the second area information, and the corresponding relationship between the area and the depth, the depth information of the first area and the depth information of the second area may be respectively determined, and the depth information of the first area and the depth information of the second area are respectively converted into a parallax, so that the parallax offset may be quickly calculated. The parallax offset between the first target image and the second target image in the horizontal direction can be compensated through the parallax offset, and the accuracy of image alignment is improved.
According to the embodiment of the application, through setting the corresponding relation between the area and the depth, after the first area information and the second area information of the target object are obtained, the parallax offset between the first image and the second image can be rapidly calculated, so that the first target image and the second target image can be rapidly aligned, and the image processing speed is improved.
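A minimal sketch of the preset area-to-depth correspondence described above; the table values, the interpolation scheme (`np.interp`), and the units are all illustrative assumptions, since the text does not specify them:

```python
import numpy as np

# Hypothetical preset correspondence between the object's area ratio and its depth.
# A larger area ratio corresponds to a closer (smaller-depth) object.
AREA_RATIOS = np.array([0.01, 0.05, 0.10, 0.25, 0.50])  # fraction of image area
DEPTHS_M    = np.array([10.0, 4.0, 2.0, 1.0, 0.5])      # depth in metres

def depth_from_area(area_ratio: float) -> float:
    """Look up depth from the preset area-to-depth correspondence.

    np.interp linearly interpolates between table entries and clamps
    area ratios outside the table to the end values."""
    return float(np.interp(area_ratio, AREA_RATIOS, DEPTHS_M))
```

Applying this to the first area information and second area information yields the two depths needed for the parallax computation.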
In some embodiments, the first area information is the area ratio of the first region, where the target object is located, to the first image. Obtaining the first area information may include: first, recognizing the first image and determining the first region where the target object is located in the first image; next, determining the first area information from the area of the first region and the area of the first image. For example, the first image may be recognized by an image recognition model, and the recognition result output by the model includes the first region where the target object is located in the first image. The ratio of the area of the first region to the area of the first image then gives the first area information.
In some embodiments, the second area information is an area occupation ratio of a second region in the second image where the target object is located. The second image can also be identified through the image identification model to obtain a corresponding identification result, wherein the identification result comprises a second area where the target object in the second image is located. Next, a ratio of the area of the second region to the area of the second image may be calculated to obtain second area information.
According to the embodiment of the application, the position of the target object in the shot image is rapidly determined by detecting the shot image in real time, and the accuracy of calculating the area information is improved. Meanwhile, the calculation process is simple, and the image processing speed can be improved.
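The area-information computation above can be sketched as follows; the (x0, y0, x1, y1) bounding-box format for the recognized region is an assumption for illustration:

```python
def area_ratio_from_box(box, image_w: int, image_h: int) -> float:
    """Area ratio of a detected region to the whole image.

    box is a hypothetical (x0, y0, x1, y1) pixel rectangle, as might be
    produced by an image recognition model."""
    x0, y0, x1, y1 = box
    region_area = max(0, x1 - x0) * max(0, y1 - y0)
    return region_area / (image_w * image_h)
```

The same function applied to the second image's detected region gives the second area information.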
As a specific example, after obtaining the first area information and the second area information, determining a parallax offset between the first image and the second image may specifically refer to the following steps: firstly, determining first depth information of a target object in a first image according to first area information and a corresponding relation between the area and the depth; and determining second depth information of the target object in the second image according to the second area information and the corresponding relation between the area and the depth. Next, according to the first depth information, determining a first parallax of the first target image; and determining a second parallax of the second target image according to the second depth information. And then, obtaining the parallax offset between the first image and the second image according to the difference value of the first parallax and the second parallax.
Illustratively, the closer the target object is to the camera, the larger the area its imaging occupies in the captured image. There is therefore a correspondence between the area occupied by the target object's imaging in the captured image and the depth of the target object; this correspondence between area and depth is preset. After the first area information and the second area information are obtained, the first depth information of the target object in the first image and the second depth information of the target object in the second image can be quickly determined.
Determining a first parallax of the first target image according to the first depth information; and determining a second parallax of the second target image according to the second depth information. For example, the conversion relationship between the depth information and the disparity may be as shown in formula (1).
Z = (f · b) / d        (1)
Wherein Z is depth information, d is parallax, f is the focal length of the camera, and b is the distance between the optical centers of the two cameras. The focal length of the camera and the distance between the optical centers of the two cameras can be preset, and can also be determined according to real-time shooting parameters and postures of the two cameras. And combining the formula (1), and obtaining a first parallax corresponding to the first image according to the first depth information obtained by calculation. According to the second depth information, a second parallax corresponding to the second image can be obtained.
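Rearranging formula (1) gives d = f·b/Z, and the parallax offset is the difference of the two disparities. The sketch below assumes a focal length in pixels and a baseline in metres; the function names are illustrative:

```python
def disparity_from_depth(depth_m: float, focal_px: float, baseline_m: float) -> float:
    """Rearranged form of formula (1): d = f * b / Z.

    focal_px is the focal length in pixels; baseline_m is the distance
    between the optical centres of the two cameras."""
    return focal_px * baseline_m / depth_m

def parallax_offset(depth1_m: float, depth2_m: float,
                    focal_px: float, baseline_m: float) -> float:
    """Parallax offset as the difference between the two disparities."""
    d1 = disparity_from_depth(depth1_m, focal_px, baseline_m)
    d2 = disparity_from_depth(depth2_m, focal_px, baseline_m)
    return d1 - d2
```

For example, with f = 1000 px and b = 0.05 m, an object at 2 m yields a disparity of 25 px.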
The parallax offset between the first image and the second image is obtained from the difference between the first parallax and the second parallax. Illustratively, the parallax offset is the pixel distance between corresponding points of the first image and the second image. Since the first image and the second image have already been corrected with the calibration information, the corresponding points in the first target image and the second target image are aligned on the same horizontal line. After the parallax offset is obtained, the first target image and the second target image only need to be moved according to the parallax offset to realize rapid alignment of the images. For example, the second target image may be moved toward the first target image in the horizontal direction by the pixel distance corresponding to the parallax offset to implement the alignment processing.
According to the embodiment of the application, by combining online correction with parallax compensation for the images shot by different cameras, the images can be aligned without affecting the anti-shake performance of the electronic device during shooting, and an aligned image pair with a small alignment error and accurate parallax is obtained.
In a specific example, referring to the step 110, performing correction processing on the first image and the second image according to the calibration information of the first camera and the second camera to obtain a first target image and a second target image, which may specifically include the following steps: acquiring a matched characteristic point pair between the first image and the second image; determining calibration information between the first camera and the second camera according to the characteristic point pairs; and according to the calibration information, correcting the first image and the second image to obtain a first target image and a second target image.
Specifically, after the first image is acquired by the first camera and the second image is acquired by the second camera, corresponding feature points between the first image and the second image can be determined by a preset feature point extraction algorithm. For example, the feature points in the images may be extracted by a feature extraction algorithm such as Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), or Oriented FAST and Rotated BRIEF (ORB), which is not limited herein.
For each image, after the feature points are determined by the preset feature point extraction algorithm, a descriptor of each feature point can be calculated. When matching feature points between the two images, matching can be performed according to the similarity of the descriptors, yielding a plurality of feature point matching pairs between the two images.
Optionally, for the feature point matching pairs between two images, a plurality of feature point matching pairs may be filtered to improve the calculation accuracy. For example, a screening strategy may be set in combination with the directionality of the descriptor, and feature point matching pairs corresponding to descriptors whose directions are different from the preset direction may be screened out.
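Descriptor-similarity matching can be sketched as a nearest-neighbour search with a ratio test; the ratio-test filtering shown here is a common choice and an assumption, since the text does not fix a particular screening strategy:

```python
import numpy as np

def match_descriptors(desc1: np.ndarray, desc2: np.ndarray, ratio: float = 0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2.

    A match is kept only if the nearest distance is clearly smaller than the
    second-nearest distance (Lowe's ratio test), which filters out ambiguous
    pairs. desc1 and desc2 are N x D arrays of feature descriptors."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Each returned pair (i, j) is a candidate feature point matching pair between the two images.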
After the matching pairs of the feature points are obtained, the calibration information of the first camera and the second camera can be determined in an optimized solving mode according to the preset error function of online correction of the feature points.
In a specific example, fig. 3 is a schematic diagram of an epipolar geometry constraint provided by an embodiment of the present application. As shown in fig. 3, point P is a spatial point on the target object, the projection of point P on the first image 301 in the imaging plane of the first camera is P1, and the projection of point P on the second image 302 in the imaging plane of the second camera is P2. The camera center of the first camera is O1, and the camera center of the second camera is O2. The plane determined by the three points O1, O2 and P is called the epipolar plane, the line segment O1O2 is called the baseline, and the intersection lines P1e1 and P2e2 between the epipolar plane and the two image planes are called the epipolar lines. The epipolar geometric constraint constrains the spatial positional relationship of the feature points of the first and second images; for example, it can constrain feature points on the first image 301 and the second image 302 to be aligned in the horizontal direction.
In the case where the feature points of the first image 301 and the feature points of the second image 302 are aligned in the horizontal direction, the feature points satisfy formula (2).
X_r^T · F · X_l = 0        (2)
where F is the fundamental matrix, X_l is the homogeneous form of the feature point coordinates in the first image, and X_r is the homogeneous form of the feature point coordinates in the second image.
When the feature points of the first image 301 and the feature points of the second image 302 are aligned in the horizontal direction, F takes the special form shown in formula (3).
F = [ 0   0   0
      0   0  -1
      0   1   0 ]        (3)
In the embodiment of the present application, an error function for online correction can be constructed from the feature point matching pairs and formula (2). Illustratively, the error function for online correction may be as shown in formula (4). The calibration information of the first camera and the second camera is obtained by solving this error function.
E(K_l, R_l, K_r, R_r) = Σ ( (K_r R_r K_r^{-1} X_r)^T F (K_l R_l K_l^{-1} X_l) )^2        (4)
Where X_l is the homogeneous form of the coordinates of the feature point in the first image, K_l is the first internal reference matrix of the first camera, R_l is the first rotation matrix of the first camera, X_r is the homogeneous form of the coordinates of the feature point in the second image, K_r is the second internal reference matrix of the second camera, and R_r is the second rotation matrix of the second camera.
K_l, R_l, K_r and R_r are the target optimization variables of the preset optimization function. Optimized calibration information is obtained by applying the preset optimization function to the error function corrected online, where the optimized calibration information includes a first target internal reference matrix K_L0 of the first camera, a first target rotation matrix R_L0 of the first camera, a second target internal reference matrix K_R0 of the second camera, and a second target rotation matrix R_R0 of the second camera.
Illustratively, the first internal reference matrix of the first camera is shown in equation (5).
K_l = | f  0  x_c1 |
      | 0  f  y_c1 |
      | 0  0  1    |        (5)
Where f is the focal length of the first camera and (x_c1, y_c1) are the principal point coordinates of the first camera.
According to the embodiment of the application, the calibration information of the first camera and the second camera can be obtained by optimally solving the online-correction error function shown in formula (4).
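The error function can be sketched as follows. This is one plausible reading of formula (4), assuming the rectifying transform of each view takes the common form K R K^{-1}; the patent does not spell this out, so treat it as an illustrative assumption rather than the patent's exact formulation:

```python
import numpy as np

# Rectified-pair fundamental matrix of formula (3).
F_RECT = np.array([[0.0, 0.0,  0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])

def rectify_point(x, K, R):
    """Map an image point into the rectified frame: x' ~ K R K^{-1} x."""
    xp = K @ R @ np.linalg.inv(K) @ x
    return xp / xp[2]

def online_rectification_error(pairs, K_l, R_l, K_r, R_r):
    """Sum of squared epipolar residuals over the matched feature point
    pairs; the preset optimization function would minimise this over
    K_l, R_l, K_r, R_r to obtain the calibration information."""
    err = 0.0
    for x_l, x_r in pairs:
        err += float(rectify_point(x_r, K_r, R_r) @ F_RECT
                     @ rectify_point(x_l, K_l, R_l)) ** 2
    return err
```

In practice this objective would be handed to a nonlinear least-squares solver (e.g. Levenberg–Marquardt) with the intrinsic and rotation parameters suitably parameterised.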
According to the embodiment of the application, the first image and the second image are subjected to preliminary online correction based on the calibration information, and the alignment of the homonymous points in the first target image and the second target image on the horizontal line is ensured.
In a specific example, according to the calibration information, performing correction processing on the first image and the second image to obtain a first target image and a second target image, which may specifically include: generating an image transformation matrix between the first image and the second image according to the calibration information; and correcting the first image and the second image according to the image transformation matrix to obtain a first target image and a second target image.
Illustratively, the calibration information of the first camera and the second camera includes a first target internal reference matrix of the first camera, a first target rotation matrix of the first camera, a second target internal reference matrix of the second camera, and a second target rotation matrix of the second camera. Thereby, an image transformation matrix between the first image and the second image is generated based on the calibration information. According to the image transformation matrix, image transformation processing such as translation, rotation, scaling and the like can be performed on the first image or the second image so as to finish correction processing on the first image and the second image and obtain a first target image and a second target image.
According to the embodiment of the application, the first image and the second image are corrected based on the calibration information, and the fact that the same-name points in the first target image and the second target image are aligned on the horizontal line is guaranteed.
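The correction step above can be sketched in code. The K R K^{-1} form of the transformation matrix is an assumption for illustration (the patent only says the matrix is generated from the calibration information), and the coordinates below are invented example values:

```python
import numpy as np

def rectifying_homography(K_orig, K_target, R_target):
    """Image transformation matrix for one view: maps original pixels to
    corrected pixels via x' ~ K_target R_target K_orig^{-1} x."""
    return K_target @ R_target @ np.linalg.inv(K_orig)

def apply_homography(H, u, v):
    """Transform a single pixel (u, v) by the homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# With real image data, the translation/rotation/scaling warp itself would be
# applied by e.g. cv2.warpPerspective(first_image, H, (width, height)).
```

When the target calibration equals the original calibration and the rotation is the identity, the homography is the identity and every pixel maps to itself, which is a quick sanity check on the construction.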
In the image processing method provided by the embodiment of the application, the execution main body can be an image processing device. The embodiment of the present application takes a method for executing image processing by an image processing apparatus as an example, and describes an apparatus for image processing provided in the embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in connection with fig. 4, the image processing apparatus 400 may include:
the correction module 410 is configured to, when the first camera captures a target object to obtain a first image and the second camera captures a target object to obtain a second image, perform correction processing on the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image;
the processing module 420 is configured to determine a parallax offset between the first image and the second image according to a first region where the target object in the first image is located and a second region where the target object in the second image is located;
the processing module 420 is further configured to perform alignment processing on the first target image and the second target image according to the parallax offset.
According to the embodiment of the application, the first image and the second image are corrected according to the calibration information of the first camera and the second camera, so that the first image and the second image can be preliminarily corrected on line, and a first target image and a second target image are obtained. And then, determining the parallax offset of the first image and the second image according to the first area where the target object is located in the first image and the second area where the target object is located in the second image, and aligning the first target image and the second target image according to the parallax offset, so that the influence of a camera on the image alignment effect in the anti-shake process is effectively reduced, and the first target image and the second target image which are small in alignment error and accurate in parallax are obtained.
In some embodiments, the apparatus further comprises:
the acquisition module is used for acquiring the area ratio of the first area to the first image to obtain first area information;
the acquisition module is further used for acquiring the area ratio of the second area to the second image to obtain second area information;
the processing module 420 is further configured to determine a parallax offset between the first image and the second image according to the first area information, the second area information, and the corresponding relationship between the area and the depth.
According to the embodiment of the application, by setting the corresponding relation between the area and the depth, after the first area information and the second area information of the target object are obtained, the parallax offset between the first target image and the second target image can be rapidly calculated, and therefore the image processing speed is improved.
In some embodiments, the processing module 420 is further configured to determine first depth information of the target object in the first image according to the first area information and the corresponding relationship between the area and the depth;
the processing module 420 is further configured to determine second depth information of the target object in the second image according to the second area information and the corresponding relationship between the area and the depth;
the processing module 420 is further configured to determine a first parallax of the first target image according to the first depth information;
the processing module 420 is further configured to determine a second parallax of the second target image according to the second depth information;
the processing module 420 is further configured to obtain a parallax offset between the first image and the second image according to a difference between the first parallax and the second parallax.
According to the embodiment of the application, the on-line correction and the parallax compensation of the shot images of different cameras are combined, the images are aligned, the anti-shake effect of the shooting process of the electronic equipment can not be influenced, and the aligned image pair with small alignment error and accurate parallax is obtained.
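The area-to-disparity chain above can be sketched as follows. The inverse-square area-to-depth model and all numeric constants (k, focal length, baseline) are illustrative assumptions — in the patent the area-depth correspondence is a preset relationship, presumably obtained by per-device calibration:

```python
import math

def depth_from_area_ratio(area_ratio, k=1.5):
    """Hypothetical area-to-depth correspondence: the imaged area of an
    object scales roughly with 1/Z^2, so Z = k / sqrt(area_ratio).
    k = 1.5 is a placeholder calibration constant, not from the patent."""
    return k / math.sqrt(area_ratio)

def disparity(depth_m, focal_px=800.0, baseline_m=0.02):
    """Standard stereo disparity d = f * B / Z; the focal length and
    baseline are assumed example values."""
    return focal_px * baseline_m / depth_m

def parallax_offset(area_ratio_1, area_ratio_2):
    """Difference of the two parallaxes, mirroring the module chain:
    area info -> depth info -> parallax, then offset = d1 - d2."""
    d1 = disparity(depth_from_area_ratio(area_ratio_1))
    d2 = disparity(depth_from_area_ratio(area_ratio_2))
    return d1 - d2
```

Identical area ratios yield a zero offset; a larger area ratio in the first image means a nearer object in that view and hence a larger first parallax, giving a positive offset.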
In some embodiments, the processing module 420 is further configured to identify the first image, determine a first region in the first image where the target object is located;
the processing module 420 is further configured to determine first area information according to the area of the first region and the area of the first image.
According to the embodiment of the application, the position of the target object in the shot image is determined by detecting the shot image in real time, so that the accuracy of calculating the area information can be improved. Meanwhile, the calculation process is simple, and the image processing speed can be improved.
In some embodiments, the obtaining module is further configured to obtain a matched feature point pair between the first image and the second image;
the processing module 420 is further configured to determine calibration information between the first camera and the second camera according to the feature point pairs;
the processing module 420 is further configured to perform correction processing on the first image and the second image according to the calibration information to obtain a first target image and a second target image.
According to the embodiment of the application, the first image and the second image are corrected based on the calibration information, and the fact that the same-name points in the first target image and the second target image are aligned on the horizontal line is guaranteed.
In some embodiments, the processing module 420 is further configured to generate an image transformation matrix between the first image and the second image according to the calibration information;
the processing module 420 is further configured to perform correction processing on the first image and the second image according to the image transformation matrix to obtain a first target image and a second target image.
According to the embodiment of the application, the first image and the second image are corrected based on the calibration information, and the fact that the same-name points in the first target image and the second target image are aligned on the horizontal line is guaranteed.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not specifically limited in this regard.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in an embodiment of the present application, and includes a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and when the program or the instruction is executed by the processor 501, the steps of the embodiment of the image processing method are implemented, and the same technical effects can be achieved, and are not described again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 610 is configured to, when a first camera captures a target object to obtain a first image and a second camera captures a target object to obtain a second image, perform correction processing on the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image;
the processor 610 is configured to determine a parallax offset between the first image and the second image according to a first region where the target object in the first image is located and a second region where the target object in the second image is located;
the processor 610 is further configured to perform alignment processing on the first target image and the second target image according to the parallax offset.
According to the embodiment of the application, the first image and the second image are corrected according to the calibration information of the first camera and the second camera, so that the first image and the second image can be preliminarily corrected on line, and a first target image and a second target image are obtained. And then, determining the parallax offset of the first image and the second image according to the first area where the target object is located in the first image and the second area where the target object is located in the second image, and aligning the first target image and the second target image according to the parallax offset, so that the influence of a camera on the image alignment effect in the anti-shake process is effectively reduced, and the first target image and the second target image which are small in alignment error and accurate in parallax are obtained.
In some embodiments, the processor 610 is configured to obtain an area ratio of the first region to the first image, and obtain first area information;
the processor 610 is further configured to obtain an area ratio between the second region and the second image, so as to obtain second area information;
the processor 610 is further configured to determine a parallax offset between the first image and the second image according to the first area information, the second area information, and the corresponding relationship between the area and the depth.
According to the embodiment of the application, by setting the corresponding relation between the area and the depth, after the first area information and the second area information of the target object are obtained, the parallax offset between the first target image and the second target image can be rapidly calculated, so that the image processing speed is increased.
In some embodiments, the processor 610 is further configured to determine first depth information of the target object in the first image according to the first area information and the corresponding relationship between the area and the depth;
the processor 610 is further configured to determine second depth information of the target object in the second image according to the second area information and the corresponding relationship between the area and the depth;
a processor 610, further configured to determine a first disparity of the first target image according to the first depth information;
the processor 610 is further configured to determine a second parallax of the second target image according to the second depth information;
the processor 610 is further configured to obtain a parallax offset between the first image and the second image according to a difference between the first parallax and the second parallax.
According to the embodiment of the application, the on-line correction and the parallax compensation of the shot images of different cameras are combined, the images are aligned, the anti-shake effect of the shooting process of the electronic equipment can not be influenced, and the aligned image pair with small alignment error and accurate parallax is obtained.
In some embodiments, the processor 610 is further configured to identify a first image, determine a first region in the first image where the target object is located;
the processor 610 is further configured to determine first area information according to an area of the first region and an area of the first image.
According to the embodiment of the application, the position of the target object in the shot image is determined by detecting the shot image in real time, so that the accuracy of calculating the area information can be improved. Meanwhile, the calculation process is simple, and the image processing speed can be improved.
In some embodiments, the processor 610 is further configured to obtain a matching feature point pair between the first image and the second image;
the processor 610 is further configured to determine calibration information between the first camera and the second camera according to the feature point pairs;
the processor 610 is further configured to perform correction processing on the first image and the second image according to the calibration information to obtain a first target image and a second target image.
According to the embodiment of the application, the first image and the second image are corrected based on the calibration information, and the fact that the same-name points in the first target image and the second target image are aligned on the horizontal line is guaranteed.
In some embodiments, the processor 610 is further configured to generate an image transformation matrix between the first image and the second image according to the calibration information;
the processor 610 is further configured to perform correction processing on the first image and the second image according to the image transformation matrix to obtain a first target image and a second target image.
According to the embodiment of the application, the first image and the second image are corrected based on the calibration information, and the fact that the same-name points in the first target image and the second target image are aligned on the horizontal line is guaranteed.
It is to be understood that, in the embodiment of the present application, the input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics Processing Unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. A touch panel 6071, also referred to as a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, wherein the first storage area may store an operating system, an application program or an instruction required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 609 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing image processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
under the condition that a first camera shoots a target object to obtain a first image and a second camera shoots the target object to obtain a second image, correcting the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image;
determining the parallax offset of the first image and the second image according to a first area where the target object is located in the first image and a second area where the target object is located in the second image;
and aligning the first target image and the second target image according to the parallax offset.
2. The method according to claim 1, wherein the determining the parallax offset of the first image and the second image according to a first area where the target object is located in the first image and a second area where the target object is located in the second image comprises:
acquiring the area ratio of the first area to the first image to obtain the first area information;
acquiring the area ratio of the second area to the second image to obtain second area information;
and determining the parallax offset between the first image and the second image according to the first area information, the second area information and the corresponding relation between the area and the depth.
3. The method according to claim 2, wherein determining the parallax offset between the first image and the second image according to the first area information, the second area information, and the area-to-depth correspondence comprises:
determining first depth information of a target object in the first image according to the first area information and the corresponding relation between the area and the depth;
determining second depth information of the target object in the second image according to the second area information and the corresponding relation between the area and the depth;
determining a first parallax of the first target image according to the first depth information;
determining a second parallax of the second target image according to the second depth information;
and obtaining the parallax offset between the first image and the second image according to the difference value of the first parallax and the second parallax.
4. The method according to claim 1, wherein the correcting the first image and the second image according to the calibration information of the first camera and the second camera to obtain a first target image and a second target image comprises:
acquiring a matched characteristic point pair between a first image and the second image;
determining calibration information between the first camera and the second camera according to the characteristic point pairs;
and correcting the first image and the second image according to the calibration information to obtain a first target image and a second target image.
5. The method according to claim 4, wherein the performing a correction process on the first image and the second image according to the calibration information to obtain a first target image and a second target image comprises:
generating an image transformation matrix between the first image and the second image according to the calibration information;
and correcting the first image and the second image according to the image transformation matrix to obtain the first target image and the second target image.
6. An image processing apparatus characterized by comprising:
the calibration module is used for calibrating the first image and the second image according to calibration information of the first camera and the second camera to obtain a first target image and a second target image under the condition that the first camera shoots a target object to obtain the first image and the second camera shoots the target object to obtain the second image;
the processing module is used for determining the parallax offset of the first image and the second image according to a first area where the target object in the first image is located and a second area where the target object in the second image is located;
the processing module is further configured to perform alignment processing on the first target image and the second target image according to the parallax offset.
7. The apparatus of claim 6, further comprising:
the acquisition module is used for acquiring the area ratio of the first area to the first image to obtain the first area information;
the obtaining module is further configured to obtain an area ratio between the second region and the second image to obtain the second area information;
the processing module is further configured to determine a parallax offset between the first image and the second image according to the first area information, the second area information, and a corresponding relationship between a preset area and a preset depth.
8. The apparatus of claim 7,
the processing module is further configured to determine first depth information of a target object in the first image according to the first area information and the corresponding relationship between the area and the depth;
the processing module is further configured to determine second depth information of the target object in the second image according to the second area information and the corresponding relationship between the area and the depth;
the processing module is further configured to determine a first parallax of the first target image according to the first depth information;
the processing module is further configured to determine a second parallax of the second target image according to the second depth information;
the processing module is further configured to obtain a parallax offset between the first image and the second image according to a difference between the first parallax and the second parallax.
9. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 5.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to any one of claims 1 to 5.
CN202210874779.4A 2022-07-25 2022-07-25 Image processing method, image processing device, electronic equipment and readable storage medium Pending CN115278071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210874779.4A CN115278071A (en) 2022-07-25 2022-07-25 Image processing method, image processing device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN115278071A true CN115278071A (en) 2022-11-01

Family

ID=83768807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210874779.4A Pending CN115278071A (en) 2022-07-25 2022-07-25 Image processing method, image processing device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115278071A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109600548A (en) * 2018-11-30 2019-04-09 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110490938A (en) * 2019-08-05 2019-11-22 Oppo广东移动通信有限公司 For verifying the method, apparatus and electronic equipment of camera calibration parameter
CN111225201A (en) * 2020-01-19 2020-06-02 深圳市商汤科技有限公司 Parallax correction method and device, and storage medium
CN111292380A (en) * 2019-04-02 2020-06-16 展讯通信(上海)有限公司 Image processing method and device


Similar Documents

Publication Publication Date Title
CN112017216B (en) Image processing method, device, computer readable storage medium and computer equipment
CN111325798B (en) Camera model correction method, device, AR implementation equipment and readable storage medium
CN113409391B (en) Visual positioning method and related device, equipment and storage medium
CN111340737B (en) Image correction method, device and electronic system
CN111445537B (en) Calibration method and system of camera
CN112470192A (en) Dual-camera calibration method, electronic device and computer-readable storage medium
CN114022560A (en) Calibration method and related device and equipment
CN108717704B (en) Target tracking method based on fisheye image, computer device and computer readable storage medium
CN113393505A (en) Image registration method, visual positioning method, related device and equipment
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN112233189A (en) Multi-depth camera external parameter calibration method and device and storage medium
CN113838151A (en) Camera calibration method, device, equipment and medium
CN111432117B (en) Image rectification method, device and electronic system
CN111741223B (en) Panoramic image shooting method, device and system
CN110750094A (en) Method, device and system for determining pose change information of movable equipment
CN111353945B (en) Fisheye image correction method, device and storage medium
CN115278071A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN114785957A (en) Shooting method and device thereof
CN114882194A (en) Method and device for processing room point cloud data, electronic equipment and storage medium
CN115086625A (en) Correction method, device and system of projection picture, correction equipment and projection equipment
CN115205419A (en) Instant positioning and map construction method and device, electronic equipment and readable storage medium
CN115578466A (en) Camera calibration method and device, computer readable storage medium and electronic equipment
CN112446928B (en) External parameter determining system and method for shooting device
CN112911091B (en) Parameter adjusting method and device of multipoint laser and electronic equipment
TWI834495B (en) Object posture recognition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination