CN112689850A - Image processing method, image processing apparatus, imaging device, movable carrier, and storage medium


Info

Publication number: CN112689850A
Application number: CN202080005077.1A
Authority: CN (China)
Prior art keywords: images, group, image, panoramic image, component
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李广, 朱传杰, 李静, 郭浩铭
Current Assignee: SZ DJI Technology Co Ltd
Original Assignee: SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN112689850A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing apparatus, an imaging device, a movable carrier, and a storage medium. The method comprises the following steps: acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed; determining feature points of each image in the second group of images according to feature points of each image in the first group of images; and stitching the second group of images based on the feature points of the images in the second group to obtain a second panoramic image. Images that are rich in detail and from which feature points are easily extracted thus guide feature point extraction for images with severe detail loss, so that more feature points can be extracted from the latter, enabling such images to be stitched and improving the quality of the stitched result.

Description

Image processing method, image processing apparatus, imaging device, movable carrier, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an imaging device, a movable carrier, and a storage medium.
Background
Generally, a plurality of images captured by a sensor at different angles can be stitched to obtain a panoramic image with a large viewing angle or a 360-degree full viewing angle. When stitching multiple images, feature points are first extracted from each image, the feature points are then registered, and the images are fused based on the registration result to obtain a stitched large-viewing-angle image. Accurate extraction of the feature points is a precondition for ensuring the quality of the stitched large-viewing-angle image.
However, some sensors capture images with little detail, low resolution, or severe noise, for example grayscale images such as infrared and ultraviolet images, so that few feature points can be extracted from them. If such images are still stitched with the conventional method, the stitched result is poor. There is therefore a need for an improved stitching method for images with severe loss of detail information.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an image processing apparatus, an imaging device, a movable carrier, and a storage medium.
According to a first aspect of the present application, there is provided an image processing method, the method comprising:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
determining feature points of the second group of images according to feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
According to a second aspect of the present application, there is provided an image processing method, the method comprising:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
fusing the images in the first group of images with the images in the second group of images respectively to obtain a third group of images;
determining feature points of the third group of images according to feature points of the first group of images;
and stitching the images in the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
According to a third aspect of the present application, there is provided an image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein the processor, when executing the computer instructions, implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
determining feature points of the second group of images according to feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
According to a fourth aspect of the present application, there is provided an image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein the processor, when executing the computer instructions, implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
fusing the images in the first group of images with the images in the second group of images respectively to obtain a third group of images;
determining feature points of the third group of images according to feature points of the first group of images;
and stitching the images in the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
According to a fifth aspect of the present application, there is provided an imaging device comprising a first image sensor, a second image sensor, and an image processing apparatus, the first image sensor and the second image sensor being connected to the image processing apparatus and fixed in relative position, the image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein the processor, when executing the computer instructions, implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images are acquired through the first image sensor and the second group of images are acquired through the second image sensor;
determining feature points of the second group of images according to feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
According to a sixth aspect of the present application, there is provided a movable carrier comprising a body and an imaging device mounted to the body, the imaging device comprising a first image sensor, a second image sensor, and an image processing apparatus, the first image sensor and the second image sensor being connected to the image processing apparatus and fixed in relative position, the image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein the processor, when executing the computer instructions, implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images are acquired through the first image sensor and the second group of images are acquired through the second image sensor;
determining feature points of the second group of images according to feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
According to a seventh aspect of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any of the image processing methods of the present application.
In the present application, two groups of images are respectively acquired by two sensors whose relative positions are fixed, and the feature points of the second group of images are determined from the feature points of the first group of images, so that the second group of images can be stitched according to those feature points. Images that are rich in detail and from which feature points are easily extracted thus guide feature point extraction for images with severe detail loss, so that more feature points can be extracted from the latter, enabling such images to be stitched and improving the quality of the stitched result.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of image stitching according to an embodiment of the present application.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of image stitching and fusion according to an embodiment of the present application.
Fig. 5 is a schematic diagram of image stitching and fusion according to another embodiment of the present application.
Fig. 6 is a block diagram of a logical structure of an image processing apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of a logical structure of an imaging device according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a movable carrier according to an embodiment of the present application.
Fig. 9 is a block diagram of a logical structure of a movable carrier according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
A panoramic image is an image with a large viewing angle or a 360° full viewing angle. Generally, a panoramic image can be obtained by stitching a plurality of images captured by a sensor at different angles. Images captured at different angles share common regions; these common regions need to be found and matched, and the images are then fused over them to obtain the panoramic image. A typical stitching procedure is as follows. Feature points are extracted from an image; a feature point is generally a pixel whose value differs strongly from that of its surrounding pixels, such as the pixels corresponding to corner points, inflection points, and boundary points of a three-dimensional object. For each feature point, the matching feature point is then found in another image, that is, the feature points are registered. Based on the feature points and their matches, the spatial geometric relationship between the two images is determined, i.e., the correspondence between pixel coordinates in one image and the matching pixel coordinates in the other, and the two images are fused based on this relationship to obtain a stitched image. As shown in fig. 1, image 1 and image 2 are captured from different angles. The feature points extracted from image 1 are A1, B1, C1, and D1, so the matching feature points A2 of A1, B2 of B1, C2 of C1, and D2 of D1 must be found in image 2. The spatial geometric relationship between the two images is then determined based on A1, B1, C1, D1, A2, B2, C2, and D2, and image 1 and image 2 are fused based on it to obtain the stitched image. Of course, in practice a large number of feature points must be extracted to achieve a good stitching effect, and accurate extraction and registration of feature points is the precondition for ensuring the quality of the stitched panoramic image.
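To make the pipeline above concrete, the following Python sketch shows how detection, registration, and fusion of two overlapping images might look with OpenCV. It is an illustration only, not the patented method itself; all function and variable names are chosen for this example:

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Illustrative two-image stitching: detect, register, fuse."""
    # 1. Extract feature points (corners, boundaries, inflection points, ...)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Register: find the matching feature points in the other image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # 3. Spatial geometric relationship between the two images
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Fuse: warp image 1 into image 2's frame, then paste image 2 over it
    h, w = img2.shape[:2]
    pano = cv2.warpPerspective(img1, H, (w * 2, h))
    pano[0:h, 0:w] = img2
    return pano
```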
However, for some sensors, due to their own characteristics, imaging principles, or other factors, the captured images have little detail, low resolution, or heavy noise; for such images it is often difficult to extract feature points accurately, few feature points are obtained, and accurate stitching is therefore difficult. For example, a grayscale image, and in particular an infrared image, is generated by an infrared sensor detecting the infrared radiation of objects. Owing to its imaging principle, an infrared image suffers severe loss of detail information, low resolution, and heavy noise, and its field of view is often small. Obtaining a large-viewing-angle infrared image therefore requires stitching, but because the images lack detail, feature point detection yields few or even no feature points, and infrared images cannot be stitched accurately by conventional methods.
To solve the above problem, the present application provides an image processing method. Specifically, as shown in fig. 2, the method may include the following steps:
S202, acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
S204, determining feature points of the second group of images according to feature points of the first group of images;
S206, stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
In the present application, two groups of images can be acquired from different angles by two sensors whose relative positions are fixed. The two groups are referred to as the first group of images and the second group of images, where the first group of images contains richer detail information than the second group, so that feature points are easier to detect and extract from it. For example, in some embodiments the first group of images may have a higher resolution than the second group, or may have less noise and more detail information.
The two groups of images in the present application may be acquired by two image sensors of different types, for example an infrared sensor, a visible light sensor, a TOF (time-of-flight) sensor, or an ultraviolet sensor. In some embodiments, the first group of images may be visible light images and the second group infrared images; compared with infrared images, visible light images have more detail, less noise, and clearer image quality, so feature points are easier to extract from them. Of course, in some embodiments the first group of images may be visible light images and the second group ultraviolet images or TOF images. The first group of images is not limited, as long as feature points are easier to extract from it than from the second group.
The image processing method of the present application can be used in a device with an image acquisition function, for example a device carrying two sensors with fixed relative positions, such as a camera with both an infrared sensor and a visible light sensor. The method can also be used with a combination of two devices carrying different sensors, for example an infrared camera and a visible light camera whose positions are kept relatively fixed, e.g., both mounted on the same gimbal. Of course, the image processing method of the present application may also be used in image processing equipment that performs only image processing and has no image acquisition function, such as a mobile phone, tablet, notebook computer, or cloud server, which performs post-processing after receiving the first and second groups of images from the image acquisition device.
It should be noted that the present application only takes the case of two sensors acquiring two groups of images as an example; the method applies equally to three or more sensors, and the specific implementation is consistent with the two-sensor case.
For the second group of images, because detail information is scarce, only a limited number of feature points can be detected, or none at all, so the images either cannot be stitched or yield a poorly stitched result. The present application therefore uses the first group of images, which is richer in detail information, to guide the stitching of the second group. First, feature points can be extracted from each image in the first group of images; these are mostly pixels corresponding to corners, boundaries, edges, and inflection points of three-dimensional objects, and can be extracted with a general feature point detection algorithm such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or Oriented FAST and Rotated BRIEF (ORB) algorithm. After the feature points of the images in the first group are determined, the feature points of the second group of images can be determined based on them: since the two groups are acquired by sensors whose relative positions are fixed, the mapping points of the first group's feature points on the second group of images can be determined from the spatial positional relationship of the two sensors and used as the feature points of the second group of images.
In some embodiments, when determining the feature points of the second group of images from those of the first group, the image in the second group corresponding to a frame in the first group may first be determined; the feature points of that corresponding image are then determined from the feature points of the first-group frame and the mapping relationship between the first group of images and the second group of images. The corresponding image of each first-group image in the second group is the image captured by the other sensor when the device carrying or fixing the two sensors is at a given position. Taking two sensors fixed to the same gimbal as an example, if the first image F1 of the first group of images corresponds to the second image F2 of the second group, then F1 and F2 are the images respectively captured by the two sensors when the gimbal has rotated to a certain position. Of course, F1 and F2 may be acquired by the two sensors at the same time, or at a certain time interval.
The mapping relationship between the first group of images and the second group of images may be determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor. For example, it may be determined from the intrinsic parameters and the extrinsic parameters of both sensors. The internal and external parameters of the two sensors can be calibrated in advance, for example at the factory, using existing calibration methods such as the Zhang Zhengyou calibration method. The mapping relationship between the two groups of images can likewise be calibrated in advance, or computed on demand by the image processing device from the pre-calibrated internal and external parameters. For example, if the two sensors belong to the same device, the mapping relationship between the two groups of images they acquire may be calibrated when the device leaves the factory; if they are two independent devices fixed in some manner with unchanged relative positions, e.g., both fixed to the same gimbal, the mapping relationship may be computed on the fly by the image processing device.
The mapping relationship represents the correspondence between the pixels of the first image F1 in the first group of images and the corresponding pixels of the second image F2 in the second group, again assuming the two sensors are fixed to the same gimbal and F1 and F2 are the images respectively acquired when the gimbal has rotated to a certain position. If F1 and F2 are acquired at the same time, the environmental conditions during acquisition (such as lighting, temperature, and humidity) remain largely consistent, and the influence of object-distance changes on images captured by an infrared or ultraviolet sensor is reduced. Alternatively, F1 and F2 may be acquired within a certain time interval, for example one second apart, in which case the correspondence between pixels can still be obtained through conversion. An overly long interval may cause large differences in shooting environment and parallax between F1 and F2 and thus make the determined feature points inaccurate, but the mapping relationship between F1 and F2 acquired at different times can still be obtained through several conversions.
In certain embodiments, the mapping relationship may be characterized by a homography matrix and/or an affine matrix, where the homography matrix may be determined by the intrinsic parameter matrices of the two sensors and a transformation matrix of the coordinate system of one sensor relative to the coordinate system of the other sensor.
In some embodiments, a first image acquired simultaneously with a second image of the second group of images may be identified in the first group of images; the mapping points of the first image's feature points in the second image are then determined according to the mapping relationship between the two groups of images, and taken as the feature points of the second image. For example, in some embodiments, if the two sensors are fixed to the same gimbal and each sensor captures one image simultaneously whenever the gimbal rotates to a given angle, the two groups of images correspond one-to-one in acquisition time, so the first image acquired simultaneously with a given second image can be identified directly. As a concrete example, assume two sensors with fixed relative positions, a first sensor and a second sensor, where the first group of images is acquired by the first sensor and the second group by the second sensor. If the image acquired by the first sensor at time T is the first image, the image acquired by the second sensor at time T is the second image, and a feature point P1 is extracted from the first image, then the corresponding feature point P2 on the second image can be determined using the homography matrix H, where H can be obtained according to formula (1) and the pixel coordinates of P2 according to formula (2):
H = K2 · R^(-1) · K1^(-1)  (formula 1)
P2 = H · P1  (formula 2)

wherein:
K1 denotes the intrinsic parameter matrix of the first sensor;
K2 denotes the intrinsic parameter matrix of the second sensor;
R denotes the rotation matrix from the second sensor coordinate system to the first sensor coordinate system;
H denotes the homography matrix from the first sensor coordinate system to the second sensor coordinate system;
P1 denotes the (homogeneous) coordinates of a feature point of the first image;
P2 denotes the corresponding point of P1 on the second image.
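A minimal numerical sketch of formulas (1) and (2) is given below, assuming the intrinsic matrices K1 and K2 and the rotation R have been calibrated in advance; the function name and array layout are illustrative assumptions, not prescribed by the patent:

```python
import numpy as np

def map_feature_points(pts1, K1, K2, R):
    """Map feature points of the first image onto the second image.

    pts1: (N, 2) array of pixel coordinates in the first image.
    K1, K2: intrinsic matrices of the first and second sensor.
    R: rotation from the second sensor coordinate system to the first.
    Assumes an effectively infinite object distance (pure-rotation model).
    """
    H = K2 @ np.linalg.inv(R) @ np.linalg.inv(K1)       # formula (1)
    pts_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    mapped = (H @ pts_h.T).T                            # formula (2): P2 = H P1
    return mapped[:, :2] / mapped[:, 2:3]               # normalize to pixels
```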
in this way, the feature points of the images in the second group of images can be determined, and then the images in the second group of images can be stitched according to the feature points of the images in the second group of images to obtain a stitched panoramic image, which is hereinafter collectively referred to as a second panoramic image.
In some embodiments, when stitching the images of the second group of images according to their feature points to obtain the second panoramic image, the images in the second group may first be registered based on those feature points, and then synthesized according to the registration result. The second group of images consists of multiple images captured by the sensor at different angles, and the same three-dimensional object can be detected in different images, so the matching feature point of a feature point in another image is determined from the feature points of one image. Feature point registration technology is mature, and a general matching algorithm such as SIFT, SURF, or ORB can determine the matching feature points of one image in another. The spatial geometric correspondence between each pair of images can then be determined from the feature points and their matches, and global optimization can be performed over the pairwise geometric relationships to obtain the final registration result. Registration areas (e.g., overlapping areas) between the images are then determined from this result, and image synthesis is performed over them to obtain the stitched second panoramic image. Synthesis can use traditional methods, mainly applying exposure compensation to the images and blending at the stitching seams of adjacent images; for details refer to conventional image synthesis techniques, which are not elaborated here.
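As a rough illustration of how pairwise registration results might be combined into one panorama (the patent leaves the concrete registration and synthesis algorithms open, and real implementations add exposure compensation and seam blending), consider the following sketch:

```python
import cv2
import numpy as np

def compose_panorama(images, pairwise_H, canvas_size):
    """Chain pairwise homographies (image i+1 -> image i) into global
    transforms w.r.t. image 0, then warp and paste each image.
    Assumes 3-channel images; real systems blend seams instead of pasting."""
    global_H = [np.eye(3)]
    for H in pairwise_H:
        global_H.append(global_H[-1] @ H)

    pano = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=images[0].dtype)
    for img, H in zip(images, global_H):
        warped = cv2.warpPerspective(img, H, canvas_size)
        mask = warped.sum(axis=2) > 0      # pixels actually covered
        pano[mask] = warped[mask]
    return pano
```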
In some embodiments, the images in the first group of images may likewise be stitched according to the feature points of the first group of images to obtain a first panoramic image.
Since the second group of images suffers severe loss of detail information, for example infrared images, stitching them into a panorama only enlarges the viewing angle; the detail information remains scarce, which limits the applications of the panoramic image. To solve this problem, after stitching, the rich detail information of the first group of images can be used to further enhance the detail of the second group, making it richer.
Therefore, in some embodiments, in order to enhance the detail of the stitched second panoramic image, the first panoramic image obtained by stitching the first group of images may be fused with the second panoramic image obtained by stitching the second group of images to obtain a third panoramic image with richer detail information.
Because the first group of images and the second group of images are acquired by two different sensors and correspond to different coordinate systems, before fusion the two groups can be mapped to a specified coordinate system according to their mapping relationship. The specified coordinate system may be the coordinate system of the first group of images, the coordinate system of the second group, or any other coordinate system, as long as the coordinate systems of the two groups are unified; the present application is not limited in this respect. The mapping is similar to that of the feature points: for example, to map the first group of images to the coordinate system of the second group, the pixel coordinates of each first-group image are mapped to the corresponding coordinates in the second group's coordinate system according to the homography matrix from the first group to the second, yielding the mapped images. In some examples, the feature point extraction step may be performed either before or after mapping the two groups of images: extracting feature points before mapping yields more feature points and therefore more accurate registration, while mapping first reduces computation time and resources and speeds up image processing, so the order can be chosen according to actual requirements. After the two groups are mapped to the same coordinate system, the first panoramic image and the second panoramic image are obtained by stitching and then fused. Alternatively, in some embodiments, the first and second panoramic images may first be obtained by stitching and then mapped to the same coordinate system in a similar manner before image fusion.
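A sketch of this coordinate unification step, assuming a single homography H_12 from the first group's coordinate system to the second group's (in general the homography may be obtained per gimbal position from the calibrated parameters; all names here are illustrative):

```python
import cv2

def map_group_to_second_frame(first_group, H_12, out_size):
    """Warp every image of the first group into the coordinate system
    of the corresponding second-group image (out_size is (width, height))."""
    return [cv2.warpPerspective(img, H_12, out_size) for img in first_group]
```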
Of course, in some embodiments the fields of view of the two sensors may differ; for example, the field of view of an infrared sensor is often smaller than that of a visible light sensor. To make the fields of view of the two groups of images coincide during fusion, after mapping the first and second groups to the specified coordinate system, the first group of images or the second group of images may be cropped so that their fields of view coincide. For example, if the field of view of the first group of images is larger than that of the second group, each image of the first group may be cropped so that its field of view coincides with that of the corresponding second-group image. In addition, cropping after mapping reduces the amount of subsequent image processing computation, reduces redundant output, and improves the presentation of the result.
In some embodiments, when fusing the first panoramic image and the second panoramic image, a first component may be extracted from the first panoramic image and fused into the second panoramic image. Taking the first group of images as visible light images and the second group as infrared images, the first panoramic image is a stitched visible light image, which can be represented by YUV components, where the Y component represents the luminance of the image and the UV components represent its chrominance. Since grayscale images carry luminance values, the Y component of the first panoramic image can be extracted and fused into the second panoramic image.
When fusing the first panoramic image into the second panoramic image, the fusion range and fusion strength can be controlled according to actual requirements. For example, during fusion, only the edge pixels of the first panoramic image may be fused into the second panoramic image, or the whole image may be fused, as required. In some embodiments, to control the fusion range, the extraction threshold of the first component of the first panoramic image can be adjusted: for example, pixels whose first component exceeds a preset threshold are extracted and fused with the corresponding pixels of the second panoramic image. Since the edge pixels of a visible light image usually have higher luminance, if only the edges of the first panoramic image are to be fused into the second panoramic image to enhance its edge detail, the pixels whose luminance exceeds a certain threshold can be selected, and the Y component of the fused image at the corresponding pixels is obtained from the Y component of those pixels and the Y component of the corresponding pixels in the second panoramic image. In some embodiments, the fusion strength can also be controlled: for example, the first component of the first panoramic image is fused into the first component of the second panoramic image according to a first weight to obtain the first component of the third panoramic image, where the first weight can be set as required. A larger first weight gives a stronger fusion effect and richer detail in the fused image, e.g., more pronounced edges, stronger luminance, and more easily distinguished object boundaries; a smaller first weight gives a weaker effect. The fusion effect can thus be set flexibly according to actual requirements.
In some embodiments, after the first component of the first panoramic image is fused into the second panoramic image according to the first weight to obtain the first component of the third panoramic image, the second component of the third panoramic image can further be obtained by combining the second component of the first panoramic image with a gain coefficient. For example, for visible light, the first component may be the Y component, and the second component may be the U component, the V component, or both. In some embodiments, the gain coefficient may be obtained from the first component of the third panoramic image and the first component of the first panoramic image, or may be set manually; the present application is not limited in this respect. Taking the first panoramic image as the visible light image and the second panoramic image as the infrared image as an example, the Y component of the visible light image may be fused into the infrared image according to the first weight to obtain the Y component of the fused image, and the UV components of the fused image may then be obtained from the UV components of the visible light image and the gain coefficient. Specifically, the Y component Yb of the fused image can be determined by formula (3):
Yb = w·Y2 + (1 - w)·Y1  (formula 3)

The gain coefficient ratio can be obtained according to formula (4):

ratio = Yb / Y2  (formula 4)

The U component Ub and the V component Vb of the fused image can be obtained according to formula (5) and formula (6), respectively:

Ub = ratio · U2  (formula 5)
Vb = ratio · V2  (formula 6)

wherein:
Y1 denotes the Y component of the infrared image;
Y2, U2, V2 denote the Y, U, V components of the visible light image, respectively;
Yb, Ub, Vb denote the Y, U, V components of the fused image, respectively;
w denotes the first weight;
ratio denotes the gain coefficient.
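Formulas (3) to (6) translate almost directly into array operations. The sketch below assumes both panoramas are already aligned, equal-sized, float-valued YUV arrays; the default weight and threshold are placeholder values, and how the unfused region's chroma is handled is an assumption the patent leaves open:

```python
import numpy as np

def fuse_yuv(ir_yuv, vis_yuv, w=0.5, y_threshold=128.0):
    """Fuse the visible panorama's detail into the infrared panorama.

    ir_yuv, vis_yuv: (H, W, 3) float arrays holding Y, U, V planes.
    w: first weight, i.e. the fusion strength of the visible Y component.
    y_threshold: only visible pixels brighter than this are fused,
                 which controls the fusion range (e.g. edges only).
    """
    Y1 = ir_yuv[..., 0]                     # infrared luminance
    Y2, U2, V2 = vis_yuv[..., 0], vis_yuv[..., 1], vis_yuv[..., 2]

    fused = ir_yuv.copy()
    mask = Y2 > y_threshold                 # fusion range control
    Yb = w * Y2 + (1.0 - w) * Y1            # formula (3)
    fused[..., 0][mask] = Yb[mask]

    ratio = fused[..., 0] / np.maximum(Y2, 1e-6)  # formula (4)
    fused[..., 1] = ratio * U2                    # formula (5)
    fused[..., 2] = ratio * V2                    # formula (6)
    return fused
```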
Of course, in some embodiments the first panoramic image may also be fused directly into the second panoramic image according to a second weight to obtain the third panoramic image: for example, the pixel value of each pixel of the first panoramic image is fused into the pixel value of the corresponding pixel of the second panoramic image, with the first panoramic image's pixel values weighted by the second weight. The above is only an exemplary embodiment; various existing image fusion techniques are applicable in the present application.
In some embodiments, when stitching images with severely missing detail information, in order to obtain a panoramic image with both a larger viewing angle and richer detail, the images in the first group may first be fused with the images in the second group, and stitching is performed after the detail information of the second group has been enhanced. On this basis, the present application also provides another image processing method. As shown in fig. 3, the method includes the following steps:
S302, acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
S304, fusing each image in the first group of images with the corresponding image in the second group of images to obtain a third group of images;
S306, determining feature points of the third group of images according to feature points of the first group of images;
S308, stitching the images of the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
Each image in the first group of images may be fused with its corresponding image in the second group. Taking two sensors fixed to the same gimbal as an example, when the gimbal rotates to a certain position, the two corresponding images are respectively captured by the two sensors, either simultaneously or within a certain time interval.
In some embodiments, before the two groups of images are fused, they may be mapped to a specified coordinate system according to the mapping relationship between the first group of images and the second group of images, where the specified coordinate system may be the coordinate system of the first group or the coordinate system of the second group.
Of course, in some embodiments, if the two groups of images have different fields of view, the group with the larger field of view may be cropped before fusion so that its field of view coincides with that of the other group.
After the third group of images is obtained by fusing the first group of images with the second group, the feature points of the third group can be determined from the feature points of the first group, and the images in the third group are then stitched based on those feature points to obtain a fourth panoramic image. For the specific stitching process, refer to the description of the image processing method above, which is not repeated here.
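Reusing the hypothetical helpers from the earlier sketches, the fuse-then-stitch variant can be summarized as follows (again an illustration under the same assumptions, not a prescribed implementation):

```python
def fuse_then_stitch(first_group, second_group, H_12, w=0.5, y_threshold=128.0):
    """Fig. 3 / fig. 5 variant: enhance each second-group image with its
    corresponding first-group image, then stitch the enhanced (third) group."""
    third_group = []
    for vis, ir in zip(first_group, second_group):
        out_size = (ir.shape[1], ir.shape[0])              # (width, height)
        vis_mapped = map_group_to_second_frame([vis], H_12, out_size)[0]
        third_group.append(fuse_yuv(ir, vis_mapped, w, y_threshold))
    # Feature points are still detected on the first group and mapped onto
    # the third group (formulas (1)-(2)) before registration and synthesis.
    return third_group
```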
To further explain the image processing method of the present application, a specific embodiment is described below.
Due to its imaging principle, an infrared image suffers severe loss of detail information, low resolution, and considerable noise. Moreover, its field of view is relatively small, so obtaining a large-viewing-angle infrared image requires stitching multiple images captured at different angles. Because the detail loss is severe, the feature points that can be extracted are very limited or even nonexistent, so infrared images cannot be stitched directly. To solve these problems, a group of visible light images and a group of infrared images are respectively captured by a visible light sensor and an infrared sensor fixed to the same gimbal with fixed relative positions, and the visible light images, rich in detail and high in resolution, guide the feature point extraction of the infrared images so that they can be stitched into an infrared panoramic image. Furthermore, to enrich the detail of the infrared panoramic image, the visible light images can be used to perform detail enhancement on the infrared images.
Two approaches can be used to obtain a detail-enhanced infrared panoramic image. In the first, shown in fig. 4, the visible light images guide feature point extraction for the infrared images, the infrared images are stitched into an infrared panoramic image, the visible light images are stitched into a visible light panoramic image, and the visible light panoramic image is then fused into the infrared panoramic image for enhancement, yielding the enhanced infrared panoramic image. The specific implementation process is as follows:
1. image acquisition
Visible light images and infrared images are captured at a plurality of gimbal angles by the infrared sensor and the visible light sensor fixed to the gimbal. The two images at the same angle are captured at the same time to minimize differences in exposure and the like.
2. Feature point detection
Feature points are detected in the visible light images. The detection method is not limited and may be any feature point detection method, including but not limited to the SIFT, SURF, and ORB algorithms.
3. Feature point mapping
The feature points of the visible light images detected in step 2 are mapped onto the infrared images. The mapping relationship between the visible light sensor and the infrared sensor can be obtained by pre-calibrating the intrinsic and extrinsic parameters of the cameras, and the mapping methods used include but are not limited to the homography matrix (H matrix) and the affine matrix. After this step, the feature points of the infrared images are obtained. The following is the mapping formula using the H matrix (assuming the object distance is infinite); the H matrix can be obtained according to formula (1), and the coordinates of the feature points on the infrared image according to formula (2):
H = K2 · R^(-1) · K1^(-1)  (formula 1)
P2 = H · P1  (formula 2)

wherein:
K1 denotes the intrinsic parameter matrix of the visible light camera;
K2 denotes the intrinsic parameter matrix of the infrared camera;
R denotes the rotation matrix from the infrared camera coordinate system to the visible light camera coordinate system;
H denotes the homography matrix from the visible light camera coordinate system to the infrared camera coordinate system;
P1 denotes the coordinates of a point in the visible light image;
P2 denotes the corresponding point of P1 on the infrared image.
4. image mapping
This step maps the visible light images into the infrared image coordinate system; alternatively, the infrared images can be mapped into the visible light image coordinate system. Either way, the coordinate systems of the two groups of images are unified, which facilitates the subsequent image fusion. The mapping relationship is the same as in step 3. The image mapping can be implemented by backward mapping with interpolation, and the mapped visible light image is cropped to be consistent with the FOV (field of view) of the infrared image. This step prepares for the subsequent fusion of the visible light and infrared images. Note that the order of the image mapping and feature point detection steps may be exchanged: feature point detection may be performed first, or image mapping may be performed first.
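An illustrative sketch of this step: cv2.warpPerspective internally performs backward mapping with interpolation, and the crop rectangle is assumed to be a calibration output (the names are hypothetical):

```python
import cv2

def map_and_crop_visible(vis_img, H, ir_size, crop_rect):
    """Warp a visible light image into the infrared coordinate system and
    crop it to the infrared FOV; crop_rect = (x, y, w, h) from calibration."""
    warped = cv2.warpPerspective(vis_img, H, ir_size, flags=cv2.INTER_LINEAR)
    x, y, w, h = crop_rect
    return warped[y:y + h, x:x + w]
```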
5. Image registration
This step can use a traditional feature point matching algorithm, such as SIFT, SURF, or ORB. As the sensors capture images at each angle, feature points are detected, mapped, and matched, the spatial geometric relationship between each pair of images is obtained, and the pairwise geometric relationships are globally optimized to produce the final registration result. After this step, the registration relationships among the infrared images and among the mapped visible light images are obtained.
6. Image synthesis
This step can use traditional image synthesis techniques, mainly applying exposure compensation to the images and blending them at the stitching seams. Existing algorithms for exposure compensation and seam blending can be used and are not detailed here. This step produces the stitched visible light panoramic image and infrared panoramic image.
7. Image fusion
The image fusion step can use existing image fusion techniques to fuse the visible light panoramic image and the infrared panoramic image so as to enrich the detail information of the infrared panoramic image, and the user can control the fusion range and strength. For example, the edges of the visible light image, or the detail information of the whole image, can be extracted and fused into the infrared image, and the weight of the visible light image in the fusion can be set according to actual requirements. In principle, this embodiment is applicable to edge detection and image fusion methods in any color space. The following introduces a method for fusing the infrared image (after palette-based coloring) with the visible light image in the YUV color space. The fusion process is as follows: pixels whose Y component exceeds a preset threshold are extracted from the visible light image and fused with the corresponding pixels of the infrared image, where the weight of the visible light Y component during fusion is w; the Y component Yb of the fused image is calculated according to formula (3):
Yb = w·Y2 + (1 - w)·Y1  (formula 3)

The gain coefficient ratio can be calculated according to formula (4):

ratio = Yb / Y2  (formula 4)

The U component and the V component of the fused image can be obtained from the U and V components of the visible light image according to formula (5) and formula (6), respectively:

Ub = ratio · U2  (formula 5)
Vb = ratio · V2  (formula 6)

wherein:
Y1 denotes the Y component of the infrared image;
Y2, U2, V2 denote the Y, U, V components of the visible light image, respectively;
Yb, Ub, Vb denote the Y, U, V components of the fused image, respectively;
w denotes the Y-component fusion weight of the visible light image;
ratio denotes the gain coefficient.
the second method is as shown in fig. 5, the visible light image may be fused to the infrared image to obtain a plurality of enhanced images, the visible light image may then be used to guide feature point extraction of the plurality of enhanced images, and the plurality of enhanced images may be spliced based on the extracted feature points to obtain an enhanced infrared panoramic image. The method is only adjusted in the sequence of fusion and splicing, and specific implementation details can refer to the first method, which is not described herein again.
In addition, the present application also provides an image processing apparatus. As shown in fig. 6, the image processing apparatus includes a processor 61 and a memory 62 for storing computer instructions executable by the processor, and the processor, when executing the computer instructions, implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are respectively acquired by two sensors whose relative positions are fixed;
determining feature points of the second group of images according to feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
In certain embodiments, the resolution of the first group of images is higher than that of the second group of images.
In some embodiments, the processor, when stitching the images in the second group of images based on the feature points of the second group of images to obtain the second panoramic image, is configured to perform:
registering the images in the second group of images based on the feature points of the second group of images;
and synthesizing the images in the second group of images based on the registration result to obtain the second panoramic image.
In some embodiments, before stitching the images of the second group of images based on the feature points of the second group of images, the processor is further configured to perform:
mapping the first group of images and the second group of images to a specified coordinate system according to a mapping relationship between the first group of images and the second group of images, wherein the mapping relationship is determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor.
In some embodiments, the specified coordinate system comprises: the coordinate system of the first group of images and/or the coordinate system of the second group of images.
In some embodiments, after mapping the first group of images and the second group of images to the specified coordinate system according to the mapping relationship between them, the processor is further configured to perform:
cropping the first group of images or the second group of images so that the field of view of the first group of images coincides with the field of view of the second group of images.
In certain embodiments, the processor is further configured to perform:
stitching the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
In certain embodiments, the processor is further configured to perform:
fusing the first panoramic image and the second panoramic image to obtain a third panoramic image.
In some embodiments, the processor, when fusing the first panoramic image and the second panoramic image, is configured to perform:
extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image into the second panoramic image.
In some embodiments, the processor, when extracting the first component of the first panoramic image and fusing it into the second panoramic image, is configured to perform:
fusing the first component of the first panoramic image into the first component of the second panoramic image according to a first weight to obtain a first component of the third panoramic image.
In some embodiments, the extraction threshold of the first component of the first panoramic image is adjustable.
In certain embodiments, the second component of the third panoramic image is obtained from the second component of the second panoramic image by applying a gain factor.
In some embodiments, the gain factor is derived from the first component of the third panoramic image and the first component of the second panoramic image.
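One consistent reading of these two paragraphs: fusing changes the first component (e.g. luminance), so the second component (e.g. chrominance) is rescaled by the per-pixel ratio of the fused first component to the original one, keeping the two components balanced. A hedged sketch, with names and the epsilon guard being our assumptions:

```python
import numpy as np

def second_component_with_gain(comp1_third, comp1_second, comp2_second,
                               eps=1e-6):
    # Gain factor derived from the first components of the third and
    # second panoramic images (eps guards against division by zero).
    gain = comp1_third / (comp1_second + eps)
    # Apply the gain to the second component of the second panoramic image.
    return comp2_second * gain
```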
In some embodiments, when the processor is configured to fuse the first panoramic image and the second panoramic image, the fusing includes:
fusing the first panoramic image into the second panoramic image according to a second weight.
In some embodiments, when the processor determines the feature points of the second set of images from the feature points of the first set of images, the determining includes:
determining a first image in the first set of images acquired concurrently with a second image in the second set of images;
determining a mapping point of a feature point of the first image in the second image according to a mapping relationship between the first set of images and the second set of images, the mapping relationship being determined according to at least one of: internal parameters of each sensor and external parameters of each sensor;
and taking the mapping point as a feature point of the second image.
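A minimal sketch of this transfer step, assuming the mapping relationship is expressed as a homography: feature points detected in the visible image are pushed through the pre-calibrated matrix to obtain feature points in the simultaneously captured infrared image. Function and variable names are ours:

```python
import cv2
import numpy as np

def transfer_feature_points(pts_visible, H_vis_to_ir):
    # Map N x 2 visible-image feature point coordinates into the
    # infrared image via the pre-calibrated homography.
    pts = pts_visible.reshape(-1, 1, 2).astype(np.float64)
    mapped = cv2.perspectiveTransform(pts, H_vis_to_ir)
    return mapped.reshape(-1, 2)  # feature points of the second image
```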
In certain embodiments, the mapping is characterized by a homography matrix and/or an affine matrix.
In certain embodiments, the homography matrix is determined by the internal reference matrices of the two sensors and a transformation matrix of the coordinate system of one of the two sensors relative to the coordinate system of the other sensor.
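For a (near-)planar scene at depth d with unit normal n, such a homography can be composed as H = K2 (R - t n^T / d) K1^{-1}, where K1 and K2 are the internal reference matrices and (R, t) the inter-sensor transform. The planar-scene model is our assumption; the patent only states which calibration quantities determine H.

```python
import numpy as np

def homography_from_calibration(K1, K2, R, t, n, d):
    # Compose the homography mapping sensor-1 pixels to sensor-2 pixels
    # for a plane with unit normal n at depth d in sensor 1's frame.
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
```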
In certain embodiments, the first set of images are visible light images and the second set of images are infrared images.
The present application also provides another image processing apparatus. As shown in fig. 6, the image processing apparatus includes a processor 61 and a memory 62 for storing computer instructions executable by the processor; by executing the computer instructions, the processor can further implement the following method:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are acquired by two sensors with fixed relative positions respectively;
fusing each image in the first group of images with the corresponding image in the second group of images, respectively, to obtain a third group of images;
determining the feature points of the third group of images according to the feature points of the first group of images;
and stitching the images in the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
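A hedged sketch of this variant: each simultaneously captured visible/infrared pair is fused first, and the fused frames are then stitched exactly as in the earlier sketches, with the feature points taken from the visible images. The simple weighted average below is our placeholder; the patent leaves the fusion operator open.

```python
import numpy as np

def fuse_pair(img_visible_gray, img_infrared, w=0.5):
    # Placeholder pixelwise fusion of one simultaneously captured pair.
    fused = (w * img_visible_gray.astype(np.float32)
             + (1.0 - w) * img_infrared.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)

def build_third_group(visible_imgs, infrared_imgs):
    # One fused frame per pair; this is the third group of images.
    return [fuse_pair(v, ir) for v, ir in zip(visible_imgs, infrared_imgs)]
```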
The present application further provides an imaging device. As shown in fig. 7, the imaging device comprises a first image sensor 71, a second image sensor 72, and an image processing apparatus 73. The first image sensor and the second image sensor are connected to the image processing apparatus, and their relative positions are fixed. The image processing apparatus comprises a processor 732 and a memory 731 for storing computer instructions executable by the processor; by executing the computer instructions, the processor implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images is acquired through the first image sensor, and the second group of images is acquired through the second image sensor;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
In certain embodiments, the first set of images is of a higher resolution than the second set of images.
In some embodiments, stitching, by the processor, the images in the second group of images based on the feature points of the second group of images to obtain a panoramic image includes:
registering each image in the second set of images based on the feature points of the second set of images;
and synthesizing the images in the second group of images based on the registration result to obtain the panoramic image.
In some embodiments, before stitching the images of the second set of images based on the feature points of the second set of images, the processor is further configured to:
mapping the first group of images and the second group of images to a specified coordinate system according to a mapping relationship between the first group of images and the second group of images, wherein the mapping relationship is determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor.
In some embodiments, the specified coordinate system comprises: the coordinate system of the first set of images and/or the coordinate system of the second set of images.
In some embodiments, the processor, after mapping the first group of images and the second group of images to the specified coordinate system according to the mapping relationship between the first group of images and the second group of images, is further configured to:
cropping the first set of images or the second set of images so that the field of view of the first set of images matches the field of view of the second set of images.
In certain embodiments, the processor is further configured to:
stitching the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
In certain embodiments, the processor is further configured to:
fusing the first panoramic image and the second panoramic image to obtain a third panoramic image.
In some embodiments, when the processor is configured to fuse the first panoramic image and the second panoramic image, the fusing includes:
extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image into the second panoramic image.
In some embodiments, when the processor is configured to extract the first component of the first panoramic image and fuse it into the second panoramic image, the extracting and fusing include:
fusing the first component of the first panoramic image with the first component of the second panoramic image according to a first weight to obtain a first component of the third panoramic image.
In some embodiments, the extraction threshold of the first component of the first panoramic image is adjustable.
In certain embodiments, the second component of the third panoramic image is obtained from the second component of the second panoramic image by applying a gain factor.
In some embodiments, the gain factor is derived from the first component of the third panoramic image and the first component of the second panoramic image.
In some embodiments, when the processor is configured to fuse the first panoramic image and the second panoramic image, the fusing includes:
fusing the first panoramic image into the second panoramic image according to a second weight.
In some embodiments, when the processor determines the feature points of the second set of images from the feature points of the first set of images, the determining includes:
determining a first image in the first set of images acquired concurrently with a second image in the second set of images;
determining a mapping point of a feature point of the first image in the second image according to a mapping relationship of the first sensor and the second sensor, the mapping relationship being determined according to at least one of: internal parameters of each sensor and external parameters of each sensor;
and taking the mapping point as a feature point of the second image.
In certain embodiments, the mapping is characterized by a homography matrix and/or an affine matrix.
In certain embodiments, the homography matrix is determined by an internal reference matrix of the first sensor, an internal reference matrix of the second sensor, and a transformation matrix of a coordinate system of the first sensor relative to a coordinate system of the second sensor.
In certain embodiments, the first set of images are visible light images and the second set of images are infrared images.
The imaging device may be any of various terminal devices having two image sensors, such as an infrared camera or a cellular phone.
The present application also provides a movable carrier, which may be an unmanned aerial vehicle, an unmanned ship, a mobile terminal, a movable cart, or a movable robot. As shown in fig. 8, a schematic diagram of the movable carrier as a drone, the drone includes a drone body 81 and an imaging device 82. As shown in fig. 9, a logical structure block diagram of the movable carrier, the movable carrier includes a main body 91 and an imaging device 92 mounted in the main body. The imaging device includes a first image sensor 921, a second image sensor 922, and an image processing apparatus 923; the first image sensor and the second image sensor are connected to the image processing apparatus, and their relative positions are fixed. The image processing apparatus includes a processor and a memory for storing computer instructions executable by the processor; by executing the computer instructions, the processor implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images is acquired by the first image sensor, and the second group of images is acquired by the second image sensor;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
Accordingly, the embodiments of the present specification further provide a computer storage medium in which a program is stored; when executed by a processor, the program implements the image processing method of any of the above embodiments.
Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the present solution. A person of ordinary skill in the art can understand and implement the embodiments without inventive effort.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make variations to the specific embodiments and the application scope. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (59)

1. An image processing method, characterized in that the method comprises:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are acquired by two sensors with fixed relative positions respectively;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images of the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
2. The image processing method of claim 1, wherein the first set of images is of higher resolution than the second set of images.
3. The image processing method according to claim 1, wherein stitching the images of the second group of images based on the feature points of the second group of images to obtain a panoramic image comprises:
registering each image in the second set of images based on the feature points of the second set of images;
and synthesizing the images in the second group of images based on the registration result to obtain the panoramic image.
4. The image processing method according to claim 1, wherein before stitching the second group of images based on the feature points of the second group of images, the method further comprises:
mapping the first group of images and the second group of images to a specified coordinate system according to a mapping relationship between the first group of images and the second group of images, wherein the mapping relationship is determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor.
5. The image processing method according to claim 4, wherein the specified coordinate system includes: the coordinate system of the first set of images and/or the coordinate system of the second set of images.
6. The image processing method according to claim 4, wherein after mapping the first group of images and the second group of images to a specified coordinate system according to the mapping relationship between the first group of images and the second group of images, further comprising:
cropping the first set of images or the second set of images to bring the field of view of the first set of images into agreement with the field of view of the second set of images.
7. The image processing method according to claim 1, characterized in that the method further comprises:
stitching the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
8. The image processing method according to claim 7, further comprising:
fusing the first panoramic image and the second panoramic image to obtain a third panoramic image.
9. The image processing method of claim 8, wherein said fusing the first panoramic image and the second panoramic image comprises:
extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image into the second panoramic image.
10. The image processing method according to claim 9, wherein said extracting a first component of the first panoramic image and fusing the first component of the first panoramic image into the second panoramic image comprises:
fusing the first component of the first panoramic image with the first component of the second panoramic image according to a first weight to obtain a first component of the third panoramic image.
11. The image processing method of claim 9, wherein an extraction threshold of the first component of the first panoramic image is adjustable.
12. The image processing method of claim 10, wherein the second component of the third panoramic image is obtained from the second component of the second panoramic image by applying a gain factor.
13. The method of claim 12, wherein the gain factor is derived from the first component of the third panoramic image and the first component of the second panoramic image.
14. The image processing method of claim 10, wherein the fusing of the first panoramic image and the second panoramic image comprises:
fusing the first panoramic image into the second panoramic image according to a second weight.
15. The image processing method according to claim 1, wherein determining feature points of the second set of images from feature points of the first set of images comprises:
determining a first image in the first set of images acquired concurrently with a second image in the second set of images;
determining a mapping point of a feature point of the first image in the second image according to a mapping relationship between the first set of images and the second set of images, the mapping relationship being determined according to at least one of: internal parameters of each sensor and external parameters of each sensor;
and taking the mapping point as a feature point of the second image.
16. The image processing method according to claim 15, wherein the mapping is characterized by a homography matrix and/or an affine matrix.
17. The image processing method of claim 15, wherein the homography matrix is determined by the internal reference matrices of the two sensors and a transformation matrix of the coordinate system of one of the two sensors relative to the coordinate system of the other sensor.
18. The image processing method according to any of claims 1 to 17, wherein the first set of images are visible light images and the second set of images are infrared images.
19. An image processing method, characterized in that the method comprises:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are acquired by two sensors with fixed relative positions;
fusing the images in the first group of images with the corresponding images in the second group of images, respectively, to obtain a third group of images;
determining the feature points of the third group of images according to the feature points of the first group of images;
and stitching the images in the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
20. The method of claim 19, wherein the first set of images has a higher resolution than the second set of images.
21. An image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, the processor executing the computer instructions to implement a method comprising:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are acquired by two sensors with fixed relative positions respectively;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
22. The image processing apparatus of claim 21, wherein the first set of images has a higher resolution than the second set of images.
23. The image processing apparatus of claim 21, wherein stitching, by the processor, the images in the second group of images based on the feature points of the second group of images to obtain a panoramic image includes:
registering each image in the second set of images based on the feature points of the second set of images;
and synthesizing the images in the second group of images based on the registration result to obtain the panoramic image.
24. The image processing apparatus of claim 21, wherein, before stitching the images of the second group of images based on the feature points of the second group of images, the processor is further configured to:
mapping the first group of images and the second group of images to a specified coordinate system according to a mapping relationship between the first group of images and the second group of images, wherein the mapping relationship is determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor.
25. The image processing apparatus according to claim 24, wherein the specified coordinate system comprises: the coordinate system of the first set of images and/or the coordinate system of the second set of images.
26. The image processing apparatus of claim 24, wherein the processor is configured to, after mapping the first group of images and the second group of images to a specified coordinate system according to the mapping relationship between the first group of images and the second group of images, further:
cropping the first set of images or the second set of images to bring the field of view of the first set of images into agreement with the field of view of the second set of images.
27. The image processing apparatus of claim 24, wherein the processor is further configured to:
stitching the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
28. The image processing apparatus of claim 27, wherein the processor is further configured to:
fusing the first panoramic image and the second panoramic image to obtain a third panoramic image.
29. The image processing apparatus of claim 28, wherein the processor, when fusing the first panoramic image and the second panoramic image, is configured to:
extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image into the second panoramic image.
30. The image processing apparatus of claim 29, wherein, when extracting the first component of the first panoramic image and fusing the first component of the first panoramic image into the second panoramic image, the processor is configured to:
fusing the first component of the first panoramic image with the first component of the second panoramic image according to a first weight to obtain a first component of the third panoramic image.
31. The image processing apparatus of claim 29, wherein an extraction threshold of the first component of the first panoramic image is adjustable.
32. The image processing apparatus according to claim 30, wherein the second component of the third panoramic image is obtained from the second component of the second panoramic image by applying a gain factor.
33. The image processing apparatus of claim 32, wherein the gain factor is derived from the first component of the third panoramic image and the first component of the second panoramic image.
34. The image processing apparatus of claim 28, wherein, when fusing the first panoramic image and the second panoramic image, the processor is configured to:
fusing the first panoramic image into the second panoramic image according to a second weight.
35. The image processing apparatus of claim 21, wherein determining, by the processor, the feature points of the second group of images according to the feature points of the first group of images comprises:
determining a first image in the first set of images acquired concurrently with a second image in the second set of images;
determining a mapping point of a feature point of the first image in the second image according to a mapping relationship between the first set of images and the second set of images, the mapping relationship being determined according to at least one of: internal parameters of each sensor and external parameters of each sensor;
and taking the mapping point as a feature point of the second image.
36. The image processing apparatus according to claim 35, wherein the mapping is characterized by a homography matrix and/or an affine matrix.
37. The image processing apparatus of claim 35, wherein the homography matrix is determined by the internal reference matrices of the two sensors and a transformation matrix of the coordinate system of one of the two sensors relative to the coordinate system of the other sensor.
38. The image processing apparatus according to any of claims 21 to 37, wherein the first set of images are visible light images and the second set of images are infrared images.
39. An image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, the processor executing the computer instructions to implement a method comprising:
acquiring a first group of images and a second group of images, wherein the first group of images and the second group of images are acquired by two sensors with fixed relative positions;
fusing the images in the first group of images with the corresponding images in the second group of images, respectively, to obtain a third group of images;
determining the feature points of the third group of images according to the feature points of the first group of images;
and stitching the images in the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
40. An imaging device comprising a first image sensor, a second image sensor, and an image processing apparatus, the first and second image sensors being connected to the image processing apparatus, the first and second image sensors being fixed in relative position, the image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein execution of the computer instructions by the processor implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images is acquired by the first image sensor and the second group of images is acquired by the second image sensor;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
41. The imaging apparatus of claim 40, wherein the first set of images is of higher resolution than the second set of images.
42. The imaging apparatus of claim 40, wherein stitching, by the processor, the images in the second group of images based on the feature points of the second group of images to obtain a panoramic image comprises:
registering each image in the second set of images based on the feature points of the second set of images;
and synthesizing the images in the second group of images based on the registration result to obtain the panoramic image.
43. The imaging apparatus of claim 40, wherein the processor, prior to stitching the images of the second set of images based on the feature points of the second set of images, is further configured to:
mapping the first group of images and the second group of images to a specified coordinate system according to a mapping relationship between the first group of images and the second group of images, wherein the mapping relationship is determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor.
44. The imaging apparatus of claim 40, wherein the specified coordinate system comprises: the coordinate system of the first set of images and/or the coordinate system of the second set of images.
45. The imaging apparatus of claim 43, wherein the processor, after mapping the first set of images and the second set of images to a specified coordinate system according to the mapping relationship between the first set of images and the second set of images, is further configured to:
cropping the first set of images or the second set of images to bring the field of view of the first set of images into agreement with the field of view of the second set of images.
46. The imaging apparatus of claim 40, wherein the processor is further configured to:
stitching the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
47. The imaging apparatus of claim 46, wherein the processor is further configured to:
fusing the first panoramic image and the second panoramic image to obtain a third panoramic image.
48. The imaging device of claim 47, wherein, when fusing the first panoramic image and the second panoramic image, the processor is configured to:
extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image into the second panoramic image.
49. The imaging device of claim 48, wherein the processor, when extracting the first component of the first panoramic image and fusing the first component of the first panoramic image to the second panoramic image, is configured to:
fusing the first component of the first panoramic image with the first component of the second panoramic image according to a first weight to obtain a first component of the third panoramic image.
50. The imaging device of claim 49, wherein an extraction threshold of the first component of the first panoramic image is adjustable.
51. The imaging device of claim 49, wherein the second component of the third panoramic image is obtained from the second component of the second panoramic image by applying a gain factor.
52. The imaging device of claim 51, wherein the gain factor is derived from the first component of the third panoramic image and the first component of the second panoramic image.
53. The imaging device of claim 47, wherein, when fusing the first panoramic image and the second panoramic image, the processor is configured to:
fusing the first panoramic image into the second panoramic image according to a second weight.
54. The imaging apparatus of claim 40, wherein determining, by the processor, the feature points of the second set of images from the feature points of the first set of images comprises:
determining a mapping relation between the first group of images and the second group of images according to internal parameters and external parameters of the first sensor and the second sensor;
determining a first image in the first set of images acquired concurrently with a second image in the second set of images;
determining a mapping point of a feature point of the first image in the second image according to a mapping relationship of the first sensor and the second sensor, the mapping relationship being determined according to at least one of: internal parameters of each sensor and external parameters of each sensor;
and taking the mapping point as a feature point of the second image.
55. The imaging apparatus of claim 54, wherein the mapping is characterized by a homography matrix and/or an affine matrix.
56. The imaging apparatus of claim 55, wherein the homography matrix is determined by an internal reference matrix of the first sensor, an internal reference matrix of the second sensor, and a transformation matrix of a coordinate system of the first sensor relative to a coordinate system of the second sensor.
57. The imaging device of any of claims 40-56, wherein the first set of images are visible light images and the second set of images are infrared images.
58. A movable carrier comprising a body and an imaging device mounted to the body, the imaging device comprising a first image sensor, a second image sensor, and an image processing apparatus, the first and second image sensors being connected to the image processing apparatus, the first and second image sensors being fixed in relative position, the image processing apparatus comprising a processor and a memory for storing computer instructions executable by the processor, wherein execution of the computer instructions by the processor implements the following method:
acquiring a first group of images and a second group of images, wherein the first group of images is acquired by the first image sensor and the second group of images is acquired by the second image sensor;
determining the feature points of the second group of images according to the feature points of the first group of images;
and stitching the images in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
59. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the image processing method according to any one of claims 1 to 20.
CN202080005077.1A 2020-03-19 2020-03-19 Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium Pending CN112689850A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/080219 WO2021184302A1 (en) 2020-03-19 2020-03-19 Image processing method and apparatus, imaging device, movable carrier, and storage medium

Publications (1)

Publication Number Publication Date
CN112689850A true CN112689850A (en) 2021-04-20

Family

ID=75457727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080005077.1A Pending CN112689850A (en) 2020-03-19 2020-03-19 Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium

Country Status (2)

Country Link
CN (1) CN112689850A (en)
WO (1) WO2021184302A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022570A (en) * 2022-01-05 2022-02-08 荣耀终端有限公司 Method for calibrating external parameters between cameras and electronic equipment
CN116016816A (en) * 2022-12-13 2023-04-25 之江实验室 Embedded GPU zero-copy panoramic image stitching method and system for improving L-ORB algorithm

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418941B (en) * 2021-12-10 2024-05-10 国网浙江省电力有限公司宁波供电公司 Defect diagnosis method and system based on detection data of power inspection equipment
CN115619782B (en) * 2022-12-15 2023-04-07 常州海图信息科技股份有限公司 Shaft 360 panorama splicing detection system and method based on machine vision
CN117094895B (en) * 2023-09-05 2024-03-26 杭州一隅千象科技有限公司 Image panorama stitching method and system
CN117745537B (en) * 2024-02-21 2024-05-17 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method, device, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170025058A (en) * 2015-08-27 2017-03-08 삼성전자주식회사 Image processing apparatus and electronic system including the same
CN106384383B (en) * 2016-09-08 2019-08-06 哈尔滨工程大学 A kind of RGB-D and SLAM scene reconstruction method based on FAST and FREAK Feature Correspondence Algorithm
CN107154014B (en) * 2017-04-27 2020-06-26 上海大学 Real-time color and depth panoramic image splicing method
US20180343432A1 (en) * 2017-05-23 2018-11-29 Microsoft Technology Licensing, Llc Reducing Blur in a Depth Camera System
CN109360150A (en) * 2018-09-27 2019-02-19 轻客小觅智能科技(北京)有限公司 A kind of joining method and device of the panorama depth map based on depth camera

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022570A (en) * 2022-01-05 2022-02-08 荣耀终端有限公司 Method for calibrating external parameters between cameras and electronic equipment
CN114022570B (en) * 2022-01-05 2022-06-17 荣耀终端有限公司 Method for calibrating external parameters between cameras and electronic equipment
CN116016816A (en) * 2022-12-13 2023-04-25 之江实验室 Embedded GPU zero-copy panoramic image stitching method and system for improving L-ORB algorithm
CN116016816B (en) * 2022-12-13 2024-03-29 之江实验室 Embedded GPU zero-copy panoramic image stitching method and system for improving L-ORB algorithm

Also Published As

Publication number Publication date
WO2021184302A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
CN112689850A (en) Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium
WO2021227359A1 (en) Unmanned aerial vehicle-based projection method and apparatus, device, and storage medium
CN111179358B (en) Calibration method, device, equipment and storage medium
CN110622497B (en) Device with cameras having different focal lengths and method of implementing a camera
US10681271B2 (en) Image processing apparatus, image capturing system, image processing method, and recording medium
EP3028252B1 (en) Rolling sequential bundle adjustment
US10244164B1 (en) Systems and methods for image stitching
CN110351494B (en) Panoramic video synthesis method and device and electronic equipment
US20190289223A1 (en) Apparatus and methods for the storage of overlapping regions of imaging data for the generation of optimized stitched images
US9635251B2 (en) Visual tracking using panoramas on mobile devices
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
CN108921897B (en) Method and apparatus for locating card area
US10373360B2 (en) Systems and methods for content-adaptive image stitching
JP2019510234A (en) Depth information acquisition method and apparatus, and image acquisition device
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
US20210120194A1 (en) Temperature measurement processing method and apparatus, and thermal imaging device
US10489885B2 (en) System and method for stitching images
CN110536057A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN107749069B (en) Image processing method, electronic device and image processing system
CN105578023A (en) Image quick photographing method and device
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
US11393076B2 (en) Blurring panoramic image blurring method, terminal and computer readable storage medium
WO2013149866A2 (en) Method and device for transforming an image
CN115953483A (en) Parameter calibration method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination