WO2021184302A1 - Image processing method and apparatus, imaging device, movable carrier and storage medium - Google Patents

Image processing method and apparatus, imaging device, movable carrier and storage medium

Info

Publication number
WO2021184302A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
group
image
panoramic image
feature points
Prior art date
Application number
PCT/CN2020/080219
Other languages
English (en)
Chinese (zh)
Inventor
李广
朱传杰
李静
郭浩铭
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/080219 priority Critical patent/WO2021184302A1/fr
Priority to CN202080005077.1A priority patent/CN112689850A/zh
Publication of WO2021184302A1 publication Critical patent/WO2021184302A1/fr

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformations in the plane of the image
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/00 — Image analysis
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/55 — Depth or shape recovery from multiple images

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method and device, an imaging device, a movable carrier and a storage medium.
  • multiple images collected by the sensor at different angles can be stitched together to obtain a panoramic image with a large viewing angle or a full 360-degree viewing angle.
  • in a typical stitching process, feature points need to be extracted from each image, the feature points then need to be registered, and the images are fused based on the registration result to obtain a stitched large-view image.
  • Accurately extracting feature points is a prerequisite for ensuring the quality of large-view images after stitching.
  • for some sensors, the collected images have less detail information, lower resolution or more noise, such as infrared, ultraviolet and other grayscale images, so few feature points can be extracted from them. If the traditional method is still followed to perform image stitching, the quality of the stitched image is poor. Therefore, the image stitching method needs to be improved for stitching images with a serious lack of detail information.
  • the present application provides an image processing method, device, imaging device, movable carrier and storage medium.
  • an image processing method including:
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • an image processing method including:
  • the images in the third group of images are spliced based on the feature points of the third group of images to obtain a fourth panoramic image.
  • an image processing device includes a processor and a memory for storing computer instructions executable by the processor,
  • and the processor can execute the computer instructions to implement the following method:
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • an image processing device includes a processor and a memory for storing computer instructions executable by the processor, and the processor can execute the computer instructions to implement the following method:
  • the images in the third group of images are spliced based on the feature points of the third group of images to obtain a third panoramic image.
  • an imaging device that includes a first image sensor, a second image sensor, and an image processing device, and the first image sensor and the second image sensor are connected to the image processing device.
  • the first image sensor and the second image sensor have a fixed relative position
  • the image processing device includes a processor and a memory for storing computer instructions executable by the processor.
  • the processor can execute the computer instructions.
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • a movable carrier includes a body and an imaging device, the imaging device is mounted on the body, and the imaging device includes a first image sensor, a second image sensor, and an image processing device;
  • the first image sensor and the second image sensor are connected to the image processing device, the relative position of the first image sensor and the second image sensor is fixed, and the image processing device includes a processor and a memory for storing computer instructions executable by the processor;
  • execution of the computer instructions by the processor can implement the following method:
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements any one of the image processing methods of the present application.
  • two sets of images are collected by two sensors with fixed relative positions, and the feature points of the second set of images are determined from the feature points of the first set of images, so that stitching of the second set of images can be realized based on those feature points. The images with richer detail, from which feature points are easier to extract, guide the feature point extraction for the images with a serious lack of detail, so that more feature points can be extracted from the detail-poor images; this makes it possible to stitch the detail-poor images and improves the quality of the stitched image.
  • Fig. 1 is a schematic diagram of image stitching provided by an embodiment of the present application.
  • Fig. 2 is a flowchart of an image processing method provided by an embodiment of the present application.
  • Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of image splicing and fusion provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of image stitching and fusion according to another embodiment of the present application.
  • Fig. 6 is a block diagram of the logical structure of an image processing device provided by an embodiment of the present application.
  • FIG. 7 is a block diagram of the logical structure of an imaging device provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a movable carrier provided by an embodiment of the present application.
  • Fig. 9 is a block diagram of the logical structure of a movable carrier provided by an embodiment of the present application.
  • a panoramic image refers to an image with a large viewing angle or a full 360° viewing angle.
  • a panoramic image can be obtained by stitching together multiple images collected by the sensor at different angles. Multiple images collected from different angles contain the same (overlapping) areas; these areas need to be found and matched, and then merged to obtain the panoramic image.
  • one way of image stitching is to extract feature points from an image, where feature points are generally pixels whose values differ greatly from those of surrounding pixels, such as pixels corresponding to corners, inflection points and boundary points of three-dimensional objects.
  • image 1 and image 2 are two images collected from different angles.
  • for example, the feature points extracted in image 1 are A1, B1, C1 and D1; it is then necessary to find, in image 2, the feature points matching A1, B1, C1 and D1.
  • image 1 and image 2 are merged to obtain a mosaic image.
  • for example, infrared images are grayscale images generated by infrared sensors that detect radiation from objects. Due to the imaging principle, the detail information of infrared images is seriously lacking, the resolution is relatively low, there is a lot of noise, and the angle of view is often small. To obtain an image with a large viewing angle, image stitching is required; however, because of the lack of detail, few or even no feature points are detected during feature point extraction, so infrared images cannot be accurately stitched by traditional methods.
  • this application provides an image processing method. Specifically, as shown in FIG. 2, the method may include the following steps:
  • S204. Determine the feature points of the second group of images according to the feature points of the first group of images.
  • S206. Splice each image in the second group of images based on the feature points of the second group of images to obtain a second panoramic image.
  • two sets of images can be collected from different angles through two sensors with fixed relative positions, hereinafter referred to as the first set of images and the second set of images, where the first set of images, compared with the second set of images, has richer detail information and its feature points are easier to detect and extract.
  • the resolution of the first group of images is higher than that of the second group of images, or the first group of images has less noise than the second group of images, and the detailed information is more abundant.
  • the two sets of images in this application may be two sets of images collected by two different types of image sensors, such as infrared sensors, visible light sensors, TOF (time-of-flight) sensors, ultraviolet sensors, etc.
  • for example, the first set of images may be visible light images captured by a visible light sensor,
  • the second set of images may be infrared images.
  • visible light images have richer details, less noise, and clearer image quality, and therefore it is easier to extract feature points from them.
  • the first set of images may be visible light images
  • the second set of images may be ultraviolet images or TOF images. As long as feature points are easier to extract from the first group of images than from the second group of images, this application does not limit the image types.
  • the image processing method of the present application can be used in a device with an image acquisition function.
  • the device can be a device with two sensors with fixed relative positions, for example, it can be a camera with both an infrared sensor and a visible light sensor.
  • it can also be a combination of two devices with different sensors.
  • it can be a combination of an infrared camera and a visible light camera.
  • the positions of the two cameras are relatively fixed, for example, the two cameras are fixed to the same pan-tilt (gimbal).
  • the image processing method of the present application can also be used for image processing equipment that specializes in image processing and does not have the image acquisition function.
  • the equipment can be a mobile phone, a tablet, a notebook computer or a cloud server, which performs post-processing after obtaining the first group of images and the second group of images.
  • the present application guides the second group of images to complete image stitching through the first group of images with more detailed information.
  • feature points can be extracted from each image in the first group of images.
  • feature points are mostly pixels corresponding to corners, boundaries, edges, and inflection points of three-dimensional objects.
  • the extraction of feature points can use general feature point detection algorithms.
  • the feature points of the second group of images can be determined according to the feature points of the first group of images: since the two sets of images are collected by sensors with fixed relative positions, the mapping points, on the second set of images, of the feature points of the first set of images can be determined according to the spatial position relationship of the two sensors and used as the feature points of the second set of images.
  • specifically, the image in the second group of images corresponding to a certain frame of the first group of images can be determined first, and then the feature points of that corresponding image are determined according to the feature points of the frame in the first group of images and the mapping relationship between the first group of images and the second group of images.
  • here, an image in the first group of images and its corresponding image in the second group of images are the images collected by the two sensors when the device containing the two sensors, or the device to which the two sensors are fixed, is at a certain position.
  • for example, the first image F1 and the second image F2 are the images separately collected by the two sensors when the pan-tilt rotates to a certain position.
  • the first image F1 and the second image F2 may be acquired by two sensors at the same time, or may be acquired by two sensors at a certain time interval.
  • the mapping relationship between the first group of images and the second group of images may be determined according to at least one of the following: internal parameters of each sensor and external parameters of each sensor. For example, it can be determined according to the respective internal parameters of the two sensors and the respective external parameters of the two sensors.
  • the internal and external parameters of the two sensors can be calibrated in advance. For example, the two sensors can be calibrated before they leave the factory.
  • the internal and external parameters of the sensors can be calibrated using existing calibration methods, such as the Zhang Zhengyou (Zhang's) calibration method.
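  • as an illustrative sketch only (not part of this disclosure), the internal parameters of one sensor could be calibrated offline with a checkerboard target, e.g. using OpenCV; the board geometry, square size and file paths below are assumptions made for the example:
```python
# Illustrative sketch: offline intrinsic calibration of a single sensor with a
# checkerboard (Zhang's method). Board geometry and file paths are assumptions.
import glob

import cv2
import numpy as np

PATTERN = (9, 6)        # inner-corner count of the assumed checkerboard
SQUARE_SIZE = 0.025     # assumed square size in meters

# 3D corner coordinates in the board plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_shots/*.png"):              # assumed calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the sensor's 3x3 internal parameter matrix; dist holds distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("internal parameter matrix K:\n", K)
```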
  • the mapping relationship between the first group of images and the second group of images can be pre-calibrated, or it can be temporarily calculated by the device for image processing based on pre-calibrated internal and external parameters when needed.
  • the mapping relationship between the two sets of images collected by the two sensors can be pre-calibrated when the device leaves the factory.
  • the image processing device can temporarily calculate the mapping relationship between the two sets of images collected by the two sensors.
  • the mapping relationship represents the corresponding relationship between the pixels of the first image F1 in the first group of images and the corresponding pixels of the second image F2 in the second group of images.
  • the first image F1 and the second image F2 are the images collected by the two sensors when the pan-tilt rotates to a certain position.
  • the first image F1 and the second image F2 can be acquired by the two sensors at the same time when the pan/tilt rotates to a certain position, which can ensure that the environmental conditions (such as light factors, temperature and humidity, etc.) during image acquisition are relatively consistent.
  • alternatively, the first image F1 and the second image F2 can be collected within a certain time interval, for example at an interval of 1 second, and the correspondence between corresponding pixels can still be obtained by conversion.
  • however, too long an interval may cause the shooting environment and the parallax between the first image F1 and the second image F2 to differ greatly, which in turn makes the determined feature points inaccurate.
  • even in this case, the mapping relationship for the image F2 can still be obtained through several successive conversions.
  • the mapping relationship may be characterized by a homography matrix and/or an affine matrix.
  • the homography matrix can be determined by the respective internal parameter matrices of the two sensors and the transformation matrix of the coordinate system of one sensor relative to the coordinate system of the other sensor.
  • for example, the first image acquired at the same time as a second image in the second group of images can be determined in the first group of images; then, according to the mapping relationship between the two groups of images, the mapping points of the feature points of the first image in the second image are determined, and the mapping points are taken as the feature points of the second image.
  • the first image acquired at the same time as the second image in the second group of images can be determined in the first group of images.
  • for example, the two sensors are fixed to the same pan-tilt; each time the pan-tilt rotates to a certain angle, the two sensors collect an image at the same time, so the two sets of images correspond one-to-one in acquisition time. The first image acquired at the same time as a given second image can then be determined in the first group of images, the mapping points of the feature points of the first image in the second image are determined according to the mapping relationship, and those mapping points are used as the feature points of the second image.
  • suppose the two sensors are a first sensor and a second sensor: the first group of images is collected by the first sensor, and the second group of images is collected by the second sensor.
  • the image collected by the first sensor at time T is the first image
  • the image collected by the second sensor at time T is the second image.
  • a feature point P1 is extracted from the first image, and its mapping point P2 on the second image can be determined according to the homography matrix H.
  • where K1 represents the internal parameter matrix of the first sensor, K2 represents the internal parameter matrix of the second sensor, R represents the rotation matrix from the second sensor coordinate system to the first sensor coordinate system, H represents the homography matrix from the first sensor coordinate system to the second sensor coordinate system, P1 represents the coordinates of a feature point of the first image, and P2 represents the corresponding point of P1 on the second image.
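  • under the assumption that the scene is distant (object at infinity), a sketch of how such a homography could be built from the calibrated parameters and used to map feature points from the first image to the second image is given below; the intrinsic matrices and rotation are placeholder values, not values from this disclosure:
```python
# Sketch (placeholder values): mapping feature points from the first image to the
# second image through a homography built from the calibrated parameters, assuming
# a distant scene so that parallax can be neglected.
import numpy as np

K1 = np.array([[1200.0, 0.0, 960.0],   # assumed internal parameters of the first sensor
               [0.0, 1200.0, 540.0],
               [0.0, 0.0, 1.0]])
K2 = np.array([[800.0, 0.0, 320.0],    # assumed internal parameters of the second sensor
               [0.0, 800.0, 256.0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                          # relative rotation between the two sensor frames (assumed)

H = K2 @ R @ np.linalg.inv(K1)         # homography from the first image to the second image

def map_points(points_xy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map pixel coordinates P1 from image 1 to their mapping points P2 in image 2."""
    ones = np.ones((len(points_xy), 1))
    homogeneous = np.hstack([points_xy, ones])       # (N, 3) homogeneous coordinates
    mapped = (H @ homogeneous.T).T
    return mapped[:, :2] / mapped[:, 2:3]            # divide by the scale factor

# Feature points detected in the first image become feature points of the second image.
p1 = np.array([[480.5, 300.2], [1500.0, 800.0]])
print(map_points(p1, H))
```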
  • after the feature points of the images in the second group of images are determined, the images of the second group can be spliced according to those feature points to obtain the stitched panoramic image, hereinafter referred to as the second panoramic image.
  • specifically, when the images of the second group are spliced according to the feature points of the images in the second group of images to obtain the second panoramic image, each image in the second group can first be registered according to the feature points, and then the images in the second group are synthesized according to the registration result to obtain the second panoramic image.
  • since the second group of images consists of multiple images collected by the sensor at different angles, the same three-dimensional object will appear in different images, so the matching feature points, in another image, of the feature points of one image can be determined.
  • the registration technology of feature points is relatively mature, and general feature point matching algorithms, such as the SIFT, SURF and ORB algorithms, can be used to determine the matching feature points of one image's feature points in another image; based on the feature points and matching feature points, the spatial geometric correspondence between the two images can be determined, and global optimization can be performed on these spatial geometric relationships to obtain the final registration result. Then, based on the registration result, the registration area (for example, the overlap area) between the multiple images can be determined and combined to obtain the stitched second panoramic image.
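  • as a generic illustration of such a registration step (one possible realization, not necessarily the exact pipeline of this application), two adjacent images of a group could be registered with ORB feature matching and a RANSAC-estimated homography:
```python
# Generic illustration: pairwise registration of two adjacent images of a group
# with ORB matching and a RANSAC-estimated homography.
import cv2
import numpy as np

def register_pair(img_a, img_b, max_features=2000):
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Brute-force Hamming matching with cross-check keeps only mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched pairs and yields the spatial geometric relationship
    # (here modeled as a homography) between the two images.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask

# Example usage with two adjacent frames (file names are placeholders):
# H, mask = register_pair(cv2.imread("frame_0.png", 0), cv2.imread("frame_1.png", 0))
```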
  • traditional image synthesis methods can also be used for the image synthesis, which mainly includes image exposure compensation and image blending at the stitching seam of the two images; for details, please refer to traditional image synthesis technology, which will not be detailed here.
  • the images in the first group of images may be spliced according to the feature points of the images in the first group of images to obtain the first panoramic image.
  • since the second group of images has a serious lack of detail information (taking infrared images as an example), even if the images are stitched into a panoramic image, only the angle of view is increased; the detail information is still relatively sparse and not rich enough, which limits the application of the panoramic image.
  • therefore, the rich detail information of the first group of images can be further used to enhance the detail information of the second group of images, so that the detail information of the second group of images becomes more abundant.
  • in some embodiments, the first panoramic image obtained by splicing the first group of images may be fused with the second panoramic image obtained by splicing the second group of images to obtain a third panoramic image with more detailed information.
  • since the two sets of images correspond to different coordinate systems, before fusion the first group of images and the second group of images can first be mapped to a specified coordinate system based on the mapping relationship between the first group of images and the second group of images.
  • the designated coordinate system can be the coordinate system corresponding to the first group of images, the coordinate system corresponding to the second group of images, or another coordinate system, as long as the two sets of images end up in one common coordinate system; this application does not restrict it.
  • the specific mapping method is similar to the mapping of feature points.
  • for example, to map the first group of images to the coordinate system of the second group of images, the coordinates of the pixels of each image in the first group can be mapped, based on the homography matrix, to the corresponding coordinates in the coordinate system of the second group of images to obtain the mapped images, as in the sketch below.
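  • a minimal sketch of this mapping step is shown below, assuming a pre-computed homography from the coordinate system of the first group to that of the second group; OpenCV's perspective warp performs the pixel-coordinate mapping and interpolation internally, and the file names and output size are placeholders:
```python
# Sketch: warp an image of the first group into the coordinate system of the second
# group using the homography H_1_to_2; rendering onto the second group's image grid
# also crops the result to that grid.
import cv2
import numpy as np

def map_to_second_group_frame(img_group1, H_1_to_2, size_group2_wh):
    """H_1_to_2 maps pixel coordinates of the first group to those of the second group."""
    return cv2.warpPerspective(img_group1, H_1_to_2, size_group2_wh,
                               flags=cv2.INTER_LINEAR)

# Example (placeholders): a visible-light frame mapped onto a 640x512 infrared grid.
# vis = cv2.imread("visible_000.png")
# H = np.load("H_vis_to_ir.npy")
# mapped = map_to_second_group_frame(vis, H, (640, 512))
```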
  • the feature point extraction step can be performed before the two sets of images are mapped, or it can be performed after the two sets of images are mapped.
  • extracting the feature points before the image mapping yields more feature points and makes the registration more accurate, while mapping the images first and then extracting the feature points reduces the consumption of computing time and computing resources and increases the image processing rate.
  • after the two sets of images are mapped to the same coordinate system, they are spliced to obtain the first panoramic image and the second panoramic image, and then the first panoramic image and the second panoramic image are merged.
  • alternatively, the first panoramic image and the second panoramic image can be mapped to the same coordinate system in a similar manner after stitching, and then image fusion is performed.
  • the field of view of the two sensors may be different.
  • for example, the field of view of an infrared sensor is often smaller than that of a visible light sensor. To make the fields of view of the two sets of images consistent when the images are fused, after the first group of images and the second group of images are mapped to the specified coordinate system, the first group of images or the second group of images can be cropped so that the angle of view of the first group of images is consistent with that of the second group of images.
  • for example, if the angle of view of the first group of images is greater than that of the second group of images, each image of the first group can be cropped so that its angle of view is the same as that of the corresponding image in the second group.
  • cropping the first group of images or the second group of images can further reduce the amount of subsequent image processing, reduce redundant information output, and improve the presentation of the image.
  • when fusing the first panoramic image and the second panoramic image, a first component can be extracted from the first panoramic image, and then the first component of the first panoramic image can be fused into the second panoramic image.
  • for example, the first panoramic image is the visible light panoramic image obtained by stitching.
  • the visible light image can be represented by YUV components, where the Y component represents the brightness of the image and the U and V components represent its chroma. Since the grayscale image essentially contains brightness values, the Y component of the first panoramic image can be extracted and merged into the second panoramic image.
  • the range and intensity of the fusion can be controlled according to actual needs. For example, in the fusion process, only the edge pixels of the first panoramic image can be fused to the second panoramic image, or the entire image can be fused to the second panoramic image, which can be set according to actual needs.
  • specifically, the extraction threshold of the first component of the first panoramic image can be adjusted to control which pixels of the first panoramic image are merged with the corresponding pixels of the second panoramic image. For example, the brightness values of the edge pixels of the visible light image are usually high, so the pixels whose brightness value is greater than a certain threshold can be selected, and then the Y component of the corresponding pixel of the fused image is obtained by weighting the Y component of these pixels and the Y component of the corresponding pixels in the second panoramic image.
  • the intensity of the fusion can also be controlled during the fusion process.
  • the first component of the first panoramic image can be fused with the first component of the second panoramic image according to the first weight to obtain the first component of the third panoramic image, where the size of the first weight can be set according to requirements.
  • if a strong fusion effect is desired, the first weight is set larger; if the fusion effect does not need to be too strong, the first weight can be set smaller, and it can be set flexibly according to actual needs.
  • after the first component of the first panoramic image is merged into the second panoramic image according to the first weight and the first component of the third panoramic image is obtained, the second component of the third panoramic image can also be obtained from the second component of the first panoramic image and a gain coefficient.
  • the first component can be a Y component
  • the second component can be a U, V component or a combination of both.
  • the gain coefficient can be obtained from the first component of the third panoramic image and the first component of the second panoramic image, of course, it can also be obtained by self-setting, which is not limited in this application.
  • for example, the Y component of the visible light image can be fused into the infrared image according to the first weight to obtain the Y component of the fused image, and then the U and V components of the fused image are obtained based on the U and V components of the visible light image and the gain coefficient.
  • the Y component Yb of the fused image can be determined by formula (3), which is as follows:
  • Yb = w*Y2 + (1 - w)*Y1 (Equation 3)
  • the gain coefficient ratio can be obtained according to formula (4), which is as follows:
  • ratio = Yb / Y2 (Equation 4)
  • the U component Ub and V component Vb of the fused image can be obtained according to formula (5) and formula (6), respectively:
  • Ub = ratio*U2 (Equation 5)
  • Vb = ratio*V2 (Equation 6)
  • where Y1 represents the Y component of the infrared image; Y2, U2, and V2 respectively represent the Y, U, and V components of the visible light image; Yb, Ub, and Vb respectively represent the Y, U, and V components of the fused image; and w represents the first weight.
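  • a minimal sketch of the fusion described by formulas (3) to (6) is given below; it assumes that the two panoramas are already aligned and equally sized, that the infrared panorama has been palette-colored to YUV, that the U and V planes are stored as zero-centered (signed) chroma, and that the threshold and weight values are illustrative:
```python
# Sketch of the YUV fusion of Equations (3)-(6); assumptions: aligned, equally sized
# panoramas, zero-centered U/V planes, illustrative threshold and weight values.
import numpy as np

def fuse_yuv(ir_yuv, vis_yuv, w=0.5, y_threshold=128.0):
    """ir_yuv, vis_yuv: float arrays of shape (H, W, 3) holding the Y, U, V planes."""
    y1, u1, v1 = ir_yuv[..., 0], ir_yuv[..., 1], ir_yuv[..., 2]
    y2, u2, v2 = vis_yuv[..., 0], vis_yuv[..., 1], vis_yuv[..., 2]

    mask = y2 > y_threshold                   # visible pixels carrying detail (e.g. edges)

    yb = w * y2 + (1.0 - w) * y1                                      # Equation (3)
    ratio = np.divide(yb, y2, out=np.ones_like(yb), where=y2 > 0)     # Equation (4)
    ub = ratio * u2                                                   # Equation (5)
    vb = ratio * v2                                                   # Equation (6)

    fused = ir_yuv.copy()                     # pixels below the threshold stay infrared
    fused[..., 0] = np.where(mask, yb, y1)
    fused[..., 1] = np.where(mask, ub, u1)
    fused[..., 2] = np.where(mask, vb, v1)
    return fused
```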
  • the first panoramic image can also be directly fused to the second panoramic image according to the second weight to obtain the third panoramic image.
  • that is, the pixel value of each pixel of the first panoramic image is merged with the pixel value of the corresponding pixel of the second panoramic image, where the weight given to the pixel value of each pixel of the first panoramic image is the second weight.
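  • a sketch of this direct fusion with a second weight, assuming the two panoramas are already aligned and equally sized, could be as simple as the following pixel-wise weighted average:
```python
# Sketch: direct pixel-wise fusion of the first panoramic image into the second one
# with a second weight w2 (both panoramas assumed aligned and of equal size).
import numpy as np

def fuse_direct(pano1, pano2, w2=0.3):
    blended = w2 * pano1.astype(np.float32) + (1.0 - w2) * pano2.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```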
  • in other embodiments, the images in the first group of images may also be fused with the corresponding images in the second group of images, so that the detail information of the second group of images is enhanced first and the enhanced images are then spliced.
  • S306. Determine the feature points of the third group of images according to the feature points of the first group of images.
  • S308. Stitch each image of the third group of images based on the feature points of the third group of images to obtain a fourth panoramic image.
  • specifically, each image in the first group of images may first be fused with the image corresponding to it in the second group of images, so as to obtain a third group of detail-enhanced images.
  • the two images to be fused are the two images collected by the two sensors when the pan-tilt rotates to a certain position.
  • the two images can be collected at the same time, or within a certain time interval.
  • the two sets of images may be mapped to a specified coordinate system according to the mapping relationship between the first set of images and the second set of images. It can be the coordinate system corresponding to the first group of images, or it can be the coordinate system corresponding to the second group of images.
  • then, the feature points of the third group of images can be determined from the feature points of the first group of images, and the images in the third group are stitched based on those feature points to obtain the fourth panoramic image.
  • the specific splicing process can refer to the description in the above image processing method, which will not be repeated here.
  • this application collects a set of visible light images and a set of infrared images through a visible light sensor and an infrared sensor fixed on the same pan-tilt and at a fixed relative position.
  • the visible light image can also be used to perform detail enhancement processing on the infrared image, so that the details of the infrared panoramic image are richer.
  • the first method is shown in Figure 4.
  • the visible light image is used to guide the infrared image to extract feature points, the infrared image is stitched, and the infrared panoramic image is obtained.
  • the visible light image is spliced to obtain a visible light panoramic image, and then the visible light panoramic image is fused to the infrared image for enhancement, and an enhanced infrared panoramic image is obtained.
  • the specific implementation process is as follows:
  • the infrared sensor and the visible light sensor fixed to the pan/tilt are used to obtain the visible light image and the infrared image when the pan/tilt rotates to multiple angles. Two pictures at the same angle are taken at the same time to minimize differences in exposure.
  • the detection method is not limited, and can be any feature point detection method, including but not limited to SIFT algorithm, SURF algorithm, ORB algorithm, etc.
  • the mapping that transfers the feature points of the visible light image detected in step 2 to the infrared image can be obtained from the pre-calibrated internal and external parameters of the cameras.
  • the mapping methods used include but are not limited to the homography matrix (Homograph matrix, H matrix), affine matrix (Affine matrix), etc.
  • the characteristic points of the infrared image can be obtained.
  • the following are the formulas for the mapping using the H matrix (assuming the object distance is infinite). According to formula (1), the H matrix can be obtained, and according to formula (2), the coordinates of the feature points on the infrared image can be obtained:
  • H = K2*R*K1^(-1) (Equation 1)
  • P2 = H*P1 (Equation 2)
  • where K1 represents the internal parameter matrix of the visible light camera, K2 represents the internal parameter matrix of the infrared camera, R represents the rotation matrix from the infrared camera coordinate system to the visible light camera coordinate system, H represents the homography matrix from the visible light camera coordinate system to the infrared camera coordinate system, P1 represents the coordinates of a point of the visible light image, and P2 represents the corresponding point of P1 on the infrared image.
  • in this embodiment, the visible light image is mapped to the infrared image coordinate system (the infrared image could equally be mapped to the visible light image coordinate system); the purpose is to bring the two groups of images into one coordinate system to facilitate subsequent image fusion.
  • the mapping relationship is the same as step 3.
  • Image mapping can be obtained by backward mapping and interpolation methods.
  • the mapped visible light image is cropped to be consistent with the infrared image FOV (field of view).
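  • the backward mapping and interpolation mentioned above could be realized, for example, as follows; the inverse-homography lookup, bilinear interpolation and image sizes are assumptions of this sketch:
```python
# Sketch: backward mapping of the visible light image onto the infrared image grid.
# For every destination pixel, the corresponding source pixel is looked up through
# the inverse homography and bilinearly interpolated; pixels falling outside the
# visible image are left black, which crops the result to the infrared FOV.
import cv2
import numpy as np

def backward_map(visible_img, H_vis_to_ir, ir_size_wh):
    w, h = ir_size_wh
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dst = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # (3, h*w)

    src = np.linalg.inv(H_vis_to_ir) @ dst           # destination -> source coordinates
    src = (src[:2] / src[2]).T.reshape(h, w, 2).astype(np.float32)

    return cv2.remap(visible_img, src[..., 0], src[..., 1], cv2.INTER_LINEAR)
```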
  • This step can use traditional feature point matching algorithms, such as SIFT algorithm, SURF algorithm, ORB algorithm, etc.
  • feature points are detected, mapped and matched for the images shot from the various angles, the spatial geometric relationship between each pair of images is obtained at the same time, and the geometric relationships between the images are globally optimized to obtain the final registration result.
  • the registration relationship between multiple infrared images and the registration relationship between multiple visible light images after mapping can be obtained.
  • This step can use traditional image synthesis technology, mainly for image exposure compensation and image fusion at the stitching seam, etc.
  • Existing algorithms for exposure compensation and splicing seam fusion can be used in this step, and will not be described in detail here.
  • the image fusion process can use the existing image fusion technology to fuse the visible light panorama and the infrared panorama to improve the detailed information of the infrared panorama.
  • the user can control the range and intensity of the fusion.
  • the edge of the visible light image or the detail information of the entire image is extracted to fuse the infrared image, and the weight of the visible light image in the fusion process can also be set according to actual needs.
  • the edge detection or image fusion method in each color space is applicable to this embodiment.
  • the following introduces a method for fusing, in the YUV color space, a visible light image with an infrared image that has been palette-colored.
  • the fusion process is as follows: extract the pixels whose Y component is greater than the preset threshold from the visible light image, and fuse these pixels with the corresponding pixels in the infrared image, where the weight of the Y component of the visible light image during the fusion is w.
  • the Y component Yb of the fused image can be calculated by formula (3):
  • Yb = w*Y2 + (1 - w)*Y1 (Equation 3)
  • the U and V components of the fused image are obtained from the U component and V component of the visible light image and the gain coefficient, where the gain coefficient ratio can be calculated according to formula (4):
  • ratio = Yb / Y2 (Equation 4)
  • the U component Ub and V component Vb of the fused image can then be obtained according to formula (5) and formula (6):
  • Ub = ratio*U2 (Equation 5)
  • Vb = ratio*V2 (Equation 6)
  • where Y1 represents the Y component of the infrared image; Y2, U2, and V2 respectively represent the Y, U, and V components of the visible light image; Yb, Ub, and Vb respectively represent the Y, U, and V components of the fused image; and w represents the Y component fusion weight of the visible light image.
  • the present application also provides an image processing device.
  • the image processing device includes a processor 61 and a memory 62 for storing computer instructions executable by the processor, and execution of the computer instructions by the processor can implement the following method:
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • the resolution of the first set of images is higher than that of the second set of images.
  • when the processor is configured to stitch each image in the second group of images based on the feature points of the second group of images to obtain a panoramic image, the method includes: registering each image in the second group of images according to the feature points of the second group of images, and synthesizing each image in the second group of images according to the registration result to obtain the second panoramic image.
  • before splicing each image in the second group of images based on the feature points of the second group of images, the processor is further configured to:
  • map the first group of images and the second group of images to a specified coordinate system according to the mapping relationship between the first group of images and the second group of images, where the mapping relationship is determined according to at least one of the following: the internal parameters of each sensor and the external parameters of each sensor.
  • the designated coordinate system includes: the coordinate system of the first group of images and/or the coordinate system of the second group of images.
  • after mapping the first group of images and the second group of images to the specified coordinate system according to the mapping relationship between the first group of images and the second group of images, the processor is further configured to:
  • the first group of images or the second group of images are cropped so that the angle of view of the first group of images is consistent with the angle of view of the second group of images.
  • the processor is further configured to: splice the images in the first group of images based on the feature points of the first group of images to obtain a first panoramic image.
  • the processor is further configured to: fuse the first panoramic image and the second panoramic image to obtain a third panoramic image.
  • when the processor is configured to fuse the first panoramic image and the second panoramic image, the method includes: extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image with the second panoramic image.
  • when the processor is configured to extract the first component of the first panoramic image and fuse the first component of the first panoramic image with the second panoramic image, the method includes:
  • the first component of the first panoramic image is fused with the first component of the second panoramic image according to the first weight to obtain the first component of the third panoramic image.
  • the extraction threshold of the first component of the first panoramic image is adjustable.
  • the second component of the third panoramic image is obtained by the second component of the second panoramic image according to a gain coefficient.
  • the gain coefficient is obtained from the first component of the third panoramic image and the first component of the second panoramic image.
  • when the processor is configured to fuse the first panoramic image and the second panoramic image, the method includes:
  • the first panoramic image is merged with the second panoramic image according to a second weight.
  • when the processor is configured to determine the feature points of the second group of images according to the feature points of the first group of images, the method includes:
  • the mapping point of the feature point of the first image in the second image is determined according to the mapping relationship between the first group of images and the second group of images, and the mapping relationship is determined according to at least one of the following: the internal parameters of each sensor and the external parameters of each sensor;
  • the mapping point is used as a feature point of the second image.
  • the mapping relationship is characterized by a homography matrix and/or an affine matrix.
  • the homography matrix is determined by the internal parameter matrices of the two sensors and the transformation matrix of the coordinate system of one of the two sensors with respect to the coordinate system of the other sensor.
  • the first set of images are visible light images
  • the second set of images are infrared images.
  • the present application also provides another image processing device.
  • the image processing device includes a processor 61 and a memory 62 for storing computer instructions executable by the processor, and execution of the computer instructions by the processor can also implement the following method:
  • the images in the third group of images are spliced based on the feature points of the third group of images to obtain a fourth panoramic image.
  • the imaging device includes a first image sensor 71, a second image sensor 72, and an image processing device 73.
  • the first image sensor and the second image sensor are connected to the image processing device.
  • the image processing device includes a processor 732 and a memory 731 for storing computer instructions executable by the processor, and execution of the computer instructions by the processor may implement the following method:
  • the resolution of the first set of images is higher than that of the second set of images.
  • when the processor is configured to stitch each image in the second group of images based on the feature points of the second group of images to obtain a panoramic image, the method includes: registering each image in the second group of images according to the feature points of the second group of images, and synthesizing each image in the second group of images according to the registration result to obtain the second panoramic image.
  • before splicing each image in the second group of images based on the feature points of the second group of images, the processor is further configured to:
  • map the first group of images and the second group of images to a specified coordinate system according to the mapping relationship between the first group of images and the second group of images, where the mapping relationship is determined according to at least one of the following: the internal parameters of each sensor and the external parameters of each sensor.
  • the designated coordinate system includes: the coordinate system of the first group of images and/or the coordinate system of the second group of images.
  • after mapping the first group of images and the second group of images to the specified coordinate system according to the mapping relationship between the first group of images and the second group of images, the processor is further configured to:
  • the first group of images or the second group of images are cropped so that the angle of view of the first group of images is consistent with the angle of view of the second group of images.
  • the processor is further configured to:
  • the images in the first group of images are spliced based on the feature points of the first group of images to obtain a first panoramic image.
  • the processor is further configured to: fuse the first panoramic image and the second panoramic image to obtain a third panoramic image.
  • when the processor is configured to fuse the first panoramic image and the second panoramic image, the method includes: extracting a first component of the first panoramic image, and fusing the first component of the first panoramic image with the second panoramic image.
  • when the processor is configured to extract the first component of the first panoramic image and fuse the first component of the first panoramic image with the second panoramic image, the method includes:
  • the first component of the first panoramic image is fused with the first component of the second panoramic image according to the first weight to obtain the first component of the third panoramic image.
  • the extraction threshold of the first component of the first panoramic image is adjustable.
  • the second component of the third panoramic image is obtained by the second component of the second panoramic image according to a gain coefficient.
  • the gain coefficient is obtained from the first component of the third panoramic image and the first component of the second panoramic image.
  • when the processor is configured to fuse the first panoramic image and the second panoramic image, the method includes:
  • the first panoramic image is merged with the second panoramic image according to a second weight.
  • when the processor is configured to determine the feature points of the second group of images according to the feature points of the first group of images, the method includes:
  • the mapping point of the feature point of the first image in the second image is determined according to the mapping relationship between the first sensor and the second sensor, and the mapping relationship is determined according to at least one of the following: the internal parameters of each sensor and the external parameters of each sensor;
  • the mapping point is used as a feature point of the second image.
  • the mapping relationship is characterized by a homography matrix and/or an affine matrix.
  • the homography matrix is determined by the internal parameter matrix of the first sensor, the internal parameter matrix of the second sensor, and the transformation matrix of the coordinate system of the first sensor relative to the coordinate system of the second sensor.
  • the first set of images are visible light images
  • the second set of images are infrared images.
  • the imaging device may be any of various terminal devices having two image sensors, such as an infrared camera, a mobile phone, and so on.
  • the application also provides a movable carrier, wherein the movable carrier may be a drone, an unmanned ship, a mobile terminal, a movable car or a movable robot.
  • the movable carrier is an unmanned aerial vehicle, including an unmanned aerial vehicle body 81 and an imaging device 82.
  • FIG. 9 is a block diagram of the logical structure of a movable carrier.
  • the movable carrier includes a body 91 and an imaging device 92.
  • the imaging device is mounted on the body.
  • the imaging device includes a first image sensor 921, a second image sensor, and an image processing device.
  • the image processing device includes a processor and a memory configured to store computer instructions executable by the processor, and executing the computer instructions by the processor can implement the following method:
  • the images in the second group of images are spliced based on the feature points of the second group of images to obtain a second panoramic image.
  • an embodiment of the present specification also provides a computer storage medium in which a program is stored, and the program is executed by a processor to implement the image processing method in any of the foregoing embodiments.
  • the embodiments of this specification may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
  • Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • the relevant part can refer to the part of the description of the method embodiment.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units.
  • Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement it without creative work.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Image processing method and apparatus, imaging device, movable carrier and storage medium. The method comprises: obtaining a first group of images and a second group of images, the first group of images and the second group of images being respectively acquired by two sensors having fixed relative positions; determining feature points of images in the second group of images according to feature points of images in the first group of images; and stitching the second group of images on the basis of the feature points of the images in the second group of images to obtain a second panoramic image. Feature point extraction for images with severely missing detail is guided by images having richer detail and more easily extracted feature points, so that more feature points of the detail-poor images can be extracted, which makes it possible to stitch the images with severely missing detail and to improve the quality of the stitched images.
PCT/CN2020/080219 2020-03-19 2020-03-19 Image processing method and apparatus, imaging device, movable carrier and storage medium WO2021184302A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/080219 WO2021184302A1 (fr) 2020-03-19 2020-03-19 Image processing method and apparatus, imaging device, movable carrier and storage medium
CN202080005077.1A CN112689850A (zh) 2020-03-19 2020-03-19 图像处理方法、装置、成像设备、可移动载体及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/080219 WO2021184302A1 (fr) 2020-03-19 2020-03-19 Image processing method and apparatus, imaging device, movable carrier and storage medium

Publications (1)

Publication Number Publication Date
WO2021184302A1 true WO2021184302A1 (fr) 2021-09-23

Family

ID=75457727

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080219 WO2021184302A1 (fr) 2020-03-19 2020-03-19 Image processing method and apparatus, imaging device, movable carrier and storage medium

Country Status (2)

Country Link
CN (1) CN112689850A (fr)
WO (1) WO2021184302A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418941A (zh) * 2021-12-10 2022-04-29 国网浙江省电力有限公司宁波供电公司 一种基于电力巡检设备检测数据的缺陷诊断方法及系统
CN115619782A (zh) * 2022-12-15 2023-01-17 常州海图信息科技股份有限公司 基于机器视觉的井筒360全景拼接检测系统及方法
CN117094895A (zh) * 2023-09-05 2023-11-21 杭州一隅千象科技有限公司 图像全景拼接方法及其系统
CN117745537A (zh) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 隧道设备温度检测方法、装置、计算机设备和存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022570B (zh) * 2022-01-05 2022-06-17 荣耀终端有限公司 相机间外参的标定方法及电子设备
CN116016816B (zh) * 2022-12-13 2024-03-29 之江实验室 一种改进l-orb算法的嵌入式gpu零拷贝全景图像拼接方法和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384383A (zh) * 2016-09-08 2017-02-08 哈尔滨工程大学 一种基于fast和freak特征匹配算法的rgb‑d和slam场景重建方法
US20170061703A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Image processing device and electronic system including the same
CN107154014A (zh) * 2017-04-27 2017-09-12 上海大学 一种实时彩色及深度全景图像拼接方法
US20180343432A1 (en) * 2017-05-23 2018-11-29 Microsoft Technology Licensing, Llc Reducing Blur in a Depth Camera System
CN109360150A (zh) * 2018-09-27 2019-02-19 轻客小觅智能科技(北京)有限公司 一种基于深度相机的全景深度图的拼接方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683043B (zh) * 2015-11-10 2020-03-06 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 一种多通道光学探测系统的并行图像拼接方法、装置
CN107292860B (zh) * 2017-07-26 2020-04-28 武汉鸿瑞达信息技术有限公司 一种图像处理的方法及装置
CN109360145A (zh) * 2018-10-30 2019-02-19 电子科技大学 一种基于涡流脉冲红外热图像拼接方法
CN109886878B (zh) * 2019-03-20 2020-11-03 中南大学 一种基于由粗到精配准的红外图像拼接方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061703A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Image processing device and electronic system including the same
CN106384383A (zh) * 2016-09-08 2017-02-08 哈尔滨工程大学 一种基于fast和freak特征匹配算法的rgb‑d和slam场景重建方法
CN107154014A (zh) * 2017-04-27 2017-09-12 上海大学 一种实时彩色及深度全景图像拼接方法
US20180343432A1 (en) * 2017-05-23 2018-11-29 Microsoft Technology Licensing, Llc Reducing Blur in a Depth Camera System
CN109360150A (zh) * 2018-09-27 2019-02-19 轻客小觅智能科技(北京)有限公司 一种基于深度相机的全景深度图的拼接方法及装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418941A (zh) * 2021-12-10 2022-04-29 国网浙江省电力有限公司宁波供电公司 一种基于电力巡检设备检测数据的缺陷诊断方法及系统
CN114418941B (zh) * 2021-12-10 2024-05-10 国网浙江省电力有限公司宁波供电公司 一种基于电力巡检设备检测数据的缺陷诊断方法及系统
CN115619782A (zh) * 2022-12-15 2023-01-17 常州海图信息科技股份有限公司 基于机器视觉的井筒360全景拼接检测系统及方法
CN117094895A (zh) * 2023-09-05 2023-11-21 杭州一隅千象科技有限公司 图像全景拼接方法及其系统
CN117094895B (zh) * 2023-09-05 2024-03-26 杭州一隅千象科技有限公司 图像全景拼接方法及其系统
CN117745537A (zh) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 隧道设备温度检测方法、装置、计算机设备和存储介质
CN117745537B (zh) * 2024-02-21 2024-05-17 微牌科技(浙江)有限公司 隧道设备温度检测方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN112689850A (zh) 2021-04-20

Similar Documents

Publication Publication Date Title
WO2021184302A1 (fr) Image processing method and apparatus, imaging device, movable carrier and storage medium
WO2021227359A1 (fr) Procédé et appareil de projection à base de véhicule aérien sans pilote, dispositif, et support de stockage
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
CN111179358B (zh) 标定方法、装置、设备及存储介质
WO2020014909A1 (fr) Procédé et dispositif de photographie, et véhicule aérien sans pilote
KR101666959B1 (ko) 카메라로부터 획득한 영상에 대한 자동보정기능을 구비한 영상처리장치 및 그 방법
CN112258579B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
CN109474780B (zh) 一种用于图像处理的方法和装置
US20180160046A1 (en) Depth-based zoom function using multiple cameras
WO2019042426A1 (fr) Procédé et appareil de traitement de scène en réalité augmentée, et support de stockage informatique
CN111915483B (zh) 图像拼接方法、装置、计算机设备和存储介质
WO2020007320A1 (fr) Procédé de fusion d'images à plusieurs angles de vision, appareil, dispositif informatique, et support de stockage
WO2014023231A1 (fr) Système et procédé d'imagerie optique de très grande résolution et à large champ de vision
US10489885B2 (en) System and method for stitching images
US20210120194A1 (en) Temperature measurement processing method and apparatus, and thermal imaging device
CN107749069B (zh) 图像处理方法、电子设备和图像处理系统
CN111866523B (zh) 全景视频合成方法、装置、电子设备和计算机存储介质
KR101705558B1 (ko) Avm 시스템의 공차 보정 장치 및 방법
CN114640833B (zh) 投影画面调整方法、装置、电子设备和存储介质
CN114615480B (zh) 投影画面调整方法、装置、设备、存储介质和程序产品
WO2019205087A1 (fr) Procédé et dispositif de stabilisation d'image
US11734877B2 (en) Method and device for restoring image obtained from array camera
CN113228104B (zh) 热图像和可见图像对的自动共配准
CN110796690B (zh) 图像匹配方法和图像匹配装置
US20210027439A1 (en) Orientation adjustment of objects in images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20925381

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20925381

Country of ref document: EP

Kind code of ref document: A1