CN112017117A - Panoramic image acquisition method and system based on thermal infrared imager
- Publication number: CN112017117A
- Application number: CN202010797974.2A
- Authority: CN (China)
- Prior art keywords: image, images, thermal infrared, infrared imager, matrix
- Prior art date: 2020-08-10
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4007 - Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
- G06T2200/32 - Indexing scheme for image data processing or generation, in general, involving image mosaicing
- G06F18/22 - Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06F18/25 - Pattern recognition; analysing; fusion techniques
- G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
Abstract
The invention discloses a panoramic image acquisition method and system based on a thermal infrared imager. The method comprises the following steps: performing infrared thermal imaging of a target multiple times to obtain N images, where N is a preset value and overlapping areas exist among the N images, and stitching the N images with computer software to obtain a panoramic image. The panoramic stitching algorithm stitches the infrared images shot by the user fully automatically and seamlessly, so that the user can observe and measure the temperature of a large building as a whole without added hardware cost and with essentially no increase in operational complexity.
Description
Technical Field
The invention belongs to the technical field of infrared imaging, and particularly relates to a panoramic image acquisition method and system based on a thermal infrared imager.
Background
A thermal infrared imager is limited by the size of its infrared detector, so its field of view is generally small. Objects of ordinary size can be conveniently photographed and measured in the infrared, but for large buildings it is difficult to capture the overall appearance in a single shot.
When photographing a large building, a visible-light camera can simply use a wide-angle lens and obtain good results. A thermal infrared imager, however, generally needs a temperature measurement function, and accurate calibration of the temperature measurement curve requires that all pixels of the detector array respond identically, or nearly identically, to external infrared radiation; a wide-angle lens clearly does not have this property. Adding a wide-angle lens to an ordinary thermal imager for panoramic photography of large building targets is therefore costly, and the temperature measurement accuracy is difficult to guarantee.
Disclosure of Invention
In view of at least one defect or improvement requirement of the prior art, the invention provides a panoramic image acquisition method and system based on a thermal infrared imager, which realize overall observation and temperature measurement of a large-format target building on an ordinary thermal imager.
In order to achieve the above object, according to a first aspect of the present invention, there is provided a panoramic image acquisition method based on a thermal infrared imager, comprising: performing infrared thermal imaging of a target multiple times to obtain N images, wherein N is a preset value and overlapping areas exist among the N images, and stitching the N images to obtain a panoramic image;
wherein the stitching comprises the following steps:
denoting the two images to be stitched as a first image and a second image, and extracting first feature points of the first image and second feature points of the second image with a FAST algorithm;
performing non-maximum suppression on the acquired first feature points and second feature points to obtain first local maximum points and second local maximum points respectively;
describing the first local maximum points and the second local maximum points with an ORB description algorithm to generate first descriptors for the first feature points and second descriptors for the second feature points;
performing bidirectional Hamming-distance matching between the first feature points and the second feature points;
refining the matching result with the RANSAC (random sample consensus) criterion, and solving a perspective transformation model T and its inverse matrix Tinv during the refinement;
padding the edges of the first image to obtain a first image matrix, initializing a second image matrix of the same size as the first image matrix, performing an inverse bilinear interpolation operation on the initialized second image matrix according to the inverse matrix Tinv to complete the coordinate transformation of the second image, and storing the coordinate-transformed second image in the second image matrix;
compensating the brightness of the second image matrix according to the difference of the mean gray values of the overlapping regions of the first image matrix and the second image matrix;
determining the stitching seam of the first image matrix and the second image matrix, and initializing a first fusion weight and a second fusion weight according to the seam position, the first fusion weight and the second fusion weight having the same size as the first image matrix;
performing Gaussian smoothing on the first fusion weight and the second fusion weight, and fusing the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain a fused image IM, where IM1 is the first image matrix, IM2 is the second image matrix, W1 is the first fusion weight, and W2 is the second fusion weight (the transformation and fusion are written out after these steps);
and cropping the fused image IM by its minimum circumscribed rectangle to obtain the stitched image.
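For reference, the perspective transformation model T, its inverse Tinv and the weighted fusion above can be written in standard homography notation. The block below is a notation sketch assuming the usual projective model in homogeneous coordinates; it introduces no quantities beyond T, Tinv, IM1, IM2, W1 and W2 as defined in the steps above.

```latex
% Notation sketch: T is the 3x3 perspective (projective) transformation model,
% written in homogeneous coordinates; \odot denotes pixel-wise multiplication.
\begin{aligned}
% Forward model: a point (x, y) of the second image maps to (x', y') in the
% coordinate frame of the (padded) first image.
\begin{bmatrix} u \\ v \\ s \end{bmatrix}
  &= T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
  \qquad x' = u / s, \quad y' = v / s, \\[4pt]
% Reverse mapping used by the inverse bilinear interpolation step: each pixel
% (x', y') of the second image matrix is pulled back through Tinv = T^{-1}.
\begin{bmatrix} u' \\ v' \\ s' \end{bmatrix}
  &= T_{\mathrm{inv}} \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix},
  \qquad x = u' / s', \quad y = v' / s', \qquad T_{\mathrm{inv}} = T^{-1}, \\[4pt]
% Pixel-wise weighted fusion of the two aligned image matrices:
IM &= W_1 \odot IM_1 + W_2 \odot IM_2 .
\end{aligned}
```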
Preferably, during the imaging process, the current field of view of the thermal infrared imager is acquired and displayed in a first area of the operation interface of the thermal infrared imager, the images already captured are displayed as thumbnails in a second area of the operation interface, and a user operation prompt is displayed in a third area of the operation interface.
According to a second aspect of the present invention, there is provided a thermal infrared imager-based panoramic image acquisition system, comprising:
the thermal infrared imager is used for performing infrared thermal imaging of a target multiple times to obtain N images, wherein N is a preset value and overlapping regions exist among the N images; the panoramic stitching module is used for stitching the N images to obtain a panoramic image;
the panoramic stitching module comprises:
the feature point extraction module, used for denoting the two images to be stitched as a first image and a second image and extracting first feature points of the first image and second feature points of the second image with a FAST algorithm;
the non-maximum suppression module, used for performing non-maximum suppression on the acquired first feature points and second feature points to obtain first local maximum points and second local maximum points respectively;
the descriptor generation module, used for describing the first local maximum points and the second local maximum points with an ORB description algorithm to generate first descriptors for the first feature points and second descriptors for the second feature points;
the coarse matching module, used for performing bidirectional Hamming-distance matching between the first feature points and the second feature points;
the fine matching module, used for refining the matching result with the RANSAC criterion and solving a perspective transformation model T and its inverse matrix Tinv during the refinement;
the coordinate transformation module, used for padding the edges of the first image to obtain a first image matrix, initializing a second image matrix of the same size as the first image matrix, performing an inverse bilinear interpolation operation on the initialized second image matrix according to the inverse matrix Tinv to complete the coordinate transformation of the second image, and storing the coordinate-transformed second image in the second image matrix;
the brightness compensation module, used for compensating the brightness of the second image matrix according to the difference of the mean gray values of the overlapping regions of the first image matrix and the second image matrix;
the fusion weight initialization module, used for determining the stitching seam of the first image matrix and the second image matrix and initializing a first fusion weight and a second fusion weight according to the seam position, the first fusion weight and the second fusion weight having the same size as the first image matrix;
the fusion module, used for performing Gaussian smoothing on the first fusion weight and the second fusion weight and fusing the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain a fused image IM, where IM1 is the first image matrix, IM2 is the second image matrix, W1 is the first fusion weight, and W2 is the second fusion weight;
and the stitching module, used for cropping the fused image IM by its minimum circumscribed rectangle to obtain the stitched image.
Preferably, the thermal infrared imager is further used for acquiring the current field of view of the thermal infrared imager during imaging, displaying the current field of view of the thermal infrared imager in a first area of an operation interface of the thermal infrared imager, displaying an imaged image in a thumbnail mode in a second area of the operation interface of the thermal infrared imager, and displaying a user operation prompt in a third area of the operation interface of the thermal infrared imager.
In general, compared with the prior art, the invention realizes overall observation and temperature measurement of large buildings without increasing hardware cost and with essentially no increase in operational complexity. With this scheme, the user obtains a wider-format image while image quality and temperature measurement accuracy are maintained, which effectively improves product competitiveness. Specifically, the beneficial effects include:
(1) Low system cost. The entire processing flow is completed by a software algorithm; the user needs only a small amount of additional operation and no peripheral hardware has to be added, so overall observation and temperature measurement of a large-format target building can be realized on an ordinary thermal imager.
(2) Small image distortion. In contrast to the severe distortion that occurs when a wide-angle lens is used to acquire an oversized image, this scheme seamlessly synthesizes the originally captured sub-images into one large-format image through the panoramic stitching algorithm and ensures that the result has no obvious distortion.
(3) Accurate and reliable temperature measurement. Whereas the temperature measurement curve of a wide-angle lens is difficult to calibrate accurately, this scheme stitches the raw data of the sub-images shot with an ordinary lens directly into a 16-bit large-format image, so in principle the accuracy of the temperature measurement result is unaffected.
Drawings
FIG. 1 is a schematic view of panoramic image acquisition according to an embodiment of the present invention;
FIG. 2 is a schematic view of an imaging operator interface according to an embodiment of the present invention;
FIG. 3 is a schematic view of an imaging operation of an embodiment of the present invention;
FIG. 4 is a schematic illustration of a stitching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the minimum circumscribed rectangle cropping process according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The panoramic image acquisition method based on the thermal infrared imager comprises an imaging step and a panoramic stitching step. In the imaging step, infrared thermal imaging of the target is performed multiple times to obtain N images, where N is any preset value greater than or equal to 2 and overlapping regions exist among the N images. The panoramic stitching step stitches the N images to obtain a panoramic image and may be implemented in computer software.
Taking N = 9, i.e. 3 × 3 panoramic stitching, as an example, the main flow is shown in FIG. 1: the user of a thermal infrared imager with the panoramic photographing function enters the panoramic photographing mode through a menu option and, following the operation prompts, photographs 3 × 3 sub-images with roughly 30% overlap at different viewing positions in the horizontal and vertical directions; after the sub-images are imported into a computer via WIFI or a data cable and panoramic stitching is selected in the infrared analysis software, the module automatically and seamlessly stitches the captured sub-images into one large-format panoramic image of the target building. With this scheme, the user can directly observe the panoramic image of the target building and perform subsequent processing such as temperature measurement, which effectively improves the user experience when photographing a large building with a thermal imager.
Preferably, the operating interface of the thermal infrared imager is divided into three areas. During imaging, the current field of view of the thermal infrared imager is acquired and displayed in the first area of the operating interface, the images already captured are displayed as thumbnails in the second area, and a user operation prompt is displayed in the third area. Taking the stitching of 3 × 3 = 9 sub-images as an example, the user enters the panoramic photographing mode through a menu option; the operating interface of the display screen during photographing is shown in FIG. 2, and the user then performs panoramic photographing according to the flow shown in FIG. 3.
After the N images are obtained, the stitching of the N images preferably comprises the following steps:
and S1, reading 2 images in the N images for splicing to obtain a spliced image.
And S2, if N is greater than 2, continuing to read the images which are not spliced in the N images, and splicing the images which are not spliced with the spliced images obtained in the previous time. Namely, the images which are not spliced are spliced with the spliced images obtained in step S1.
And S3, if N is greater than 3, repeating the step S2 until the splicing of the N images is completed, and obtaining the panoramic image. Namely, the images which are not spliced are spliced with the spliced images obtained in the last step of S2, and the splicing is executed in a circulating way until the splicing of the N images is completed.
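Written as code, and purely for illustration, the S1-S3 flow is a simple fold over the image list. The function name stitch_all is arbitrary, and stitch_pair is a hypothetical pairwise routine such as the one sketched after steps (1) to (10) below.

```python
# Minimal sketch of the S1-S3 loop (illustrative; stitch_pair is hypothetical):
# stitch the first two images, then fold each remaining image into the result.
def stitch_all(images):
    panorama = stitch_pair(images[0], images[1])   # S1: first two images
    for img in images[2:]:                         # S2, S3: remaining images
        panorama = stitch_pair(panorama, img)
    return panorama
```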
Preferably, in steps S1, S2 and S3 above, the pairwise stitching method shown in FIG. 4 comprises the following steps:
(1) Denote the 2 images to be stitched as Image1 and Image2, and extract feature points points1 of Image1 and points2 of Image2 with the FAST algorithm; if the number of feature points does not meet the preset requirement, adjust the feature-point extraction threshold of the FAST algorithm and extract again.
(2) Apply non-maximum suppression to the acquired feature points points1 and points2 to obtain local maximum points loc1 and loc2.
(3) Describe the local maximum points loc1 and loc2 with the ORB description algorithm, generating descriptor Des1 for points1 and descriptor Des2 for points2.
(4) Perform bidirectional Hamming-distance matching between points1 and points2; if the number of matched point pairs is too small, adjust the Hamming matching threshold and match again.
(5) Refine the Hamming matching result with the RANSAC (random sample consensus) criterion, solving the perspective transformation model T and its inverse matrix Tinv during the refinement.
(6) Pad (edge-extend) Image1 to obtain image matrix IM1, initialize an image matrix IM2 of the same size as IM1, and perform an inverse bilinear interpolation operation on the initialized IM2 according to the inverse matrix Tinv to complete the coordinate transformation of Image2; store the coordinate-transformed Image2 in IM2.
(7) Compensate the brightness of IM2 according to the difference of the mean gray values of the overlapping regions of IM1 and IM2.
(8) Determine the stitching seam between IM1 and IM2 and initialize the fusion weights W1 and W2 according to the seam position, with W1 and W2 the same size as IM1.
(9) Apply Gaussian smoothing to W1 and W2 and fuse the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain the fused image IM.
(10) As shown in FIG. 5, crop the fused image IM by its minimum circumscribed rectangle to obtain the stitched image PIC. A code sketch of steps (1) to (10) is given below.
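The following sketch shows one way steps (1) to (10) could be realised with OpenCV. It is a minimal illustration under stated assumptions, not the implementation of the invention: the function name stitch_pair, the padding size, the FAST threshold and the Gaussian kernel are arbitrary; cv2.warpPerspective stands in for the inverse bilinear interpolation of step (6), since it resamples the destination through the inverse mapping with bilinear interpolation; a distance-transform feathering stands in for the seam-based weight initialisation of step (8); and the threshold-adjustment fallbacks of steps (1) and (4) are omitted.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, pad=200, fast_thresh=20):
    """Stitch img2 onto img1 (single-channel arrays, e.g. 16-bit IR frames)."""
    # 8-bit views for feature detection only; the fusion uses the raw values.
    g1 = cv2.normalize(img1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    g2 = cv2.normalize(img2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # (1)-(2) FAST corners with non-maximum suppression
    fast = cv2.FastFeatureDetector_create(threshold=fast_thresh,
                                          nonmaxSuppression=True)
    kp1, kp2 = fast.detect(g1), fast.detect(g2)

    # (3) ORB descriptors for the retained local maxima
    orb = cv2.ORB_create()
    kp1, des1 = orb.compute(g1, kp1)
    kp2, des2 = orb.compute(g2, kp2)

    # (4) Hamming-distance matching; crossCheck=True gives bidirectional matching
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des2, des1)            # query = Image2, train = Image1

    # (6, first part) pad Image1 so the warped Image2 has room on all sides
    IM1 = cv2.copyMakeBorder(img1.astype(np.float32), pad, pad, pad, pad,
                             cv2.BORDER_CONSTANT, value=0)
    h, w = IM1.shape

    # (5) RANSAC refinement; T maps Image2 coordinates into the padded canvas
    # (assumes at least 4 surviving matches)
    pts2 = np.float32([kp2[m.queryIdx].pt for m in matches])
    pts1 = np.float32([kp1[m.trainIdx].pt for m in matches]) + pad
    T, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

    # (6) warpPerspective resamples through the inverse map with bilinear interpolation
    IM2 = cv2.warpPerspective(img2.astype(np.float32), T, (w, h),
                              flags=cv2.INTER_LINEAR)

    # (7) brightness compensation from the overlap-region mean difference
    m1, m2 = IM1 > 0, IM2 > 0
    overlap = m1 & m2
    if overlap.any():
        IM2[m2] += IM1[overlap].mean() - IM2[overlap].mean()

    # (8)-(9) fusion weights: distance-transform feathering stands in for the
    # seam-based initialisation, followed by Gaussian smoothing
    d1 = cv2.distanceTransform(m1.astype(np.uint8), cv2.DIST_L2, 3)
    d2 = cv2.distanceTransform(m2.astype(np.uint8), cv2.DIST_L2, 3)
    blend1 = d1 / (d1 + d2 + 1e-6)
    W1 = np.where(m1 & ~m2, 1.0, 0.0) + np.where(overlap, blend1, 0.0)
    W2 = np.where(m2 & ~m1, 1.0, 0.0) + np.where(overlap, 1.0 - blend1, 0.0)
    W1 = cv2.GaussianBlur(W1.astype(np.float32), (31, 31), 0)
    W2 = cv2.GaussianBlur(W2.astype(np.float32), (31, 31), 0)
    IM = IM1 * W1 + IM2 * W2

    # (10) crop to the minimum bounding rectangle of the valid region
    x, y, bw, bh = cv2.boundingRect(cv2.findNonZero((m1 | m2).astype(np.uint8)))
    return IM[y:y + bh, x:x + bw]
```

Applying stitch_pair repeatedly through a loop such as the stitch_all sketch above reproduces the S1-S3 flow; keeping the intermediate result in floating point preserves the original pixel values, which is in line with the 16-bit large-format output described in the beneficial effects.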
The panoramic image acquisition system based on the thermal infrared imager comprises a panoramic photographing module and a panoramic stitching module. The panoramic photographing module is a thermal infrared imager and is used for performing infrared thermal imaging of a target multiple times to obtain N images, where N is a preset value and overlapping regions exist among the N images. The panoramic stitching module is used for stitching the N images to obtain a panoramic image.
The panoramic stitching module comprises:
the feature point extraction module, used for denoting the 2 images to be stitched as Image1 and Image2 and extracting feature points points1 of Image1 and points2 of Image2;
the non-maximum suppression module, used for performing non-maximum suppression on the acquired feature points points1 and points2 to obtain local maximum points loc1 and loc2 respectively;
the descriptor generation module, used for describing the local maximum points loc1 and loc2 with the ORB description algorithm, generating descriptor Des1 for points1 and descriptor Des2 for points2;
the coarse matching module, used for performing bidirectional Hamming-distance matching between points1 and points2;
the fine matching module, used for refining the matching result with the RANSAC criterion and solving the perspective transformation model T and its inverse matrix Tinv during the refinement;
the coordinate transformation module, used for padding the edges of Image1 to obtain image matrix IM1, initializing an image matrix IM2 of the same size as IM1, performing an inverse bilinear interpolation operation on the initialized IM2 according to the inverse matrix Tinv to complete the coordinate transformation of Image2, and storing the coordinate-transformed Image2 in IM2;
the brightness compensation module, used for compensating the brightness of IM2 according to the difference of the mean gray values of the overlapping regions of IM1 and IM2;
the fusion weight initialization module, used for determining the stitching seam of IM1 and IM2 and initializing the fusion weights W1 and W2 according to the seam position, with W1 and W2 the same size as IM1;
the fusion module, used for performing Gaussian smoothing on W1 and W2 and fusing the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain the fused image IM;
and the stitching module, used for cropping the fused image IM by its minimum circumscribed rectangle to obtain the stitched image.
Preferably, the thermal infrared imager is further used for acquiring the current field of view of the thermal infrared imager during imaging, displaying the current field of view of the thermal infrared imager in a first area of an operation interface of the thermal infrared imager, displaying an imaged image in a thumbnail mode in a second area of the operation interface of the thermal infrared imager, and displaying a user operation prompt in a third area of the operation interface of the thermal infrared imager.
The implementation principle and technical effect of the panoramic image acquisition system are similar to those of the method, and are not described herein again.
It should be noted that, in any of the above embodiments, the steps need not be executed in the order of their numbering; unless the execution logic requires a particular order, they may be executed in any other feasible order.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (6)
1. A panoramic image acquisition method based on a thermal infrared imager, characterized by comprising the following steps: performing infrared thermal imaging of a target multiple times to obtain N images, wherein N is a preset value and overlapping areas exist among the N images, and stitching the N images to obtain a panoramic image;
wherein the stitching comprises the following steps:
denoting the two images to be stitched as a first image and a second image, and extracting first feature points of the first image and second feature points of the second image with a FAST algorithm;
performing non-maximum suppression on the acquired first feature points and second feature points to obtain first local maximum points and second local maximum points respectively;
describing the first local maximum points and the second local maximum points with an ORB description algorithm to generate first descriptors for the first feature points and second descriptors for the second feature points;
performing bidirectional Hamming-distance matching between the first feature points and the second feature points;
refining the matching result with the RANSAC (random sample consensus) criterion, and solving a perspective transformation model T and its inverse matrix Tinv during the refinement;
padding the edges of the first image to obtain a first image matrix, initializing a second image matrix of the same size as the first image matrix, performing an inverse bilinear interpolation operation on the initialized second image matrix according to the inverse matrix Tinv to complete the coordinate transformation of the second image, and storing the coordinate-transformed second image in the second image matrix;
compensating the brightness of the second image matrix according to the difference of the mean gray values of the overlapping regions of the first image matrix and the second image matrix;
determining the stitching seam of the first image matrix and the second image matrix, and initializing a first fusion weight and a second fusion weight according to the seam position, the first fusion weight and the second fusion weight having the same size as the first image matrix;
performing Gaussian smoothing on the first fusion weight and the second fusion weight, and fusing the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain a fused image IM, where IM1 is the first image matrix, IM2 is the second image matrix, W1 is the first fusion weight, and W2 is the second fusion weight;
and cropping the fused image IM by its minimum circumscribed rectangle to obtain the stitched image.
2. The method of claim 1, wherein during the imaging process a current field of view of the thermal infrared imager is acquired and displayed in a first region of an operating interface of the thermal infrared imager, the images already captured are displayed as thumbnails in a second region of the operating interface, and a user operation prompt is displayed in a third region of the operating interface.
3. The thermal infrared imager-based panoramic image acquisition method of claim 1, wherein the stitching of the N images comprises the following steps:
S1, reading 2 of the N images and stitching them to obtain a stitched image;
S2, if N is greater than 2, reading one of the remaining unstitched images and stitching it with the stitched image obtained in the previous step;
and S3, if N is greater than 3, repeating step S2 until all N images are stitched, obtaining the panoramic image.
4. A panoramic image acquisition system based on a thermal infrared imager is characterized by comprising:
the thermal infrared imager is used for performing infrared thermal imaging of a target multiple times to obtain N images, wherein N is a preset value and overlapping regions exist among the N images; the panoramic stitching module is used for stitching the N images to obtain a panoramic image;
the panoramic stitching module comprises:
the feature point extraction module, used for denoting the two images to be stitched as a first image and a second image and extracting first feature points of the first image and second feature points of the second image with a FAST algorithm;
the non-maximum suppression module, used for performing non-maximum suppression on the acquired first feature points and second feature points to obtain first local maximum points and second local maximum points respectively;
the descriptor generation module, used for describing the first local maximum points and the second local maximum points with an ORB description algorithm to generate first descriptors for the first feature points and second descriptors for the second feature points;
the coarse matching module, used for performing bidirectional Hamming-distance matching between the first feature points and the second feature points;
the fine matching module, used for refining the matching result with the RANSAC criterion and solving a perspective transformation model T and its inverse matrix Tinv during the refinement;
the coordinate transformation module, used for padding the edges of the first image to obtain a first image matrix, initializing a second image matrix of the same size as the first image matrix, performing an inverse bilinear interpolation operation on the initialized second image matrix according to the inverse matrix Tinv to complete the coordinate transformation of the second image, and storing the coordinate-transformed second image in the second image matrix;
the brightness compensation module, used for compensating the brightness of the second image matrix according to the difference of the mean gray values of the overlapping regions of the first image matrix and the second image matrix;
the fusion weight initialization module, used for determining the stitching seam of the first image matrix and the second image matrix and initializing a first fusion weight and a second fusion weight according to the seam position, the first fusion weight and the second fusion weight having the same size as the first image matrix;
the fusion module, used for performing Gaussian smoothing on the first fusion weight and the second fusion weight and fusing the two images by the formula IM = IM1 × W1 + IM2 × W2 to obtain a fused image IM, where IM1 is the first image matrix, IM2 is the second image matrix, W1 is the first fusion weight, and W2 is the second fusion weight;
and the stitching module, used for cropping the fused image IM by its minimum circumscribed rectangle to obtain the stitched image.
5. The thermal infrared imager-based panoramic image acquisition system of claim 4, wherein the thermal infrared imager is further configured to acquire a current field of view of the thermal infrared imager during the imaging process, display the current field of view of the thermal infrared imager in a first region of an operating interface of the thermal infrared imager, display the imaged image in a thumbnail in a second region of the operating interface of the thermal infrared imager, and display a user operation prompt in a third region of the operating interface of the thermal infrared imager.
6. The system of claim 4, wherein the stitching of the N images comprises the following steps:
S1, reading 2 of the N images and stitching them to obtain a stitched image;
S2, if N is greater than 2, reading one of the remaining unstitched images and stitching it with the stitched image obtained in the previous step;
and S3, if N is greater than 3, repeating step S2 until all N images are stitched, obtaining the panoramic image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010797974.2A CN112017117A (en) | 2020-08-10 | 2020-08-10 | Panoramic image acquisition method and system based on thermal infrared imager |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010797974.2A CN112017117A (en) | 2020-08-10 | 2020-08-10 | Panoramic image acquisition method and system based on thermal infrared imager |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112017117A true CN112017117A (en) | 2020-12-01 |
Family
ID=73499577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010797974.2A Pending CN112017117A (en) | 2020-08-10 | 2020-08-10 | Panoramic image acquisition method and system based on thermal infrared imager |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112017117A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204440A (en) * | 2016-06-29 | 2016-12-07 | 北京互信互通信息技术有限公司 | A kind of multiframe super resolution image reconstruction method and system |
CN107784632A (en) * | 2016-08-26 | 2018-03-09 | 南京理工大学 | A kind of infrared panorama map generalization method based on infra-red thermal imaging system |
CN107945113A (en) * | 2017-11-17 | 2018-04-20 | 北京天睿空间科技股份有限公司 | The antidote of topography's splicing dislocation |
CN109859137A (en) * | 2019-02-14 | 2019-06-07 | 重庆邮电大学 | A kind of irregular distortion universe bearing calibration of wide angle camera |
Non-Patent Citations (3)
Title |
---|
Liu Yunbo (柳运波): "Research on Key Technologies of Panoramic Image Stitching", China Master's Theses Full-text Database, no. 1, 15 January 2014 (2014-01-15), pages 138 - 1914 *
Niu Wei (牛卫): "Research and Implementation of Image Stitching Algorithms for an Infrared Omnidirectional Warning System", China Master's Theses Full-text Database, no. 4, 15 April 2018 (2018-04-15), pages 138 - 2824 *
Xian Qinyi (鲜秦毅): "Research and Implementation of Infrared Omnidirectional Image Stitching Algorithms", China Master's Theses Full-text Database, no. 2, 15 February 2019 (2019-02-15), pages 138 - 2105 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5580164B2 (en) | Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program | |
JP6263623B2 (en) | Image generation method and dual lens apparatus | |
US9986158B2 (en) | Method and apparatus for photographing a panoramic image | |
CN104995905B (en) | Image processing equipment, filming control method and program | |
WO2015180659A1 (en) | Image processing method and image processing device | |
JP4010754B2 (en) | Image processing apparatus, image processing method, and computer-readable recording medium | |
TWI493504B (en) | Method for combining images | |
EP2328125A1 (en) | Image splicing method and device | |
JP6436783B2 (en) | Image processing apparatus, imaging apparatus, image processing method, program, and storage medium | |
JP6047025B2 (en) | Imaging apparatus and control method thereof | |
CN108965742A (en) | Abnormity screen display method, apparatus, electronic equipment and computer readable storage medium | |
CN118014832B (en) | Image stitching method and related device based on linear feature invariance | |
CN111385461A (en) | Panoramic shooting method and device, camera and mobile terminal | |
CN111654624B (en) | Shooting prompting method and device and electronic equipment | |
CN114485953A (en) | Temperature measuring method, device and system | |
KR100934211B1 (en) | How to create a panoramic image on a mobile device | |
US20090059018A1 (en) | Navigation assisted mosaic photography | |
CN110675349B (en) | Endoscopic imaging method and device | |
CN110796690B (en) | Image matching method and image matching device | |
CN112017117A (en) | Panoramic image acquisition method and system based on thermal infrared imager | |
JP2017103695A (en) | Image processing apparatus, image processing method, and program of them | |
KR101132976B1 (en) | Mobile device with a plurality of camera, method for display using the sane | |
JP4266736B2 (en) | Image processing method and apparatus | |
JP7393179B2 (en) | Photography equipment | |
Oliveira et al. | Lenslet light field panorama creation: A sub-aperture image stitching approach |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 