CN113837929B - Graph splicing device and method - Google Patents

Graph splicing device and method

Info

Publication number
CN113837929B
CN113837929B (application CN202111104216.9A)
Authority
CN
China
Prior art keywords
image
images
expressed
frames
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111104216.9A
Other languages
Chinese (zh)
Other versions
CN113837929A (en)
Inventor
张艳超
余毅
高策
宋聪聪
齐东浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202111104216.9A
Publication of CN113837929A
Application granted
Publication of CN113837929B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an image stitching device comprising a biaxial rotation platform and an imaging system. The biaxial rotation platform comprises a horizontal shaft and a vertical shaft; the imaging system adjusts its pitch angle by rotating around the horizontal axis and performs 360° circumferential scanning by rotating around the vertical axis. The imaging system comprises an optical system and an image detector, wherein the detector target surface is perpendicular to the optical axis of the optical system and the center of the target surface intersects the optical axis. Using this device, the invention further provides an image stitching method that combines pointing-angle information to realize row-by-row automatic registration of images. Because the method uses no feature matching, it effectively avoids the drawbacks of conventional stitching algorithms and offers good adaptability and real-time performance for panoramic imaging of regions with indistinct features, such as the sky.

Description

Graph splicing device and method
Technical Field
The invention relates to the field of optical imaging, in particular to a graph splicing device and a graph splicing method.
Background
In recent years, unmanned aerial vehicles have increasingly become a main weapon of precision military strikes, changing traditional combat modes. UAVs combine high attack precision, a small radar cross-section (RCS), and low flight altitude, which makes them difficult for radar to detect; countries around the world are currently exploring effective means of defense.
The air defense early warning system performs real-time panoramic imaging of a designated airspace through uninterrupted panoramic scanning with area-array image detectors of different wavebands. It can compensate for radar's weak detection capability against low-altitude, small targets, and offers strong recognition capability, high early-warning precision, good concealment, low cost, and no radiation.
The air defense early warning system adopts a two-axis stabilized platform, realizing a working mode of adjustable pitch and 360° circumferential scanning in azimuth, which allows the area-array infrared detector to acquire clear images. Image stitching technology is then used to acquire the panoramic image of the early-warning area in real time, and suspicious-target detection plus multi-frame feature association within the panoramic image enable automatic capture, identification, and early warning of suspicious targets. Image stitching is thus a necessary means of panoramic imaging and a key technology determining the visualization level of an air defense early warning system.
In 2014, Julio Zaragoza et al. proposed the APAP (As-Projective-As-Possible) algorithm to avoid the misalignment, and the over-reliance on post-processing and artifact removal, of the stitching algorithms of the time. Most current stitching applications adopt the APAP algorithm and its various derivatives.
The Chinese invention patent "Video stitching method and device" (application number CN201811625266.X, filed December 28, 2018) proposes a method for stitching a first video and a second video, comprising: extracting, matching, and screening features of a first target frame of the first video and a second target frame of the second video to obtain a first feature-point-pair set of the two target frames; forward-tracking the two target frames to obtain a second feature-point-pair set; backward-tracking them to obtain a third feature-point-pair set; and computing the geometric transformation between the two target frames from the union of the three feature-point-pair sets to register the two frames of images.
The Chinese invention patent with application number CN201611075609.0, filed November 29, 2016, discloses a method and device for stitching fisheye images into a panoramic image and panoramic video: N fisheye images are captured at the same moment with N fisheye lenses; according to the mappings among longitude-latitude, spherical, and plane coordinates, feature points are matched in the intersection region of each pair among the N cylindrical-projection images to obtain matched feature-point pairs for each intersection region; an attitude matrix of the matched feature-point pairs is obtained from the mapping between spherical and longitude-latitude coordinates; taking the coordinate system of one fisheye lens as the world coordinate system, the attitude matrix of each fisheye lens in the world coordinate system is computed from the attitude matrices of the matched feature-point pairs; the pixel value of each pixel in the panoramic image is determined from each lens's attitude matrix; and the panoramic image is assembled from those pixel values.
The Chinese invention patent with application number CN202110388923.9, filed April 12, 2021, provides a method and system for stitching low-altitude aerial images: a semantic segmentation network is built, a low-altitude aerial image dataset is collected, salient buildings in the images are annotated, and the annotated data are fed to the network for training; the aerial images to be stitched are passed through the trained segmentation model to detect and pixel-wise segment the salient buildings, the circumscribed rectangle of each salient building is obtained, and the corresponding region in the aerial image is blanked out to obtain an orthographic circumscribed rectangular image with optimal orthographic properties; at least two blanked aerial images are stitched into an initial mosaic, the orthographic circumscribed rectangular image is expanded, and the expanded rectangle is fused with the initial mosaic to produce the final stitched image. This approach can effectively mitigate the non-orthographic projection, ghosting, and mis-cut artifacts produced when stitching low-altitude aerial photographs.
The key to image stitching is effective registration of the overlapping area of two adjacent frames. Common image registration methods are generally based on feature extraction and feature-pair matching, which has the following limitations: first, strong dependence on image features, hence poor applicability to regions with indistinct features such as sky and sea surface; second, a large computational load, making real-time requirements difficult to meet for systems with high frame rate or high resolution; third, feature-based automatic matching is prone to mismatches, which for real-time video stitching often causes discontinuities in the panoramic video.
Disclosure of Invention
The invention provides a graph splicing device for realizing real-time panoramic imaging of a designated airspace around an observation point.
In order to realize the pattern splicing in a non-characteristic matching mode, the invention adopts the following specific technical scheme:
an image stitching device comprises a biaxial rotation platform and an imaging system;
the biaxial rotation platform comprises a horizontal shaft and a vertical shaft;
the imaging system adjusts the pitch angle by rotating around the horizontal axis, and realizes 360° circumferential scanning by rotating around the vertical axis;
the imaging system comprises an optical system and an image detector, wherein the detector target surface of the image detector is perpendicular to the optical axis of the optical system, and the target surface center of the detector target surface is intersected with the optical axis.
A method of image stitching, comprising the following steps:
S1. after the imaging system adjusts the pitch angle around the horizontal axis, it is kept unchanged; 360° circumferential scanning around the vertical axis is performed with continuous imaging, and the pointing angle corresponding to the center of the current frame is written into each image;
S2. obtaining the correspondence between the pointing angle of a pixel on the image and the pointing angle of the image center;
S3. obtaining the number of overlapping pixels Δn_y of the y-th row;
S4. registering and stitching the images row by row;
the two frames of images are fused and stitched row by row according to the number of overlapping pixels Δn_y of the y-th row.
Further, step S2 includes the steps of:
S21. the azimuth angle corresponding to the image center is expressed as Ac and the pitch angle as Ec; the angular deviation of the pixel at position (x, y) in the pixel coordinate system relative to the image center is expressed as (ΔA_x, ΔE_y), and the pointing angle of the pixel is expressed by the following formula:
(A_x, E_y) = (Ac + ΔA_x, Ec + ΔE_y) (1)
ΔA_x is expressed by the following formula:
ΔA_x = (x − W/2)·α (2)
wherein W is the image width, and α is the azimuth or pitch opening angle corresponding to each pixel on the image;
ΔE_y is expressed by the following formula:
ΔE_y = (H/2 − y)·α (3)
wherein H is the image height.
Further, step S3 includes the steps of:
S31. the azimuth angle and the pitch angle of the centers of two consecutive frames of images are expressed as (A_cN, E_cN) and (A_cN+1, E_cN+1) respectively; the center azimuth deviation ΔA of the two images is expressed as:
ΔA = A_cN+1 − A_cN (4)
S32. correcting the center azimuth deviation ΔA of the two frames of images:
according to the geometric projection relationship, the center azimuth deviation ΔA of the two images is corrected to ΔA′, expressed by the following formula:
ΔA′ = ΔA·cosE (5)
S33. obtaining the actual number of pixels n corresponding to the center-column azimuth deviation of the two frames of images, expressed by the following formula:
n = ΔA′/α (6)
where α is the azimuth or pitch opening angle corresponding to each pixel on the image;
S34. the number of overlapping pixels Δn of the two frames of images is expressed by the following formula:
Δn = W − n (7)
wherein W is the image width;
when the number of overlapping pixels Δn of the two frames of images is greater than zero, an overlap region exists between the two frames; when Δn is less than zero, no overlap region exists between the two frames;
S35. substituting formulas (5) and (6) into formula (7), the number of overlapping pixels Δn of the two frames of images is expressed by the following formula:
Δn = W − ΔA·cosE/α (8)
S36. substituting formula (3) into formula (8) gives the overlap count Δn_y corresponding to the pixels of the y-th row, expressed by the following formula:
Δn_y = W − ΔA·cos(Ec + (H/2 − y)·α)/α (9)
Further, the calculation process of α is as follows:
from the geometric optical imaging relationship of the imaging system, the azimuth or pitch opening angle α corresponding to each pixel is expressed by the following formula:
α = tan⁻¹(Ps/v) (10)
wherein Ps is the pixel size of the image detector, and v is the image distance;
from the conjugate relationship of the optical system it can be seen that:
1/u + 1/v = 1/f (11)
wherein u is the object distance and f is the focal length; since u is much larger than f, the image distance v is approximately equal to the focal length f, and α is expressed by the following formula:
α = tan⁻¹(Ps/f) (12).
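Equation (12) can be sanity-checked numerically. A minimal sketch (the pixel pitch and focal length below are illustrative values, not taken from the patent):

```python
import math

def pixel_opening_angle(pixel_size_mm: float, focal_length_mm: float) -> float:
    """Opening angle alpha = arctan(Ps / f) subtended by one detector pixel,
    using the approximation v ~ f, valid when the object distance u >> f."""
    return math.atan(pixel_size_mm / focal_length_mm)

# Illustrative values (not from the patent): 15 um pixel pitch, 100 mm focal length.
alpha = pixel_opening_angle(0.015, 100.0)  # roughly 1.5e-4 rad per pixel
```

Because Ps/f is tiny, tan⁻¹(Ps/f) is essentially Ps/f itself, which is why α behaves as a constant angular step per pixel column or row.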
Further, in step S4, a gradual-in gradual-out algorithm is adopted to fuse and stitch the two frames of images row by row, with the expression:
g(x, y) = w1(x, y)·g1(x, y) + w2(x, y)·g2(x, y) (13)
wherein g(x, y) is the pixel value at point (x, y) of the stitched image, and g1(x, y), g2(x, y) are the pixel values at point (x, y) of the two images to be stitched; w1(x, y) and w2(x, y) are the weight coefficients of the fusion process, chosen so that w1 + w2 = 1 and the weights vary linearly across the overlap region of width Δn_y, w1 ramping from 1 to 0 and w2 from 0 to 1.
the invention can obtain the following technical effects:
1. the device can be used for image stitching of the area array circumferential scanning imaging system for air defense early warning, and is convenient for the discovery and the state analysis of the flying target.
2. A row-by-row registration method based on the device's pointing angle is used. For circumferential scanning equipment rotating at uniform speed, the higher the pitch angle of the shot, the larger the overlap between adjacent frames; within an image, pixel rows closer to the top have more overlapping pixels, and rows farther from the top have fewer. The invention therefore computes the pixel overlap count row by row from the pointing-angle information attached to each frame, realizing real-time stitching of the panoramic image. Because no feature matching is used, the drawbacks of conventional stitching algorithms are effectively avoided, and the method has good adaptability and real-time performance for panoramic imaging of regions with indistinct features such as the sky.
3. The graph splicing method does not depend on image characteristics, and has better applicability to areas with unobvious characteristics such as sky, sea surface and the like;
4. The stitching algorithm is simple and its computational load small, so it retains good real-time performance even for systems with high frame rate or high resolution. Because the overlap area is computed from the actual inter-frame angle, mismatching is avoided and continuous video stitching is stable. Compared with conventional stitching methods, the stitching effect is outstanding in regions with indistinct features such as sky backgrounds; the method also stitches other scenes well, with the advantage most evident in featureless regions such as sky and sea surface.
Drawings
FIG. 1 is a schematic diagram of a graphic splicing device according to an embodiment of the present invention;
FIG. 2 is a schematic view of the field angle of imaging a pixel in an embodiment of the invention;
FIG. 3 is a schematic diagram of parameters of two adjacent frames of images according to an embodiment of the present invention;
FIG. 4 is a schematic view of an azimuthal projection relationship according to an embodiment of the present invention;
FIG. 5 is a diagram of a progressive-in and progressive-out fusion splice process according to an embodiment of the present invention.
Reference numerals:
an imaging system 1, a horizontal axis 2, a vertical axis 3.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
The graphic splicing device is a biaxial (mutually perpendicular horizontal axis 2 and vertical axis 3) rotating platform, and in the working process of the system shown in fig. 1, the imaging system 1 can realize pitching angle adjustment around the horizontal axis 2 and 360-degree azimuth circumferential scanning around the vertical axis 3. Biaxial rotation platforms capable of pitch and azimuth adjustment are prior art in the art and their structure is not described here in detail. The imaging system 1 consists of an optical system and an image detector, wherein the image detector is a single-band or multi-band planar array image detector, the target surface of the detector is perpendicular to the optical axis of the optical system, and the center of the target surface is intersected with the optical axis of the optical system. The optical system may be a camera lens, and the image detector is configured to capture an image formed via the camera lens. In the working process of the system, the optical axis of the optical system of the imaging system 1 is rotated around the horizontal axis 2 to be adjusted to a reasonable pitching angle E, then the optical axis is kept unchanged, 360-degree continuous circumferential scanning work is carried out around the vertical axis, meanwhile, continuous imaging is carried out, and the pointing angle information corresponding to the center of a frame image, namely an azimuth angle Ac and a pitch angle Ec, is written in each image.
By using the device or other equipment, graphic stitching can be realized, and a specific image stitching method comprises the following steps:
s1, after a pitch angle is adjusted around a horizontal axis, the imaging system keeps unchanged, 360-degree circumferential scanning and continuous imaging are carried out around a vertical axis, and a pointing angle corresponding to the center of a frame image, namely an azimuth angle Ac and a pitch angle Ec are written in each image;
s2, obtaining a corresponding relation between the pointing angle of the pixel on the image and the pointing angle of the center of the image, wherein the method comprises the following steps:
S21. As shown by the imaging field angle of a pixel in fig. 2, for a given optical system and image detector the geometric optical imaging relationship gives the azimuth or pitch opening angle α corresponding to each pixel:
α = tan⁻¹(Ps/v) (1)
wherein Ps is the pixel size of the image detector, and v is the image distance;
from the conjugate relationship of the optical system it can be seen that:
1/u + 1/v = 1/f (2)
where u is the object distance, v is the image distance, and f is the focal length.
For the early warning system, in order to ensure panoramic field-of-view imaging the focal length is generally much smaller than the object distance, so the image distance v is approximately equal to the focal length f. Thus the approximate form of equation (1) is:
α = tan⁻¹(Ps/f) (3)
Knowing that the azimuth and pitch pointing angles corresponding to the image center are Ac and Ec respectively, the angular deviation of the pixel at position (x, y) in the image relative to the image center is expressed as:
(ΔA_x, ΔE_y) = ((x − W/2)·α, (H/2 − y)·α) (4)
wherein W is the image width and H is the image height; x and y are the column value and row value of the pixel in the pixel coordinate system;
the pointing angle of the pixel at position (x, y) in the image is expressed as:
(A_x, E_y) = (Ac + ΔA_x, Ec + ΔE_y) (5)
S3. Obtaining the number of overlapping pixels Δn_y of the y-th row, comprising the following steps:
S31. For circumferential scanning air defense warning equipment, in order to obtain a stable circumferential scan image, the pitch angle E is held fixed and only azimuth rotation is performed. Therefore, in the image stitching process only the pixel overlap count Δn in the azimuth direction needs to be computed. The azimuth and pitch angles of the centers of two consecutive frames are (A_cN, E_cN) and (A_cN+1, E_cN+1) respectively, with E_cN and E_cN+1 approximately equal, as shown in fig. 3. The center azimuth deviation ΔA of the two images is expressed as:
ΔA = A_cN+1 − A_cN (6)
S32. Correcting the center azimuth deviation ΔA of the two frames of images:
from the projection principle of angles, the azimuth angle A and pitch angle E have a projection relationship (as shown in fig. 4), so the azimuth deviation of the two images must be projection-corrected; the corrected expression is:
ΔA′ = ΔA·cosE (7)
S33. Obtaining the actual number of pixels n corresponding to the center-column azimuth deviation of the two frames of images; the azimuth deviation of the middle column of the images corresponds to n actual pixels by the following formula:
n = ΔA′/α (8)
S34. The number of overlapping pixels Δn of the two frames of images is expressed as:
Δn = W − n (9)
When Δn is greater than zero, an overlap region exists between the two frames of images; when Δn is less than zero, there is no overlap region between the two frames of images.
S35. Substituting formulas (7) and (8) into formula (9), the number of overlapping pixels Δn of the two frames of images is expressed as:
Δn = W − ΔA·cosE/α (10)
S36. Substituting formulas (4) and (5) into formula (10) gives the overlap count Δn_y corresponding to the pixels of the y-th row:
Δn_y = W − ΔA·cos(Ec + (H/2 − y)·α)/α (11)
As can be seen from equation (11), when Ec is fixed (typically in the range 0–40°), the number of overlapping pixels Δn_y is a monotonically decreasing function of y: the closer a row is to the image top (the smaller y), the more overlapping pixels it has, and vice versa.
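A minimal sketch of equation (11), confirming the monotonic behavior described above (the detector size, opening angle, pitch angle, and scan step below are illustrative assumptions, not values from the patent):

```python
import math

def overlap_pixels_for_row(y, dA, E_c, alpha, W, H):
    """Per-row overlap count of eq. (11):
    Dn_y = W - dA * cos(E_c + (H/2 - y) * alpha) / alpha.
    Rows nearer the image top (smaller y) have a larger pitch angle,
    a smaller cosine, and therefore more overlapping pixels."""
    dE_y = (H / 2.0 - y) * alpha
    return W - dA * math.cos(E_c + dE_y) / alpha

# Illustrative setup: scan step dA chosen so the central row overlaps by ~100 px.
W, H, alpha, E_c = 640, 512, 1.5e-4, 0.3
dA = (W - 100) * alpha / math.cos(E_c)
top, mid, bottom = (overlap_pixels_for_row(y, dA, E_c, alpha, W, H)
                    for y in (0, H // 2, H - 1))
```

With these numbers the top row overlaps by a few more pixels than the central row, and the bottom row by a few fewer, exactly the progressive-registration behavior equation (11) predicts.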
S4, registering and splicing images row by row
According to the calculation result of equation (11), the two frames of images are fused and stitched row by row. The fusion uses the simple, classical gradual-in gradual-out algorithm (shown in figs. 4 and 5), with the expression:
g(x, y) = w1(x, y)·g1(x, y) + w2(x, y)·g2(x, y) (12)
wherein x and y are the column value and row value of the pixel in the pixel coordinate system; g(x, y) is the pixel value at point (x, y) of the stitched image, and g1(x, y), g2(x, y) are the pixel values at point (x, y) of the two images to be stitched; w1(x, y) and w2(x, y) are the weight coefficients of the fusion process, chosen so that w1 + w2 = 1 and the weights vary linearly across the overlap region of width Δn_y, w1 ramping from 1 to 0 and w2 from 0 to 1 (13).
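The gradual-in gradual-out fusion of a single overlapping row can be sketched as below. The linear weight ramp is an assumption: the patent's exact w1/w2 expressions are not reproduced in this text, but any weight pair with w1 + w2 = 1 that ramps across the overlap fits the scheme.

```python
def blend_row(row1, row2, dn):
    """Fuse two overlapping image rows: the last dn pixels of row1 overlap
    the first dn pixels of row2. Inside the overlap, the weight w2 ramps
    linearly from 0 toward 1 (gradual-in) while w1 = 1 - w2 (gradual-out)."""
    w = len(row1)
    out = list(row1[:w - dn])                  # pixels seen only by image 1
    for k in range(dn):
        w2 = (k + 1) / (dn + 1)                # assumed linear ramp
        w1 = 1.0 - w2
        out.append(w1 * row1[w - dn + k] + w2 * row2[k])
    out.extend(row2[dn:])                      # pixels seen only by image 2
    return out

fused = blend_row([10.0] * 4, [20.0] * 4, 2)   # 4 + 4 - 2 = 6 output pixels
```

Because the weights sum to one at every column, the fused row transitions smoothly from image 1's intensity to image 2's across the overlap, suppressing visible seams.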
the two frames of images are fused and spliced row by row to form the prior art, and the description is omitted here.
Most current stitching applications adopt the APAP algorithm and its various derivatives. Such feature-point-based stitching depends strongly on feature points, while the images of an air defense early warning system are mostly sky backgrounds whose overlap regions have indistinct features, so those algorithms suffer the following limitations: first, strong dependence on image features and poor applicability to featureless regions such as sky and sea surface; second, a large computational load, making real-time requirements hard to meet at high frame rate or high resolution; third, mismatch-prone automatic matching, which in real-time video stitching often causes panoramic-video discontinuities and small jitter at the seams. The invention instead adopts a row-by-row registration method based on the device's pointing angle. For circumferential scanning equipment rotating at uniform speed, the higher the pitch angle of the shot, the larger the overlap between adjacent frames, and within an image the overlap count grows toward the image top. The invention therefore computes the pixel overlap count row by row from the pointing-angle information attached to each frame, realizing real-time stitching of the panoramic image. Because no feature matching is used, the drawbacks of conventional stitching algorithms are effectively avoided, and the method has good adaptability and real-time performance for panoramic imaging of featureless regions such as the sky.
Compared with conventional stitching methods, the stitching effect is outstanding in regions with indistinct features such as sky backgrounds; the method also stitches other scenes well, with the advantage most evident in featureless regions such as sky and sea surface.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been illustrated and described above, it will be appreciated that the above described embodiments are illustrative and should not be construed as limiting the invention. Variations, modifications, alternatives and variations of the above-described embodiments may be made by those of ordinary skill in the art within the scope of the present invention.
The above embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (2)

1. A method of image stitching, comprising the steps of:
S1. Adjusting the pitch angle of the imaging system about the horizontal axis and then keeping it unchanged, performing a 360° circumferential scan about the vertical axis while imaging continuously, and writing the pointing angle corresponding to the center of the current frame into each image;
S2. Obtaining the corresponding relationship between the pointing angle of a pixel on the image and the pointing angle of the image center;
S3. Obtaining the overlapping pixel number n(y) of the y-th row;
S4. Registering and splicing the images row by row;
according to the overlapping pixel number n(y) of the y-th row, the two frames of images are fused and spliced row by row;
step S2 comprises the steps of:
S21. The azimuth angle corresponding to the image center is expressed as Ac and the pitch angle as Ec; the angular deviation of the pixel located at (x, y) in the pixel coordinate system relative to the image center is expressed as (Δa, Δe); the pointing angle of the pixel is expressed by the following formula:
(A, E) = (Ac + Δa, Ec + Δe) (1)
the Δa is expressed by the following formula:
Δa = (x − W/2)·α (2)
wherein W is the image width and α is the azimuth opening angle or pitch opening angle corresponding to each pixel on the image;
the Δe is expressed by the following formula:
Δe = (y − H/2)·α (3)
wherein H is the image height;
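Purely as an illustration (not part of the claim), the per-pixel pointing angle of formulas (1)-(3) can be sketched in Python; the function name and the choice of degrees as units are assumptions:

```python
def pixel_pointing_angle(x, y, Ac, Ec, W, H, alpha):
    """Pointing angle (azimuth, pitch) of the pixel at (x, y).

    Ac, Ec -- azimuth and pitch angle of the image center (degrees)
    W, H   -- image width and height in pixels
    alpha  -- opening angle corresponding to one pixel (degrees)
    """
    da = (x - W / 2.0) * alpha  # formula (2): azimuth deviation from the center
    de = (y - H / 2.0) * alpha  # formula (3): pitch deviation from the center
    return Ac + da, Ec + de     # formula (1): per-pixel pointing angle
```

The pixel at the image center (W/2, H/2) recovers exactly (Ac, Ec), as the formulas require.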
step S3 comprises the steps of:
S31. The azimuth angle and pitch angle of the centers of two consecutive frames of images are expressed as (A₁, E₁) and (A₂, E₂) respectively; the center azimuth deviation ΔA of the two images is expressed as:
ΔA = A₂ − A₁ (4)
S32. Correcting the center azimuth deviation ΔA of the two frames of images;
according to the geometric projection relationship, the center azimuth deviation ΔA of the two images is corrected to ΔA′, expressed by the following formula:
ΔA′ = ΔA·cos(Ec) (5)
S33. Obtaining the actual pixel number n′ corresponding to the azimuth deviation between the center columns of the two frames of images, expressed as:
n′ = ΔA′/α (6)
S34. The overlapping pixel number n of the two frames of images is expressed by the following formula:
n = W − n′ (7)
wherein W is the image width;
when the overlapping pixel number n of the two frames of images is greater than zero, an overlapping area exists between the two frames of images; when n is less than zero, no overlapping area exists between the two frames of images;
S35. Substituting formula (5) and formula (6) into formula (7), the overlapping pixel number n of the two frames of images is expressed by the following formula:
n = W − ΔA·cos(Ec)/α (8)
S36. Substituting formula (1) into formula (8), the overlapping pixel number n(y) corresponding to the pixels in the y-th row is expressed by the following formula:
n(y) = W − ΔA·cos(Ec + (y − H/2)·α)/α (9);
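As an illustration only, the row-wise overlap count of steps S31-S36 can be sketched in Python; the function assumes angles in degrees, and its name and argument layout are illustrative, not part of the claim:

```python
import math

def row_overlap(y, dA, Ec, W, H, alpha):
    """Overlapping pixel count n(y) of the y-th row of two consecutive frames.

    dA    -- center azimuth deviation of the two frames (degrees, formula (4))
    Ec    -- pitch angle of the image center (degrees)
    alpha -- opening angle corresponding to one pixel (degrees)
    """
    Ey = Ec + (y - H / 2.0) * alpha            # pitch angle of row y, from formula (1)
    dA_corr = dA * math.cos(math.radians(Ey))  # projection correction, formula (5)
    return W - dA_corr / alpha                 # formulas (6)-(9)
```

Because cos(Ey) varies with the row's pitch angle, the overlap width differs from row to row, which is why the method registers and fuses the frames line by line rather than with a single global offset.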
the calculation process of α is as follows:
from the geometric optical imaging relationship of the imaging system, the azimuth opening angle or pitch opening angle α corresponding to each pixel is expressed by the following formula:
α = d/v (10)
wherein d is the pixel size of the image detector and v is the image distance;
from the optical system conjugate relationship, it can be seen that:
1/u + 1/v = 1/f (11)
wherein u is the object distance and f is the focal length; for distant targets the image distance v is approximately equal to the focal length f, so the α is expressed by the following formula:
α = d/f (12);
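As an illustration (not part of the claim), formulas (10)-(12) reduce the per-pixel opening angle to pixel size over focal length; the unit conventions below are assumptions:

```python
import math

def opening_angle_deg(pixel_size_um, focal_length_mm):
    """Per-pixel opening angle alpha = d / f (formula (12)), in degrees.

    Uses the far-field approximation v ~ f implied by formula (11)
    when the object distance u is much larger than the focal length.
    """
    d = pixel_size_um * 1e-6    # detector pixel size in meters
    f = focal_length_mm * 1e-3  # focal length in meters
    return math.degrees(d / f)  # small-angle: d / f is alpha in radians
```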
in the step S4, a gradual-in gradual-out algorithm is adopted to fuse and splice the two frames of images row by row, expressed by the following formula:
g(x, y) = w₁(x, y)·g₁(x, y) + w₂(x, y)·g₂(x, y) (13)
wherein g(x, y) is the pixel value corresponding to point (x, y) of the stitched image, and g₁(x, y) and g₂(x, y) are the pixel values corresponding to point (x, y) of the two images to be stitched; w₁(x, y) and w₂(x, y) are the weight coefficients in the fusion process, calculated by the following formula:
w₁(x, y) = 1 − x/n(y), w₂(x, y) = x/n(y), 0 ≤ x < n(y) (14).
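Purely as an illustration (not part of the claim), the gradual-in gradual-out fusion of formulas (13) and (14) can be sketched in Python for one row's overlap, assuming linear weights that sum to one across the overlap; names are illustrative:

```python
def fuse_row(row1, row2, n):
    """Gradual-in gradual-out fusion of an n-pixel overlap (formulas (13)-(14)).

    row1 -- overlap pixels from the first (left) frame
    row2 -- overlap pixels from the second (right) frame
    """
    fused = []
    for x in range(n):
        w1 = 1.0 - x / n                           # first frame's weight falls from 1 to 0
        w2 = x / n                                 # second frame's weight rises; w1 + w2 = 1
        fused.append(w1 * row1[x] + w2 * row2[x])  # formula (13)
    return fused
```

The linear cross-fade hides the seam: at the left edge of the overlap the output equals the first frame, at the right edge it approaches the second, so brightness differences between frames transition gradually.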
2. An image stitching device for carrying out the image stitching method according to claim 1, characterized in that it comprises a biaxial rotation platform and an imaging system (1);
the biaxial rotation platform comprises a horizontal shaft (2) and a vertical shaft (3);
the imaging system (1) adjusts the pitch angle by rotating about the horizontal axis (2), and achieves 360° circumferential scanning by rotating about the vertical axis (3);
the imaging system (1) comprises an optical system and an image detector, wherein a detector target surface of the image detector is perpendicular to an optical axis of the optical system, and a target surface center of the detector target surface is intersected with the optical axis.
CN202111104216.9A 2021-09-18 2021-09-18 Graph splicing device and method Active CN113837929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111104216.9A CN113837929B (en) 2021-09-18 2021-09-18 Graph splicing device and method


Publications (2)

Publication Number Publication Date
CN113837929A CN113837929A (en) 2021-12-24
CN113837929B true CN113837929B (en) 2024-04-12

Family

ID=78960124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111104216.9A Active CN113837929B (en) 2021-09-18 2021-09-18 Graph splicing device and method

Country Status (1)

Country Link
CN (1) CN113837929B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495933B (en) * 2024-01-02 2024-03-12 中国科学院长春光学精密机械与物理研究所 Parallax correction-based real-time registration method for external lens image of photoelectric telescope

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101562693A (en) * 2009-06-01 2009-10-21 中国兵器工业第二〇五研究所 Optical imaging splicing device of double CCD image splicing detector
CN102222337A (en) * 2011-06-14 2011-10-19 重庆大学 Fisheye image correcting method for image stitching
CN103295231A (en) * 2013-05-14 2013-09-11 杭州海康希牧智能科技有限公司 Method for geometrically correcting vertically mapped images of fisheye lenses in fisheye image mosaic
WO2018205623A1 (en) * 2017-05-11 2018-11-15 Boe Technology Group Co., Ltd. Method for displaying a virtual image, a virtual image display system and device, a non-transient computer-readable storage medium
CN111080523A (en) * 2019-12-17 2020-04-28 天津津航技术物理研究所 Infrared panoramic search system and infrared panoramic image splicing method based on angle information
CN111882484A (en) * 2020-06-16 2020-11-03 河北汉光重工有限责任公司 Servo control method in high-speed seamless image splicing of submersible imaging system
CN112040097A (en) * 2020-07-24 2020-12-04 北京空间机电研究所 Large-breadth camera system with spliced view fields

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image Stitching Algorithm Based on Embedded System; Zhengde Shi et al.; IEEE; 2018-10-07; full text *
Optical stitching of sub-pixel linear-array CCD focal planes; Liu Xinping et al.; Acta Photonica Sinica; 2002-06-25 (Issue 06); full text *
Calculation of the number of overlapping pixels in the staggered stitching of space camera focal-plane CCDs; Wu Yinan et al.; Optics and Precision Engineering; 2016-02-15 (Issue 02); full text *

Also Published As

Publication number Publication date
CN113837929A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US10033924B2 (en) Panoramic view imaging system
US11070725B2 (en) Image processing method, and unmanned aerial vehicle and system
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN103198487B (en) A kind of automatic marking method for video monitoring system
CN108614273B (en) Airborne dual-waveband photoelectric wide-area reconnaissance and tracking device and method
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
CN103971375B (en) A kind of panorama based on image mosaic stares camera space scaling method
CN107492069B (en) Image fusion method based on multi-lens sensor
US20120200703A1 (en) Imaging system for uav
CN104125372B (en) Target photoelectric search and detection method
CN103501409A (en) Ultrahigh resolution panorama speed dome AIO (All-In-One) system
CN103295231A (en) Method for geometrically correcting vertically mapped images of fisheye lenses in fisheye image mosaic
CN107038714B (en) Multi-type visual sensing cooperative target tracking method
WO2021184302A1 (en) Image processing method and apparatus, imaging device, movable carrier, and storage medium
CN111815517B (en) Self-adaptive panoramic stitching method based on snapshot pictures of dome camera
US20230023046A1 (en) Method and device for generating vehicle panoramic surround view image
CN108830811A (en) A kind of aviation image real-time correction method that flight parameter is combined with camera internal reference
CN113222820A (en) Pose information assisted aerial remote sensing image splicing method
CN113837929B (en) Graph splicing device and method
CN112750075A (en) Low-altitude remote sensing image splicing method and device
CN107635096B (en) A kind of panorama aerial camera inclination imaging method increasing photograph Duplication
CN113850905B (en) Panoramic image real-time stitching method for circumferential scanning type photoelectric early warning system
CN110796690B (en) Image matching method and image matching device
CN111444385A (en) Electronic map real-time video mosaic method based on image corner matching
CN112689084B (en) Airborne photoelectric reconnaissance imaging system and electronic image stabilization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant