CN113837929A - Image stitching device and method - Google Patents
- Publication number
- CN113837929A (application number CN202111104216.9A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
- H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
- G06T2207/20221: Image fusion; image merging (indexing scheme for image analysis or enhancement)
Abstract
The invention provides an image stitching device comprising a two-axis rotary platform and an imaging system mounted on it; the platform comprises a horizontal shaft and a vertical shaft. The imaging system adjusts its pitch angle by rotating around the horizontal shaft and achieves 360-degree circumferential scanning by rotating around the vertical shaft. The imaging system comprises an optical system and an image detector; the detector target surface of the image detector is perpendicular to the optical axis of the optical system, and the center of the target surface intersects the optical axis. Using this device, the invention further provides an image stitching method that achieves automatic row-by-row registration by exploiting pointing-angle information. Because the method uses no feature matching, it avoids the defects of traditional stitching algorithms and offers better adaptability and real-time performance for panoramic imaging of regions with indistinct features, such as the sky.
Description
Technical Field
The invention relates to the field of optical imaging, and in particular to an image stitching device and an image stitching method.
Background
In recent years, unmanned aerial vehicles (UAVs) have gradually become a principal weapon of precision military strikes and have changed traditional modes of operation. UAVs combine high attack precision, a small radar cross-section (RCS) and low flying altitude, making them difficult for radar to detect; countries around the world are therefore exploring effective means of defense.
An air-defense early-warning system that performs continuous panoramic scanning with area-array image detectors in different wavebands can image a designated airspace panoramically in real time. It compensates for radar's weak detection capability against low-altitude, small targets, and offers strong identification capability, high early-warning precision, good concealment, low cost and freedom from radiation.
The air-defense early-warning system adopts a two-axis stabilized platform, providing an operating mode with adjustable pitch and 360-degree azimuth coverage, so that clear images from the area-array infrared detector can be acquired. Image stitching technology is then used to acquire a panoramic image of the warning area in real time, and automatic capture, identification and early warning of suspicious targets are achieved through suspicious-target detection and multi-frame feature correlation within the panoramic image. Image stitching is thus a necessary means of panoramic imaging and a key technology determining the visualization level of the air-defense early-warning system.
In 2014, Julio Zaragoza et al. proposed the APAP (As-Projective-As-Possible) algorithm to mitigate the misalignment of the stitching algorithms of the time and their excessive dependence on post-processing and artifact removal. At present, most stitching applications adopt the APAP algorithm or one of its many derivatives.
Chinese invention patent application CN201811625266.X, "Video stitching method and device", filed on 28 December 2018, provides a method for stitching a first video and a second video that comprises: performing feature extraction, feature matching and screening on a first target frame of the first video and a second target frame of the second video to obtain a first set of feature-point pairs of the two target frames; tracking the two target frames forward to obtain a second set of feature-point pairs; tracking them backward to obtain a third set of feature-point pairs; and computing the geometric transformation between the two target frames from the first, second and third sets of feature-point pairs so as to register the two frames.
Chinese patent application CN201611075609.0, filed on 29 November 2016, discloses a method and device for stitching fisheye images into panoramic images and panoramic videos. According to the mappings among longitude-latitude, spherical and planar coordinates, feature-point matching is performed on the intersection region of each group of cylindrical projection images among the N cylindrical projection images obtained, yielding matched feature-point pairs for each intersection region; an attitude matrix of the matched feature-point pairs is obtained from the mapping between spherical and longitude-latitude coordinates; taking the coordinate system of one of the N fisheye lenses as the world coordinate system, the attitude matrix of each fisheye lens in the world coordinate system is calculated from the attitude matrices of the matched feature-point pairs; the pixel value of each pixel in the panoramic image is determined from the attitude matrix of each fisheye lens in the world coordinate system; and the panoramic image is assembled from those pixel values.
Chinese patent application CN202110388923.9, "Low-altitude aerial image stitching method and system", filed on 12 April 2021, provides a method that comprises: building a semantic segmentation network, acquiring a low-altitude aerial image data set, labeling salient buildings in the image data, and feeding the labeled data into the network for training; sending the aerial photographs to be stitched into the trained segmentation model for detection and pixel-level segmentation of salient buildings, obtaining a circumscribed-rectangle picture of each salient building with the best orthographic property while blanking the corresponding area in the photograph; and stitching at least two blanked aerial photographs into an initial mosaic, expanding the orthographic circumscribed rectangles, and fusing the expanded rectangles with the initial mosaic to obtain the final stitched result. The invention can effectively alleviate the non-orthographic projection, ghosting and mis-cut problems of low-altitude aerial stitching.
The key to image stitching is effective registration of the overlap region of two adjacent frames. Common registration methods are based on feature extraction and feature-pair matching, and such algorithms have the following limitations: first, they depend strongly on image features and apply poorly to regions with indistinct features such as the sky or the sea surface; second, their computational load is large, making real-time operation difficult for systems with high frame rate or high resolution; third, feature-based automatic matching is prone to mismatches, which for real-time video stitching often causes discontinuities in the panoramic video.
Disclosure of Invention
To solve these problems, the invention provides an image stitching device that achieves real-time panoramic imaging of a designated airspace around an observation point.
To realize image stitching without feature matching, the invention adopts the following specific technical scheme:
an image stitching device comprises a two-axis rotary platform and an imaging system;
the two-axis rotary platform comprises a horizontal shaft and a vertical shaft;
the imaging system adjusts its pitch angle by rotating around the horizontal shaft, and achieves 360-degree circumferential scanning by rotating around the vertical shaft;
the imaging system comprises an optical system and an image detector, wherein the detector target surface of the image detector is perpendicular to the optical axis of the optical system and the center of the target surface intersects the optical axis.
A method for image stitching comprises the following steps:
S1, after adjusting the pitch angle around the horizontal axis, the imaging system holds it fixed, performs 360-degree circumferential scanning around the vertical axis while imaging continuously, and writes the pointing angle corresponding to the center of the current frame into each image;
S2, obtaining the correspondence between the pointing angle of a pixel on the image and the pointing angle of the image center;
S3, obtaining the number Δn_y of overlapping pixels in the y-th row;
S4, registering and stitching the images row by row: according to the number of overlapping pixels Δn_y of the y-th row, the two frames of images are fused and stitched row by row.
Further, step S2 comprises the following steps:
S21, denote the azimuth angle corresponding to the image center by Ac and the pitch angle by Ec, and denote the angular deviation of the pixel located at position (x, y) in the pixel coordinate system relative to the image center by (ΔA_x, ΔE_y). The pointing angle of that pixel is then expressed by:
(A_x, E_y) = (Ac + ΔA_x, Ec + ΔE_y) (1)
ΔA_x is expressed by:
ΔA_x = (x - W/2)·α (2)
wherein W is the image width and α is the azimuth or pitch opening angle corresponding to each pixel on the image;
ΔE_y is expressed by:
ΔE_y = (H/2 - y)·α (3)
wherein H is the image height.
Further, step S3 comprises the following steps:
S31, denote the azimuth and pitch angles of the centers of two consecutive frames by (A_cN, E_cN) and (A_cN+1, E_cN+1). The central azimuth deviation ΔA of the two frames is then expressed by:
ΔA = A_cN+1 - A_cN (4)
S32, correct the central azimuth deviation ΔA of the two frames:
according to the geometric projection relation, the central azimuth deviation ΔA of the two frames is corrected to ΔA′, expressed as:
ΔA′ = ΔA·cosE (5)
S33, obtain the actual number of pixels n corresponding to the corrected azimuth deviation of the center row of the two frames, expressed as:
n = ΔA′/α (6)
wherein α is the azimuth or pitch opening angle corresponding to each pixel on the image;
S34, the number of overlapping pixels Δn of the two frames is expressed as:
Δn = W - n (7)
wherein W is the image width;
when Δn is greater than zero, an overlap region exists between the two frames; when Δn is less than zero, no overlap region exists between the two frames;
S35, substituting formula (5) and formula (6) into formula (7), the number of overlapping pixels Δn of the two frames is expressed by:
Δn = W - ΔA·cosE/α (8)
S36, substituting formula (1) into formula (8), the overlap count Δn_y corresponding to the y-th pixel row is obtained, expressed by:
Δn_y = W - ΔA·cos(Ec + (H/2 - y)·α)/α (9)
further, the calculation process of α is as follows:
according to the geometrical optical imaging relationship of the imaging system, the azimuth or elevation angle alpha corresponding to each pixel is expressed by the following formula:
α=tan-1(Ps/v) (10)
wherein, PsThe image element size of the image detector is shown, and v is the image distance;
the conjugate relation of the optical system shows that:
wherein u is the object distance, f is the focal length, and the image distance v is equal to the focal length f; then α is represented by:
α=tan-1(Ps/f) (12)。
further, in step S4, a gradual-in and gradual-out algorithm is used to perform line-by-line fusion splicing on the two images, where the expression is:
wherein g (x, y) is a pixel value corresponding to the (x, y) point of the spliced image, and g1(x, y) and g2(x, y) are pixel values corresponding to the (x, y) point of the image to be spliced respectively; w1(x, y) and w2(x, y) are weight coefficients in the fusion process, and the calculation formula is as follows:
the invention can obtain the following technical effects:
1. the device can be used for image splicing of the area array circumferential scanning type imaging system of air defense early warning, and is convenient for finding and state analysis of a flying target.
2. And adopting a line-by-line registration method based on the pointing angle of the equipment. For the scanning device rotating at a constant speed, the higher the pitching shooting angle is, the larger the overlapping area between the adjacent frame images is, and the more the pixel rows in the images close to the top of the images overlap, otherwise, the fewer the number of the overlapped pixels is. Therefore, the invention realizes the real-time splicing of the panoramic image by calculating the pixel overlapping number line by means of the pointing angle information attached to each frame of image. The method adopts a non-feature matching mode, so that the defects of the traditional splicing algorithm are effectively avoided, and the method has better adaptability and real-time performance on panoramic imaging of regions with unobvious features such as sky and the like. The invention provides an image splicing method for realizing line-by-line automatic registration by combining pointing angle information. The method adopts a non-feature matching mode, so that the defects of the traditional splicing algorithm are effectively avoided, and the method has better adaptability and real-time performance on panoramic imaging of regions with unobvious features such as sky and the like.
3. The graph splicing method does not depend on image characteristics, and has good applicability to regions with unobvious characteristics such as sky, sea surface and the like;
4. the algorithm of graph splicing is simple, the operand is small, and the real-time performance is good for a system with higher frame frequency or higher resolution; and the overlapped area is calculated according to the actual inter-frame included angle, so that the problem of mismatching does not exist, and the video continuous splicing has good stability and continuity. Compared with the conventional splicing method, the method has a prominent splicing effect on areas with unobvious characteristics such as sky background and the like. For other scenes, the method has a good splicing effect, and the effect is more obvious only for the unobvious characteristics of the sky, the sea surface and the like.
Drawings
FIG. 1 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a pixel imaging field angle of an embodiment of the invention;
FIG. 3 is a diagram illustrating parameters of two adjacent frames according to an embodiment of the present invention;
FIG. 4 is a schematic view of an azimuthal projection relationship according to an embodiment of the present invention;
FIG. 5 is a diagram of a process for a fade-in fade-out fusion splice in accordance with an embodiment of the present invention.
Reference numerals: 1. imaging system; 2. horizontal shaft; 3. vertical shaft.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
The image stitching device is a two-axis (a horizontal shaft 2 and a vertical shaft 3, mutually perpendicular) rotary platform. During operation of the system shown in FIG. 1, the imaging system 1 adjusts its pitch angle around the horizontal shaft 2 and performs 360-degree azimuth circumferential scanning around the vertical shaft 3. Two-axis rotary platforms capable of pitch and azimuth adjustment are known in the art, and their construction is not described in detail here. The imaging system 1 consists of an optical system and an image detector; the image detector is a single-band or multiband area-array detector whose target surface is perpendicular to the optical axis of the optical system, with the center of the target surface intersecting the optical axis. The optical system may be a camera lens, with the image detector capturing the image formed through it. In operation, the optical axis of the optical system of the imaging system 1 is rotated around the horizontal shaft 2 to a suitable pitch angle E and then held fixed; the imaging system images continuously while scanning 360 degrees around the vertical axis, and the pointing-angle information corresponding to the center of the current frame, namely the azimuth angle Ac and the pitch angle Ec, is written into each image.
Using this device, or other equipment, image stitching can be achieved; the specific method comprises the following steps:
S1, after adjusting the pitch angle around the horizontal axis, the imaging system holds it fixed, performs 360-degree circumferential scanning around the vertical axis while imaging continuously, and writes into each image the pointing angle corresponding to the center of the current frame, namely the azimuth angle Ac and the pitch angle Ec;
S2, obtain the correspondence between the pointing angle of a pixel on the image and the pointing angle of the image center, comprising the following steps:
S21, as can be seen from the geometrical optical imaging relationship of the pixel imaging field angle shown in FIG. 2, for a given optical system and image detector the azimuth or pitch field angle α corresponding to each pixel, i.e. the field angle of a single pixel in the pitch or azimuth direction, is expressed by:
α = tan⁻¹(Ps/v) (1)
wherein Ps is the pixel size of the image detector and v is the image distance;
from the conjugate relation of the optical system:
1/u + 1/v = 1/f (2)
wherein u is the object distance, v is the image distance and f is the focal length.
For the early-warning system, in order to guarantee panoramic-field imaging the focal length is generally far smaller than the object distance, so the image distance v is approximately equal to the focal length f. The approximate form of equation (1) is therefore:
α = tan⁻¹(Ps/f) (3)
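As a numerical illustration of equation (3), the sketch below computes the per-pixel opening angle α from a pixel pitch and focal length. The 15 µm pixel and 100 mm lens are hypothetical values, not parameters taken from the patent.

```python
import math

def pixel_opening_angle(pixel_size_m: float, focal_length_m: float) -> float:
    """Per-pixel azimuth/pitch opening angle alpha = arctan(Ps / f),
    in degrees, using the approximation v ~= f (object distance >> f)."""
    return math.degrees(math.atan(pixel_size_m / focal_length_m))

# Hypothetical detector: 15 um pixels behind a 100 mm lens.
alpha = pixel_opening_angle(15e-6, 0.1)
print(f"alpha = {alpha:.5f} deg/pixel")
```

At this focal length each pixel subtends well under a hundredth of a degree, which is why the overlap counts derived later can run into hundreds of pixels.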
Given that the azimuth and pitch pointing angles corresponding to the center of the image are Ac and Ec respectively, the angular deviation of the pixel located at position (x, y) relative to the image center is expressed by:
ΔA_x = (x - W/2)·α (4)
ΔE_y = (H/2 - y)·α (5)
wherein W is the image width and H is the image height, and x and y are the column and row values of the pixel in the pixel coordinate system;
the pointing angle of the pixel located at position (x, y) is then expressed by:
(A_x, E_y) = (Ac + ΔA_x, Ec + ΔE_y)
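The center-to-pixel relations of step S21 can be sketched as a small helper. The sign convention (azimuth growing with the column index, pitch growing toward the top row, i.e. toward smaller y) is an assumption chosen to be consistent with the monotonicity argument in step S36; all numeric values are hypothetical.

```python
def pixel_pointing_angle(x, y, a_c, e_c, width, height, alpha):
    """Pointing angle (azimuth, pitch) in degrees of the pixel at
    column x, row y, given the image-center pointing angle (a_c, e_c)
    and the per-pixel opening angle alpha in degrees."""
    d_az = (x - width / 2.0) * alpha   # delta A_x: offset right of center
    d_el = (height / 2.0 - y) * alpha  # delta E_y: top rows point higher
    return a_c + d_az, e_c + d_el

# The center pixel of a 640x480 image maps back to the center pointing angle.
print(pixel_pointing_angle(320, 240, 30.0, 20.0, 640, 480, 0.01))
```

A pixel in the top-left corner would come out with a smaller azimuth and a larger pitch than the center, matching the geometry of FIG. 2.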
S3, obtain the number of overlapping pixels Δn_y of the y-th row, comprising the following steps:
S31, for the circumferential-scanning air-defense early-warning device, in order to obtain a stable circumferential-scan image the pitch angle E is no longer changed once set, and only azimuth rotation is performed; therefore only the pixel overlap Δn in the azimuth direction needs to be considered during stitching. The azimuth and pitch angles of the centers of two consecutive frames are (A_cN, E_cN) and (A_cN+1, E_cN+1) respectively, with E_cN and E_cN+1 approximately equal, as shown in FIG. 3. The central azimuth deviation ΔA of the two frames is expressed by:
ΔA = A_cN+1 - A_cN (6)
S32, correct the central azimuth deviation ΔA of the two frames:
from the angle-projection principle, the azimuth angle A and the pitch angle E have a projection relationship (as shown in FIG. 4), so the azimuth deviation of the two frames must be corrected by projection; the corrected expression is:
ΔA′ = ΔA·cosE (7)
S33, obtain the actual number of pixels n corresponding to the corrected azimuth deviation of the center row of the two frames; from formula (3) it can be seen that this number is expressed by:
n = ΔA′/α (8)
S34, the number of overlapping pixels Δn of the two frames is expressed by:
Δn = W - n (9)
When Δn is greater than zero, an overlap region exists between the two frames; when Δn is less than zero, there is no overlap region between the two frames.
S35, substituting formulas (7) and (8) into formula (9), the number of overlapping pixels Δn of the two frames is expressed by:
Δn = W - ΔA·cosE/α (10)
S36, substituting formulas (4) and (5) into formula (10), the overlap count Δn_y corresponding to the y-th pixel row can be obtained, expressed by:
Δn_y = W - ΔA·cos(Ec + (H/2 - y)·α)/α (11)
As can be seen from equation (11), when Ec is fixed (typical range 0-40°), the number of overlapping pixels Δn_y is a monotonically decreasing function of y: the closer to the top of the image (the smaller the y value), the more pixels overlap, and vice versa.
S4, row-by-row registration and stitching of the images
The two frames are fused and stitched row by row according to the result of formula (11). The fusion adopts the simple, classical gradual-in gradual-out algorithm (as shown in FIG. 5), expressed as:
g(x, y) = w1(x, y)·g1(x, y) + w2(x, y)·g2(x, y) (12)
wherein x and y are the column and row values of the pixel in the pixel coordinate system, g(x, y) is the pixel value at point (x, y) of the stitched image, and g1(x, y) and g2(x, y) are the pixel values at point (x, y) of the two images to be stitched; w1(x, y) and w2(x, y) are the weight coefficients of the fusion process, calculated as:
w1(x, y) = (Δn_y - x)/Δn_y, w2(x, y) = x/Δn_y (13)
wherein x is counted from the first column of the overlap region of the y-th row.
the line-by-line fusion splicing of two frame images is the prior art, and is not described herein again.
At present, most stitching applications adopt the APAP algorithm and its many derivatives. Such algorithms are based on feature-point extraction and depend strongly on feature points, whereas air-defense early-warning imagery mostly shows a feature-poor sky background whose overlap regions carry few distinctive features. Stitching with such algorithms has the following limitations: first, strong dependence on image features and poor applicability to feature-poor regions such as sky and sea surface; second, a heavy computational load, making real-time operation difficult for high-frame-rate or high-resolution systems; third, feature-based automatic matching is prone to mismatches, which in real-time video stitching often causes discontinuity of the panoramic video and slight jitter at the seams. The invention instead adopts a row-by-row registration method based on the pointing angle of the equipment. For a scanning device rotating at constant speed, the higher the pitch angle of the shot, the larger the overlap between adjacent frames; pixel rows nearer the top of the image overlap more, and lower rows overlap less. The invention therefore achieves real-time panoramic stitching by computing the pixel overlap row by row from the pointing-angle information attached to each frame. Because no feature matching is used, the defects of traditional stitching algorithms are avoided, giving better adaptability and real-time performance for panoramic imaging of feature-poor regions such as the sky.
Compared with conventional stitching methods, the stitching effect is outstanding for feature-poor regions such as a sky background. The method also stitches other scenes well; the advantage is simply most obvious for feature-poor scenes such as sky and sea surface.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be taken as limiting the invention. Variations, modifications, substitutions and alterations of the above-described embodiments may be made by those of ordinary skill in the art without departing from the scope of the present invention.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (6)
1. An image stitching device is characterized by comprising a biaxial rotation platform and an imaging system (1);
the biaxial rotation platform comprises a horizontal shaft (2) and a vertical shaft (3);
the imaging system (1) adjusts a pitch angle by rotating around the horizontal shaft (2), and the imaging system (1) realizes 360-degree circumferential scanning by rotating around the vertical shaft (3);
the imaging system (1) comprises an optical system and an image detector, wherein the detector target surface of the image detector is perpendicular to the optical axis of the optical system, and the optical axis passes through the center of the detector target surface.
2. An image stitching method, characterized by comprising the following steps:
S1, after adjusting the pitch angle around the horizontal axis, the imaging system keeps the pitch angle unchanged, performs 360-degree circumferential scanning around the vertical axis while continuously imaging, and writes into each image the pointing angle corresponding to the center of the current frame;
s2, obtaining the corresponding relation between the pointing angle of the pixel on the image and the pointing angle of the image center;
S3, obtaining the number of overlapped pixels Δny of the y-th line;
S4, registering and splicing the images line by line;
carrying out line-by-line fusion splicing of the two frames of images according to the number of overlapped pixels Δny of the y-th line.
3. The method for image stitching according to claim 2, wherein the step S2 comprises the steps of:
S21, the azimuth angle corresponding to the image center is denoted Ac and the pitch angle is denoted Ec, and the angular deviation of the pixel located at position (x, y) in the pixel coordinate system relative to the image center is denoted (ΔAx, ΔEy); the pointing angle (A, E) of the pixel is then expressed by the following formula:

(A, E) = (Ac + ΔAx, Ec + ΔEy) (1)
the ΔAx is represented by the following formula:

ΔAx = (x − W/2)·α (2)

wherein W is the image width; α is the azimuth opening angle or pitch opening angle corresponding to each pixel on the image;
the ΔEy is represented by the following formula:

ΔEy = (H/2 − y)·α (3)

wherein H is the image height.
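A minimal sketch of this per-pixel pointing-angle mapping follows; the sign conventions (pixel origin at the top-left corner, x increasing rightward, y increasing downward) are assumptions of the sketch, not stated in the claim:

```python
def pixel_pointing_angle(x, y, width, height, az_center, el_center, alpha):
    """Pointing angle (azimuth, pitch) of the pixel at (x, y).

    az_center, el_center -- pointing angle written into the frame (Ac, Ec)
    alpha                -- angle subtended by one pixel
    Assumes the pixel origin is the top-left corner, x growing rightward
    and y growing downward, so rows above center have a higher pitch.
    """
    d_az = (x - width / 2) * alpha    # azimuth offset of this column
    d_el = (height / 2 - y) * alpha   # pitch offset of this row
    return az_center + d_az, el_center + d_el
```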
4. The method for image stitching according to claim 3, wherein the step S3 comprises the steps of:
S31, the azimuth angles and pitch angles of the centers of two consecutive frames of images are denoted (AcN, EcN) and (AcN+1, EcN+1) respectively; the central azimuth angle deviation ΔA of the two images is then expressed by the following formula:
ΔA=AcN+1-AcN (4)
S32, correcting the central azimuth angle deviation ΔA of the two frames of images:
according to the geometric projection relation, the central azimuth angle deviation ΔA of the two frames of images is corrected to ΔA′, expressed as:
ΔA′=ΔAcosE (5)
S33, obtaining the actual number of pixels n corresponding to the azimuth deviation between the central image columns of the two frames of images, expressed as:
n=ΔA′/α (6)
S34, the number of overlapped pixels Δn of the two frames of images is expressed by the following formula:
Δn=W-n (7)
wherein W is the image width;
when the number of overlapped pixels Δn of the two frames of images is greater than zero, an overlapping region exists between the two frames of images; when Δn is less than zero, no overlapping region exists between the two frames of images;
s35, substituting the formula (5) and the formula (6) into the formula (7), and obtaining the number Δ n of overlapped pixels of the two frame images, which is expressed by the following formula:
Δn=W-ΔAcosE/α (8)
S36, substituting formula (1) into formula (8), the number of overlapped pixels Δny corresponding to the y-th row of pixels is obtained, represented by the following formula:

Δny = W − ΔA·cos(Ec + ΔEy)/α (9)
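Formula (9) applied to every row of a frame can be sketched as follows; treating y = 0 as the top row (and the degree-based units) are assumptions of the sketch:

```python
import math

def overlaps_per_row(width, height, delta_azimuth, el_center, alpha):
    """Per-row overlap: Delta-n_y = W - delta_azimuth * cos(Ec + dE_y) / alpha.

    Row y = 0 is taken as the top of the image, so its pitch offset is
    dE_y = (height/2 - y) * alpha (an assumed convention). All angles in
    degrees; alpha in degrees per pixel.
    """
    result = []
    for y in range(height):
        row_pitch = el_center + (height / 2 - y) * alpha
        result.append(width - delta_azimuth * math.cos(math.radians(row_pitch)) / alpha)
    return result
```

For an upward-pointing camera this reproduces the behavior described in the specification: rows nearer the image top, imaged at higher pitch, overlap by more pixels.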
5. The method for image stitching according to claim 4, wherein the α is calculated as follows:
according to the geometrical optical imaging relationship of the imaging system, the azimuth opening angle or the elevation opening angle alpha corresponding to each pixel element is expressed by the following formula:
α = tan⁻¹(Ps/v) (10)

wherein Ps is the image element (pixel) size of the image detector, and v is the image distance;
the conjugate relation of the optical system gives:

1/u + 1/v = 1/f (11)

wherein u is the object distance and f is the focal length; since the object distance is far greater than the focal length, the image distance v is equal to the focal length f; then the α is represented by the following formula:
α = tan⁻¹(Ps/f) (12).
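Formula (12) in code form; the 15 µm pixel size and 100 mm focal length in the usage line are illustrative values, not from the patent:

```python
import math

def pixel_subtense_deg(pixel_size, focal_length):
    """Angle (deg) subtended by one detector pixel: alpha = atan(Ps / f).

    Valid when the scene is distant, so the image distance equals the
    focal length f. pixel_size and focal_length must share the same unit.
    """
    return math.degrees(math.atan(pixel_size / focal_length))

# e.g. a 15 um pixel behind a 100 mm lens:
alpha = pixel_subtense_deg(15e-6, 0.100)
```

For such small ratios atan(Ps/f) is essentially Ps/f in radians, so α here is about 0.0086 degrees per pixel.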
6. The method for image stitching according to claim 4, wherein in the step S4, the two frames of images are fused and stitched line by line using a fade-in fade-out algorithm, expressed as:

g(x, y) = w1(x, y)·g1(x, y) + w2(x, y)·g2(x, y) (13)

wherein g(x, y) is the pixel value at point (x, y) of the stitched image, and g1(x, y) and g2(x, y) are the pixel values at point (x, y) of the two images to be stitched; w1(x, y) and w2(x, y) are the weight coefficients in the fusion process, calculated as follows:

w1(x, y) = (Δny − x′)/Δny, w2(x, y) = x′/Δny (14)

wherein x′ is the column index of the point within the overlap region of the y-th row, so that w1 + w2 = 1 and the weights vary linearly across the overlap.
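A generic fade-in/fade-out blend of two rows might look like the following sketch; the linear weight ramp (weights summing to 1 across the overlap) is the standard choice for this algorithm and is assumed here rather than quoted from the patent:

```python
def blend_rows(row1, row2, overlap):
    """Cross-fade the last `overlap` pixels of row1 into the first
    `overlap` pixels of row2, using linear weights that sum to 1."""
    n1 = len(row1)
    blended = []
    for i in range(overlap):
        w2 = (i + 1) / (overlap + 1)   # second image fades in
        w1 = 1.0 - w2                  # first image fades out
        blended.append(w1 * row1[n1 - overlap + i] + w2 * row2[i])
    return list(row1[:n1 - overlap]) + blended + list(row2[overlap:])
```

Stitching one panorama line then reduces to calling this once per row y with its own overlap count Δny.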
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111104216.9A CN113837929B (en) | 2021-09-18 | 2021-09-18 | Graph splicing device and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837929A true CN113837929A (en) | 2021-12-24 |
CN113837929B CN113837929B (en) | 2024-04-12 |
Family
ID=78960124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111104216.9A Active CN113837929B (en) | 2021-09-18 | 2021-09-18 | Graph splicing device and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837929B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101562693A (en) * | 2009-06-01 | 2009-10-21 | 中国兵器工业第二〇五研究所 | Optical imaging splicing device of double CCD image splicing detector |
CN102222337A (en) * | 2011-06-14 | 2011-10-19 | 重庆大学 | Fisheye image correcting method for image stitching |
CN103295231A (en) * | 2013-05-14 | 2013-09-11 | 杭州海康希牧智能科技有限公司 | Method for geometrically correcting vertically mapped images of fisheye lenses in fisheye image mosaic |
WO2018205623A1 (en) * | 2017-05-11 | 2018-11-15 | Boe Technology Group Co., Ltd. | Method for displaying a virtual image, a virtual image display system and device, a non-transient computer-readable storage medium |
CN111080523A (en) * | 2019-12-17 | 2020-04-28 | 天津津航技术物理研究所 | Infrared panoramic search system and infrared panoramic image splicing method based on angle information |
CN111882484A (en) * | 2020-06-16 | 2020-11-03 | 河北汉光重工有限责任公司 | Servo control method in high-speed seamless image splicing of submersible imaging system |
CN112040097A (en) * | 2020-07-24 | 2020-12-04 | 北京空间机电研究所 | Large-breadth camera system with spliced view fields |
Non-Patent Citations (3)
Title |
---|
ZHENGDE SHI 等: "Image Stitching Algorithm Based on Embedded System", IEEE, 7 October 2018 (2018-10-07) * |
刘新平 等 (LIU Xinping et al.): "亚象元线阵CCD焦平面的光学拼接" [Optical stitching of sub-pixel linear-array CCD focal planes], 光子学报 (Acta Photonica Sinica), no. 06, 25 June 2002 (2002-06-25) *
武奕楠 等 (WU Yinan et al.): "空间相机焦平面CCD交错拼接重叠像元数计算" [Calculation of the number of overlapped pixels in staggered CCD focal-plane stitching for a space camera], 光学精密工程 (Optics and Precision Engineering), no. 02, 15 February 2016 (2016-02-15) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117495933A (en) * | 2024-01-02 | 2024-02-02 | 中国科学院长春光学精密机械与物理研究所 | Parallax correction-based real-time registration method for external lens image of photoelectric telescope |
CN117495933B (en) * | 2024-01-02 | 2024-03-12 | 中国科学院长春光学精密机械与物理研究所 | Parallax correction-based real-time registration method for external lens image of photoelectric telescope |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||