CN112233020A - Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium - Google Patents

Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN112233020A
Authority
CN
China
Prior art keywords
image
intermediate image
processed
unmanned aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011239324.2A
Other languages
Chinese (zh)
Inventor
邓练兵
余大勇
方文佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN202011239324.2A priority Critical patent/CN112233020A/en
Publication of CN112233020A publication Critical patent/CN112233020A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an unmanned aerial vehicle image splicing method, an unmanned aerial vehicle image splicing device, computer equipment and a storage medium, wherein the unmanned aerial vehicle image splicing method comprises the following steps: acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of an unmanned aerial vehicle from different angles and different positions; based on a mapping relation, resampling pixel points from the first image to be processed and the second image to be processed to obtain a first intermediate image and a second intermediate image, wherein the mapping relation is generated according to the positions of the pixel points on the image to be processed and an optical distortion coefficient of the shooting device of the unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed; identifying an overlap region between the first intermediate image and the second intermediate image; and splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image. The method can eliminate optical distortion and improve the quality of image splicing.

Description

Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of image processing, in particular to an unmanned aerial vehicle image splicing method and device, computer equipment and a storage medium.
Background
The unmanned aerial vehicle low-altitude platform has the advantages of flexibility, quick response and low operating cost, and has become an important means of quickly acquiring regional spatial data. However, because the shooting device of the unmanned aerial vehicle is affected by the production process and other factors, the optical center of the lens deviates and the lens exhibits optical distortion, so that the same ground features captured in different images are greatly deformed, which severely affects subsequent image splicing. Therefore, how to achieve stable splicing of unmanned aerial vehicle images is a key technology requiring a breakthrough.
Disclosure of Invention
The embodiment of the invention provides an unmanned aerial vehicle image splicing method, an unmanned aerial vehicle image splicing device, computer equipment and a storage medium, so as to eliminate optical distortion and improve the image splicing quality.
The embodiment of the invention provides an unmanned aerial vehicle image splicing method, which comprises the following steps:
acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of an unmanned aerial vehicle from different angles and different positions;
on the basis of a mapping relation, resampling pixel points from a first image to be processed and a second image to be processed to obtain a first intermediate image and a second intermediate image, wherein the mapping relation is generated according to the positions of the pixel points on the image to be processed and an optical distortion coefficient of a shooting device of an unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed;
identifying an overlap region between the first and second intermediate images;
and splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
Preferably, the identifying the overlapping region between the first intermediate image and the second intermediate image comprises:
extracting control points of the first intermediate image and the second intermediate image, and pairing the control points;
and determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pair obtained by pairing.
Preferably, the control point pairing is performed by the following steps:
extracting pixel gray values of control points in the first intermediate image and the second intermediate image;
respectively calculating the average pixel gray value of a control point set of each of the first intermediate image and the second intermediate image, wherein the control point set is composed of a plurality of adjacent control points;
calculating the difference between the average pixel gray value of each control point set of the first intermediate image and the average pixel gray value of each control point set of the second intermediate image;
and taking the first intermediate image control point set and the second intermediate image control point set with the difference value smaller than the preset value as a control point set pair so as to match the control points.
Preferably, if there are a plurality of first intermediate image control point sets and second intermediate image control point sets with differences smaller than the preset value, the pair with the smallest difference is taken as the control point set pair.
Preferably, before resampling pixel points from all the images to be processed based on the mapping relationship to obtain n intermediate images, the method further includes:
and eliminating the rotation error of each image to be processed by adopting a phase correlation method.
Preferably, after obtaining the first target image, the method further comprises:
converting the first target image into a gray-scale map;
counting the number of pixels at each gray level of the gray-scale map, the gray values, and the total number of pixels of the gray-scale map;
calculating a new gray value corresponding to each gray level after equalization according to the following formula:
x_b = (L - l) · Σ_{j=0}^{x_a} p_j + l
where k is the total number of gray levels, x_b is the new gray value, x_a is the gray value of a pixel of the gray-scale map, L is the maximum gray value of the gray-scale map, l is the minimum gray value of the gray-scale map, N is the total number of pixels, j denotes the j-th gray level, and p_j = N_j / N is the probability of the j-th gray level appearing in the gray-scale map, N_j being the number of pixels at the j-th gray level.
And replacing the gray value with the new gray value to obtain a second target image.
The embodiment of the invention also provides an unmanned aerial vehicle image splicing device, which comprises:
the acquisition unit is used for acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of the unmanned aerial vehicle from different angles and different positions;
the resampling unit is used for resampling pixel points from the first image to be processed and the second image to be processed based on a mapping relation to obtain a first intermediate image and a second intermediate image, wherein the mapping relation is generated according to the positions of the pixel points on the image to be processed and the optical distortion coefficient of the shooting device of the unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed;
an overlap area identifying unit configured to identify an overlap area between the first intermediate image and the second intermediate image;
and the splicing unit is used for splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
Preferably, the overlapping area identifying unit includes:
the control point matching subunit is used for extracting the control points of the first intermediate image and the second intermediate image and pairing the control points;
and the determining subunit is used for determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pair obtained by pairing.
The embodiment of the invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores the unmanned aerial vehicle image splicing program, and the processor is used for realizing the steps of the unmanned aerial vehicle image splicing method when executing the unmanned aerial vehicle image splicing program.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program is executed by a processor to realize the steps of the unmanned aerial vehicle image stitching method.
According to the unmanned aerial vehicle image splicing method and device, the computer equipment and the storage medium, the control point set matching another image is determined by calculating the average gray value of each control point set and the difference between the average gray values of every two control point sets, thereby matching the control point sets and achieving the purpose of matching the control points. This embodiment exploits the principle that the pixel gray values in the overlapping area differ little between the two images, so performing control point matching on this basis improves matching accuracy. In addition, because comparing the gray values of two individual control points is error-prone, matching with control point sets further improves accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a flowchart of a method for image stitching by an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an unmanned aerial vehicle image stitching device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first", "second", "third", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In an embodiment, as shown in fig. 1, an unmanned aerial vehicle image stitching method is provided. The method is described below by taking its application to a server as an example, and includes the following steps:
s10: acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of an unmanned aerial vehicle from different angles and different positions;
it can be understood that the first to-be-processed image and the second to-be-processed image may be obtained by a shooting device of the unmanned aerial vehicle shooting the same object from different angles and different positions, but this does not mean that the shooting device needs to focus on the same object, and it is only necessary to ensure that the first to-be-processed image and the second to-be-processed image have the same scene, and of course, if the first to-be-processed image and the second to-be-processed image do not have the same scene, the overlapping region between the two images cannot be identified subsequently.
S20: based on a mapping relation, pixel resampling is carried out on the first image to be processed and the second image to be processed, a first intermediate image and a second intermediate image are obtained, the mapping relation is generated according to the positions of the pixels on the image to be processed and the optical distortion coefficient of a shooting device of the unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed.
Specifically, before resampling the pixel points, the parameters of the shooting device need to be calibrated to obtain its optical distortion coefficients, which are then used to perform distortion correction. The optical distortion coefficients mainly include radial distortion coefficients and decentering (tangential) distortion coefficients. In the embodiment of the present invention, however, calibrating the parameters of the shooting device is not an essential step: distortion correction can also be performed directly using the optical distortion coefficients provided by the manufacturer of the shooting device.
The mapping relation actually refers to the pixel-position correspondence between the distorted images (i.e., the first image to be processed and the second image to be processed) and the undistorted images (i.e., the first intermediate image and the second intermediate image). The image principal point coordinates O(x_0, y_0), the radial distortion coefficients K_1 and K_2 and the decentering distortion coefficients P_1 and P_2 can be obtained from the parameters of the shooting device. If P(x_i, y_i) is an image point on the distorted image, the coordinate deformation of the image point can be expressed as:
δx = x(K_1·r^2 + K_2·r^4) + P_1(r^2 + 2x^2) + 2·P_2·x·y
δy = y(K_1·r^2 + K_2·r^4) + P_2(r^2 + 2y^2) + 2·P_1·x·y    (1)
where x = x_i - x_0, y = y_i - y_0 and r^2 = x^2 + y^2. Therefore, if P'(x_j, y_j) is the pixel on the undistorted image corresponding to P, the position of P' can be expressed as:
x_j = x_i - δx, y_j = y_i - δy    (2)
the formula (2) can construct a mapping relation between a distorted image and a distorted image, namely a mapping relation between a first intermediate image and a first image to be processed, a mapping relation between a second intermediate image and a second image to be processed, and finally resampling is carried out on image points according to the mapping relation to obtain the first intermediate image and the second intermediate image.
Resampling is the process of interpolating new pixel values from the values of existing pixels. Specifically, nearest-neighbor interpolation, bilinear interpolation or cubic convolution interpolation can be used; weighing interpolation quality against processing speed, this embodiment preferably employs bilinear interpolation.
If the position on the distorted image corresponding to the image point P' on the undistorted image is (i + u, j + v), where i and j are the integer parts (the row and column numbers respectively) and u and v are the fractional parts, the pixel value of P' can be computed from the pixel values of the four neighboring points P_1(i, j), P_2(i+1, j), P_3(i, j+1) and P_4(i+1, j+1):
f(i+u, j+v) = (1-u)(1-v)·f(i, j) + v(1-u)·f(i, j+1) + u(1-v)·f(i+1, j) + u·v·f(i+1, j+1)
where f(i, j), f(i+1, j), f(i, j+1) and f(i+1, j+1) are the pixel values of P_1, P_2, P_3 and P_4 on the distorted image respectively.
Performing this operation in turn on every image point of the first image to be processed and of the second image to be processed finally yields the first intermediate image and the second intermediate image.
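A minimal sketch of this bilinear resampling step, assuming the distorted image is held as a NumPy array indexed as img[row, column]; the helper name is hypothetical.

```python
import numpy as np

def bilinear_sample(img, row, col):
    """Evaluate f(i+u, j+v) on the distorted image, where (row, col) is
    the non-integer position corresponding to the undistorted pixel P'."""
    i, j = int(row), int(col)          # integer parts
    u, v = row - i, col - j            # fractional parts
    i1 = min(i + 1, img.shape[0] - 1)  # clamp at the image border
    j1 = min(j + 1, img.shape[1] - 1)
    return ((1 - u) * (1 - v) * img[i, j] + v * (1 - u) * img[i, j1]
            + u * (1 - v) * img[i1, j] + u * v * img[i1, j1])
```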
S30: an overlap region between the first intermediate image and the second intermediate image is identified.
The overlapping region refers to the region in which the same scene appears in both images; it may be identified, for example, with a neural network. Preferably, the present embodiment identifies the overlapping region with the following method:
S31: extracting control points of the first intermediate image and the second intermediate image, and pairing the control points;
S32: determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pairs obtained by pairing.
The control points in steps S31-S32 may be corner points, which yields one possible implementation:
firstly, respectively carrying out corner detection on a first intermediate image and a second intermediate image by adopting a Harris algorithm, then carrying out rough matching on the corners of the first intermediate image and the corners of the second intermediate image, then obtaining the positions of the rough matching corners of the first intermediate image and the second intermediate image, carrying out rough matching corner connection, finally carrying out fine matching and connection of the corners, wherein the region after the fine matching connection is an overlapping region.
The control points may also be pixel points, from which another possible implementation of steps S31-S32 can be derived:
S311: extracting the pixel gray values of the control points in the first intermediate image and the second intermediate image.
S312: respectively calculating the average pixel gray value of each control point set of the first intermediate image and of the second intermediate image, wherein a control point set is composed of a plurality of adjacent control points.
A control point set may be formed by a plurality of adjacent control points centered on one control point; for example, a control point O_0(n, m) together with the control points in the eight directions around it forms one set.
S313: calculating the difference between the average pixel gray value of each control point set of the first intermediate image and the average pixel gray value of each control point set of the second intermediate image.
S314: taking a first intermediate image control point set and a second intermediate image control point set whose difference is smaller than a preset value as a control point set pair, so as to match the control points.
In this embodiment, the average gray value of each control point set is calculated, and the difference between the average gray values of every two control point sets determines which control point set matches the other image, thereby matching the control point sets and achieving the aim of matching the control points. This embodiment exploits the principle that the pixel gray values in the overlapping area differ little between the two images to perform control point matching, which improves matching accuracy. In addition, because comparing the gray values of two individual control points is error-prone, matching with control point sets further improves accuracy.
It should be noted that in some cases the difference values of multiple control point sets may be smaller than the preset value; in this case, these difference values may be compared and sorted, and the pair of control point sets with the smallest difference is taken as the control point set pair, as sketched below.
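A minimal sketch of steps S311-S314, assuming a control point set is a control point plus its eight neighbours and that control points are (row, col) tuples; the preset threshold value is illustrative.

```python
import numpy as np

def set_mean_gray(gray, point):
    """Average pixel gray value of the control point set centered at point
    (the point itself plus its eight neighbours)."""
    r, c = point
    patch = gray[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return float(patch.mean())

def pair_control_point_sets(gray1, points1, gray2, points2, preset=5.0):
    """S313-S314: pair each control point set of the first intermediate image
    with the second-image set whose average gray value differs least."""
    pairs = []
    for p1 in points1:
        m1 = set_mean_gray(gray1, p1)
        best = min(points2, key=lambda q: abs(m1 - set_mean_gray(gray2, q)))
        if abs(m1 - set_mean_gray(gray2, best)) < preset:  # below preset value
            pairs.append((p1, best))
    return pairs
```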
S40: and splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
This step mainly splices the first intermediate image and the second intermediate image to obtain the target image.
In this embodiment, after the first image to be processed and the second image to be processed are obtained, distortion correction is performed on them by resampling their pixel points, so that the optical distortion of the shooting device of the unmanned aerial vehicle does not affect subsequent image splicing. Splicing the first intermediate image and the second intermediate image through the overlapping area between them improves the accuracy of image splicing, avoids a large number of overlapped pixels during splicing and prevents obvious seam lines, so the image splicing effect is better.
The first image to be processed and the second image to be processed may be shot continuously within a short time. Because the unmanned aerial vehicle is small in size and light in weight and is susceptible to wind, there is a certain heading drift during flight and the flight track is irregular, so the first image to be processed and the second image to be processed are rotated relative to each other to a certain degree, resulting in rotation errors. Therefore, before the first image to be processed and the second image to be processed are resampled, the rotation difference between them can be eliminated by a phase correlation method.
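The patent names only the phase correlation method; as one common realization, the sketch below estimates the rotation from the log-polar transform of the Fourier magnitude spectra (the Fourier-Mellin approach), in which a shift along the angular axis of the log-polar image corresponds to a rotation angle. The specific OpenCV helpers used are an assumption of this sketch.

```python
import cv2
import numpy as np

def estimate_rotation(img1, img2):
    """Estimate the rotation (in degrees) between two same-sized grayscale
    images by phase-correlating the log-polar transforms of their spectra."""
    def log_polar_spectrum(img):
        spectrum = np.float32(np.abs(np.fft.fftshift(np.fft.fft2(img))))
        h, w = spectrum.shape
        return cv2.warpPolar(spectrum, (w, h), (w / 2, h / 2),
                             min(w, h) / 2, cv2.WARP_POLAR_LOG)
    s1, s2 = log_polar_spectrum(img1), log_polar_spectrum(img2)
    (_, angular_shift), _ = cv2.phaseCorrelate(s1, s2)
    return 360.0 * angular_shift / s1.shape[0]  # rows span 0..360 degrees
```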
Since the first image to be processed and the second image to be processed are captured at different times, there is a certain difference in the brightness of the light between them, and the first target image obtained by stitching therefore has a certain color difference. This embodiment may further reprocess the first target image to reduce the color difference between the images. Specifically, the following steps may be adopted:
S51: converting the first target image into a gray-scale map;
S52: counting the number of pixels at each gray level of the gray-scale map, the gray values, and the total number of pixels of the gray-scale map;
S53: calculating a new gray value corresponding to each gray level after equalization according to the following formula:
x_b = (L - l) · Σ_{j=0}^{x_a} p_j + l
where k is the total number of gray levels, x_b is the new gray value, x_a is the gray value of a pixel of the gray-scale map, L is the maximum gray value of the gray-scale map, l is the minimum gray value of the gray-scale map, N is the total number of pixels, j denotes the j-th gray level, and p_j = N_j / N is the probability of the j-th gray level appearing in the gray-scale map, N_j being the number of pixels at the j-th gray level.
S54: and replacing the gray value with the new gray value to obtain a second target image.
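A minimal sketch of steps S52-S54 for an 8-bit gray-scale map, following the reconstruction of the formula above; the helper name is hypothetical.

```python
import numpy as np

def equalize_gray(gray):
    """Remap each gray value x_a to x_b = (L - l) * sum_{j<=x_a} p_j + l."""
    hist = np.bincount(gray.ravel(), minlength=256)  # pixels per gray level
    p = hist / gray.size                             # p_j = N_j / N
    cdf = np.cumsum(p)                               # sum of p_j for j <= x_a
    L, l = int(gray.max()), int(gray.min())
    lut = np.clip(l + (L - l) * cdf, 0, 255).astype(np.uint8)
    return lut[gray]                                 # replace with new values
```

For step S51, a color first target image would first be converted to a gray-scale map, e.g. with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).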
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, an unmanned aerial vehicle image stitching device is provided, as shown in fig. 2; the device corresponds one-to-one to the unmanned aerial vehicle image stitching method in the above embodiments. Specifically, the apparatus includes:
an acquisition unit 10 configured to acquire a first image to be processed and a second image to be processed, which are captured by a camera of an unmanned aerial vehicle from different angles and different positions;
the resampling unit 20 is configured to perform pixel resampling from a first image to be processed and a second image to be processed based on a mapping relationship to obtain a first intermediate image and a second intermediate image, where the mapping relationship is generated according to positions of pixels on the image to be processed and an optical distortion coefficient of a shooting device of the unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed;
an overlap area identifying unit 30 for identifying an overlap area between the first intermediate image and the second intermediate image;
and the splicing unit 40 is configured to splice the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
The overlapping area identifying unit 30 includes:
the control point matching subunit is used for extracting the control points of the first intermediate image and the second intermediate image and pairing the control points;
and the determining subunit is used for determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pair obtained by pairing.
For specific limitations of the unmanned aerial vehicle image stitching device, reference may be made to the above limitations on the unmanned aerial vehicle image stitching method, which is not described herein again. The modules in the unmanned aerial vehicle image splicing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing a method for unmanned aerial vehicle image stitching when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which an unmanned aerial vehicle image stitching program is stored; when executed by a processor, the program implements the unmanned aerial vehicle image stitching method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An unmanned aerial vehicle image splicing method is characterized by comprising the following steps:
acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of an unmanned aerial vehicle from different angles and different positions;
on the basis of a mapping relation, resampling pixel points from a first image to be processed and a second image to be processed to obtain a first intermediate image and a second intermediate image, wherein the mapping relation is generated according to the positions of the pixel points on the image to be processed and an optical distortion coefficient of a shooting device of an unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed;
identifying an overlap region between the first and second intermediate images;
and splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
2. The unmanned aerial vehicle image stitching method of claim 1, wherein the identifying an overlap region between the first intermediate image and the second intermediate image comprises:
extracting control points of the first intermediate image and the second intermediate image, and pairing the control points;
and determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pair obtained by pairing.
3. An unmanned aerial vehicle image stitching method as claimed in claim 2, wherein the control point pairing is performed by the following steps:
extracting pixel gray values of control points in the first intermediate image and the second intermediate image;
respectively calculating the average pixel gray value of a control point set of each of the first intermediate image and the second intermediate image, wherein the control point set is composed of a plurality of adjacent control points;
calculating the difference between the average pixel gray value of each control point set of the first intermediate image and the average pixel gray value of each control point set of the second intermediate image;
and taking the first intermediate image control point set and the second intermediate image control point set with the difference value smaller than the preset value as a control point set pair so as to match the control points.
4. An unmanned aerial vehicle image stitching method as claimed in claim 3, wherein if there are a plurality of first and second intermediate image control point sets having a difference value smaller than a preset value, the control point set having the smallest difference value is taken as the control point set pair.
5. An unmanned aerial vehicle image stitching method as claimed in claim 1, wherein before resampling pixel points on all the images to be processed based on the mapping relationship to obtain n intermediate images, the method further comprises:
and eliminating the rotation error of each image to be processed by adopting a phase correlation method.
6. An unmanned aerial vehicle image stitching method as defined in claim 1, wherein after obtaining the first target image, the method further comprises:
converting the first target image into a gray-scale map;
counting the number of pixels at each gray level of the gray-scale map, the gray values, and the total number of pixels of the gray-scale map;
calculating a new gray value corresponding to each gray level after equalization according to the following formula:
x_b = (L - l) · Σ_{j=0}^{x_a} p_j + l
where k is the total number of gray levels, x_b is the new gray value, x_a is the gray value of a pixel of the gray-scale map, L is the maximum gray value of the gray-scale map, l is the minimum gray value of the gray-scale map, N is the total number of pixels, j denotes the j-th gray level, and p_j = N_j / N is the probability of the j-th gray level appearing in the gray-scale map, N_j being the number of pixels at the j-th gray level;
and replacing the gray value with the new gray value to obtain a second target image.
7. The utility model provides an unmanned aerial vehicle image splicing apparatus which characterized in that includes:
the acquisition unit is used for acquiring a first image to be processed and a second image to be processed which are shot by a shooting device of the unmanned aerial vehicle from different angles and different positions;
the resampling unit is used for resampling pixel points from the first image to be processed and the second image to be processed based on a mapping relation to obtain a first intermediate image and a second intermediate image, wherein the mapping relation is generated according to the positions of the pixel points on the image to be processed and the optical distortion coefficient of the shooting device of the unmanned aerial vehicle, the first intermediate image corresponds to the first image to be processed, and the second intermediate image corresponds to the second image to be processed;
an overlap area identifying unit configured to identify an overlap area between the first intermediate image and the second intermediate image;
and the splicing unit is used for splicing the first intermediate image and the second intermediate image based on the overlapping area to obtain a first target image.
8. An unmanned aerial vehicle image stitching device as defined in claim 7, wherein the overlap region identification unit comprises:
the control point matching subunit is used for extracting the control points of the first intermediate image and the second intermediate image and pairing the control points;
and the determining subunit is used for determining the overlapping area of the first intermediate image and the second intermediate image according to the control point pair obtained by pairing.
9. A computer device comprising a memory in which a drone image stitching program is stored and a processor for implementing the steps of the drone image stitching method of any one of claims 1 to 6 when executing the drone image stitching program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the unmanned aerial vehicle image stitching method according to any one of claims 1 to 5.
CN202011239324.2A 2020-11-09 2020-11-09 Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium Pending CN112233020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011239324.2A CN112233020A (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011239324.2A CN112233020A (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112233020A (en) 2021-01-15

Family

ID=74122222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011239324.2A Pending CN112233020A (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112233020A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393378A (en) * 2021-05-26 2021-09-14 浙江大华技术股份有限公司 Image splicing method and device for photovoltaic module, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447602A (en) * 2016-08-31 2017-02-22 浙江大华技术股份有限公司 Image mosaic method and device
CN109102464A (en) * 2018-08-14 2018-12-28 四川易为智行科技有限公司 Panorama Mosaic method and device
CN111598777A (en) * 2020-05-13 2020-08-28 上海眼控科技股份有限公司 Sky cloud image processing method, computer device and readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447602A (en) * 2016-08-31 2017-02-22 浙江大华技术股份有限公司 Image mosaic method and device
CN109102464A (en) * 2018-08-14 2018-12-28 四川易为智行科技有限公司 Panorama Mosaic method and device
CN111598777A (en) * 2020-05-13 2020-08-28 上海眼控科技股份有限公司 Sky cloud image processing method, computer device and readable storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
仲明: "Research on Image Stitching Technology Based on Accurate Registration of Feature Points", China Master's Theses Full-text Database, Information Science and Technology Series *
刘国阳: "Research on Dimension Measurement Technology for Tiny Parts Based on Machine Vision", China Master's Theses Full-text Database, Information Science and Technology Series *
向兆威: "Research and Application of Image Registration Algorithms in Panoramic Stitching", China Master's Theses Full-text Database, Information Science and Technology Series *
徐秋辉: "Research on Geometric Correction and Mosaicking Methods for UAV Remote Sensing Images without Control Points", China Master's Theses Full-text Database, Information Science and Technology Series *
李玉霞 et al.: "Geometric Distortion Correction Algorithm for UAV Remote Sensing Images Lacking Control Points", Journal of University of Electronic Science and Technology of China *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393378A (en) * 2021-05-26 2021-09-14 浙江大华技术股份有限公司 Image splicing method and device for photovoltaic module, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN111179358B (en) Calibration method, device, equipment and storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
US11164323B2 (en) Method for obtaining image tracking points and device and storage medium thereof
CN109345467B (en) Imaging distortion correction method, imaging distortion correction device, computer equipment and storage medium
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN113850807B (en) Image sub-pixel matching positioning method, system, device and medium
CN114529837A (en) Building outline extraction method, system, computer equipment and storage medium
WO2019232793A1 (en) Two-camera calibration method, electronic device and computer-readable storage medium
CN110796709A (en) Method and device for acquiring size of frame number, computer equipment and storage medium
CN114037992A (en) Instrument reading identification method and device, electronic equipment and storage medium
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115526781A (en) Splicing method, system, equipment and medium based on image overlapping area
CN115937003A (en) Image processing method, image processing device, terminal equipment and readable storage medium
CN112233020A (en) Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium
US20120038785A1 (en) Method for producing high resolution image
CN116777769A (en) Method and device for correcting distorted image, electronic equipment and storage medium
CN116524041A (en) Camera calibration method, device, equipment and medium
CN109801334B (en) Workpiece positioning method, standard point determining method, device and equipment
CN113947686A (en) Method and system for dynamically adjusting feature point extraction threshold of image
CN113409375A (en) Image processing method, image processing apparatus, and non-volatile storage medium
CN109242894B (en) Image alignment method and system based on mobile least square method
CN115239612A (en) Circuit board positioning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210115

RJ01 Rejection of invention patent application after publication