CN103679674A - Method and system for splicing images of unmanned aircrafts in real time - Google Patents


Info

Publication number
CN103679674A
Authority
CN
China
Prior art keywords
image
training
unmanned vehicle
splicing
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310628020.9A
Other languages
Chinese (zh)
Other versions
CN103679674B (en)
Inventor
安山
王婷
张宏
张春泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Space Star Technology Co Ltd
Original Assignee
Space Star Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Space Star Technology Co Ltd filed Critical Space Star Technology Co Ltd
Priority to CN201310628020.9A priority Critical patent/CN103679674B/en
Publication of CN103679674A publication Critical patent/CN103679674A/en
Application granted granted Critical
Publication of CN103679674B publication Critical patent/CN103679674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and system for stitching unmanned aerial vehicle (UAV) images in real time, addressing the poor real-time performance and obvious stitching traces of prior image-stitching techniques. The method comprises a training stage: before the UAV's mission, training images are acquired, their local features extracted, and a vocabulary tree built; and an online stage: during the mission, earth-observation images are acquired, their local features extracted, the vocabulary tree queried to quickly retrieve spatially adjacent images, the spatially adjacent images matched, the transformations between images computed, the images stitched according to those transformations, and the stitching seams removed. With this method and system, UAV images can be stitched to give users a wide field of view; the real-time performance is high and the stitching quality good, meeting the requirements of applications in many fields.

Description

Real-time image stitching method and system for unmanned aerial vehicles
Technical field
The present invention relates to a real-time image stitching method and system for unmanned aerial vehicles, and belongs to the field of image processing.
Background technology
The multi-modal CCD cameras carried by unmanned aerial vehicles (UAVs) can acquire large numbers of aerial remote-sensing images during earth observation. When a UAV remote-sensing platform acquires images, the flight altitude and the camera's focal length limit its coverage, so a single image rarely contains the entire region of interest. To obtain more information about a multi-target region, the field of view must be extended: images acquired from different angles can be stitched and blended into a smooth, seamless mosaic, widening the effective field of view.
Patent application No. 200810237427.8, "Method for splicing non-control point images", extracts a feature-point set from every image in a sequence, searches for corresponding feature points between adjacent images, estimates the homography between each adjacent pair with the robust RANSAC algorithm, and produces the stitching result by chained multiplication and blending. It mainly targets video imagery: the chained multiplication stitches the N images one by one, in the order of the original sequence.
This approach has the following defects:
Errors accumulate: an error in the homography estimated for one image pair propagates to every subsequent image, and the accumulated error degrades later stitching results. Moreover, the invention addresses general-purpose image stitching; it is not a method specialized for UAV imagery and is not optimized for that practical application.
Patent application No. 201010502908.4, "Low-altitude unmanned vehicle sequence image splicing method and system", combines SURF and HARRIS-AFFINE features to extract feature points from every image in a sequence; applies the robust RANSAC algorithm with epipolar-geometry constraints to match the feature points accurately and compute precise homographies; verifies the matched images with a probabilistic model, keeping good images and completely discarding bad ones such as severely yawed frames; determines the connection order of the sequence with a minimum-spanning-tree global stitching strategy to avoid large cumulative errors; and applies global optimization to adjust the homographies, seamlessly mosaicking hundreds or even thousands of images of the test area.
This approach has the following defects:
It stitches the images after the UAV has collected the whole sequence, rather than in real time during acquisition. It mosaics hundreds to thousands of images into one composite, whereas a UAV flight may yield tens of thousands of images or more, an order of magnitude the method can hardly support. And because it computes matching point pairs between images exhaustively, the computation is too time-consuming for large image sets to run fast.
Patent application No. 201110085596.6, "Real-time panoramic image stitching method of aerial videos shot by unmanned plane", uses a video capture card to collect the imagery relayed in real time from the UAV to the base station over a microwave channel, selects key frames from the image sequence, and enhances the key frames. During stitching, the robust SURF feature detector is first used for frame feature detection and frame-to-frame matching; a frame-to-panorama transformation model then reduces cumulative chained-multiplication errors; the UAV's GPS positions identify frames that are non-adjacent in time but adjacent in space along the flight path, refining the frame-to-panorama transformation; overlap regions are determined and fused to build the panorama, achieving stitch-as-you-fly operation. During transformation, temporally and spatially adjacent frames within the field of view are used to optimize the warp and obtain an accurate panorama.
This approach has the following defects:
Computing matching point pairs between images is time-consuming for large image sets, so the method cannot stitch in real time. It relies on GPS positions to find spatially adjacent frames, yet GPS signals are easily jammed or blocked, which degrades the method in such conditions. And it takes no measures to remove stitching seams, so the resulting mosaics are of poor quality with obvious stitching traces.
The goal of the present invention is to overcome these defects by designing a method that stitches images in real time. Using the vocabulary-tree method, a classical image-retrieval technique, the invention achieves real-time image stitching at a frame rate of 10 Hz.
Summary of the invention
The technical problem solved by the invention is: overcoming the deficiencies of the prior art by providing a real-time UAV image stitching method and system, thereby solving the poor real-time performance and obvious stitching traces of prior image-stitching techniques.
Technical solution of the present invention is:
A real-time UAV image stitching method, comprising: a training stage that builds a vocabulary tree before the UAV executes its mission, and an online stage that stitches images with the vocabulary tree while the UAV executes the mission.
The training stage comprises the following steps:
(11) acquiring training images with a vision sensor and extracting the interest points in the training images;
(12) describing each training image by the neighborhoods of its interest points, and building descriptor vectors from this local information as the training image features;
(13) quantizing the image features into words by hierarchical clustering and building the vocabulary tree.
The online stage comprises the following steps:
(21) extracting the interest points of the earth-observation images obtained during flight, describing each observed image by its interest-point neighborhoods, and building descriptor vectors from this local information as the observed image features;
(22) querying the vocabulary tree obtained in the training stage with the observed image features to retrieve spatially adjacent images;
(23) taking the feature points that belong to the same word in the observed image and a spatially adjacent image as match points, forming the matching point set;
(24) using the matching point set to compute the fundamental matrix from the observed image to the spatially adjacent image, obtaining the transformation model between them;
(25) stitching the images: warping and joining the matched images to obtain the stitched image, and fusing the overlap regions of the matched images with a fast interpolation method;
(26) finding the stitching line between the matched images and minimizing the total error along the stitching line in the stitched image produced by step (25) to obtain the output image.
The training and observed image features are obtained by any one of the following methods:
(a) compute the determinant of the Hessian matrix and search for extrema in the three-dimensional scale space to obtain scale-invariant feature points; then define each feature point's dominant orientation from the surrounding circular region, and extract a 64-dimensional descriptor vector from the intensity distribution in the feature-point neighborhood as the image feature;
(b) extract corner-like FAST feature points by comparing the intensities of the pixels on a Bresenham circle of radius 3 in the training or observed image; then compute a BRIEF descriptor vector over a small rectangular patch around each FAST point as the image feature;
(c) extract FAST feature points as in (b); then use the small rectangular patch around each FAST point directly as the image feature.
The vocabulary tree is built by the following steps:
The image features of all training images form the training set, on which the vocabulary tree is trained without supervision. Define k as the branching factor of the clustering, i.e. the number of children of each node; partition the initial training set into k clusters with the k-means or k-means++ clustering algorithm; then apply the same process to each cluster, recursively splitting it into k new sub-clusters, and build the tree level by level until the predefined maximum depth L is reached.
Spatially adjacent images are retrieved from the vocabulary tree in step (22) of the online stage as follows:
Starting from the root of the vocabulary tree, each feature vector of the current observed image is compared with the k cluster centers of the next level, the center with the smallest Euclidean distance is selected, and the vector is propagated down level by level until it reaches a leaf node. The path down the tree is encoded as an integer, and the similarity between the observed image and the training images is scored by the TF-IDF method; the training images whose similarity scores reach a predetermined threshold are selected as the spatially adjacent images of the current observed image.
The fundamental matrix in step (24) of the online stage is computed as follows:
The UAV's camera generally points vertically downward at the ground and flies at high altitude, so the ground can be approximated as a plane; the fundamental matrix is computed with the RANSAC, PROSAC, BaySAC or GroupSAC algorithm.
The fast interpolation method in step (25) of the online stage works as follows:
Each pixel of the matched images is given a weight in proportion to its distance to the center of the stitched image, and the R, G and B channel values of the overlap region of the matched images are summed with these weights to synthesize the stitched image.
The stitching line between the matched images in step (26) of the online stage is found with Dijkstra's algorithm.
A real-time UAV image stitching system, comprising the following eight modules:
an image acquisition module, which obtains images from vision sensors of various types;
an image preprocessing module, which applies median-filter preprocessing to the images;
an image feature extraction module, which detects the interest points in the images and computes the image features;
a vocabulary tree construction module, which, before the UAV's mission, quantizes the image features of all training images into words by hierarchical clustering and builds the vocabulary tree from them; this process runs only once, and the resulting tree can be reused across different missions;
a spatially-adjacent-image retrieval module, which, during the mission, extracts the features of the current image, compares them, starting from the root of the vocabulary tree, with the k cluster centers of the next level, selects the center with the smallest Euclidean distance, propagates down level by level to a leaf node, scores similarity accordingly, and selects the images whose similarity scores reach a predetermined threshold as the spatially adjacent images of the current image;
an image matching module, which keeps a direct index storing, for every earth-observation image, the list of words the image contains and the image features associated with each word; when matching the current image against a spatially adjacent image, only feature points belonging to the same word are paired as match points, avoiding matching all features between the two images;
an image transformation module, which applies a robust estimation algorithm such as RANSAC, PROSAC, BaySAC or GroupSAC to the matching point set produced by the image matching module, obtaining the fundamental matrix from the current image to the spatially adjacent image, i.e. the transformation model between the images;
an image stitching module, which warps and joins the matched images into the stitched image and fuses their overlap regions with the fast interpolation method;
a seam removal module, which finds the stitching line between the matched images and minimizes the total squared error along the stitching line to remove the seam.
The system can run on the UAV's onboard processor, sending the computed stitched image to the UAV's monitoring base station over a wireless link; or it can run on a computer at the monitoring base station, with the UAV transmitting the acquired images to the base station over the wireless link and the images stitched on the base-station computer.
The images can come from vision sensors of different types, such as visible-light or infrared sensors, and can be key frames selected from video captured by the UAV's onboard camera or digital photographs taken by the onboard camera.
Compared with the prior art, the present invention has the following advantages:
(1) Real-time operation: for 640 x 480 images with FAST+BRIEF features, reading and displaying an image takes 42 ms on average; extracting the features of the current image and retrieving its spatial neighbors takes 6.15 ms on average; obtaining the inter-image transformation with the direct-index method combined with RANSAC takes 0.82 ms on average; stitching and seam removal take 50 ms on average. The total time per stitched image is about 100 ms, enabling image stitching at a frame rate of 10 Hz; the real-time performance of the invention is better than that of other image-stitching techniques.
(2) Good stitching quality: a fast interpolation method fuses the overlap regions of the stitched images and the stitching lines between them are eliminated, so the resulting mosaics look good.
(3) Wide applicability: the invention needs only the images themselves and no external information such as GPS, so in principle it can be applied in any environment, including underwater, in valleys, underground, and in occluded environments.
(4) The method can be widely applied in military reconnaissance, real-time disaster monitoring, power-line inspection, advertising photography, and many other fields.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 illustrates the space partitioning produced by hierarchical k-means clustering in the training stage of the real-time UAV image stitching method of the invention;
Fig. 3 is a schematic diagram of the real-time UAV image stitching system of the invention.
Detailed description of the embodiments
The technical solution of the invention is described in detail below with reference to the drawings.
As shown in Fig. 1, the invention discloses a real-time UAV image stitching method and system. The method comprises two stages: a training stage and an online stage.
Training stage
Before the UAV's mission, training images are acquired, their local features extracted, and the vocabulary tree built. This process runs only once, and the resulting tree can be reused across different missions. It is realized through the following steps:
1. Acquire training images and preprocess them
Training images are acquired with a vision sensor and the interest points in them are extracted. Vision sensors of different types can be used, and the images can optionally be preprocessed, e.g. with a median filter.
2. Extract the local features of the training images
Detect the interest points in all training images, describe each image by the neighborhoods of its interest points, and build high-dimensional descriptor vectors from this local information as the image features. One usable feature is FAST+BRIEF. The extraction method is any one of the following:
(a) compute the determinant of the Hessian matrix and search for extrema in the three-dimensional scale space to obtain scale-invariant feature points; then define each feature point's dominant orientation from the surrounding circular region, and extract a 64-dimensional descriptor vector from the intensity distribution in the feature-point neighborhood as the image feature;
(b) extract corner-like FAST feature points by comparing the intensities of the pixels on a Bresenham circle of radius 3 in the training or observed image; then compute a BRIEF descriptor vector over a small rectangular patch around each FAST point as the image feature. The BRIEF descriptor is a binary vector: each bit is the result of comparing the intensities of two randomly chosen pixels in the patch, and the patch is first smoothed with a Gaussian kernel to suppress noise;
(c) extract FAST feature points as in (b); then use the small rectangular patch around each FAST point directly as the image feature.
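As a rough illustration of option (b), the sketch below implements a simplified FAST segment test and a BRIEF-style binary descriptor in plain Python. The circle offsets, the threshold t, the arc length n and the patch size are illustrative choices rather than the patent's exact parameters, and the Gaussian pre-smoothing is omitted; a real system would use an optimized library implementation.

```python
import random

# Offsets of the 16 pixels on a Bresenham circle of radius 3, as used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_corner(img, x, y, t=20, n=9):
    """Segment test: (x, y) is a corner if >= n contiguous circle pixels
    are all brighter than center + t or all darker than center - t."""
    c = img[y][x]
    marks = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        marks.append(1 if p > c + t else (-1 if p < c - t else 0))
    doubled = marks * 2                      # duplicate to handle wrap-around arcs
    for sign in (1, -1):
        run = 0
        for m in doubled:
            run = run + 1 if m == sign else 0
            if run >= n:
                return True
    return False

def brief_descriptor(img, x, y, pairs):
    """Binary descriptor: bit i compares the intensities of the i-th
    pre-drawn pixel pair inside the patch around (x, y)."""
    bits = 0
    for i, ((ax, ay), (bx, by)) in enumerate(pairs):
        if img[y + ay][x + ax] < img[y + by][x + bx]:
            bits |= 1 << i
    return bits
```

On a synthetic image containing a bright square, `fast_corner` fires at the square's corner but not in its flat interior, and the descriptor is deterministic once the random pair layout is fixed.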
3. Quantize the image features into words by hierarchical clustering and build the vocabulary tree
As shown in Fig. 2, the image features of all training images form the training set on which the vocabulary tree is trained without supervision. Define k as the branching factor of the clustering, i.e. the number of children of each node; partition the initial training set into k clusters with the k-means or k-means++ clustering algorithm; then apply the same process to each cluster, recursively splitting it into k new sub-clusters, and build the tree level by level until the predefined maximum depth L is reached.
K-means is an unsupervised clustering algorithm suitable for real-time use. Its workflow is:
(1) arbitrarily select k of the n descriptor vectors as initial cluster centers;
(2) assign every other vector to the nearest cluster according to its similarity (Euclidean distance) to the cluster centers;
(3) recompute each cluster center as the cluster mean $u_i = \frac{1}{n_i} \sum_{p_x \in S_i} p_x$, where $n_i$ is the number of vectors belonging to cluster i and $p_x$ ranges over those vectors;
(4) iterate in this way until the objective function satisfies the termination condition; the data are finally partitioned into k classes.
The sum-of-squared-errors criterion is used as the objective function:
$E = \arg\min_{S} \sum_{i=1}^{k} \sum_{x_j \in S_i} \| x_j - u_i \|^2$    (1)
where $x_j$ is a data vector, $S_i$ is the cluster containing $x_j$, and $u_i$ is the mean of the points in $S_i$.
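The hierarchical clustering described above can be sketched as follows. This toy pure-Python version (the function names and the dict-based tree are my own choices; k-means++ seeding and a convergence test on the objective are omitted for brevity) recursively splits the training descriptors with plain k-means until the maximum depth is reached.

```python
import random

def kmeans(vectors, k, iters=10, seed=0):
    """Plain k-means over tuples, with Euclidean distance."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)                 # step (1): arbitrary initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:                            # step (2): assign to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            clusters[i].append(v)
        for i, cl in enumerate(clusters):            # step (3): recompute the means
            if cl:
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers, clusters

def build_vocab_tree(vectors, k, max_depth, depth=0):
    """Recursively split each cluster into k sub-clusters down to max_depth."""
    node = {"children": []}
    if depth >= max_depth or len(vectors) < k:
        return node                                  # leaf of the vocabulary tree
    centers, clusters = kmeans(vectors, k)
    for center, cl in zip(centers, clusters):
        child = build_vocab_tree(cl, k, max_depth, depth + 1)
        child["center"] = center
        node["children"].append(child)
    return node
```

Each level stores k cluster centers; descending the tree by nearest center quantizes a descriptor to a leaf "word".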
Online stage
While the UAV executes its mission, the vocabulary tree is queried to find spatially adjacent images, the inter-image transformations are computed to build the mosaic, and the stitching seams are removed. The stage is realized through the following steps:
1. Acquire earth-observation images and preprocess them
Earth-observation images are acquired in flight; vision sensors of different types can be used, and the images can optionally be preprocessed, e.g. with a median filter.
2. Extract the local features of the earth-observation images
Detect the interest points in the earth-observation images, describe each image by the neighborhoods of its interest points, and build high-dimensional descriptor vectors from this local information as the image features; the features extracted here are of the same type as those used in the training stage.
The extraction method is any one of the following:
(a) compute the determinant of the Hessian matrix and search for extrema in the three-dimensional scale space to obtain scale-invariant feature points; then define each feature point's dominant orientation from the surrounding circular region, and extract a 64-dimensional descriptor vector from the intensity distribution in the feature-point neighborhood as the image feature;
(b) extract corner-like FAST feature points by comparing the intensities of the pixels on a Bresenham circle of radius 3 in the training or observed image; then compute a BRIEF descriptor vector over a small rectangular patch around each FAST point as the image feature. The BRIEF descriptor is a binary vector: each bit is the result of comparing the intensities of two randomly chosen pixels in the patch, and the patch is first smoothed with a Gaussian kernel to suppress noise;
(c) extract FAST feature points as in (b); then use the small rectangular patch around each FAST point directly as the image feature.
3. Retrieve spatially adjacent images from the vocabulary tree built in the training stage
Starting from the root of the vocabulary tree, the features of the current image are compared with the k cluster centers of the next level; the center with the smallest Euclidean distance is selected and the features are propagated level by level to a leaf node. Similarity is then scored accordingly, and the images (acquired earlier in the flight) whose similarity scores reach a predetermined threshold are selected as the spatially adjacent images of the current image.
The TF-IDF (Term Frequency-Inverse Document Frequency) model is applied to score image similarity. The similarity between the current image and previously acquired images is measured as follows: each node is given a weight, and every image whose features pass through a node accumulates the corresponding score. Nodes carry different amounts of information, so their weights differ: two vectors that agree near the leaf nodes are more similar, so those nodes receive larger weights, while nodes close to the root receive smaller weights. Following information entropy, the weight of node i in the vocabulary tree is set to:
$w_i = \ln \frac{N}{N_i}$    (2)
where N is the number of images in the database and $N_i$ is the number of database images that have at least one descriptor vector passing through node i. The query vector and database vector are then defined from these weights:
$q_i = n_i w_i, \quad d_i = m_i w_i$    (3)
where $n_i$ and $m_i$ are the numbers of descriptor vectors passing through node i in the image to be retrieved and in the database image, respectively. The similarity score between the descriptor vectors of two images is:
$s(q, d) = \left\| \frac{q}{\|q\|} - \frac{d}{\|d\|} \right\|$    (4)
The normalized difference can be computed with the $L_1$ norm:
$s(q, d) = 1 - \frac{1}{2} \left\| \frac{q}{\|q\|_1} - \frac{d}{\|d\|_1} \right\|_1$    (5)
or with the $L_2$ norm:
$\| q - d \|_2^2 = 2 - 2 \sum_{i \mid q_i \neq 0,\, d_i \neq 0} q_i d_i$    (6)
The images whose similarity scores reach a predetermined threshold are selected as the spatially adjacent images of the current image.
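To make the scoring concrete, here is a small pure-Python sketch of the entropy weight of eq. (2), the weighted vectors of eq. (3), and the L1 similarity of eq. (5). Representing an image as a dict from word id to occurrence count is my own simplification of the tree paths.

```python
import math

def node_weights(image_word_sets, vocab_size):
    """Entropy weight w_i = ln(N / N_i): N images in the database,
    N_i of them containing word i (eq. 2)."""
    n = len(image_word_sets)
    weights = []
    for i in range(vocab_size):
        n_i = sum(1 for words in image_word_sets if i in words)
        weights.append(math.log(n / n_i) if n_i else 0.0)
    return weights

def tfidf_vector(word_counts, weights):
    """Component i is count * w_i (eq. 3), then L1-normalised."""
    vec = {i: c * weights[i] for i, c in word_counts.items() if weights[i] > 0}
    norm = sum(abs(v) for v in vec.values()) or 1.0
    return {i: v / norm for i, v in vec.items()}

def similarity(q, d):
    """Score of eq. (5): 1 - 0.5 * ||q - d||_1 on normalised vectors."""
    keys = set(q) | set(d)
    return 1.0 - 0.5 * sum(abs(q.get(i, 0.0) - d.get(i, 0.0)) for i in keys)
```

An image scores 1 against itself and 0 against an image sharing no words, so thresholding the score selects the spatially adjacent candidates.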
4. Pair the feature points that belong to the same word in the observed image and a spatially adjacent image as match points, forming the matching point set
A direct index stores, for every earth-observation image, the list of words the image contains and the image features associated with each word. When matching the current image against a spatially adjacent image, only feature points belonging to the same word are paired as match points, avoiding matching all features between the two images.
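A minimal sketch of the direct index, assuming each feature has already been quantised to a word id by the vocabulary tree (the data layout is an illustrative choice): only features that fall into the same word are paired, instead of exhaustively comparing all features of both images.

```python
def build_direct_index(image_features):
    """Map each image id to {word id -> indices of the features that
    quantised to that word}."""
    index = {}
    for img_id, words_per_feature in image_features.items():
        words = {}
        for feat_idx, word in enumerate(words_per_feature):
            words.setdefault(word, []).append(feat_idx)
        index[img_id] = words
    return index

def candidate_matches(index, img_a, img_b):
    """Pair up the features of the two images that share a word."""
    words_a, words_b = index[img_a], index[img_b]
    pairs = []
    for word in words_a.keys() & words_b.keys():
        for fa in words_a[word]:
            for fb in words_b[word]:
                pairs.append((fa, fb))
    return pairs
```

The candidate pairs then go to the robust estimation step, which discards the wrong ones as outliers.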
5. Use the matching point set to compute the fundamental matrix from the observed image to the spatially adjacent image, obtaining the transformation model between them
The UAV's camera generally points vertically downward at the ground and flies at high altitude, so the ground can be approximated as a plane; the fundamental matrix can be computed with an algorithm such as RANSAC, PROSAC, BaySAC or GroupSAC.
Here the RANSAC algorithm is used to compute the fundamental matrix. Its concrete steps are:
(1) randomly select a sample subset of s data points from the matching point set S and instantiate the model from this subset;
(2) determine the set S_i of data points lying within the distance threshold t of the model; S_i is the consensus set of the sample and defines the inliers;
(3) if the size of S_i (the number of inliers) is greater than a threshold T, re-estimate the model using all the data in S_i and terminate;
(4) if the size of S_i is less than T, select a new subset and repeat the steps above;
(5) after N trials, select the largest consensus set S_i and re-estimate the model using all of its data points.
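The loop above can be sketched as follows. To keep the example self-contained, a pure 2-D translation (one-point minimal sample) stands in for the fundamental-matrix model the patent fits; the threshold, inlier count and trial count are illustrative.

```python
import random

def ransac_translation(matches, t=2.0, min_inliers=10, trials=100, seed=0):
    """RANSAC: sample a minimal set, score its consensus set, and
    re-estimate the model from all inliers of the best hypothesis."""
    rng = random.Random(seed)
    best = []
    for _ in range(trials):
        (xa, ya), (xb, yb) = rng.choice(matches)     # minimal sample: one pair
        dx, dy = xb - xa, yb - ya                    # instantiate the model
        inliers = [(pa, pb) for pa, pb in matches    # consensus set within t
                   if (pb[0] - pa[0] - dx) ** 2 + (pb[1] - pa[1] - dy) ** 2 <= t * t]
        if len(inliers) > len(best):
            best = inliers
            if len(best) >= min_inliers:             # early exit past threshold T
                break
    if not best:
        return None, []
    mx = sum(pb[0] - pa[0] for pa, pb in best) / len(best)   # re-estimate from inliers
    my = sum(pb[1] - pa[1] for pa, pb in best) / len(best)
    return (mx, my), best
```

Replacing the one-point translation sample with a multi-point fundamental-matrix (or homography) solver recovers the estimator the method actually uses.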
6. Stitch the images
The matched images are warped and joined to obtain the stitched image, and a fast interpolation method fuses their overlap regions: each pixel of the matched images is given a weight in proportion to its distance to the center of the stitched image, and the R, G and B channel values of the overlap region are summed with these weights to synthesize the stitched image.
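The fast interpolation can be sketched as a per-pixel weighted sum over the overlap. In this reading (my interpretation of the distance-proportional weighting), each image's weight grows as the pixel gets closer to that image's own center, and the two weights are normalised to sum to one; the image representation and centers are made up for illustration.

```python
def blend_overlap(img_a, img_b, center_a, center_b):
    """Fuse two aligned RGB overlap regions: weight each source by the
    pixel's closeness to that source's center, normalised per pixel."""
    h, w = len(img_a), len(img_a[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            da = ((x - center_a[0]) ** 2 + (y - center_a[1]) ** 2) ** 0.5
            db = ((x - center_b[0]) ** 2 + (y - center_b[1]) ** 2) ** 0.5
            wa = db / (da + db) if (da + db) else 0.5  # nearer a's center -> larger wa
            wb = 1.0 - wa
            out[y][x] = tuple(wa * ca + wb * cb        # blend R, G, B channels
                              for ca, cb in zip(img_a[y][x], img_b[y][x]))
    return out
```

Because the weights vary smoothly across the overlap, the transition between the two source images is gradual rather than abrupt.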
7. Remove the stitching seam
Dijkstra's algorithm is used to find the stitching line between the matched images, and the total squared error along the stitching line is minimized to remove the seam.
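One way to realise the seam search, assuming the cost grid holds the per-pixel difference between the two images over their overlap: Dijkstra's algorithm finds the minimum-cost top-to-bottom path, along which the images disagree least, and compositing each image on its own side of the path hides the seam. The three-neighbour move set is an illustrative restriction.

```python
import heapq

def find_seam(cost):
    """Min-total-cost path from any top-row cell to any bottom-row cell,
    moving to one of the three neighbours in the row below (Dijkstra)."""
    h, w = len(cost), len(cost[0])
    dist = {(0, x): cost[0][x] for x in range(w)}
    prev = {}
    heap = [(c, node) for node, c in dist.items()]
    heapq.heapify(heap)
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue                                  # stale queue entry
        if y == h - 1:                                # reached the bottom: backtrack
            seam = [(y, x)]
            while (y, x) in prev:
                y, x = prev[(y, x)]
                seam.append((y, x))
            return list(reversed(seam))
        for nx in (x - 1, x, x + 1):
            if 0 <= nx < w:
                nd = d + cost[y + 1][nx]
                if nd < dist.get((y + 1, nx), float("inf")):
                    dist[(y + 1, nx)] = nd
                    prev[(y + 1, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (y + 1, nx)))
    return []
```

The returned seam visits one column per row, so it directly partitions the overlap into the two source regions.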
As shown in Figure 3, the system implementing the above method comprises the following eight modules:
An image acquisition module, which can use different types of vision sensors to acquire images;
An image preprocessing module, for performing median-filtering preprocessing on the images;
An image feature extraction module, for detecting interest points in the images and computing image features;
A vocabulary tree construction module, which, before the unmanned aerial vehicle executes a task, applies hierarchical clustering to quantize the image features of all training images into words and builds the vocabulary tree from them; this process only needs to be performed once, and the constructed vocabulary tree can be used in different tasks;
A spatially adjacent image retrieval module, which, while the unmanned aerial vehicle executes a task, extracts the image features of the current image, compares them, starting from the root node of the vocabulary tree, with the k cluster centers of the next layer of the vocabulary tree, selects the cluster center with the smallest Euclidean distance, propagates the comparison layer by layer down to the leaf nodes, scores image similarity accordingly, and selects the images whose similarity scores reach a predetermined threshold as the spatially adjacent images of the current image;
An image matching module, which uses a direct index to store the list of words contained in every earth observation image together with the image features associated with each word, and which, when matching the current image against a spatially adjacent image, takes only the feature point pairs belonging to the same word as match points, thereby avoiding matching all features between the images;
An image transformation module, which, from the matched point set obtained by the image matching module, applies a robust estimation algorithm such as RANSAC, PROSAC, BaySAC or GroupSAC to compute the fundamental matrix from the current image to a spatially adjacent image, thereby obtaining the transformation model of the images;
An image stitching module, which warps and stitches the matched images to obtain the stitched image, and blends the overlapping region of the matched images with the fast interpolation method;
A seam removal module, which finds the seam line between the matched images and minimizes the total squared error between the matched images along the seam line to remove the stitching seam.
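The vocabulary tree construction and nearest-center descent used by the modules above can be sketched as follows. This toy version uses a plain k-means with branching factor k and a maximum depth, and a simple leaf-word lookup in place of the full direct index and TF-IDF scoring; a real system would use k-means++ and far larger feature sets.

```python
import numpy as np

class VocabNode:
    def __init__(self):
        self.centers = None   # (k, d) cluster centers of the children
        self.children = []    # sub-nodes; empty for a leaf
        self.word = None      # leaf index, i.e. the visual word id

def kmeans(features, k, iters=10, rng=0):
    """Tiny k-means; a library version (or k-means++) would normally be used."""
    rng = np.random.default_rng(rng)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    labels = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
    return centers, labels

def build_tree(features, k, depth, rng=0):
    """Recursively partition the training features into k clusters per
    level, down to `depth` levels, assigning a word id to each leaf."""
    counter = [0]
    def rec(feats, d):
        node = VocabNode()
        if d == 0 or len(feats) < k:
            node.word = counter[0]; counter[0] += 1  # quantize to a word
            return node
        node.centers, labels = kmeans(feats, k, rng=rng)
        for j in range(k):                           # recurse into each cluster
            node.children.append(rec(feats[labels == j], d - 1))
        return node
    return rec(features, depth)

def lookup(node, feature):
    """Descend from the root, at each layer picking the nearest of the k
    cluster centers by Euclidean distance, down to a leaf word."""
    while node.children:
        j = np.argmin(((node.centers - feature) ** 2).sum(-1))
        node = node.children[j]
    return node.word
```

`lookup(tree, f)` returns the visual-word id of feature `f`; two features that fall into the same leaf are candidate match points, which is what lets the image matching module skip exhaustive feature comparison.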
The system can run on the onboard processor of the unmanned aerial vehicle, with the computed stitched image sent to the ground monitoring station of the unmanned aerial vehicle over a wireless link; alternatively, the system runs on a computer of the ground monitoring station, the unmanned aerial vehicle sends the acquired images to the station over a wireless link, and the images are stitched on the station's computer.
The images can come from different types of vision sensors, such as a visible-light sensor or an infrared sensor; the images can be selected from the key frames of video captured by the onboard camera of the unmanned aerial vehicle, or from digital photographs taken by the onboard camera.
The present invention has been described above by way of preferred embodiments. It should be understood that, in addition to what is explicitly described, modifications and substitutions foreseeable by those skilled in the art are also considered to fall within the scope of protection of the present invention.
Parts of the present invention not described in detail belong to common knowledge well known to those skilled in the art.

Claims (10)

1. A real-time image stitching method for an unmanned aerial vehicle, comprising:
a training stage, performed before the unmanned aerial vehicle executes a task, for building a vocabulary tree; and
an online stage, performed while the unmanned aerial vehicle executes a task, for stitching images according to the vocabulary tree;
characterized in that:
the training stage comprises the following steps:
(11) acquiring training images with a vision sensor, and extracting training-image interest points from the training images;
(12) describing the training images using the neighborhood information of the training-image interest points, and constructing descriptor vectors from the local information of the training images as training image features;
(13) quantizing the training image features into words by a hierarchical clustering method and building the vocabulary tree;
the online stage comprises the following steps:
(21) extracting interest points from the earth observation images acquired during flight of the unmanned aerial vehicle, describing the observed images using the neighborhood information of the observed-image interest points, and constructing descriptor vectors from the local information of the observed images as observed image features;
(22) retrieving the vocabulary tree obtained in the training stage with the observed image features to obtain spatially adjacent images;
(23) taking the feature point pairs that belong to the same word in the observed image and a spatially adjacent image as match points, and forming a matched point set;
(24) computing, from the matched point set, the fundamental matrix from the observed image to the spatially adjacent image to obtain the transformation model between the observed image and the spatially adjacent image;
(25) performing image stitching:
warping and stitching the matched images to obtain a stitched image, and blending the overlapping region of the matched images with a fast interpolation method;
(26) finding the seam line between the matched images, and minimizing the total squared error along the seam line of the stitched image generated in step (25) to obtain the output image.
2. The real-time image stitching method for an unmanned aerial vehicle according to claim 1, characterized in that the training and observed image features are obtained by any one of the following methods:
(a) first computing the determinant of the Hessian matrix and searching for extrema in the three-dimensional scale space to obtain scale-invariant feature points; then defining a characteristic direction for each feature point from the circular region around it, and extracting a 64-dimensional descriptor vector from the gray-level distribution in the neighborhood of the feature point as the image feature;
(b) extracting corner-like FAST feature points by comparing the gray-value intensities of the pixels on a Bresenham circle of radius 3 in the training or observed image; then computing a BRIEF descriptor vector in a rectangular image patch around each FAST feature point as the image feature;
(c) extracting corner-like FAST feature points by comparing the gray-value intensities of the pixels on a Bresenham circle of radius 3 in the training or observed image; then taking the rectangular image patch around each FAST feature point as the image feature.
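The FAST segment test on the radius-3 Bresenham circle mentioned in options (b) and (c) can be illustrated with the following minimal sketch. It implements the standard FAST-9 contiguity criterion; real detectors (for example, OpenCV's) add a fast rejection pre-test and non-maximum suppression:

```python
import numpy as np

# Offsets (dx, dy) of the 16 pixels on a Bresenham circle of radius 3,
# as used by the FAST detector.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, r, c, t=10, n=9):
    """FAST segment test at pixel (r, c): the pixel is a corner if at
    least n contiguous pixels on the circle are all brighter than
    center + t or all darker than center - t. The circle array is
    doubled so wrap-around runs are counted correctly."""
    center = int(img[r, c])
    ring = np.array([int(img[r + dy, c + dx]) for dx, dy in CIRCLE])
    brighter = ring > center + t
    darker = ring < center - t
    for flags in (brighter, darker):
        run, best = 0, 0
        for f in np.concatenate([flags, flags]):  # doubled for wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

A straight edge only produces a run of about half the circle, so it is rejected, while a true corner surrounds the center with a long contiguous darker (or brighter) arc.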
3. The real-time image stitching method for an unmanned aerial vehicle according to claim 1, characterized in that the vocabulary tree is built by the following steps:
performing unsupervised training of the vocabulary tree on the training set formed by the image features of all training images; defining k as the branching factor of the clustering, i.e. the number of child nodes of each node, and partitioning the initial training set into k clusters using the k-means or k-means++ clustering algorithm; then applying the same process to each cluster, recursively dividing each cluster into k new sub-clusters, thereby building the tree level by level until a predefined maximum number of levels L is reached.
4. The real-time image stitching method for an unmanned aerial vehicle according to claim 1 or 3, characterized in that the vocabulary tree is retrieved in step (22) of the online stage to obtain the spatially adjacent images as follows:
the feature vector of the observed image at the current time is compared, starting from the root node of the vocabulary tree, with the k cluster centers of the next layer; the cluster center with the smallest Euclidean distance is chosen and the comparison is propagated layer by layer downwards until a leaf node is reached; the path propagated down the vocabulary tree is represented by an integer, and the similarity between the observed image and the training images is scored by the TF-IDF method; the training images whose similarity scores reach a predetermined threshold are selected as the spatially adjacent images of the current observed image.
5. The real-time image stitching method for an unmanned aerial vehicle according to claim 1, characterized in that the fundamental matrix in step (24) of the online stage is computed as follows:
the camera of the unmanned aerial vehicle generally looks vertically down at the ground and flies at a relatively high altitude, so the ground can be approximated as a plane; the RANSAC, PROSAC, BaySAC or GroupSAC algorithm is selected to compute the fundamental matrix.
6. The real-time image stitching method for an unmanned aerial vehicle according to claim 1, characterized in that the fast interpolation method in step (25) of the online stage is as follows:
according to the distance from each pixel of the matched images to the center of the stitched image, proportional weights are set, and the pixel values of the R, G and B channels in the overlapping region of the matched images are weighted and summed according to these weights to synthesize the stitched image.
7. The real-time image stitching method for an unmanned aerial vehicle according to claim 1, characterized in that the seam line between the matched images in step (26) of the online stage is found as follows:
Dijkstra's algorithm is used to find the seam line.
8. A real-time image stitching system for an unmanned aerial vehicle implementing the method according to claim 1, characterized by comprising the following eight modules:
an image acquisition module, which can use different types of vision sensors to acquire images;
an image preprocessing module, for performing median-filtering preprocessing on the images;
an image feature extraction module, for detecting interest points in the images and computing image features;
a vocabulary tree construction module, which, before the unmanned aerial vehicle executes a task, applies hierarchical clustering to quantize the image features of all training images into words and builds the vocabulary tree from them, this process only needing to be performed once, and the constructed vocabulary tree being usable in different tasks;
a spatially adjacent image retrieval module, which, while the unmanned aerial vehicle executes a task, extracts the image features of the current image, compares them, starting from the root node of the vocabulary tree, with the k cluster centers of the next layer of the vocabulary tree, selects the cluster center with the smallest Euclidean distance, propagates the comparison layer by layer down to the leaf nodes, scores image similarity accordingly, and selects the images whose similarity scores reach a predetermined threshold as the spatially adjacent images of the current image;
an image matching module, which uses a direct index to store the list of words contained in every earth observation image together with the image features associated with each word, and which, when matching the current image against a spatially adjacent image, takes only the feature point pairs belonging to the same word as match points, thereby avoiding matching all features between the images;
an image transformation module, which, from the matched point set obtained by the image matching module, applies a robust estimation algorithm such as RANSAC, PROSAC, BaySAC or GroupSAC to compute the fundamental matrix from the current image to a spatially adjacent image, thereby obtaining the transformation model of the images;
an image stitching module, which warps and stitches the matched images to obtain the stitched image, and blends the overlapping region of the matched images with the fast interpolation method;
a seam removal module, which finds the seam line between the matched images and minimizes the total squared error between the matched images along the seam line to remove the stitching seam.
9. The real-time image stitching system for an unmanned aerial vehicle according to claim 8, characterized in that the system can run on the onboard processor of the unmanned aerial vehicle, with the computed stitched image sent to the ground monitoring station of the unmanned aerial vehicle over a wireless link; or the system runs on a computer of the ground monitoring station of the unmanned aerial vehicle, the unmanned aerial vehicle sends the acquired images to the ground monitoring station over a wireless link, and the images are stitched on the computer of the ground monitoring station.
10. The real-time image stitching system for an unmanned aerial vehicle according to claim 8, characterized in that the images can come from different types of vision sensors, such as a visible-light sensor or an infrared sensor; the images can be selected from the key frames of video captured by the onboard camera of the unmanned aerial vehicle, or from digital photographs taken by the onboard camera.
CN201310628020.9A 2013-11-29 2013-11-29 Method and system for splicing images of unmanned aircrafts in real time Active CN103679674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310628020.9A CN103679674B (en) 2013-11-29 2013-11-29 Method and system for splicing images of unmanned aircrafts in real time


Publications (2)

Publication Number Publication Date
CN103679674A true CN103679674A (en) 2014-03-26
CN103679674B CN103679674B (en) 2017-01-11

Family

ID=50317129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310628020.9A Active CN103679674B (en) 2013-11-29 2013-11-29 Method and system for splicing images of unmanned aircrafts in real time

Country Status (1)

Country Link
CN (1) CN103679674B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1926007A2 (en) * 2006-09-05 2008-05-28 Honeywell International, Inc. Method and system for navigation of an unmanned aerial vehicle in an urban environment
CN102426019A (en) * 2011-08-25 2012-04-25 航天恒星科技有限公司 Unmanned aerial vehicle scene matching auxiliary navigation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI YAN-SHAN et al., "An Automatic Mosaic Method For Unmanned Aerial Vehicle Video Images Based On Kalman Filter", Wireless, Mobile & Multimedia Networks (ICWMMN 2011), 4th IET International Conference on *
DI Yingchen et al., "Survey of UAV image mosaic algorithms", Journal of Computer Applications *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282492A (en) * 2014-07-08 2016-01-27 山东省科学院海洋仪器仪表研究所 Near-space airborne-to-ground real-time imaging system
CN105631847A (en) * 2014-10-31 2016-06-01 航天恒星科技有限公司 Multispectral image processing method and device
CN104349142B (en) * 2014-11-03 2018-07-06 南京航空航天大学 A kind of UAV Video adaptive transmission method based on layering expression
CN104349142A (en) * 2014-11-03 2015-02-11 南京航空航天大学 Layered representation-based unmanned plane video adaptive transmission method
US10685426B2 (en) 2015-03-10 2020-06-16 SZ DJI Technology Co., Ltd. System and method for adaptive panoramic image generation
WO2016141543A1 (en) * 2015-03-10 2016-09-15 SZ DJI Technology Co., Ltd. System and method for adaptive panoramic image generation
CN105046909A (en) * 2015-06-17 2015-11-11 中国计量学院 Agricultural loss assessment assisting method based on small-sized unmanned aerial vehicle
CN105460217A (en) * 2015-12-03 2016-04-06 北京奇虎科技有限公司 Continuous shooting method based on unmanned aerial vehicle and unmanned aerial vehicle
CN105460217B (en) * 2015-12-03 2017-11-14 北京奇虎科技有限公司 A kind of continuous shooting method and unmanned vehicle based on unmanned vehicle
WO2017113818A1 (en) * 2015-12-31 2017-07-06 深圳市道通智能航空技术有限公司 Unmanned aerial vehicle and panoramic image stitching method, device and system thereof
CN107390712B (en) * 2016-05-17 2018-11-16 恩康德有限公司 Flight formula in-store advertising system
CN107390712A (en) * 2016-05-17 2017-11-24 恩康德有限公司 Flight formula in-store advertising system
CN106055573A (en) * 2016-05-20 2016-10-26 西安邮电大学 Method and system for shoeprint image retrieval under multi-instance learning framework
CN106055573B (en) * 2016-05-20 2019-12-27 西安邮电大学 Shoe print image retrieval method and system under multi-instance learning framework
CN106828952A (en) * 2016-07-14 2017-06-13 科盾科技股份有限公司北京分公司 A kind of method and device of assisting in flying device safe flight
CN106407885A (en) * 2016-08-22 2017-02-15 苏州华兴源创电子科技有限公司 Small sized unmanned aerial vehicle based affected area estimating method
WO2018170857A1 (en) * 2017-03-23 2018-09-27 深圳市大疆创新科技有限公司 Method for image fusion and unmanned aerial vehicle
WO2019061295A1 (en) * 2017-09-29 2019-04-04 深圳市大疆创新科技有限公司 Video processing method and device, unmanned aerial vehicle and system
US11611811B2 (en) 2017-09-29 2023-03-21 SZ DJI Technology Co., Ltd. Video processing method and device, unmanned aerial vehicle and system
CN108506170A (en) * 2018-03-08 2018-09-07 上海扩博智能技术有限公司 Fan blade detection method, system, equipment and storage medium
WO2020113423A1 (en) * 2018-12-04 2020-06-11 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN109658450A (en) * 2018-12-17 2019-04-19 武汉天乾科技有限责任公司 A kind of quick orthography generation method based on unmanned plane
CN109827547B (en) * 2019-03-27 2021-05-04 中国人民解放军战略支援部队航天工程大学 Distributed multi-sensor space target synchronous correlation method
CN109827547A (en) * 2019-03-27 2019-05-31 中国人民解放军战略支援部队航天工程大学 A kind of distributed multi-sensor extraterrestrial target synchronization association method
CN110675319A (en) * 2019-09-12 2020-01-10 创新奇智(成都)科技有限公司 Mobile phone photographing panoramic image splicing method based on minimum spanning tree
CN111144239A (en) * 2019-12-12 2020-05-12 中国地质大学(武汉) Unmanned aerial vehicle oblique image feature matching method guided by vocabulary tree
CN111144239B (en) * 2019-12-12 2022-03-29 中国地质大学(武汉) Unmanned aerial vehicle oblique image feature matching method guided by vocabulary tree
CN111402579A (en) * 2020-02-29 2020-07-10 深圳壹账通智能科技有限公司 Road congestion degree prediction method, electronic device and readable storage medium
CN113326860A (en) * 2020-05-29 2021-08-31 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN113326860B (en) * 2020-05-29 2023-12-15 阿里巴巴集团控股有限公司 Data processing method, device, electronic equipment and computer storage medium
CN111815690B (en) * 2020-09-11 2020-12-08 湖南国科智瞳科技有限公司 Method, system and computer equipment for real-time splicing of microscopic images
CN111815690A (en) * 2020-09-11 2020-10-23 湖南国科智瞳科技有限公司 Method, system and computer equipment for real-time splicing of microscopic images
CN112148909A (en) * 2020-09-18 2020-12-29 微梦创科网络科技(中国)有限公司 Method and system for searching similar pictures
CN112148909B (en) * 2020-09-18 2024-03-29 微梦创科网络科技(中国)有限公司 Method and system for searching similar pictures
CN112215304A (en) * 2020-11-05 2021-01-12 珠海大横琴科技发展有限公司 Gray level image matching method and device for geographic image splicing

Also Published As

Publication number Publication date
CN103679674B (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN103679674B (en) Method and system for splicing images of unmanned aircrafts in real time
WO2021142902A1 (en) Danet-based unmanned aerial vehicle coastline floating garbage inspection system
Branson et al. From Google Maps to a fine-grained catalog of street trees
Maddern et al. 1 year, 1000 km: The oxford robotcar dataset
Majdik et al. Air‐ground matching: Appearance‐based GPS‐denied urban localization of micro aerial vehicles
CN102426019B (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
Liu et al. Multiscale U-shaped CNN building instance extraction framework with edge constraint for high-spatial-resolution remote sensing imagery
Kang et al. A survey of deep learning-based object detection methods and datasets for overhead imagery
Yang et al. Concrete defects inspection and 3D mapping using CityFlyer quadrotor robot
CN105844587A (en) Low-altitude unmanned aerial vehicle-borne hyperspectral remote-sensing-image automatic splicing method
CN106373088A (en) Quick mosaic method for aviation images with high tilt rate and low overlapping rate
CN110751077B (en) Optical remote sensing picture ship detection method based on component matching and distance constraint
CN108320304A (en) A kind of automatic edit methods and system of unmanned plane video media
CN113256731A (en) Target detection method and device based on monocular vision
Zhao et al. Probabilistic spatial distribution prior based attentional keypoints matching network
CN110569387B (en) Radar-image cross-modal retrieval method based on depth hash algorithm
Lentsch et al. Slicematch: Geometry-guided aggregation for cross-view pose estimation
CN117218201A (en) Unmanned aerial vehicle image positioning precision improving method and system under GNSS refusing condition
Duarte et al. Damage detection on building façades using multi-temporal aerial oblique imagery
Wang et al. Real-time damaged building region detection based on improved YOLOv5s and embedded system from UAV images
Cao et al. Template matching based on convolution neural network for UAV visual localization
CN111950524B (en) Orchard local sparse mapping method and system based on binocular vision and RTK
Tsintotas et al. Visual place recognition for simultaneous localization and mapping
Maurer et al. Automated inspection of power line corridors to measure vegetation undercut using UAV-based images
Li et al. Driver drowsiness behavior detection and analysis using vision-based multimodal features for driving safety

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant