CN103955888A - High-definition video image mosaic method and device based on SIFT - Google Patents
- Publication number
- CN103955888A (application CN201410197659.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- region
- point
- feature point
- width
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention belongs to the technical field of multi-channel high-definition video processing and provides a high-definition video image mosaic method and device based on the SIFT algorithm. The method includes the following steps: each image is divided into a left, a middle and a right region, and the feature points in the left and right regions are extracted with the SIFT algorithm; the feature points in the left region of the current image are matched with those in the right region of the previous image, and the feature points in the right region of the current image are matched with those in the left region of the next image; the pixel values of adjacent images in their overlap region are superposed with linear weights to serve as the pixel values after mosaicing, completing the image mosaic. Because feature points are extracted only in limited areas on the left and right sides of each high-definition video image and matched only within the overlap regions of adjacent images, the amount of calculation is greatly reduced and the calculation speed improved; matching between images is also made easier, raising the matching speed and lowering the mismatch rate. Omnidirectional, seamless, real-time mosaicing of high-definition video is achieved, the surveillance field of view is widened, and image quality is improved.
Description
Technical field
The invention belongs to the technical field of image processing, and relates in particular to a high-definition video image mosaic method and device based on SIFT.
Background technology
In video surveillance, a large field of view must be monitored in real time, but the angular field of view of a single camera is too narrow to meet this demand, so multiple video channels shooting in different directions must be stitched together to form a large panorama. Because targets in standard-definition video are too blurred to be resolved, multi-channel high-definition video is stitched to form a clear panorama.
The classical SIFT (Scale Invariant Feature Transform) algorithm in general use worldwide extracts feature points from the entire image, matches whole images with the random sample consensus algorithm, and finally stitches multiple images with a linear algorithm. This technique can only stitch multiple images offline and cannot stitch high-definition video online in real time. When it is used to extract feature points from a 25 frame-per-second high-definition video stream, computation is very slow because a single frame contains up to 2 million pixels; and because so many feature points are extracted, image matching is slow and the mismatch rate rises, so real-time stitching of multi-channel high-definition video cannot be achieved.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a high-definition video image mosaic method and device based on SIFT, intended to solve the technical problems of conventional image stitching schemes: slow image matching, a high mismatch rate, and the inability to stitch multi-channel high-definition video images online in real time.
In one aspect, the high-definition video image mosaic method based on SIFT comprises the following steps:
dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm;
matching the feature points in the left region of the current image with those in the right region of the previous image, and matching the feature points in the right region of the current image with those in the left region of the next image;
superposing the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
In another aspect, the high-definition video image mosaic device based on SIFT comprises:
a sub-region extraction module for dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm;
an image matching module for matching the feature points in the left region of the current image with those in the right region of the previous image, and matching the feature points in the right region of the current image with those in the left region of the next image;
an image mosaic module for superposing the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
The beneficial effects of the invention are as follows: feature points are extracted only in limited areas on the left and right sides of each high-definition video image; the left feature points are matched with the right feature points of the previous image, and the right feature points are matched with the left feature points of the next image. This greatly reduces the amount of calculation and improves speed, while also facilitating matching between images, raising the matching speed and lowering the mismatch rate. Omnidirectional, seamless, real-time mosaicing of high-definition video is achieved, the surveillance field of view is widened, and image quality is improved.
Brief description of the drawings
Fig. 1 is the flow chart of the SIFT-based high-definition video image mosaic method provided by the first embodiment of the invention;
Fig. 2 is a preferred detailed flow chart of step S11 in Fig. 1;
Fig. 3 is a schematic diagram of the image division;
Fig. 4 is a schematic diagram of descriptor construction;
Fig. 5 is the flow chart of the SIFT-based high-definition video image mosaic method provided by the second embodiment of the invention;
Fig. 6 is a preferred detailed flow chart of step S53 in Fig. 5;
Fig. 7 is the frame structure diagram of the SIFT-based high-definition video image mosaic device provided by the third embodiment of the invention;
Fig. 8 is a preferred structure diagram of the sub-region extraction module;
Fig. 9 is the frame structure diagram of the SIFT-based high-definition video image mosaic device provided by the fourth embodiment of the invention.
Embodiment
In order to make the object, technical scheme and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
The technical solutions of the invention are described below through specific embodiments.
Embodiment 1:
Fig. 1 shows the flow of the SIFT-based high-definition video image mosaic method provided by the first embodiment of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown.
The SIFT-based high-definition video image mosaic method provided by this embodiment comprises the following steps:
Step S11: divide each image into three regions (left, middle and right) and extract the feature points in the left and right regions with the SIFT algorithm.
In the intended application, stitching is performed only in the horizontal direction and the cameras are fixed, so the probable range of the left and right overlap regions can be determined in advance, and feature points need to be extracted only in limited areas on the left and right sides of each high-definition video image. This step therefore divides the full high-definition image horizontally into three regions (left, middle and right) and then extracts the feature points in the left and right regions with the SIFT algorithm. As shown in Fig. 2, the concrete steps are as follows:
S111: divide each image into three regions (left, middle and right).
Each image is divided horizontally into left, middle and right regions; as one example, the left region is [0, 200], the middle region is [200, 1720], and the right region is [1720, 1920]. The image division is shown schematically in Fig. 3.
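The horizontal split above can be sketched in a few lines of NumPy; this is an illustrative sketch (the function name is hypothetical, and the 200/1520/200 widths follow the 1920-pixel example in the text), not the patent's implementation:

```python
import numpy as np

def split_regions(img, left_w=200, right_w=200):
    """Split a frame horizontally into left / middle / right regions.

    With the defaults and a 1920-wide frame this reproduces the example
    in the text: left = [0, 200), middle = [200, 1720), right = [1720, 1920).
    """
    h, w = img.shape[:2]
    left = img[:, :left_w]
    middle = img[:, left_w:w - right_w]
    right = img[:, w - right_w:]
    return left, middle, right

frame = np.zeros((1080, 1920), dtype=np.uint8)
l, m, r = split_regions(frame)
print(l.shape, m.shape, r.shape)  # (1080, 200) (1080, 1520) (1080, 200)
```

Only `left` and `right` are then passed to the SIFT feature extractor; `middle` takes no part in the computation.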
S112: convolve each image with Gaussian functions of different scales to obtain a pyramid of Gaussian-blurred images, and subtract adjacent layers to obtain difference images:
D(x,y,σ)=L(x,y,kσ)-L(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y) (1)
where G is the Gaussian function, I is the original image, L is the resulting Gaussian-blurred image, D is the difference image, x and y are the pixel coordinates, and k and σ are the scale factor and the Gaussian variance respectively.
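Formula (1) can be illustrated with a minimal NumPy sketch of the Gaussian pyramid and the difference-of-Gaussian stack; the scale schedule (σ₀ = 1.6, k = √2, 4 levels) is an assumption borrowed from common SIFT practice, not specified by the patent:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel truncated at 3*sigma, normalized to sum to 1
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def gaussian_blur(img, sigma):
    # Separable convolution: filter each row, then each column, with edge padding
    g = gaussian_kernel(sigma)
    pad = len(g) // 2
    blur_1d = lambda v: np.convolve(np.pad(v, pad, mode="edge"), g, "valid")
    tmp = np.apply_along_axis(blur_1d, 1, img.astype(float))
    return np.apply_along_axis(blur_1d, 0, tmp)

def dog_stack(img, sigma0=1.6, k=2 ** 0.5, levels=4):
    # L(x, y, sigma) at successive scales sigma0 * k^i; D is the difference
    # of adjacent layers, exactly as in formula (1)
    L = [gaussian_blur(img, sigma0 * k ** i) for i in range(levels)]
    return [L[i + 1] - L[i] for i in range(levels - 1)]

dogs = dog_stack(np.random.rand(32, 32))
print(len(dogs))  # 3 difference images
```

A production stitcher would normally use an optimized library blur rather than this brute-force convolution; the sketch only mirrors the structure of formula (1).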
S113: compare each pixel of the difference image with its surrounding neighboring points, and take as candidate feature points the extreme points that are greater than or less than all neighboring points.
A second-order Taylor expansion of the difference function (1) is used to fit each candidate feature point to an accurate sub-pixel position, and low-contrast feature points and edge response points are deleted:

D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X   (2)

where X is the variable matrix [x y σ]^T and T denotes matrix transposition.
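The 26-neighbor comparison of step S113 can be sketched as a brute-force scan over three DoG layers (function and variable names here are hypothetical, and the sub-pixel Taylor refinement of formula (2) is omitted):

```python
import numpy as np

def local_extrema(dog):
    """Candidate feature points: pixels of the middle DoG layer that are
    strictly greater than, or strictly less than, all 26 neighbors in the
    3x3x3 cube spanning the layer below, the same layer, and the layer above.
    dog: three DoG images of equal shape (list or array)."""
    d = np.asarray(dog, dtype=float)          # shape (3, H, W)
    H, W = d.shape[1:]
    pts = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            cube = d[:, y - 1:y + 2, x - 1:x + 2].ravel()
            v = d[1, y, x]
            others = np.delete(cube, 13)      # index 13 is the center pixel itself
            if v > others.max() or v < others.min():
                pts.append((x, y))
    return pts

d = np.zeros((3, 5, 5))
d[1, 2, 2] = 5.0                              # a lone maximum in the middle layer
print(local_extrema(d))  # [(2, 2)]
```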
S114: obtain the principal direction of each feature point.
The principal direction of a feature point is the direction corresponding to the maximum of the histogram of gradient directions of the points in its neighborhood. Samples are taken in a circular region centered on the feature point, and the gradient directions of the points in the region are accumulated in a histogram.
S115: construct the SIFT feature point descriptor.
As shown in Fig. 4, the image coordinate axes are rotated to the principal direction of the feature point, and a neighborhood centered on the feature point is chosen as the sample window and divided into sixteen 4 × 4 sub-regions. For each 4 × 4 sub-region an 8-bin gradient orientation histogram with 45° intervals is built; the gradient direction of each point in the neighborhood is weighted by a Gaussian weight and added into the histogram, generating an 8-dimensional vector. Since the neighborhood is divided into 16 sub-regions and each sub-region yields an 8-dimensional vector, a descriptor vector of 16 × 8 = 128 dimensions is obtained.
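A simplified sketch of the 4 × 4 × 8 = 128-dimensional descriptor, assuming a 16 × 16 sample window. To stay short it rotates the gradient orientations by the principal direction instead of rotating the coordinate axes, and omits the bin interpolation of full SIFT; both simplifications are assumptions, not the patent's method:

```python
import numpy as np

def sift_descriptor(patch, main_angle):
    """Simplified 128-D descriptor for a 16x16 patch centered on a feature
    point: 4x4 sub-regions x 8 orientation bins of 45 degrees each, with
    gradient magnitudes Gaussian-weighted and orientations measured
    relative to the principal direction (in degrees)."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) - main_angle) % 360.0
    yy, xx = np.mgrid[0:16, 0:16] - 7.5                 # offsets from window center
    w = np.exp(-(xx ** 2 + yy ** 2) / (2 * 8.0 ** 2))   # Gaussian weight
    bins = (ang // 45).astype(int) % 8                  # 8 bins of 45 degrees
    desc = np.zeros((4, 4, 8))
    for y in range(16):
        for x in range(16):
            desc[y // 4, x // 4, bins[y, x]] += mag[y, x] * w[y, x]
    v = desc.ravel()                   # 16 sub-regions x 8 bins = 128 dimensions
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

d = sift_descriptor(np.arange(256, dtype=float).reshape(16, 16), main_angle=30.0)
print(d.shape)  # (128,)
```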
Because feature point extraction is computed only in the left and right regions and the middle region takes no part in the calculation, the computed area shrinks to roughly one fifth of the original (400 of 1920 columns in the example above), so this step is about five times faster.
Step S12: match the feature points in the left region of the current image with those in the right region of the previous image, and match the feature points in the right region of the current image with those in the left region of the next image.
At a given instant, the image captured by the middle camera is the current image, the image captured by the adjacent camera on its left is the previous image, and the image captured by the adjacent camera on its right is the next image. Let [x y] denote the coordinates of a feature point in the first image and [u v] the coordinates of its match in the second image; the affine transformation mapping matched feature points from the first image into the second image is

[u v]^T = [m1 m2; m3 m4] · [x y]^T + [tx ty]^T   (3)

where tx and ty are the translation components and m1, m2, m3, m4 are the affine parameters representing rotation, scaling and shear. If there are several matched pairs, formula (3) is rewritten as the stacked system (4), one pair of equations u = m1·x + m2·y + tx and v = m3·x + m4·y + ty per match. This system of linear equations is expressed as

Ax = b   (5)

where x is the matrix of unknown affine parameters; the affine parameter matrix is obtained by least squares:

x = [A^T A]^(-1) A^T b   (6)
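The least-squares solution of formulas (4)-(6) can be sketched with NumPy; `fit_affine` is a hypothetical helper name, and `lstsq` is used in place of the explicit normal-equations inverse for numerical stability:

```python
import numpy as np

def fit_affine(src, dst):
    """Build the stacked system Ax = b of formula (4) from N >= 3 matched
    pairs [x y] -> [u v] and solve for x = [m1, m2, m3, m4, tx, ty] by
    least squares, equivalent to x = (A^T A)^-1 A^T b of formula (6)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src        # u = m1*x + m2*y + tx
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src        # v = m3*x + m4*y + ty
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)       # [u1, v1, u2, v2, ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

p = fit_affine([[0, 0], [1, 0], [0, 1]], [[5, 3], [6, 3], [5, 4]])
print(np.round(p, 6))  # ≈ [1, 0, 0, 1, 5, 3]: a pure translation by (5, 3)
```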
Step S13: superpose the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
In the overlap region of the two images, a linear algorithm is used for stitching so that the result transitions gradually from the first image to the second: the pixel values of the two images in the overlap region are superposed with linear weights to give the stitched pixel values. The pixel value of the stitched panorama is

I(x, y) = I1(x, y)                              for (x, y) in R1
I(x, y) = d(x)·I1(x, y) + (1 − d(x))·I2(x, y)   for (x, y) in R2   (7)
I(x, y) = I2(x, y)                              for (x, y) in R3

where I1(x, y) and I2(x, y) are the pixel values of the first and second images respectively, R1 is the non-overlapping region of the first image, R2 is the overlap region of the two images, and R3 is the non-overlapping region of the second image. The weight coefficient d(x) is the proportion of the first image's pixel value in the total and ranges over [0, 1]. Supposing the overlap region begins at position x0 at the edge of the first image and has width w, the weight coefficient is

d(x) = 1 − (x − x0) / w   (8)
Since the relative positions and affine parameters of the matched images remain unchanged, the linear weights also remain unchanged. This step therefore saves the computed weight coefficients in memory and reuses them in subsequent stitching, avoiding repeated computation, reducing the calculation load and improving stitching speed.
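Formulas (7)-(8) amount to a linear ramp across the overlap; a minimal sketch, assuming both images are already warped into a common panorama frame and the overlap is the column range [x0, x0 + w):

```python
import numpy as np

def blend_overlap(img1, img2, x0, w):
    """Formulas (7)-(8): keep img1 left of the overlap, keep img2 right of
    it, and inside the overlap [x0, x0 + w) mix the two with the linear
    weight d(x) = 1 - (x - x0)/w, which falls from 1 to 0."""
    out = img1.astype(float).copy()
    xs = np.arange(x0, x0 + w)
    d = 1.0 - (xs - x0) / float(w)            # weight of the first image
    out[:, xs] = d * img1[:, xs] + (1.0 - d) * img2[:, xs]
    out[:, x0 + w:] = img2[:, x0 + w:]
    return out

i1 = np.full((2, 8), 100.0)
i2 = np.zeros((2, 8))
out = blend_overlap(i1, i2, x0=2, w=4)
print(out[0])  # first row: 100, 100, 100, 75, 50, 25, 0, 0
```

Because `d` depends only on the column index, the weight vector can be computed once and reused for every subsequent frame, matching the caching the text describes.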
Embodiment 2:
Fig. 5 shows the flow of the SIFT-based high-definition video image mosaic method provided by the second embodiment of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown.
The SIFT-based high-definition video image mosaic method provided by this embodiment comprises the following steps:
Step S51: divide each image into three regions (left, middle and right) and extract the feature points in the left and right regions with the SIFT algorithm;
Step S52: match the feature points in the left region of the current image with those in the right region of the previous image, and match the feature points in the right region of the current image with those in the left region of the next image;
Step S53: reject the mismatches in image matching with the RANSAC algorithm to obtain a consistent inlier set and the affine parameters;
Step S54: superpose the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
This embodiment adds step S53 to Embodiment 1. To reject mismatches, the highly robust RANSAC (Random Sample Consensus) algorithm is adopted: the matched data are divided into correct matches (inliers) and incorrect matches (outliers), and the majority property of the inliers is used to automatically reject the minority of outliers with larger errors, finally yielding a consistent inlier set and the affine parameters. Concretely, as shown in Fig. 6, step S53 comprises:
S531: randomly select three feature point matches between the two adjacent images.
First, three feature point matches [x1 y1 : u1 v1], [x2 y2 : u2 v2] and [x3 y3 : u3 v3] are selected at random between the two images.
S532: calculate the six affine parameters between the two images.
The six affine parameters m1, m2, m3, m4, tx and ty between the two images are calculated from formula (4).
S533: find the inliers that satisfy the affine parameters.
Each point [x y] is mapped by formula (3) to estimated coordinates in the other image, which are compared with the true coordinates [u v] to obtain an error. Matches whose error is less than a threshold are retained as inliers; the rest are discarded as outliers. All matches are traversed to obtain a sample-consistent inlier set.
S534: judge whether the size of the inlier set is the largest so far.
S535: if it is, retain this largest inlier set and the corresponding affine parameters.
The above steps are repeated a preset number of times, for example 500, finally yielding the largest sample-consistent inlier set and the optimal affine parameters.
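Steps S531-S535 can be sketched as a standard RANSAC loop. The 3-pixel inlier threshold, the seeded generator, and the helper names are assumptions (the patent specifies only the 500-iteration example), and the least-squares fit of formulas (4)-(6) is re-implemented inline so the sketch is self-contained:

```python
import numpy as np

def _fit_affine(src, dst):
    # Least-squares affine parameters [m1, m2, m3, m4, tx, ty], formulas (4)-(6)
    src = np.asarray(src, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 4] = 1.0
    A[1::2, 2:4] = src; A[1::2, 5] = 1.0
    b = np.asarray(dst, dtype=float).reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def _apply_affine(p, pts):
    # Map [x y] points through formula (3)
    m = p[:4].reshape(2, 2)
    return np.asarray(pts, dtype=float) @ m.T + p[4:6]

def ransac_affine(src, dst, iters=500, thresh=3.0, seed=0):
    """Steps S531-S535: sample 3 matches, fit the 6 affine parameters,
    count as inliers the matches whose mapped coordinates lie within
    `thresh` pixels of the true coordinates, keep the largest consensus
    set over `iters` rounds, then refit on that set."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # S531
        p = _fit_affine(src[idx], dst[idx])                 # S532
        err = np.linalg.norm(_apply_affine(p, src) - dst, axis=1)
        inliers = err < thresh                              # S533
        if inliers.sum() > best.sum():                      # S534-S535
            best = inliers
    return _fit_affine(src[best], dst[best]), best
```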
By rejecting mismatches, this embodiment improves the accuracy of the image mosaic.
Embodiment 3:
Fig. 7 shows the frame structure of the SIFT-based high-definition video image mosaic device provided by the third embodiment of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown.
The SIFT-based high-definition video image mosaic device provided by this embodiment comprises:
a sub-region extraction module 71 for dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm;
an image matching module 72 for matching the feature points in the left region of the current image with those in the right region of the previous image, and matching the feature points in the right region of the current image with those in the left region of the next image;
an image mosaic module 73 for superposing the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
Modules 71-73 of this device implement steps S11-S13 of Embodiment 1 respectively: the sub-region extraction module 71 divides each image into sub-regions and extracts its feature points, the image matching module 72 matches the feature points in the overlap regions of adjacent images, and finally the image mosaic module 73 stitches the overlap regions to form the panoramic image.
As a preferred implementation, as shown in Fig. 8, the sub-region extraction module 71 comprises:
a region division unit 711 for dividing each image into three regions (left, middle and right);
a blur unit 712 for convolving each image with Gaussian functions of different scales to obtain a pyramid of Gaussian-blurred images, and subtracting adjacent layers to obtain difference images;
an extreme point extraction unit 713 for comparing each pixel of the difference image with its surrounding neighboring points and taking as candidate feature points the extreme points that are greater than or less than all neighboring points;
a principal direction acquisition unit 714 for obtaining the principal direction of each feature point;
a descriptor computation unit 715 for constructing the SIFT feature point descriptor.
Embodiment 4:
Fig. 9 shows the frame structure of the SIFT-based high-definition video image mosaic device provided by the fourth embodiment of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown.
The SIFT-based high-definition video image mosaic device provided by this embodiment comprises:
a sub-region extraction module 91 for dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm;
an image matching module 92 for matching the feature points in the left region of the current image with those in the right region of the previous image, and matching the feature points in the right region of the current image with those in the left region of the next image;
a mismatch filtering module 93 for rejecting the mismatches in image matching with the RANSAC algorithm to obtain a consistent inlier set and the affine parameters;
an image mosaic module 94 for superposing the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
This embodiment adds the mismatch filtering module 93 to Embodiment 3. As a concrete preferred implementation, the sub-region extraction module comprises:
a region division unit for dividing each image into three regions (left, middle and right);
a blur unit for convolving each image with Gaussian functions of different scales to obtain a pyramid of Gaussian-blurred images, and subtracting adjacent layers to obtain difference images;
an extreme point extraction unit for comparing each pixel of the difference image with its surrounding neighboring points and taking as candidate feature points the extreme points that are greater than or less than all neighboring points;
a principal direction acquisition unit for obtaining the principal direction of each feature point;
a descriptor computation unit for constructing the SIFT feature point descriptor.
In summary, the embodiments of the invention extract feature points only in limited areas on the left and right sides of each high-definition video image, match the left feature points with the right feature points of the previous image and the right feature points with the left feature points of the next image, thereby greatly reducing the amount of calculation and improving speed, while also facilitating matching between images, raising the matching speed and lowering the mismatch rate. Omnidirectional, seamless, real-time mosaicing of high-definition video is achieved, the surveillance field of view is widened, and image quality is improved.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disc.
The foregoing are only preferred embodiments of the invention and do not limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (8)
1. the high clear video image joining method based on SIFT, is characterized in that, described method comprises:
Every width image is divided into San Ge region, left, center, right, utilizes SIFT algorithm to extract the unique point in He You region, left region;
The unique point in the left region of present image is mated with the unique point in the right region of previous image, the unique point in the right region of present image is mated with the unique point in a rear left region of image;
The pixel value of adjacent two width images in overlapping region superposeed as spliced pixel value by linear weight, complete Image Mosaics.
2. The method of claim 1, characterized in that, after the step of matching the feature points in the left region of the current image with those in the right region of the previous image and matching the feature points in the right region of the current image with those in the left region of the next image, the method further comprises:
rejecting the mismatches in image matching with the RANSAC algorithm to obtain a consistent inlier set and the affine parameters.
3. The method of claim 2, characterized in that the step of rejecting the mismatches in image matching with the RANSAC algorithm to obtain a consistent inlier set and the affine parameters specifically comprises:
randomly selecting three feature point matches between the two adjacent images;
calculating the six affine parameters between the two images;
finding the inliers that satisfy the affine parameters;
judging whether the size of the inlier set is the largest;
when it is, retaining the largest inlier set and the corresponding affine parameters;
repeating the above a preset number of times, obtaining the largest sample-consistent inlier set and the optimal affine parameters.
4. The method of any one of claims 1-3, characterized in that the step of dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm specifically comprises:
dividing each image into three regions (left, middle and right);
convolving each image with Gaussian functions of different scales to obtain a pyramid of Gaussian-blurred images, and subtracting adjacent layers to obtain difference images;
comparing each pixel of the difference image with its surrounding neighboring points, and taking as candidate feature points the extreme points that are greater than or less than all neighboring points;
obtaining the principal direction of each feature point;
constructing the SIFT feature point descriptor.
5. A high-definition video image mosaic device based on SIFT, characterized in that the device comprises:
a sub-region extraction module for dividing each image into three regions (left, middle and right) and extracting the feature points in the left and right regions with the SIFT algorithm;
an image matching module for matching the feature points in the left region of the current image with those in the right region of the previous image, and matching the feature points in the right region of the current image with those in the left region of the next image;
an image mosaic module for superposing the pixel values of two adjacent images in their overlap region with linear weights to obtain the stitched pixel values, completing the image mosaic.
6. The device of claim 5, characterized in that the device further comprises:
a mismatch filtering module for rejecting the mismatches in image matching with the RANSAC algorithm to obtain a consistent inlier set and the affine parameters.
7. The device of claim 6, characterized in that the mismatch filtering module comprises:
a data selection unit for randomly selecting three feature point matches between the two adjacent images;
a parameter acquisition unit for calculating the six affine parameters between the two images;
an inlier search unit for finding the inliers that satisfy the affine parameters;
an inlier judgment processing unit for judging whether the size of the inlier set is the largest and, when it is, retaining the largest inlier set and the corresponding affine parameters.
8. The device of any one of claims 5-7, characterized in that the sub-region extraction module comprises:
a region division unit for dividing each image into three regions (left, middle and right);
a blur unit for convolving each image with Gaussian functions of different scales to obtain a pyramid of Gaussian-blurred images, and subtracting adjacent layers to obtain difference images;
an extreme point extraction unit for comparing each pixel of the difference image with its surrounding neighboring points and taking as candidate feature points the extreme points that are greater than or less than all neighboring points;
a principal direction acquisition unit for obtaining the principal direction of each feature point;
a descriptor computation unit for constructing the SIFT feature point descriptor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410197659.0A CN103955888A (en) | 2014-05-12 | 2014-05-12 | High-definition video image mosaic method and device based on SIFT |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103955888A true CN103955888A (en) | 2014-07-30 |
Family
ID=51333157
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410197659.0A Pending CN103955888A (en) | 2014-05-12 | 2014-05-12 | High-definition video image mosaic method and device based on SIFT |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103955888A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104574339A (en) * | 2015-02-09 | 2015-04-29 | 上海安威士科技股份有限公司 | Multi-scale cylindrical projection panorama image generating method for video monitoring |
CN105554449A (en) * | 2015-12-11 | 2016-05-04 | 浙江宇视科技有限公司 | Method and device for quickly splicing camera images |
CN105574815A (en) * | 2015-12-21 | 2016-05-11 | 湖南优象科技有限公司 | Image splicing method and device used for scanning mouse |
CN105678721A (en) * | 2014-11-20 | 2016-06-15 | 深圳英飞拓科技股份有限公司 | Method and device for smoothing seams of panoramic stitched image |
CN106204518A (en) * | 2015-05-08 | 2016-12-07 | 无锡天脉聚源传媒科技有限公司 | A kind of shot segmentation method and apparatus |
CN106851092A (en) * | 2016-12-30 | 2017-06-13 | 中国人民解放军空军预警学院监控系统工程研究所 | A kind of infrared video joining method and device |
CN108811520A (en) * | 2017-03-07 | 2018-11-13 | 林克物流有限公司 | All-around video imaging method and the equipment for executing this method |
CN111105351A (en) * | 2019-12-13 | 2020-05-05 | 华中科技大学鄂州工业技术研究院 | Video sequence image splicing method and device |
WO2020107267A1 (en) * | 2018-11-28 | 2020-06-04 | 华为技术有限公司 | Image feature point matching method and device |
CN111712833A (en) * | 2018-06-13 | 2020-09-25 | 华为技术有限公司 | Method and device for screening local feature points |
- 2014-05-12: application CN201410197659.0A filed in China; published as CN103955888A (status: Pending)
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678721A (en) * | 2014-11-20 | 2016-06-15 | 深圳英飞拓科技股份有限公司 | Method and device for smoothing seams of panoramic stitched image |
CN104574339A (en) * | 2015-02-09 | 2015-04-29 | 上海安威士科技股份有限公司 | Multi-scale cylindrical projection panorama image generating method for video monitoring |
CN106204518A (en) * | 2015-05-08 | 2016-12-07 | 无锡天脉聚源传媒科技有限公司 | A kind of shot segmentation method and apparatus |
CN105554449A (en) * | 2015-12-11 | 2016-05-04 | 浙江宇视科技有限公司 | Method and device for quickly splicing camera images |
CN105554449B (en) * | 2015-12-11 | 2018-04-27 | 浙江宇视科技有限公司 | A kind of method and device for being used to quickly splice camera review |
CN105574815A (en) * | 2015-12-21 | 2016-05-11 | 湖南优象科技有限公司 | Image splicing method and device used for scanning mouse |
CN106851092B (en) * | 2016-12-30 | 2018-02-09 | 中国人民解放军空军预警学院监控系统工程研究所 | A kind of infrared video joining method and device |
CN106851092A (en) * | 2016-12-30 | 2017-06-13 | 中国人民解放军空军预警学院监控系统工程研究所 | A kind of infrared video joining method and device |
CN108811520A (en) * | 2017-03-07 | 2018-11-13 | 林克物流有限公司 | All-around video imaging method and the equipment for executing this method |
CN108811520B (en) * | 2017-03-07 | 2020-09-01 | 林克物流有限公司 | Omnidirectional image processing equipment and processing method and computer readable storage medium |
CN111712833A (en) * | 2018-06-13 | 2020-09-25 | 华为技术有限公司 | Method and device for screening local feature points |
CN111712833B (en) * | 2018-06-13 | 2023-10-27 | 华为技术有限公司 | Method and device for screening local feature points |
WO2020107267A1 (en) * | 2018-11-28 | 2020-06-04 | 华为技术有限公司 | Image feature point matching method and device |
CN111105351A (en) * | 2019-12-13 | 2020-05-05 | 华中科技大学鄂州工业技术研究院 | Video sequence image splicing method and device |
CN111105351B (en) * | 2019-12-13 | 2023-04-18 | 华中科技大学鄂州工业技术研究院 | Video sequence image splicing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20140730 |