CN110378355B - FAST feature point detection method based on FPGA hardware fusion image - Google Patents

FAST feature point detection method based on FPGA hardware fusion image

Info

Publication number
CN110378355B
CN110378355B
Authority
CN
China
Prior art keywords
image
pixel
pyramid
point
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910581890.2A
Other languages
Chinese (zh)
Other versions
CN110378355A (en)
Inventor
Zhang Junju (张俊举)
Hong Yu (洪宇)
Huang Yifeng (黄奕峰)
Yang Liu (杨刘)
Yan Song (严松)
Li Ya (李亚)
Zhou Yuansong (周园松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201910581890.2A
Publication of CN110378355A
Application granted
Publication of CN110378355B
Legal status: Active

Links

Images

Classifications

    • G06F18/25 — Pattern recognition; analysing; fusion techniques
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 — Denoising; smoothing
    • G06V10/462 — Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06T2207/10048 — Infrared image
    • G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T2207/20032 — Median filtering
    • G06T2207/20221 — Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a FAST feature point detection method based on an FPGA hardware-fused image, comprising the following steps: step S1, inputting two video streams, an infrared image and a visible-light image; step S2, preprocessing each input stream; step S3, building a Gaussian pyramid for each input image; step S4, building a Gaussian-difference pyramid from each Gaussian pyramid; step S5, fusing the Laplacian pyramids according to a chosen functional relation; step S6, performing FAST feature point detection on every pixel of the fused Laplacian pyramid image; step S7, outputting the image through an OLED display module.

Description

FAST feature point detection method based on FPGA hardware fusion image
Technical Field
The invention relates to video processing technology, and in particular to a FAST feature point detection method based on an FPGA hardware-fused image.
Background
The FAST feature detection algorithm is a fast corner detection algorithm whose computation speed exceeds that of comparable detectors. However, when the gray-level contrast of an image is weak or the illumination is complex, it produces many false detections; when the image contains many noise points, its robustness is poor; and because the FAST operator has no multi-scale structure, it is not scale-invariant.
Disclosure of Invention
The aim of the invention is to provide a FAST feature point detection method based on an FPGA hardware-fused image.
The technical scheme realizing this aim is as follows: a FAST feature point detection method based on FPGA hardware-fused images, comprising the following steps:
step S1, inputting two video streams: an infrared image and a visible-light image;
step S2, preprocessing each input stream;
step S3, building a Gaussian pyramid for each input image;
step S4, building a Gaussian-difference pyramid from each Gaussian pyramid;
step S5, fusing the Laplacian pyramids according to a chosen functional relation;
step S6, performing FAST feature point detection on every pixel of the fused Laplacian pyramid image;
step S7, outputting the image through the OLED display module.
Further, the preprocessing in step S2 specifically includes:
step S201, the two detectors each inputting an 8-bit signal into the FPGA chip through IO ports;
step S202, stretching the infrared image contrast by histogram equalization;
step S203, removing visible-light image noise by median filtering.
Further, step S202 specifically includes:
storing a complete image during the active video interval;
counting the occurrences of each gray value in the current frame, using a DPRAM on the FPGA chip as the record memory: addresses 0, 1, …, 255 correspond to the 256 gray levels, and each address register holds the occurrence count of its gray level;
reading the RAM during the image blanking interval, accumulating the counts, and computing the upper limit A1 and lower limit A2 of the image gray range;
when the valid signal of the next frame arrives, linearly stretching the infrared image with the limits A1 and A2 computed during the blanking interval.
Further, the division is implemented with shift registers: the operand is amplified by a left shift of n bits, the calculation is performed, and the result is reduced by a right shift of n bits; the total number of image pixels serves as the divisor.
Further, step S203 specifically includes: for each pixel, constructing a 3×3 window, sorting each row in descending order, then taking the minimum of the column of maxima, the median of the middle column and the maximum of the column of minima, and replacing the pixel with the median of these three values.
Further, step S3 selects a 3-layer Gaussian pyramid; the specific process is as follows: each layer G_1, G_2, …, G_N of the Gaussian pyramid is obtained by the low-pass filtering and every-other-row, every-other-column down-sampling of formula (1):

G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l-1}(2i+m, 2j+n)    (1)

where N is the number of decomposition layers, ω(m, n) is the filter coefficient at coordinate (m, n) of the filter template, (i, j) is the coordinate of the current pixel, l = 1, 2, …, N, and G_0 is the original image; each pyramid layer holds 1/4 of the pixels of the layer beneath it.
Further, step S4 specifically includes: the layers G_2 and G_3 obtained by down-sampling in step S3 are interpolated and enlarged by formula (2) to obtain G'_1 and G'_2, so that each has the same resolution as the pyramid layer one level below:

G'_l(i, j) = 4 Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l+1}((i+m)/2, (j+n)/2)    (2)

where only terms with integer coordinates (i+m)/2 and (j+n)/2 are included in the sums.
Further, step S5 specifically includes: the first layer of the Laplacian pyramid is obtained as

LP_1 = G_1 − G'_1    (3)

and the intermediate layer as

LP_2 = G_2 − G'_2    (4)

while the uppermost layer G_3 remains unchanged as the top of the Laplacian pyramid.
Further, step S6 includes:
step S601, constructing a 7×7 template and traversing the pixel points;
step S602, defining a threshold t and computing the differences between the center pixel p and the points p_{1,4}, p_{7,4}, p_{4,1} and p_{4,7} on the circle of radius 3; if the absolute difference exceeds t at 3 or more of these points, marking p as a candidate point for the next stage, otherwise ignoring the point;
step S603, if pixel p is a candidate point, further computing the differences between p and the circle points p_{1,3}, p_{1,4}, p_{1,5}, p_{2,2}, p_{2,6}, p_{3,1}, p_{3,7}, p_{4,1}, p_{4,7}, p_{5,1}, p_{5,7}, p_{6,2}, p_{6,6}, p_{7,3}, p_{7,4}, p_{7,5}; if 9 or more consecutive points have an absolute difference from the center greater than t, marking p as a feature point;
step S604, computing the sum of the absolute pixel differences between the 16 circle points and the center point, recorded as the score S;
step S605, taking a 3×3 template; if two or more corner points fall within it, comparing their scores S and keeping the one with the maximum value as the corner; if there is only one, retaining its corner value;
step S606, reconstructing the Laplacian pyramid.
Compared with the prior art, the invention has the following advantages: (1) the method performs feature point detection on a fused infrared/visible-light image, addressing the low detection rate and poor robustness of the existing algorithm; (2) the parallel execution and pipelined operation of an FPGA are strong advantages for image processing with large data volumes, and the configuration is flexible and modifiable.
The invention is further described below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the points on the circle of radius 3 around the central pixel in step S6.
Detailed Description
With reference to fig. 1, a FAST feature point detection method based on FPGA hardware-fused images comprises the following steps:
step S1, inputting two video streams: an infrared image and a visible-light image;
step S2, preprocessing each input stream;
step S3, building a Gaussian pyramid for each input image;
step S4, building a Gaussian-difference pyramid from each Gaussian pyramid;
step S5, fusing the Laplacian pyramids according to a chosen functional relation;
step S6, performing FAST feature point detection on every pixel of the fused Laplacian pyramid image;
step S7, outputting the image through the OLED display module.
The preprocessing in step S2 specifically includes:
step S201, the two detectors each inputting an 8-bit signal into the FPGA chip through IO ports;
step S202, stretching the infrared image contrast by histogram equalization;
step S203, removing visible-light image noise by median filtering.
Step S202 specifically includes:
storing a complete image during the active video interval;
counting the occurrences of each gray value in the current frame, using a DPRAM on the FPGA chip as the record memory: addresses 0, 1, …, 255 correspond to the 256 gray levels, and each address register holds the occurrence count of its gray level;
reading the RAM during the image blanking interval, accumulating the counts, and computing the upper limit A1 and lower limit A2 of the image gray range;
when the valid signal of the next frame arrives, linearly stretching the infrared image with the limits A1 and A2 computed during the blanking interval.
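The histogram statistics and linear stretch described above can be sketched in software as follows. The percentile cut-offs `low_frac`/`high_frac` and the exact rule for deriving A1 and A2 are assumptions, since the patent only states that upper and lower gray limits are computed from the accumulated histogram.

```python
import numpy as np

def stretch_infrared(frame, low_frac=0.01, high_frac=0.99):
    """Linear contrast stretch driven by the gray-level histogram.

    Mirrors the flow above: accumulate a 256-bin histogram (the DPRAM),
    derive lower/upper gray limits A2/A1 from the cumulative histogram
    (the blanking-interval pass), then stretch the frame linearly.
    low_frac/high_frac are assumed percentile cut-offs.
    """
    hist = np.bincount(frame.ravel(), minlength=256)   # DPRAM analogue
    cdf = np.cumsum(hist) / frame.size                 # accumulation pass
    a2 = int(np.searchsorted(cdf, low_frac))           # lower limit A2
    a1 = int(np.searchsorted(cdf, high_frac))          # upper limit A1
    a1 = max(a1, a2 + 1)                               # avoid zero divisor
    out = (frame.astype(np.int32) - a2) * 255 // (a1 - a2)
    return np.clip(out, 0, 255).astype(np.uint8)
```

In hardware the histogram is filled during the active interval and A1/A2 are read out during blanking; here both passes run back to back on a stored frame.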
The division is implemented with shift registers: the operand is amplified by a left shift of n bits, the calculation is performed, and the result is reduced by a right shift of n bits; the total number of image pixels serves as the divisor.
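One common hardware reading of this shift trick is to fold the division by the pixel count into a multiply by a precomputed, power-of-two-scaled reciprocal followed by a right shift; the sketch below illustrates that interpretation. The shift width `n` is an assumed parameter, and the result can be off by one unit because the reciprocal is truncated.

```python
def divide_by_shifts(dividend, divisor, n=32):
    """Approximate integer division without a hardware divider.

    The divisor (the total pixel count) is folded into a reciprocal
    scaled up by 2**n (a left shift); at run time only a multiply and a
    right shift by n bits remain.
    """
    recip = (1 << n) // divisor      # precomputed once per frame size
    return (dividend * recip) >> n   # multiply + right shift, no divider
```

The same scheme maps directly onto an FPGA multiplier block, with `recip` stored in a register.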
Step S203 specifically includes: for each pixel, construct a 3×3 window, sort each row in descending order, then take the minimum of the column of maxima, the median of the middle column and the maximum of the column of minima, and replace the pixel with the median of these three values.
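The row-sorting shortcut above is the classical hardware-friendly 3×3 median network; a minimal software sketch of one window follows (plain Python lists stand in for the FPGA compare/exchange network).

```python
def median3x3_window(w):
    """3x3 median via the row-sort shortcut of step S203.

    w is a 3x3 list of pixel values. Each row is sorted in descending
    order; the window median then equals the median of {min of the row
    maxima, median of the row middles, max of the row minima}, which
    needs far fewer comparators than a full 9-element sort.
    """
    rows = [sorted(r, reverse=True) for r in w]  # each row: max, mid, min
    hi = min(r[0] for r in rows)                 # min of the maxima column
    md = sorted(r[1] for r in rows)[1]           # median of the middle column
    lo = max(r[2] for r in rows)                 # max of the minima column
    return sorted([hi, md, lo])[1]               # median of the three
```

On the FPGA the three row sorts reuse the line buffers already needed for the 3×3 neighbourhood, so the filter runs at pixel rate.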
In step S3, a 3-layer Gaussian pyramid is selected; specifically: each layer G_1, G_2, …, G_N of the Gaussian pyramid is obtained by the low-pass filtering and every-other-row, every-other-column down-sampling of formula (1):

G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l-1}(2i+m, 2j+n)    (1)

where N is the number of decomposition layers, ω(m, n) is the filter coefficient at coordinate (m, n) of the filter template, (i, j) is the coordinate of the current pixel, l = 1, 2, …, N, and G_0 is the original image; each pyramid layer holds 1/4 of the pixels of the layer beneath it.
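A software model of the REDUCE step of formula (1) might look as follows; the 5-tap kernel w = [1, 4, 6, 4, 1]/16 is the usual Burt–Adelson choice and is an assumption, as the patent does not list its filter coefficients.

```python
import numpy as np

# Assumed separable 5-tap low-pass kernel (sums to 1).
W = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def reduce_layer(g):
    """One REDUCE step of formula (1): low-pass filter, then keep every
    other row and column, so the result has 1/4 of the pixels."""
    pad = np.pad(g.astype(np.float64), 2, mode="edge")
    # separable filtering: rows first, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, W, "valid"), 1, pad)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, W, "valid"), 0, tmp)
    return smooth[::2, ::2]
```

The separable form matches the FPGA structure, where a row filter feeds line buffers and a column filter before the decimator.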
Step S4 specifically includes: the layers G_2 and G_3 obtained by down-sampling in step S3 are interpolated and enlarged by formula (2) to obtain G'_1 and G'_2, so that each has the same resolution as the pyramid layer one level below:

G'_l(i, j) = 4 Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l+1}((i+m)/2, (j+n)/2)    (2)

where only terms with integer coordinates (i+m)/2 and (j+n)/2 are included in the sums.
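The matching EXPAND step of formula (2) can be modelled the same way; the kernel and the reflect padding are assumptions.

```python
import numpy as np

# Assumed 5-tap kernel; the separable expand uses 2*w per axis so the
# overall 2-D gain is 4, matching the factor in formula (2).
W = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def expand_layer(g, shape):
    """One EXPAND step of formula (2): insert zeros between samples,
    then low-pass filter so the zeros are interpolated."""
    up = np.zeros(shape, dtype=np.float64)
    up[::2, ::2] = g                      # zero-stuffed upsampling
    pad = np.pad(up, 2, mode="reflect")
    k = 2.0 * W                           # gain 2 per axis -> 4 in 2-D
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, tmp)
```

With this gain a constant image expands to the same constant, which is the property the difference layers of step S5 rely on.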
Step S5 specifically includes: the first layer of the Laplacian pyramid is obtained as

LP_1 = G_1 − G'_1    (3)

the intermediate layer as

LP_2 = G_2 − G'_2    (4)

and the uppermost layer G_3 remains unchanged as the top of the Laplacian pyramid.
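Putting the difference layers and a fusion rule together gives a sketch like the following. The max-absolute-detail rule for the difference levels and averaging for the top level are assumptions, since the patent leaves the fusion function unspecified; `expand_nn` is a deliberately crude nearest-neighbour stand-in for the formula (2) interpolation, used only to keep the sketch self-contained.

```python
import numpy as np

def expand_nn(g, shape):
    """Nearest-neighbour stand-in for the formula (2) interpolation
    (a hypothetical simplification)."""
    return np.kron(g, np.ones((2, 2)))[:shape[0], :shape[1]]

def fuse_laplacian(pyr_ir, pyr_vis):
    """Fuse two 3-level Gaussian pyramids (infrared, visible) into one
    Laplacian pyramid: LP_l = G_l - EXPAND(G_{l+1}) per source, then an
    assumed max-absolute-detail rule across the two sources, with the
    coarsest level averaged."""
    fused = []
    for (ga, gb), (ha, hb) in zip(zip(pyr_ir, pyr_ir[1:]),
                                  zip(pyr_vis, pyr_vis[1:])):
        la = ga - expand_nn(gb, ga.shape)   # infrared detail level
        lb = ha - expand_nn(hb, ha.shape)   # visible detail level
        fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    fused.append(0.5 * (pyr_ir[-1] + pyr_vis[-1]))  # top level G_3
    return fused
```

Any per-level fusion rule (weighted sums, region energy, etc.) slots into the `np.where` line without changing the pyramid plumbing.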
With reference to fig. 2, step S6 includes:
step S601, constructing a 7×7 template and traversing the pixel points;
step S602, defining a threshold t and computing the differences between the center pixel p and the points p_{1,4}, p_{7,4}, p_{4,1} and p_{4,7} on the circle of radius 3; if the absolute difference exceeds t at 3 or more of these points, marking p as a candidate point for the next stage, otherwise ignoring the point;
step S603, if pixel p is a candidate point, further computing the differences between p and the circle points p_{1,3}, p_{1,4}, p_{1,5}, p_{2,2}, p_{2,6}, p_{3,1}, p_{3,7}, p_{4,1}, p_{4,7}, p_{5,1}, p_{5,7}, p_{6,2}, p_{6,6}, p_{7,3}, p_{7,4}, p_{7,5}; if 9 or more consecutive points have an absolute difference from the center greater than t, marking p as a feature point;
step S604, computing the sum of the absolute pixel differences between the 16 circle points and the center point, recorded as the score S;
step S605, taking a 3×3 template; if two or more corner points fall within it, comparing their scores S and keeping the one with the maximum value as the corner; if there is only one, retaining its corner value;
step S606, reconstructing the Laplacian pyramid.
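The two-stage segment test of steps S602–S604 can be sketched per pixel as follows; the "3 or more" compass-point and "9 or more consecutive" readings follow the description above, and details may differ from the hardware implementation.

```python
import numpy as np

# Offsets of the 16 Bresenham-circle points (radius 3) inside the 7x7
# window, in clockwise order starting from p_{1,4} of fig. 2 (the text
# numbers rows/columns from 1; these are 0-based offsets from the centre).
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2),
          (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3),
          (-2, -2), (-3, -1)]
COMPASS = [(-3, 0), (3, 0), (0, -3), (0, 3)]  # p_{1,4}, p_{7,4}, p_{4,1}, p_{4,7}

def fast_score(img, y, x, t):
    """Score one pixel with the two-stage test of steps S602-S604.

    Returns the score S (sum of absolute circle differences) if (y, x)
    passes both stages, else None.
    """
    p = int(img[y, x])
    # stage 1 (S602): quick rejection on the four compass points
    quick = [abs(int(img[y + dy, x + dx]) - p) > t for dy, dx in COMPASS]
    if sum(quick) < 3:
        return None
    # stage 2 (S603): need >= 9 consecutive circle points beyond t
    diffs = [int(img[y + dy, x + dx]) - p for dy, dx in CIRCLE]
    hot = [abs(d) > t for d in diffs] * 2        # doubled for wrap-around
    run = best = 0
    for h in hot:
        run = run + 1 if h else 0
        best = max(best, run)
    if best < 9:
        return None
    return sum(abs(d) for d in diffs)            # corner score S (S604)
```

The 3×3 non-maximum suppression of step S605 then keeps, among neighbouring corners, only the one with the largest score S.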

Claims (8)

1. A FAST feature point detection method based on FPGA hardware-fused images, characterized by comprising the following steps:
step S1, inputting two video streams: an infrared image and a visible-light image;
step S2, preprocessing each input stream;
step S3, building a Gaussian pyramid for each input image;
step S4, building a Gaussian-difference pyramid from each Gaussian pyramid;
step S5, fusing the Laplacian pyramids according to a chosen functional relation;
step S6, performing FAST feature point detection on every pixel of the fused Laplacian pyramid image, comprising:
step S601, constructing a 7×7 template and traversing the pixel points;
step S602, defining a threshold t and computing the differences between the center pixel p and the points p_{1,4}, p_{7,4}, p_{4,1} and p_{4,7} on the circle of radius 3; if the absolute difference exceeds t at 3 or more of these points, marking p as a candidate point for the next stage, otherwise ignoring the point;
step S603, if pixel p is a candidate point, further computing the differences between p and the circle points p_{1,3}, p_{1,4}, p_{1,5}, p_{2,2}, p_{2,6}, p_{3,1}, p_{3,7}, p_{4,1}, p_{4,7}, p_{5,1}, p_{5,7}, p_{6,2}, p_{6,6}, p_{7,3}, p_{7,4}, p_{7,5}; if 9 or more consecutive points have an absolute difference from the center greater than t, marking p as a feature point;
step S604, computing the sum of the absolute pixel differences between the 16 circle points and the center point, recorded as the score S;
step S605, taking a 3×3 template; if two or more corner points fall within it, comparing their scores S and keeping the one with the maximum value as the corner; if there is only one, retaining its corner value;
step S606, reconstructing the Laplacian pyramid;
step S7, outputting the image through an OLED display module.
2. The method according to claim 1, characterized in that the preprocessing in step S2 specifically includes:
step S201, the two detectors each inputting an 8-bit signal into the FPGA chip through IO ports;
step S202, stretching the infrared image contrast by histogram equalization;
step S203, removing visible-light image noise by median filtering.
3. The method according to claim 2, characterized in that step S202 specifically includes:
storing a complete image during the active video interval;
counting the occurrences of each gray value in the current frame, using a dual-port RAM (DPRAM) on the FPGA (field-programmable gate array) chip as the record memory: addresses 0, 1, …, 255 correspond to the 256 gray levels, and each address register holds the occurrence count of its gray level;
reading the RAM during the image blanking interval, accumulating the counts, and computing the upper limit A1 and lower limit A2 of the image gray range;
when the valid signal of the next frame arrives, linearly stretching the infrared image with the limits A1 and A2 computed during the blanking interval.
4. The method according to claim 3, characterized in that the division is implemented with shift registers: the operand is amplified by a left shift of n bits, the calculation is performed, and the result is reduced by a right shift of n bits; the total number of image pixels serves as the divisor.
5. The method according to claim 2, characterized in that step S203 specifically includes: for each pixel, constructing a 3×3 window, sorting each row in descending order, then taking the minimum of the column of maxima, the median of the middle column and the maximum of the column of minima, and replacing the pixel with the median of these three values.
6. The method according to claim 1, characterized in that step S3 selects a 3-layer Gaussian pyramid, specifically: each layer G_1, G_2, …, G_N of the Gaussian pyramid is obtained by the low-pass filtering and every-other-row, every-other-column down-sampling of formula (1):

G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l-1}(2i+m, 2j+n)    (1)

where N is the number of decomposition layers, ω(m, n) is the filter coefficient at coordinate (m, n) of the filter template, (i, j) is the coordinate of the current pixel, l = 1, 2, …, N, and G_0 is the original image; each pyramid layer holds 1/4 of the pixels of the layer beneath it.
7. The method according to claim 6, characterized in that step S4 specifically includes: the layers G_2 and G_3 obtained by down-sampling in step S3 are interpolated and enlarged by formula (2) to obtain G'_1 and G'_2, so that each has the same resolution as the pyramid layer one level below:

G'_l(i, j) = 4 Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l+1}((i+m)/2, (j+n)/2)    (2)

where only terms with integer coordinates (i+m)/2 and (j+n)/2 are included in the sums.
8. The method according to claim 7, characterized in that step S5 obtains the first layer of the Laplacian pyramid as

LP_1 = G_1 − G'_1    (3)

and the intermediate layer as

LP_2 = G_2 − G'_2    (4)

while the uppermost layer G_3 remains unchanged as the top of the Laplacian pyramid.
CN201910581890.2A 2019-06-30 2019-06-30 FAST feature point detection method based on FPGA hardware fusion image Active CN110378355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910581890.2A CN110378355B (en) 2019-06-30 2019-06-30 FAST feature point detection method based on FPGA hardware fusion image


Publications (2)

Publication Number Publication Date
CN110378355A CN110378355A (en) 2019-10-25
CN110378355B (en) 2022-09-30

Family

ID=68251322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910581890.2A Active CN110378355B (en) 2019-06-30 2019-06-30 FAST feature point detection method based on FPGA hardware fusion image

Country Status (1)

Country Link
CN (1) CN110378355B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991291B (en) * 2019-11-26 2021-09-07 清华大学 Image feature extraction method based on parallel computing

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102646269A (en) * 2012-02-29 2012-08-22 中山大学 Image processing method and device based on Laplace pyramid
CN108364272A (en) * 2017-12-30 2018-08-03 广东金泽润技术有限公司 A kind of high-performance Infrared-Visible fusion detection method


Also Published As

Publication number Publication date
CN110378355A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN111784576A (en) Image splicing method based on improved ORB feature algorithm
CN108960261B (en) Salient object detection method based on attention mechanism
US20230214981A1 (en) Method for detecting appearance defects of a product and electronic device
CN103279952A (en) Target tracking method and device
JP6046927B2 (en) Image processing apparatus and control method thereof
CN108038826B (en) Method and device for correcting perspective deformed shelf image
CN112037129A (en) Image super-resolution reconstruction method, device, equipment and storage medium
JP6849101B2 (en) Fast, gradual, super-voxel-based spatiotemporal video segmentation method
CN111860414A (en) Method for detecting Deepfake video based on multi-feature fusion
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
US20170301072A1 (en) System and method for adaptive pixel filtering
CN110378355B (en) FAST feature point detection method based on FPGA hardware fusion image
CN112734822A (en) Stereo matching algorithm based on infrared and visible light images
CN111223083A (en) Method, system, device and medium for constructing surface scratch detection neural network
Zhang et al. An effective decomposition-enhancement method to restore light field images captured in the dark
CN115883988A (en) Video image splicing method and system, electronic equipment and storage medium
CN115631210A (en) Edge detection method and device
CN110120012B (en) Video stitching method for synchronous key frame extraction based on binocular camera
Teng et al. NEST: Neural event stack for event-based image enhancement
Tamilselvan et al. Survey and analysis of various image fusion techniques for clinical CT and MRI images
CN105744184B (en) Bad pixel correction method and the device for using this method
CN107993193B (en) Tunnel lining image splicing method based on illumination equalization and surf algorithm improvement
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner
CN116863170A (en) Image matching method, device and storage medium
CN101461228A (en) Image processing circuit, semiconductor device, and image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant