CN110222749B - Visible light image and infrared image matching method - Google Patents

Visible light image and infrared image matching method

Info

Publication number
CN110222749B
CN110222749B (application number CN201910447148.2A)
Authority
CN
China
Prior art keywords
region
visible light
area
image
mser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910447148.2A
Other languages
Chinese (zh)
Other versions
CN110222749A (en)
Inventor
阿都建华
曾强
张海清
邓成梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN201910447148.2A
Publication of CN110222749A
Application granted
Publication of CN110222749B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 — Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention discloses a method for matching a visible light image with an infrared image, comprising the following steps: (1) extract maximally stable extremal regions (MSER) from the visible light image and the infrared image respectively; (2) fit an ellipse to each maximally stable extremal region of both images and normalize it into a standard circular region; (3) establish an FBP model and describe the texture information inside the standard circle with a binary code; (4) match the region codes using the Hamming distance. In this method, the FBP algorithm first extracts the maximally stable extremal regions as the regions to be matched, then describes the features of each region, and finally sets a threshold and uses the Hamming distance to decide whether each pair of regions matches. Experimental results show that the method performs well in positioning accuracy, speed, stability, flexibility and real-time capability.

Description

Visible light image and infrared image matching method
Technical Field
The invention relates to the technical field of image processing, and in particular to a visible light and infrared image matching method based on MSER and an improved LBP algorithm.
Background
In multimodal image matching, infrared images offer outstanding detection capability at night, good resistance to occlusion, and sensitivity to object temperature, while visible light imaging retains rich scene information. Compared with a single-modality image, multimodal images can characterize the imaging properties of a scene or target from different aspects, so the acquired information has higher reliability and complementarity. Multimodal images are acquired by different types of sensors, for example visible light combined with infrared, or visible light combined with synthetic aperture radar. Infrared sensors provide night detection, fog penetration and occlusion resistance, while visible light imaging preserves rich scene detail; this typical complementarity makes visible light plus infrared a common combination in multimodal image matching. Image matching searches for similar image targets by analyzing the correspondence, similarity and consistency of image content, features, structures, relationships, textures and gray levels. For single-modality matching, mainstream algorithms such as SIFT, SURF and Harris achieve highly robust results. For multimodal matching, however, the differing imaging wavebands mean that common feature detectors cannot correctly extract the same feature regions or feature points across modalities.
Much research at home and abroad has addressed the matching of visible light and infrared images. Early approaches combined the two modalities through image contour extraction: edges were extracted with the LOG operator and described with chain codes, or corner points were further extracted on the edges and described with invariant moments, with matching finally performed on the lines or points based on these feature descriptions. To address the poor contour-feature extraction of visible light combined with infrared images, Coiras et al. further proposed a segmentation-based matching method that constructs virtual triangles from extracted straight lines and matches with the triangles as primitives.
In addition, existing visible light and infrared image matching suffers from a large computational load and low efficiency.
Disclosure of Invention
The invention provides a visible light image and infrared image matching method, aiming to solve the problem that conventional image registration cannot guarantee both accuracy and speed when registering visible light and infrared images.
In order to solve the technical problems, the invention adopts the following technical scheme:
a visible light image and infrared image matching method is characterized by comprising the following steps:
(1) respectively extracting maximum stable extremum regions from the visible light image and the infrared image;
(2) respectively carrying out ellipse fitting on the maximum stable extremum regions of the visible light image and the infrared image, and normalizing the maximum stable extremum regions into a standard circular region;
(3) establishing an FBP model, describing texture information in the standard circle by using binary coding, and comprising the following steps:
(31) drawing a circle by taking the center of the standard circular area as a circle center and R as a radius, and defining a circular area as a second circular area, wherein R is R/2, and R is the radius of the standard circular area;
(32) selecting a plurality of boundary points on the circumference of the second circular area, drawing a circle by taking each boundary point as a circle center and r as a radius, and then dividing a plurality of corresponding circular areas into edge circular areas, wherein each boundary point is uniformly distributed on the circumference of the second circular area;
(33) respectively coding each area and the standard circular area by adopting an LBP algorithm, wherein each area comprises a second circular area and all edge circular areas;
(4) and matching the codes of the regions by using Hamming distance, comprising the following steps:
(41) calculating the weight of each region;
(42) calculating the Hamming distance of each region;
(43) calculating the total Hamming distance by using the Hamming distance of each region and the weight of each region;
(44) and comparing the total Hamming distance with a set threshold, if the total Hamming distance is not greater than the set threshold, the visible light image and the infrared image accord with the matching condition, otherwise, the visible light image and the infrared image do not match.
Further, in step (1), the maximally stable extremal regions are extracted with the MSER algorithm, comprising:
(11) dividing the image into different connected regions and labelling them to complete extraction of the MSER blocks;
(12) increasing the threshold pixel value step by step from 0 to 255, dividing the image into different connected regions and labelling them to complete MSER block division;
(13) calculating the area change rate of each MSER block to complete its extraction, computed as
q_i = |Q_{i+Δ} − Q_{i−Δ}| / Q_i
where q_i denotes the area change rate of the MSER block at the i-th threshold increment, Q_i denotes the area of the MSER block at the i-th increment, Q_{i+Δ} the area at the (i+Δ)-th increment, and Q_{i−Δ} the area at the (i−Δ)-th increment;
(14) judging whether an MSER block is a maximally stable extremal region: when q_i reaches a local minimum, the current region is a maximally stable extremal region, as illustrated in the sketch below.
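For illustration, a minimal C++ sketch of the stability test in steps (13) and (14) follows; it assumes each candidate region's area Q has already been tracked across the threshold sweep, and all function and variable names are ours, not the patent's:

    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Area change rate q_i = |Q_{i+delta} - Q_{i-delta}| / Q_i for one region
    // tracked across the 0..255 threshold sweep.
    std::vector<double> areaChangeRate(const std::vector<double>& Q, std::size_t delta) {
        std::vector<double> q(Q.size(), std::numeric_limits<double>::infinity());
        for (std::size_t i = delta; i + delta < Q.size(); ++i)
            q[i] = std::fabs(Q[i + delta] - Q[i - delta]) / Q[i];
        return q;
    }

    // A region is maximally stable at threshold i when q_i is a local minimum.
    std::vector<std::size_t> stableThresholds(const std::vector<double>& q) {
        std::vector<std::size_t> idx;
        for (std::size_t i = 1; i + 1 < q.size(); ++i)
            if (q[i] <= q[i - 1] && q[i] <= q[i + 1]) idx.push_back(i);
        return idx;
    }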
Further, in step (2), the least squares method is used to fit the ellipse to each maximally stable extremal region.
Further, a step of applying Gaussian blur to the whole image is included between step (2) and step (3).
Further, before encoding each region with the LBP algorithm in step (33), the method further comprises the step of:
drawing a circle centred at the centre of the standard circular region with a radius of 3 pixels, defining this circular region as the third circular region, and computing the average of all pixel values in the third circular region as the central pixel value of the standard circular region.
Further, in step (33), encoding each region with the LBP algorithm comprises:
(331) selecting a plurality of encoding points on the boundary of each region;
(332) comparing the pixel value of each encoding point with the central pixel value of the region containing it: if the encoding point's pixel value is not less than the central pixel value, the point is encoded as 1; otherwise it is encoded as 0.
Further, the weight of each region in step (41) is calculated as:
[The weight formula appears only as an image in the original document; per the definitions below, w_j is computed from the distance |(x_j, y_j) − (x_c, y_c)| and the radius r_j.]
where w_j is the weight of the j-th region, j is an integer with 0 < j ≤ n, n is the total number of regions (the second circular region plus all edge circular regions), (x_j, y_j) are the centre coordinates of the j-th region, (x_c, y_c) are the centre coordinates of the standard circular region, and r_j is the radius of the j-th region.
Further, the Hamming distance of each region is calculated as:
d_j = Σ_i x[i] ⊕ y[i]
where d_j is the Hamming distance of the j-th region, x[i] is the code of the j-th region, y[i] is the code of the standard circular region, and ⊕ denotes the exclusive-or operation.
Further, the total Hamming distance d is calculated as:
d = Σ_{j=1..n} w_j · d_j
Compared with the prior art, the invention has the following advantages and positive effects: the matching method extracts the maximally stable extremal regions, establishes an FBP model to process them, and describes the texture information of each feature region. The FBP algorithm first extracts the maximally stable extremal regions as the regions to be matched, then describes the features of each region, and finally sets a threshold and uses the Hamming distance to decide whether each pair of regions matches. Experimental results show that the method performs well in positioning accuracy, speed, stability, flexibility and real-time capability.
Other features and advantages of the present invention will become more apparent from the detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of a method for matching a visible light image with an infrared image according to the present invention;
FIG. 2a is the original visible light image of scene 1 in an experimental example of the visible light image and infrared image matching method of the present invention;
FIG. 2b is the original infrared image of scene 1 in the same experimental example;
FIG. 3a is the MSER extraction result of FIG. 2 a;
FIG. 3b is the MSER extraction result of FIG. 2 b;
FIG. 4a is the result of ellipse fitting of the MSER region of FIG. 3 a;
FIG. 4b is the result of ellipse fitting of the MSER region of FIG. 3 b;
FIG. 5a is a partial view of one of the MSER regions of FIG. 3a fitted with an ellipse;
FIG. 5b is the standard circular region normalized from the MSER region of FIG. 5a;
FIG. 6a is a partial texture map of FIG. 2 a;
FIG. 6b is a partial texture map of FIG. 2 b;
FIG. 7 is the image registration result for FIGS. 2a and 2b;
FIG. 8 is a schematic diagram of the regions delimited by the FBP model.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The first embodiment provides a method for matching a visible light image with an infrared image, comprising the following steps:
S1, extracting maximally stable extremal regions from the visible light image and the infrared image respectively;
S2, fitting an ellipse to each maximally stable extremal region of the visible light image and the infrared image, and normalizing it into a standard circular region;
S3, establishing an FBP model and describing the texture information inside the standard circle with a binary code, comprising the following steps:
S31, as shown in FIG. 8, drawing a circle centred at the centre O of the standard circular region with radius r, and defining this circular region as the second circular region, where r = R/2 and R is the radius of the standard circular region;
S32, selecting a plurality of boundary points uniformly distributed on the circumference of the second circular region, such as point O1 in FIG. 8, drawing a circle of radius r centred at each boundary point, and defining the resulting circular regions as edge circular regions; FIG. 8 shows the regions delimited by the FBP model.
S33, encoding each region and the standard circular region, where "each region" comprises the second circular region and all edge circular regions;
S4, matching the region codes using the Hamming distance, comprising:
S41, calculating the weight of each region;
S42, calculating the Hamming distance of each region;
S43, calculating the total Hamming distance from the Hamming distance and weight of each region;
S44, comparing the total Hamming distance with a set threshold: if it does not exceed the threshold, the visible light image and the infrared image satisfy the matching condition; otherwise they do not match. In this matching method, the maximally stable extremal regions are extracted, an FBP model is established to process them, and the texture information of each feature region is described. The FBP algorithm first extracts the maximally stable extremal regions as the regions to be matched, then describes the features of each region, and finally sets a threshold and uses the Hamming distance to decide whether each pair of regions matches. Experimental results show that the method performs well in positioning accuracy, speed, stability, flexibility and real-time capability.
Regarding the texture description step: the conventional LBP algorithm is simple and fast, but when describing a feature region it only compares the gray value of the central pixel with the gray values of the pixels on the surrounding circular boundary. For a feature region that is small and simply textured, describing only its edge information in this way is sufficient, and the resulting texture feature is robust. In step S3 a new algorithm, FBP, is proposed, which further extends the LBP region with sector segmentation and circular segmentation, so that FBP can also effectively describe a detected feature region that is large and has complex texture.
In step S1, extracting the maximally stable extremal regions with the MSER algorithm comprises:
S11, dividing the image into different connected regions and labelling them to complete extraction of the MSER blocks;
S12, increasing the threshold pixel value step by step from 0 to 255, dividing the image into different connected regions and labelling them to complete MSER block division;
S13, calculating the area change rate of each MSER block to complete its extraction, computed as
q_i = |Q_{i+Δ} − Q_{i−Δ}| / Q_i
where q_i denotes the area change rate of the MSER block at the i-th threshold increment, Q_i denotes the area of the MSER block at the i-th increment, Q_{i+Δ} the area at the (i+Δ)-th increment, and Q_{i−Δ} the area at the (i−Δ)-th increment;
S14, judging whether an MSER block is a maximally stable extremal region: when q_i reaches a local minimum, the current region is a maximally stable extremal region.
In order to extract both the maximum-value and minimum-value regions of an image, the MSER algorithm comprises a forward extraction process and a reverse extraction process. In forward extraction, the most stable extremal regions are determined from the area change rate, and the maximal regions extracted from the original image are denoted MSER+. In reverse extraction, the gray values of the original image are first inverted, and the most stable extremal regions of the inverted image are extracted and denoted MSER−. In general, forward plus reverse extraction stably extracts the salient features of corresponding objects in an image; the good performance of MSER stems from the similarity between this extraction process and the attention mechanism of the human visual system, which emphasizes the boundaries of "salient" regions and their evolution.
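A sketch of this forward/reverse extraction using OpenCV is shown below. The patent's experiments used OpenCV 2.4.13, whose MSER interface differs slightly; this sketch assumes the OpenCV 3.x/4.x API, and the function name is ours:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Extract MSER+ from the gray image and MSER- from its gray-inverted copy.
    void extractMserPlusMinus(const cv::Mat& gray,
                              std::vector<std::vector<cv::Point>>& mserPlus,
                              std::vector<std::vector<cv::Point>>& mserMinus) {
        cv::Ptr<cv::MSER> mser = cv::MSER::create();
        std::vector<cv::Rect> boxes;
        mser->detectRegions(gray, mserPlus, boxes);       // MSER+ on the original image
        cv::Mat inverted = 255 - gray;                    // reverse the gray values
        mser->detectRegions(inverted, mserMinus, boxes);  // MSER- on the inverted image
    }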
Through circular segmentation, the FBP method makes the final feature codes contain more texture information, so the resulting codes are more robust. The FBP model also supports adaptive extension: it can be expanded according to a specified rule to meet actual requirements, balancing time performance against coding accuracy.
The shapes of the most stable extremal regions extracted by the MSER algorithm are arbitrary, so for convenience of processing they must be fitted, e.g. by ellipse fitting, polygon fitting or circle fitting. Since the eigenvalues and eigenvectors of the covariance matrix of a feature region uniquely define an ellipse, ellipse fitting is generally chosen for the extracted feature regions. However, because the feature region is ultimately described with the LBP concept, and a circle is inherently rotation invariant, we first fit an ellipse to the feature region and then normalize the fitted ellipse into a circle to simplify the description. In step S2 of this embodiment, the ellipse is fitted to each maximally stable extremal region by the least squares method, and the fitted ellipse is then normalized into a circle to form the standard circular region.
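A minimal sketch of this fit-and-normalize step, assuming OpenCV and a region given as a pixel list; the rotate-crop-resize route below is one common way to map a fitted ellipse onto a circle, while the patent itself only specifies least-squares ellipse fitting followed by normalization:

    // Fit an ellipse to one MSER point set (at least 5 points required by
    // cv::fitEllipse), then warp the elliptical patch onto a standard circle
    // of radius R (returned as a 2R x 2R patch).
    cv::Mat normalizeToCircle(const cv::Mat& gray, const std::vector<cv::Point>& region, int R) {
        cv::RotatedRect e = cv::fitEllipse(region);          // least-squares ellipse fit
        cv::Mat rot = cv::getRotationMatrix2D(e.center, e.angle, 1.0);
        cv::Mat aligned;
        cv::warpAffine(gray, aligned, rot, gray.size());     // axis-align the ellipse
        cv::Rect box(cvRound(e.center.x - e.size.width * 0.5f),
                     cvRound(e.center.y - e.size.height * 0.5f),
                     cvRound(e.size.width), cvRound(e.size.height));
        box &= cv::Rect(0, 0, aligned.cols, aligned.rows);   // clamp to image bounds
        cv::Mat circlePatch;
        cv::resize(aligned(box), circlePatch, cv::Size(2 * R, 2 * R));  // anisotropic scale -> circle
        return circlePatch;
    }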
To reduce the influence of noise, a Gaussian blur is applied to the whole image between steps S2 and S3, which smooths the gray levels of the pixels in the feature regions: the original (central) pixel receives the largest Gaussian weight, and the weights of neighbouring pixels decrease with their distance from it. This blurring preserves edges better than other averaging blur filters.
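In OpenCV this denoising step is a single call; the 5×5 kernel and σ = 1.5 below are illustrative values, not parameters specified by the patent:

    #include <opencv2/opencv.hpp>

    // Apply the Gaussian blur described above to the whole gray image.
    cv::Mat denoise(const cv::Mat& gray) {
        cv::Mat blurred;
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
        return blurred;
    }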
In step S33, before encoding each region with the LBP algorithm, the method further comprises the following step:
to prevent an anomalous pixel value at the centre of the standard circular region from degrading the calculation accuracy, a circle with a radius of 3 pixels is drawn around the centre of the standard circular region, this circular region is defined as the third circular region, and the average of all its pixel values is taken as the central pixel value of the standard circular region. By delimiting a small region around the centre and averaging the pixel values within it, even an anomalous true centre pixel does not affect the accuracy of subsequent calculations.
In step S33, encoding each region with the LBP algorithm comprises:
S331, selecting a plurality of encoding points on the boundary of each region;
S332, comparing the pixel value of each encoding point with the central pixel value of the region containing it: if the encoding point's pixel value is not less than the central pixel value, the point is encoded as 1; otherwise it is encoded as 0. That is, the j-th region is encoded as:
FBP_j = Σ_{i=0..P−1} s(g_i − g_c) · 2^i,   with s(x) = 1 if x ≥ 0, else 0
where g_i is the pixel value of the i-th encoding point, g_c is the central pixel value of the region containing the encoding points, and P is the number of encoding points on the region boundary.
For j = 1, 2, …, n this finally yields n code values. All n codes are invariant to gray-scale changes but not to rotation. The classical remedy is to rotate the LBP code in the circular neighbourhood continuously and select the minimum of the resulting values as the LBP feature of the central pixel. The FBP method has n code values: it selects the rotation-invariant LBP feature for the 3×3 neighbourhood at the circle centre and computes a rotation direction from it; the remaining n−1 codes are then rotation-encoded with this direction as the principal direction, finally giving n code values that are invariant to both gray scale and rotation.
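The following sketch illustrates the two ingredients just described — thresholding sampled boundary points against the central value g_c, and rotating the binary code to a canonical (minimal) form. It is a generic illustration under our own sampling choices, not the patented FBP implementation:

    #include <algorithm>
    #include <cmath>
    #include <opencv2/opencv.hpp>

    // Sample P points uniformly on a circle of radius rad around (cx, cy) and
    // threshold each against the centre value gc, giving a P-bit binary code.
    unsigned lbpCode(const cv::Mat& img, float cx, float cy, float rad, double gc, int P = 8) {
        unsigned code = 0;
        for (int i = 0; i < P; ++i) {
            double a = 2.0 * CV_PI * i / P;
            int x = cvRound(cx + rad * std::cos(a));
            int y = cvRound(cy + rad * std::sin(a));
            if (img.at<uchar>(y, x) >= gc) code |= (1u << i);  // s(g_i - g_c)
        }
        return code;
    }

    // Cycle a P-bit code through all rotations and keep the minimum value,
    // which makes the code rotation invariant.
    unsigned rotationInvariant(unsigned code, int P = 8) {
        unsigned best = code;
        for (int s = 1; s < P; ++s) {
            code = ((code >> 1) | ((code & 1u) << (P - 1))) & ((1u << P) - 1u);
            best = std::min(best, code);
        }
        return best;
    }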
In the registration stage, the uniform-pattern ("equivalent mode") concept is borrowed to reduce the coding dimension, and the Hamming distance is used as the final metric.
The weight of each region in step S41 is calculated as:
[The weight formula appears only as an image in the original document; per the definitions below, w_j is computed from the distance |(x_j, y_j) − (x_c, y_c)| and the radius r_j.]
where w_j is the weight of the j-th region, j is an integer with 0 < j ≤ n, n is the total number of regions (the second circular region plus all edge circular regions), (x_j, y_j) are the centre coordinates of the j-th region, (x_c, y_c) are the centre coordinates of the standard circular region, and r_j is the radius of the j-th region. |(x_j, y_j) − (x_c, y_c)| denotes the distance from the centre of the j-th region to the centre of the standard circle.
The Hamming distance of each region is calculated as:
d_j = Σ_i x[i] ⊕ y[i]
where d_j is the Hamming distance of the j-th region, x[i] is the code of the j-th region, y[i] is the code of the standard circular region, and ⊕ denotes the exclusive-or (XOR) operation.
The total Hamming distance d is calculated as:
d = Σ_{j=1..n} w_j · d_j
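Combining the formulas above, a short sketch of the matching decision follows. The weights w_j are taken as precomputed inputs, since the patent's weight formula is rendered only as an image; all names are ours:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Per-region Hamming distance d_j = sum_i x[i] XOR y[i] over the binary codes.
    int hammingDistance(const std::vector<uint8_t>& x, const std::vector<uint8_t>& y) {
        int d = 0;
        for (std::size_t i = 0; i < x.size() && i < y.size(); ++i)
            d += (x[i] ^ y[i]) & 1;
        return d;
    }

    // Total distance d = sum_j w_j * d_j; the region pair matches when d <= threshold.
    bool regionsMatch(const std::vector<int>& dj, const std::vector<double>& w, double threshold) {
        double d = 0.0;
        for (std::size_t j = 0; j < dj.size(); ++j) d += w[j] * dj[j];
        return d <= threshold;
    }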
Experimental example:
To verify the effectiveness of the algorithm, the FBP algorithm was used to register visible light and infrared images in two scenes. The experimental environment was an Intel(R) CPU E5-2603 v3 @ 1.60 GHz with 16.0 GB of memory running Windows 7; the coding environment was VS2015 with OpenCV 2.4.13. Two sets of sequential frames from roadside surveillance video were selected for the experiment; for example, FIGS. 2a and 2b show the first frame of scene 1.
In the experiment, the images of scene 1 and scene 2 were first converted to gray-scale images, and MSER+ and MSER− operations were applied to extract their most stable extremal regions. The most stable extremal regions extracted from the visible light and infrared images resemble each other and agree with human visual observation, with the salient regions highlighted; the results are shown in FIGS. 3a and 3b, where the white areas are the extracted most stable extremal regions.
The extracted MSER most stable extremal regions are irregular in shape, which hinders subsequent image processing, so an ellipse is fitted to each region and the fitted elliptical regions are taken as the candidate registration feature regions of the image; the ellipse fitting results are shown in FIGS. 4a and 4b.
The FBP algorithm proposed here operates on circular regions, and clearly an elliptical region is less convenient to process than a circular one. To build the FBP model, each ellipse is therefore regularized into a circular region by a transformation matrix; the normalization result is shown in FIGS. 5a and 5b.
The normalized circular region is a texture image. Because the visible light and infrared images have different imaging mechanisms, the information they contain differs, but both retain complete image contour information, so a feature descriptor can be generated from the relationships between pixels in the texture. The texture information of the feature regions of scene 1 is shown in FIGS. 6a and 6b.
Taking the centre coordinates of each elliptical region as the final matching point positions, the FBP algorithm was applied to the infrared and visible light images respectively to obtain the FBP code values for registration. Code-based registration with the FBP model was performed for scene 1, with the result shown in FIG. 7. As can be seen from FIG. 7, the FBP algorithm correctly finds the matching regions and achieves registration of the visible light and infrared images.
It is to be understood that the above description is not intended to limit the present invention, which is not restricted to the above examples; variations, modifications, additions or substitutions made by those skilled in the art within the spirit and scope of the present invention also fall within its protection scope.

Claims (8)

1. A visible light image and infrared image matching method, characterized by comprising the following steps:
(1) extracting maximally stable extremal regions from the visible light image and the infrared image respectively;
(2) fitting an ellipse to each maximally stable extremal region of the visible light image and the infrared image, and normalizing it into a standard circular region;
(3) establishing an FBP model and describing the texture information inside the standard circle with a binary code, comprising the following steps:
(31) drawing a circle centred at the centre of the standard circular region with radius r, and defining this circular region as the second circular region, where r = R/2 and R is the radius of the standard circular region;
(32) selecting a plurality of boundary points uniformly distributed on the circumference of the second circular region, drawing a circle of radius r centred at each boundary point, and defining the resulting circular regions as edge circular regions;
(33) encoding each region and the standard circular region based on the LBP algorithm; drawing a circle centred at the centre of the standard circular region with a radius of 3 pixels, defining this circular region as the third circular region, and computing the average of all pixel values of the third circular region as the central pixel value of the standard circular region;
(34) encoding each region and the standard circular region, where each region comprises the second circular region and all edge circular regions;
(4) matching the region codes using the Hamming distance, comprising:
(41) calculating the weight of each region;
(42) calculating the Hamming distance of each region;
(43) calculating the total Hamming distance from the Hamming distance and weight of each region;
(44) comparing the total Hamming distance with a set threshold: if it does not exceed the threshold, the visible light image and the infrared image satisfy the matching condition; otherwise they do not match.
2. The visible light image and infrared image matching method according to claim 1, wherein in step (1), extracting the maximally stable extremal regions with the MSER algorithm comprises:
(11) dividing the image into different connected regions and labelling them to complete extraction of the MSER blocks;
(12) increasing the threshold pixel value step by step from 0 to 255, dividing the image into different connected regions and labelling them to complete MSER block division;
(13) calculating the area change rate of each MSER block to complete its extraction, computed as
q_i = |Q_{i+Δ} − Q_{i−Δ}| / Q_i
where q_i denotes the area change rate of the MSER block at the i-th threshold increment, Q_i denotes the area of the MSER block at the i-th increment, Q_{i+Δ} the area at the (i+Δ)-th increment, and Q_{i−Δ} the area at the (i−Δ)-th increment;
(14) judging whether an MSER block is a maximally stable extremal region: when q_i reaches a local minimum, the current region is a maximally stable extremal region.
3. The visible light image and infrared image matching method according to claim 1, wherein in step (2), the least squares method is used to fit the ellipse to each maximally stable extremal region.
4. The visible light image and infrared image matching method according to claim 1, further comprising a step of applying Gaussian blur to the whole image between step (2) and step (3).
5. The visible light image and infrared image matching method according to claim 1, wherein encoding each region with the LBP algorithm in step (33) comprises:
(331) selecting a plurality of encoding points on the boundary of each region;
(332) comparing the pixel value of each encoding point with the central pixel value of the region containing it: if the encoding point's pixel value is not less than the central pixel value, the point is encoded as 1; otherwise it is encoded as 0.
6. The visible light image and infrared image matching method according to claim 5, wherein the weight of each region in step (41) is calculated as:
[The weight formula appears only as an image in the original document; per the definitions below, w_j is computed from the distance |(x_j, y_j) − (x_c, y_c)| and the radius r_j.]
where w_j is the weight of the j-th region, j is an integer with 0 < j ≤ n, n is the total number of regions (the second circular region plus all edge circular regions), (x_j, y_j) are the centre coordinates of the j-th region, (x_c, y_c) are the centre coordinates of the standard circular region, and r_j is the radius of the j-th region.
7. The visible light image and infrared image matching method according to claim 6, wherein the Hamming distance of each region is calculated as:
d_j = Σ_i x[i] ⊕ y[i]
where d_j is the Hamming distance of the j-th region, x[i] is the code of the j-th region, and y[i] is the code of the standard circular region.
8. The visible light image and infrared image matching method according to claim 7, wherein the total Hamming distance d is calculated as:
d = Σ_{j=1..n} w_j · d_j
CN201910447148.2A 2019-05-27 2019-05-27 Visible light image and infrared image matching method Active CN110222749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910447148.2A CN110222749B (en) 2019-05-27 2019-05-27 Visible light image and infrared image matching method


Publications (2)

Publication Number Publication Date
CN110222749A (en) 2019-09-10
CN110222749B (en) 2022-06-07

Family

ID=67818406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910447148.2A Active CN110222749B (en) 2019-05-27 2019-05-27 Visible light image and infrared image matching method

Country Status (1)

Country Link
CN (1) CN110222749B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570808A (en) * 2021-07-08 2021-10-29 Zhengzhou Haiwei Electronic Technology Co., Ltd. Wireless smoke detector based on ZYNQ7020

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400384A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Large viewing angle image matching method capable of combining region matching and point matching
CN103971385A (en) * 2014-05-27 2014-08-06 重庆大学 Detecting method for moving object in video
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
RU2557755C1 * 2014-02-25 2015-07-27 Open Joint-Stock Company "Zvezdochka Ship Repair Centre" Method for image compression during fractal coding
CN106529591A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Improved MSER image matching algorithm
CN108198157A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 Heterologous image interfusion method based on well-marked target extracted region and NSST
CN108197585A (en) * 2017-12-13 2018-06-22 北京深醒科技有限公司 Recognition algorithms and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6442152B2 * 2014-04-03 2018-12-19 Canon Inc. Image processing apparatus and image processing method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Karen P. Hollingsworth et al.; Human and Machine Performance on Periocular Biometrics Under Near-Infrared Light and Visible Light; IEEE Transactions on Information Forensics and Security; 2011-10-27; vol. 7, no. 2, pp. 588-601 *
Yin Lihua et al.; An infrared image registration algorithm based on cluster analysis; Semiconductor Optoelectronics; 2017-08-15; vol. 38, no. 4, pp. 571-576 *
He Huangkai et al.; Research on a dynamic object segmentation model based on LBP kernel density estimation; Application Research of Computers; 2012-07-31; vol. 29, no. 7, pp. 2719-2721, 2732 *
Fang Defeng; A brief discussion of the improved LBP algorithm; Modern Enterprise Education; 2013-08-23; no. 16, pp. 262-263 *

Also Published As

Publication number Publication date
CN110222749A (en) 2019-09-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant