CN112381748A - Terahertz and visible light image registration method and device based on texture feature points - Google Patents

Terahertz and visible light image registration method and device based on texture feature points Download PDF

Info

Publication number
CN112381748A
CN112381748A
Authority
CN
China
Prior art keywords
image
terahertz
visible light
texture feature
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011279946.8A
Other languages
Chinese (zh)
Inventor
罗贵友
俞旭辉
侯丽伟
谢巍
刘仕望
孙义兴
侯树海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Henglin Photoelectric Technology Co ltd
Original Assignee
Shanghai Henglin Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Henglin Photoelectric Technology Co ltd filed Critical Shanghai Henglin Photoelectric Technology Co ltd
Priority to CN202011279946.8A priority Critical patent/CN112381748A/en
Publication of CN112381748A publication Critical patent/CN112381748A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a terahertz and visible light image registration method and device based on texture feature points, wherein the method comprises the following steps: S1, extracting foreground images of the visible light image and the terahertz image; S2, extracting texture information from the foreground images to obtain texture feature maps; S3, extracting texture feature points in the texture feature maps and describing the feature points; S4, performing feature point matching, registration parameter calculation and image registration; S5, judging whether correction is needed: if yes, performing correction with the supervised descent method and then entering the next step; if not, entering the next step directly; and S6, fusing the terahertz image and the visible light image. By extracting texture feature points, the method reduces interference from the environment, and the similarity between the terahertz image and the visible light image can be described in the form of feature points, so that real-time registration and fusion of the visible light image and the terahertz image are realized.

Description

Terahertz and visible light image registration method and device based on texture feature points
Technical Field
The invention relates to the field of security inspection and security, in particular to a terahertz and visible light image registration method and device based on texture feature points.
Background
The information acquired by a single sensor is often one-sided and in most cases cannot meet practical requirements, so multi-sensor fusion schemes achieve good results.
A terahertz security check instrument detects hidden objects carried by an inspected person from a terahertz image, but the imaging characteristics of terahertz cause serious loss of the person's appearance features. A visible light image, conversely, rarely reveals hidden objects but contains rich appearance features. If the hidden-object information of the terahertz image is fused with the appearance information of the visible light image, the applicability of the terahertz security check instrument becomes much stronger. The terahertz detector and the camera differ in imaging principle, field of view, installation position and so on, which can cause mismatched translation, scaling, rotation and similar discrepancies between the images the two detectors acquire, so the images must be aligned with an image registration technique.
Common image registration algorithms are mainly based on features, correlation, or the transform domain, and with the development of artificial intelligence, many deep learning methods have appeared in the image registration field, among which ASLFeat performs well. Feature-based registration extracts the salient features of the image, so the computation is small, the speed is high, and robustness to image gray-level change is good; however, feature extraction and matching are sensitive steps, so a reliable feature and a robust matching method are needed. Correlation-based registration needs no feature extraction but is sensitive to gray-level transformation and is usually computationally heavy. Transform-domain registration places high demands on the overlap area of the registered images. ASLFeat is an improvement on D2-Net: it introduces dilated convolution to enlarge the receptive field, thereby enhancing the network's feature extraction ability, and proposes a new multi-scale detection mechanism that improves keypoint accuracy. Even so, its registration of heterogeneous images remains unsatisfactory, especially for image pairs as different as terahertz and visible light, and it is relatively slow, failing the real-time requirement.
Disclosure of Invention
The invention aims to provide a terahertz and visible light image registration method and device based on texture feature points so as to realize accurate and real-time registration and fusion of a terahertz image and a visible light image.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
According to an aspect of the invention, a terahertz and visible light image registration method based on texture feature points is provided, and the method comprises the following steps:
S1, extracting foreground images of the visible light image and the terahertz image;
s2, extracting texture information from the foreground image to obtain a texture feature map;
s3, extracting texture feature points in the texture feature map and describing the feature points;
s4, performing feature point matching, registration parameter calculation and image registration;
S5, judging whether correction is needed: if yes, performing correction with the supervised descent method and then entering the next step; if not, entering the next step directly;
and S6, fusing the terahertz image and the visible light image.
In one embodiment, the S1 of the method includes: performing foreground extraction on the visible light image through frequency domain processing based on the dual-tree complex wavelet transform, and processing the visible light image into a binary image; and performing foreground extraction on the terahertz image by using adaptive binarization.
In one embodiment, the S2 of the method includes: preprocessing the extracted foreground image, wherein the preprocessing comprises a morphological closing operation and smoothing; and convolving the foreground image with a Gabor template to obtain a texture feature map.
In an embodiment, the extracting of the texture feature points in the texture feature map in S3 of the method includes: extracting pixel-level corner points, and solving for sub-pixel corner points.
In one embodiment, the method adopts Shi-Tomasi corner detection operators to extract pixel-level corners and adopts a least square method to calculate sub-pixel corners.
In one embodiment, the feature point description in S3 of the method includes: counting the spatial distribution of the target contour shape with a shape context algorithm, and establishing a global shape context descriptor.
In one embodiment, the feature point matching in S4 of the method includes:
calculating the matching degree, wherein the matching-degree statistic between two feature points is computed as:

χ²(i, k) = ½ · Σ_j [H_T(i, j) − H_V(k, j)]² / [H_T(i, j) + H_V(k, j)]

wherein H_T(i) and H_V(k) respectively denote the descriptors of the i-th feature point of the terahertz image and the k-th feature point of the visible light image; the smaller the statistic, the higher the matching degree;
and performing bidirectional matching according to the matching degree, and selecting the optimal matching pair as a matching point pair.
In an embodiment, the judging whether correction is required in S5 of the method includes: judging whether correction is needed by calculating the intersection over union and/or the registration error.
In one embodiment, the S6 of the method includes: weighting and fusing the terahertz image and the visible light image; the terahertz weighting coefficient is larger in a region where a hidden object exists, and the visible weighting coefficient is larger in a region where no hidden object exists.
According to another aspect of the invention, there is also provided a terahertz and visible light image registration device based on texture feature points, including: a foreground image extraction module for extracting foreground images of the visible light image and the terahertz image; a texture feature map generation module for extracting texture information from the foreground image to obtain a texture feature map; a texture feature point extraction module for extracting texture feature points in the texture feature map and describing the feature points; an image registration module for performing feature point matching, registration parameter calculation and image registration; a correction module for judging whether correction is needed and, if so, performing correction with the supervised descent method; and a fusion module for fusing the terahertz image and the visible light image.
The embodiment of the invention has the beneficial effects that: by extracting the texture feature points, the interference caused by the environment is reduced, and meanwhile, the similarity between the terahertz image and the visible light image can be described in the form of the feature points, so that the real-time registration and fusion of the visible light image and the terahertz image are realized. Preferably, the texture feature points are described by using a shape context description method, and the descriptor of each feature point contains more information of the adjacent points, so that the anti-mismatching performance is better.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
FIG. 1 is a schematic diagram of a visible light foreground extraction result in an embodiment of the method of the present invention;
FIG. 2 is a diagram illustrating a terahertz profile extraction result in an embodiment of the method of the present invention;
FIG. 3 is a graph of texture features in an embodiment of the method of the present invention;
FIG. 4 is a schematic diagram of texture feature points in an embodiment of the method of the present invention;
FIG. 5 is a schematic diagram of the point-to-point comparison of texture feature points and contour feature points in an embodiment of the method of the present invention;
FIG. 6 is a schematic diagram of the matching of T_i and V_i2 in an embodiment of the method of the present invention;
FIG. 7 is a schematic diagram of a multi-match optimization of an embodiment of the method of the present invention;
FIG. 8 is a schematic diagram of the fusion of terahertz and visible light foreground in an embodiment of the method of the present invention;
FIG. 9 is a flow chart of a method embodiment of the present invention;
fig. 10 is a block diagram of an embodiment of the apparatus of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
As shown in fig. 9, an embodiment of the present invention provides a terahertz and visible light image registration method based on texture feature points, including the following steps:
s1, extracting foreground images of the visible light image and the terahertz image;
in a possible embodiment, S1 includes: performing foreground extraction on a visible light image through frequency domain processing on the basis of even tree complex wavelet transform, and processing the visible light image into a binary image; for details, see "a transform domain texture image feature extraction and segmentation method" of Yueyai chrysanthemum et al. In order to reduce the detail difference between the visible light image and the terahertz image, the foreground image is processed into a binary image, as shown in fig. 1. The terahertz image background is obviously different from the foreground, and the background can be removed by using self-adaptive binarization (such as OTSU), so that the effect shown in FIG. 2 can be obtained.
S2, extracting texture information from the foreground image to obtain a texture feature map;
the method specifically comprises the following steps: preprocessing the extracted foreground image, wherein the preprocessing includes closed operation and smoothing, and mainly aims to reduce the influence of fracture and burrs, and then convolving the foreground image by using a gabor template to obtain a texture feature map, as shown in fig. 3.
S3, extracting texture feature points in the texture feature map and describing the feature points;
First, pixel-level corners are extracted with the Shi-Tomasi corner detection operator, and sub-pixel corners are then computed by the least square method. The extracted texture feature points are shown in fig. 4 and fig. 5.
Although the detail information of the visible light image differs greatly from that of the terahertz image, the contour shapes are similar. The spatial distribution of the target contour shape is counted with a shape context algorithm, and a global shape context descriptor is established. The distance value and the angle value between the k-th feature point and the i-th feature point are calculated as:
d_ik = √((x_k − x_i)² + (y_k − y_i)²),  θ_ik = arctan((y_k − y_i) / (x_k − x_i))    (1)
and (3) performing global shape context description on all the characteristic points by using a formula (1) to obtain a distance matrix and an angle value matrix.
The distance matrix is normalized using equation (2), yielding distance code values in {0, 1, ..., 4}:

D_ik = min(⌊5 · d_ik / d_max⌋, 4)    (2)

where d_max denotes the distance to the point farthest from the current feature point; that farthest point is taken as the reference point.
Similarly, the angle matrix is normalized using equation (3), yielding angle code values in {1, 2, ..., 8}:

Θ_ik = min(⌈8 · θ_ik / θ_max⌉, 8)    (3)

where θ_max denotes the angle between the reference point and the current feature point.
Finally, the angle code and distance code values are combined according to equation (4) to obtain a histogram, i.e. the descriptor:

H_i(m) = #{ k ≠ i : 8 · D_ik + Θ_ik = m }    (4)
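The descriptor construction of equations (1)-(4) can be sketched in NumPy. The five distance bins and eight angle bins follow the description above, but the exact quantization boundaries and the per-point normalization are assumptions:

```python
import numpy as np

def shape_context(points, n_dist=5, n_ang=8):
    """Global shape-context descriptor for every point: a histogram over
    quantized distances and angles to all other points."""
    pts = np.asarray(points, float)
    n = len(pts)
    diff = pts[None, :] - pts[:, None]
    d = np.linalg.norm(diff, axis=-1)                       # distance matrix, eq. (1)
    ang = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    d_max = d.max(axis=1, keepdims=True)                    # farthest point per row
    d_code = np.minimum((n_dist * d / d_max).astype(int), n_dist - 1)
    a_code = np.minimum((n_ang * ang / (2 * np.pi)).astype(int), n_ang - 1)
    bins = d_code * n_ang + a_code                          # combined code, eq. (4)
    hists = np.zeros((n, n_dist * n_ang), int)
    for i in range(n):
        others = np.arange(n) != i                          # exclude the point itself
        hists[i] = np.bincount(bins[i][others], minlength=n_dist * n_ang)
    return hists
```

Each row of the result is one feature point's 40-bin histogram over its relations to the other points.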
S4, performing feature point matching, registration parameter calculation and image registration;
s4.1, calculation of matching degree
The basis of matching, i.e. the matching degree, must be calculated before feature matching. The χ² statistic measures the matching degree between two feature points, as in equation (5):

χ²(i, k) = ½ · Σ_j [H_T(i, j) − H_V(k, j)]² / [H_T(i, j) + H_V(k, j)]    (5)
wherein H_T(i) and H_V(k) respectively denote the descriptors of the i-th terahertz feature point and the k-th visible light feature point; the smaller the value of equation (5), the higher the matching degree.
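Equation (5) in code, skipping bins that are empty in both histograms to avoid division by zero:

```python
import numpy as np

def chi2_cost(h1, h2):
    """Chi-squared matching statistic between two shape-context histograms
    (eq. (5)); smaller means a better match."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    denom = h1 + h2
    valid = denom > 0            # skip bins empty in both histograms
    return 0.5 * np.sum((h1[valid] - h2[valid]) ** 2 / denom[valid])
```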
S4.2, bidirectional matching
The matching degree is computed between every feature point in the terahertz image and every feature point in the visible light image, so each feature point has a matching-degree vector. The vectors are sorted in ascending order, and a threshold sets the number m of matching candidates. For a feature point K, its m candidate points are checked in turn for mutual candidacy: if K and a candidate are each among the other's candidates, they form a matching pair. If a many-to-one situation arises, the best pair is selected as the matching point pair. As shown in fig. 7, when m > 2 both T_i and T_j in the terahertz image match V_i2 in the visible light image, but the matching vector of V_i2 shows that the matching degree of T_i and V_i2 is higher than that of T_j and V_i2, so the resulting matching pair is T_i ↔ V_i2.
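A sketch of the bidirectional matching on a precomputed cost matrix; the function name and the dictionary-based tie-breaking are assumptions, but the mutual-candidacy rule and the best-pair resolution of many-to-one conflicts follow the text:

```python
import numpy as np

def mutual_matches(cost, m=3):
    """Bidirectional matching on a cost matrix (rows: terahertz points,
    cols: visible-light points). A pair matches only when each point is
    among the other's m lowest-cost candidates; many-to-one conflicts are
    resolved by keeping the cheapest pair per visible-light point."""
    cost = np.asarray(cost, float)
    cand_t = np.argsort(cost, axis=1)[:, :m]      # candidates of each THz point
    cand_v = np.argsort(cost, axis=0)[:m, :]      # candidates of each visible point
    best = {}
    for i in range(cost.shape[0]):
        for k in cand_t[i]:
            if i in cand_v[:, k]:                 # mutual candidacy
                if k not in best or cost[i, k] < cost[best[k], k]:
                    best[k] = i                   # keep the better THz point
    return [(i, k) for k, i in best.items()]
```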
S4.3, registration parameter calculation
The RANSAC algorithm is adopted for registration parameter calculation and feature point screening, to reduce the influence of matching errors. In practice, the RANSAC step can be carried out with OpenCV's findHomography() function to obtain the transformation matrix H.
S4.4, image registration
Image registration multiplies the image to be registered by the registration matrix, i.e. performs a perspective transformation, to obtain the registered image. Let a point of the image to be registered have coordinates (x_0, y_0) and let its registered coordinates be (x, y); registration is then achieved by equation (6), where H is the registration matrix:

[u, v, w] = [x_0, y_0, 1] · H,  x = u / w,  y = v / w    (6)
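Equation (6) worked through by hand with an illustrative registration matrix; note the row-vector convention used here (OpenCV's `warpPerspective` uses the transposed, column-vector convention):

```python
import numpy as np

# Illustrative registration matrix in the row-vector convention of eq. (6):
# a 0.5x scale with a (20, 30) translation in the last row.
H = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [20., 30., 1.0]])

x0, y0 = 100.0, 60.0
u, v, w = np.array([x0, y0, 1.0]) @ H   # [u, v, w] = [x0, y0, 1] H
x, y = u / w, v / w                     # de-homogenize to get (x, y)
```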
S5, judging whether correction is needed, if yes, carrying out correction by using a supervision descending method, and then entering the next step, if not, directly entering the next step;
s5.1, registration evaluation
There are two registration evaluation indexes: the intersection over union (IOU) of a common target in the two images, and the registration error ML. Taking a human body as an example, the visible light and terahertz images are segmented to extract the body, which is then binarized so that the IOU can be computed. As shown in panel B of fig. 8, the fused terahertz and visible light foregrounds do not overlap completely because clothing is present in the visible light image; the white region (gray value 255) is the intersection of the two foregrounds and the gray region (gray value 127) the non-overlapping part. The intersection of the foregrounds is shown in panel C of fig. 8 and the union in panel D. With the intersection and union in hand, the IOU is computed by equation (7):
IOU = p_T∩V / (p_T + p_V − p_T∩V)    (7)

wherein p_T denotes the number of foreground pixels of the terahertz image, p_V the number of foreground pixels of the visible light image, and p_T∩V the number of pixels in the intersection of the two foregrounds.
The registration error is obtained from equation (8), where the pixel coordinate to be registered is (x_i0, y_i0), the corresponding registered reference coordinate is (x_i, y_i), the transformation matrix is the H of equation (6), and n foreground pixels participate in the evaluation:

ML = (1/n) · Σ_{i=1..n} √((x_i − x'_i)² + (y_i − y'_i)²)    (8)

where (x'_i, y'_i) is the result of transforming (x_i0, y_i0) by H.
During registration evaluation, the IOU and ML can be combined into a comprehensive index, or the individual indexes can be applied in sequence; registration is considered acceptable if and only if both indexes meet their requirements. A larger IOU and a smaller ML both indicate better registration performance, e.g. IOU greater than 80% and ML less than 5.
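The two evaluation indexes as small helpers; the mask and point values below are synthetic, and the reference points stand in for the ground-truth positions of equation (8):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Equation (7): intersection over union of two boolean foreground masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def registration_error(registered, reference):
    """Equation (8): mean Euclidean distance between registered points and
    their reference positions."""
    diff = np.asarray(registered, float) - np.asarray(reference, float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Synthetic example: one foreground fully contained in the other.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # 36 pixels
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True    # 24 pixels, inside a
```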
S5.2, correction
The correction can adopt a face key point alignment algorithm, such as TCDCN or SDM (the supervised descent method).
And S6, fusing the terahertz image and the visible light image.
To clearly show both the pedestrian and any carried hidden object, the terahertz and visible light images are fused with a weighting scheme whose coefficients depend on the hidden-object detection result: where a hidden object is present the terahertz weight is larger, and where none is present the visible light weight is larger, as in equation (9):

F(x, y) = w_T(x, y) · T(x, y) + (1 − w_T(x, y)) · V(x, y)    (9)

wherein T and V denote the terahertz and visible light images and w_T the position-dependent terahertz weighting coefficient.
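A sketch of equation (9)'s weighted fusion; the 0.8/0.2 weights and the hidden-object mask below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Stand-in terahertz and visible-light intensity images.
thz = np.full((100, 100), 200.0)
vis = np.full((100, 100), 50.0)

# Hypothetical hidden-object detection result.
hidden = np.zeros((100, 100), bool)
hidden[40:60, 40:60] = True

# Terahertz dominates inside the detected region, visible light elsewhere.
w_thz = np.where(hidden, 0.8, 0.2)
fused = w_thz * thz + (1.0 - w_thz) * vis
```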
corresponding to the above method, an embodiment of the present invention further provides a terahertz and visible light image registration apparatus based on texture feature points, as shown in fig. 10, including:
a foreground image extraction module 201, configured to extract foreground images of the visible light image and the terahertz image;
the texture feature map generating module 202 is configured to extract texture information from the foreground image to obtain a texture feature map;
the texture feature point extraction module 203 is used for extracting texture feature points in the texture feature map and describing the feature points;
an image registration module 204, configured to perform feature point matching, registration parameter calculation, and image registration;
a correction module 205, configured to determine whether correction is needed and, if so, to perform correction with the supervised descent method;
and the fusion module 206 is used for fusing the terahertz image and the visible light image.
In summary, the present invention extracts the foreground through frequency domain processing to reduce interference from the environment. Extracting feature points of the texture features further reduces environmental interference and describes the similarity of the terahertz and visible light images in the form of feature points. Because each texture feature point is described with the shape context method, its descriptor carries information about its neighboring points, giving better resistance to mismatches than contour feature points. For a single feature point this is equivalent to adding more positioning information, and as shown in fig. 5, the descriptor is more discriminative.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is only a preferred example of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (10)

1. A terahertz and visible light image registration method based on texture feature points is characterized by comprising the following steps:
S1, extracting foreground images of the visible light image and the terahertz image;
s2, extracting texture information from the foreground image to obtain a texture feature map;
s3, extracting texture feature points in the texture feature map and describing the feature points;
s4, performing feature point matching, registration parameter calculation and image registration;
S5, judging whether correction is needed: if yes, performing correction with the supervised descent method and then entering the next step; if not, entering the next step directly;
and S6, fusing the terahertz image and the visible light image.
2. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the S1 includes: performing foreground extraction on the visible light image through frequency domain processing based on the dual-tree complex wavelet transform, and processing the visible light image into a binary image; and performing foreground extraction on the terahertz image by using adaptive binarization.
3. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the S2 comprises: preprocessing the extracted foreground image, wherein the preprocessing comprises a morphological closing operation and smoothing; and convolving the foreground image with a Gabor template to obtain a texture feature map.
4. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the extracting of texture feature points in the texture feature map in S3 includes: extracting pixel-level corner points, and solving for sub-pixel corner points.
5. The terahertz and visible light image registration method based on texture feature points as claimed in claim 4, wherein the pixel-level corner points are extracted by using a Shi-Tomasi corner point detection operator, and the sub-pixel corner points are calculated by using a least square method.
6. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the feature point description in S3 includes: counting the spatial distribution of the target contour shape with a shape context algorithm, and establishing a global shape context descriptor.
7. The terahertz and visible light image registration method based on texture feature points as claimed in claim 6, wherein the feature point matching in S4 includes:
calculating the matching degree, wherein the matching-degree statistic between two feature points is computed as:

χ²(i, k) = ½ · Σ_j [H_T(i, j) − H_V(k, j)]² / [H_T(i, j) + H_V(k, j)]

wherein H_T(i) and H_V(k) respectively denote the descriptors of the i-th feature point of the terahertz image and the k-th feature point of the visible light image; the smaller the statistic, the higher the matching degree;
and performing bidirectional matching according to the matching degree, and selecting the optimal matching pair as a matching point pair.
8. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the judging in S5 of whether correction is required includes: judging whether correction is needed by calculating the intersection over union and/or the registration error.
9. The terahertz and visible light image registration method based on texture feature points as claimed in claim 1, wherein the S6 includes: weighting and fusing the terahertz image and the visible light image; the terahertz weighting coefficient is larger in a region where a hidden object exists, and the visible light weighting coefficient is larger in a region where no hidden object exists.
10. A terahertz and visible light image registration device based on texture feature points is characterized by comprising:
the foreground image extraction module is used for extracting foreground images of the visible light image and the terahertz image;
the texture feature map generation module is used for extracting texture information from the foreground image to obtain a texture feature map;
the texture feature point extraction module is used for extracting texture feature points in the texture feature map and describing the feature points;
the image registration module is used for carrying out feature point matching, registration parameter calculation and image registration;
the correction module is used for judging whether correction is needed, and if so, performing correction by using a supervised descent method;
and the fusion module is used for fusing the terahertz image and the visible light image.
CN202011279946.8A 2020-11-16 2020-11-16 Terahertz and visible light image registration method and device based on texture feature points Pending CN112381748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279946.8A CN112381748A (en) 2020-11-16 2020-11-16 Terahertz and visible light image registration method and device based on texture feature points


Publications (1)

Publication Number Publication Date
CN112381748A true CN112381748A (en) 2021-02-19

Family

ID=74585535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279946.8A Pending CN112381748A (en) 2020-11-16 2020-11-16 Terahertz and visible light image registration method and device based on texture feature points

Country Status (1)

Country Link
CN (1) CN112381748A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
CN107301661A (en) * 2017-07-10 2017-10-27 中国科学院遥感与数字地球研究所 High-resolution remote sensing image method for registering based on edge point feature
CN108846823A (en) * 2018-06-22 2018-11-20 西安天和防务技术股份有限公司 A kind of fusion method of terahertz image and visible images
CN108876749A (en) * 2018-07-02 2018-11-23 南京汇川工业视觉技术开发有限公司 A kind of lens distortion calibration method of robust
CN109544521A (en) * 2018-11-12 2019-03-29 北京航空航天大学 The method for registering of passive millimeter wave image and visible images in a kind of human body safety check
CN110796691A (en) * 2018-08-03 2020-02-14 中国科学院沈阳自动化研究所 Heterogeneous image registration method based on shape context and HOG characteristics
US20200226413A1 (en) * 2017-08-31 2020-07-16 Southwest Jiaotong University Fast and robust multimodal remote sensing images matching method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUANMENG ZHAO et al.: "Terahertz/Visible Dual-band Image Fusion Based on Hybrid Principal Component Analysis", Journal of Physics: Conference Series, vol. 1187, no. 04, pages 1-6 *
SUN XINGLONG et al.: "Automatic registration of infrared and visible videos using contour feature matching", Optics and Precision Engineering, vol. 28, no. 05, pages 1140-1151 *
YUE AIJU et al.: "A transform-domain method for texture image feature extraction and segmentation", Computer Engineering and Applications, no. 37, pages 50-52 *

Similar Documents

Publication Publication Date Title
Liu et al. A detection and recognition system of pointer meters in substations based on computer vision
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
Nayar et al. Computing reflectance ratios from an image
CN111160291B (en) Human eye detection method based on depth information and CNN
Yuan et al. Combining maps and street level images for building height and facade estimation
CN108550165A (en) A kind of image matching method based on local invariant feature
CN104318559A (en) Quick feature point detecting method for video image matching
KR101753360B1 (en) A feature matching method which is robust to the viewpoint change
CN113996500A (en) Intelligent dispensing identification system based on visual dispensing robot
JP2010157093A (en) Motion estimation device and program
JP2009163682A (en) Image discrimination device and program
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN111311596A (en) Remote sensing image change detection method based on improved LBP (local binary pattern) characteristics
CN114022845A (en) Real-time detection method and computer readable medium for electrician insulating gloves
CN110910497B (en) Method and system for realizing augmented reality map
CN112381747A (en) Terahertz and visible light image registration method and device based on contour feature points
CN109785318B (en) Remote sensing image change detection method based on facial line primitive association constraint
KR101357581B1 (en) A Method of Detecting Human Skin Region Utilizing Depth Information
CN107330436B (en) Scale criterion-based panoramic image SIFT optimization method
CN115984712A (en) Multi-scale feature-based remote sensing image small target detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination