CN107253485A - Foreign-object intrusion detection method and foreign-object intrusion detection device - Google Patents
Foreign-object intrusion detection method and foreign-object intrusion detection device
- Publication number
- CN107253485A (application number CN201710342757.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- foreign matter
- infrared
- doubtful
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B61—RAILWAYS
- B61L—GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
- B61L23/00—Control, warning or like safety means along the route or between vehicles or trains
- B61L23/04—Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
- B61L23/041—Obstacle detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
Abstract
A foreign-object intrusion detection method and a foreign-object intrusion detection device. The method comprises the steps of: acquiring an infrared image of the monitored area with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object has appeared within the monitored area of the infrared camera; when a suspected foreign object appears, focusing a laser light source and a visible-light camera on the suspected foreign object within the monitored area and using the laser light source to provide supplementary laser illumination of the suspected foreign object; acquiring a visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected-foreign-object region of the infrared image; and using the fused image to provide suspected-foreign-object information, performing feature extraction and classification on the suspected foreign object with that information, thereby achieving automatic identification of, and alarming on, the suspected foreign object. Rich, complete image information can be obtained in darkness, dense smoke, fog and cloud, and other low-visibility conditions.
Description
Technical field
The present invention relates to the field of railway operation safety detection technology, and in particular to a foreign-object intrusion detection method and a foreign-object intrusion detection device.
Background technology
With the continuous expansion of China's high-speed railway network and the rapid development of high-speed train equipment manufacturing technology, attention to the operational safety of high-speed rail is also steadily increasing. Long-term service of high-speed railway infrastructure has exposed a problem that merits attention: any person or foreign object that intrudes into the railway clearance gauge can seriously threaten the safe operation of high-speed trains and may cause serious railway accidents. Accurate and timely detection of foreign objects intruding into the track clearance is therefore key to ensuring safe rail operation. Detection methods can be divided by working principle into contact and non-contact types.
Contact measurement mainly uses protective nets, and can be divided by the type of detection net into electrical-net detection (e.g. application numbers 201210172059.X, 200910242554.1, 201210282394.5) and optical-fiber detection (e.g. application numbers 201110406903.6, 200910272765.X). Large-scale installation of contact-type protective nets along railways is difficult; moreover, because construction-period conditions are complex (for example, work during maintenance windows may itself intrude into the clearance), protective nets are inconvenient to use and, once damaged, are hard to repair promptly. This technique can only detect relatively large objects that fall onto the net; it cannot detect very thin rebar, or objects that pass through the net and fall onto the track plane, nor can it judge an object's size and position.
Non-contact detection methods include approaches based on infrared, laser, microwave and video, with infrared and laser mostly used in light-curtain schemes. For example, on sections of Spanish high-speed railway prone to foreign-object intrusion (such as rockfall at tunnel portals), a foreign-object monitoring system based on infrared light curtains has been installed; other installations place ultrasonic detectors on both sides of the track to detect foreign objects that have fallen onto it. The invention patent No. 201010230606.6 discloses a non-contact railway foreign-object intrusion detection system that builds a laser curtain wall from two-dimensional laser sensors. Both approaches can accurately detect objects passing through the detection curtain, but can do nothing about objects elsewhere in the protected space.
Video-based intrusion detection is widely used in the security field. Most such systems monitor with a single type of vision sensor, typically a visible-light camera: after a visible-light image is acquired, a region of interest is defined and image processing is used to determine whether an object lies inside or outside the region; tracking of the intruding object can also be achieved. Sehchan Oh et al., in "A Platform Surveillance Monitoring System using Image Processing for Passenger Safety in Railway Station", describe station-track foreign-object detection based on visible-light images and image processing, in which image differencing separates foreground from background and vehicles are distinguished from pedestrians by the size and shape of the foreign object. This method performs well under experimental conditions, but because visible-light images are affected by ambient illumination it works poorly at night, which compromises the accuracy and reliability of such detection systems; moreover, these systems do not consider adaptability to low-visibility conditions such as night-time.
Content of the invention
Therefore, in order to solve the above problems of contact and non-contact foreign-object intrusion detection methods and devices, and to achieve beneficial effects superior to the prior art, the present invention is provided.
According to one aspect of the invention, a foreign-object intrusion detection method is provided, the method comprising the following steps: acquiring an infrared image of the monitored area with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object has appeared within the monitored area of the infrared camera; in the case where a suspected foreign object appears, focusing a laser light source and a visible-light camera on the suspected foreign object within the monitored area and using the laser light source to provide supplementary laser illumination of the suspected foreign object; acquiring a visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected-foreign-object region of the infrared image; and using the fused image to provide suspected-foreign-object information, performing feature extraction and classification on the suspected foreign object with that information, thereby achieving automatic identification of, and alarming on, the suspected foreign object.
Further, the step of focusing the laser light source and the visible-light camera on the suspected foreign object within the monitored area comprises: a) obtaining the image point of the suspected foreign object; b) using the fixed mounting angle and focal length of the installed infrared camera, the relation between the infrared camera's coordinate system and the world coordinate system, and the pixel position of the suspected foreign object in the image obtained by the infrared camera, computing the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the computed azimuth, together with the position and attitude of the laser light source and the visible-light camera relative to the infrared camera, determining the rotation angle and pitch angle of the laser light source and the visible-light camera; d) rotating and pitching the laser light source and the visible-light camera according to those angles, so that both are focused on the suspected foreign object.
Further, the step of performing feature extraction and classification on the suspected foreign object using the suspected-foreign-object information comprises: using the image to provide the contour, texture, temperature and colour information of the suspected foreign object and, based on that contour, texture, temperature and colour information, extracting the features of the suspected foreign object and classifying those features.
Further, the step of determining whether a suspected foreign object appears within the monitored area of the infrared camera comprises: a) background extraction based on a multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame difference images, the accumulation comprising: (1) differencing the video frame by frame and comparing each difference value against a fixed threshold, pixels whose difference is below the threshold corresponding to the background region and pixels above it to the foreground target region; (2) marking the state of each pixel of the input image according to the background and foreground target regions so obtained: pixels in the foreground target region are judged foreground pixels and do not participate in the background computation, while pixels in the background region are judged background pixels and do participate; (3) taking 100 consecutive image frames and distinguishing background from foreground pixels in each by the preceding method, an accumulator with initial value 0 counts per pixel position across all frames, remaining unchanged when the pixel is judged foreground and incrementing by 1 when it is judged background; finally, dividing each pixel's accumulated grey-level sum by its accumulator value yields the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence using background subtraction.
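The multi-frame accumulation above can be sketched in a few lines of numpy. This is a minimal sketch under stated assumptions (grayscale frames, a hypothetical fixed threshold, a floor of 1 on the accumulator to avoid division by zero), not the patent's code.

```python
import numpy as np

def accumulate_background(frames, threshold=15):
    """Estimate an initial background from consecutive frames using the
    multi-frame frame-difference rule: pixels whose frame-to-frame
    difference stays below `threshold` are treated as background and
    averaged; foreground pixels are skipped for that frame pair."""
    frames = [f.astype(np.float64) for f in frames]
    gray_sum = np.zeros_like(frames[0])
    counter = np.zeros_like(frames[0])    # the per-pixel accumulator
    for prev, cur in zip(frames, frames[1:]):
        bg = np.abs(cur - prev) < threshold   # background mask for this pair
        gray_sum[bg] += cur[bg]
        counter[bg] += 1
    # Guard against pixels that were always judged foreground.
    counter = np.maximum(counter, 1)
    return gray_sum / counter

# A perfectly static scene recovers its own intensity as background.
static = [np.full((4, 4), 100.0) for _ in range(5)]
bg = accumulate_background(static)
print(bg[0, 0])  # 100.0
```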
Further, the background subtraction comprises: let the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t); the background difference image is then fd(x, y, t) = fc(x, y, t) − fb(x, y, t). Binarizing fd(x, y, t) with a suitable threshold T yields the binary foreground map of the suspected foreign object, i.e. the suspected-foreign-object target region in the image.
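A direct sketch of the formula above follows; the absolute value is an assumption added for robustness to objects darker than the background, and the threshold value is illustrative.

```python
import numpy as np

def background_difference(fc, fb, T=30):
    """fd(x, y, t) = fc(x, y, t) - fb(x, y, t), binarized with threshold T
    to give the binary foreground map of suspected foreign objects.
    (Absolute difference is used here so dark objects are also caught.)"""
    fd = np.abs(fc.astype(np.int32) - fb.astype(np.int32))
    return (fd > T).astype(np.uint8)      # 1 = suspected foreign object

fb = np.full((3, 3), 100, dtype=np.uint8)
fc = fb.copy()
fc[1, 1] = 200                            # one pixel changes strongly
out = background_difference(fc, fb)       # a single 1 at (1, 1)
print(out)
```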
Further, the step of image registration comprises: registering the infrared image with the visible-light image using their local invariant features, a local invariant feature being a feature that remains stable under geometric change, illumination change and noise interference. The image registration step further comprises: (1) SURF-based feature-point extraction and initial matching: feature points are detected and described in the infrared and visible-light images using SURF, and initial feature-point pairs are then matched using the ratio of the Euclidean distances to the nearest and second-nearest neighbours; (2) rejection of mismatched point pairs: mismatches are rejected by a three-stage progressive method in which geometric constraints derived from the camera mounting are applied first, similar-triangle matching is then used for further rejection, and fine matching is finally achieved with RANSAC; (3) solution of the geometric transformation model from match pairs accumulated over multiple frames: a single infrared/visible frame pair yields few correct match pairs, and fewer than 4 pairs are insufficient to solve the transformation model parameters; even when the number of pairs suffices for the computation, unevenly distributed feature points bias the resulting geometric transformation model, so accumulating enough correct match pairs over an image sequence and solving the geometric transformation model by least squares resolves both problems; (4) the resulting geometric transformation model is applied to the visible-light image, followed by bilinear interpolation, completing the infrared/visible registration. Image registration methods based on local invariant features mainly include SIFT, SURF and MSER; these algorithms are robust to scaling, rotation, viewpoint change and local deformation. The device according to the invention performs feature extraction and matching chiefly with the above algorithms, improves and optimizes on that basis, and performs image fusion after registration is complete.
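The nearest/second-nearest neighbour ratio test of step (1) can be sketched independently of any particular detector (SURF descriptors are simply rows of a matrix here). This is an illustrative sketch, not the patent's code; the 0.7 ratio is a conventional assumption.

```python
import numpy as np

def ratio_test_match(desc_ir, desc_vis, ratio=0.7):
    """Initial feature-point matching by the nearest / second-nearest
    neighbour ratio test on Euclidean distances. Descriptors are rows;
    returns (index_in_ir, index_in_vis) pairs that pass the test."""
    matches = []
    for i, d in enumerate(desc_ir):
        dists = np.linalg.norm(desc_vis - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:      # keep only unambiguous matches
            matches.append((i, int(order[0])))
    return matches

ir = np.array([[0.0, 0.0], [5.0, 5.0]])                 # toy descriptors
vis = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
print(ratio_test_match(ir, vis))          # [(0, 0), (1, 1)]
```

The surviving pairs would then pass through the mounting-geometry constraints, similar-triangle check and RANSAC of step (2).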
Image registration is the process of transforming two or more images acquired by different sensors, at different times or from different angles into the same coordinate system, and is the prerequisite of image fusion. At present there is no particularly mature algorithm, at home or abroad, for infrared/visible-light image registration. The present inventors note that this is mainly because infrared and visible-light images occupy different wavebands, the correlation between the images is small, and images from different sensors exhibit different nonlinear distortions. Intensity-based registration methods therefore struggle to reach the required registration accuracy. Feature-based registration methods usually extract salient features common to the image types (such as edge points or the centres of closed regions) as the reference information for registering the two images, then establish correspondences between the features and perform feature matching. However, because infrared images have relatively low resolution and blurred edges, features common to the infrared and visible-light images are hard to obtain, and registration based on generic features easily produces mismatches.
Further, fusion of the infrared image with the transformed visible-light image is achieved with a contourlet-transform image fusion method based on local energy, the image fusion step comprising: (1) applying a multi-scale, multi-directional contourlet transform to the infrared image and the registered visible-light image respectively, yielding high-frequency and low-frequency coefficients; (2) determining the fusion rules by analysing the contourlet coefficients: considering the characteristics of infrared images and the algorithm's running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy rule to the high-frequency coefficients; (3) inverse-transforming the fused contourlet coefficients to obtain the fused image.
The present inventors propose an improved SURF-based feature matching method for image registration. Because the imaging principles of infrared and visible-light images differ, the infrared image is first preprocessed; feature-point extraction, detection and matching are then carried out, with the nearest/second-nearest neighbour ratio providing the preliminary feature-point matching; the candidate correspondences are then screened with geometric constraints, similar-triangle matching is used to reject remaining outliers, and fine matching is finally achieved with RANSAC, ensuring the accuracy and stability of the final match pairs. When solving the geometric transformation, match pairs from multiple frames are pooled into one set to increase the accuracy of the solution, which is finally obtained by least squares, and the images are registered with bilinear interpolation.
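The pooled least-squares solve can be illustrated for a 6-parameter affine model (the patent does not fix the model family, so the affine choice here is an assumption; names are hypothetical). Each match pair contributes two linear equations, and `numpy.linalg.lstsq` solves the overdetermined system.

```python
import numpy as np

def solve_affine_lstsq(src, dst):
    """Solve a 6-parameter affine transform dst ~= A @ src + t by least
    squares over all correct match pairs pooled from multiple frames.
    src, dst: (N, 2) arrays of matched point coordinates."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    M[0::2, 0:2] = src; M[0::2, 2] = 1; b[0::2] = dst[:, 0]  # x equations
    M[1::2, 3:5] = src; M[1::2, 5] = 1; b[1::2] = dst[:, 1]  # y equations
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params                         # [a11, a12, tx, a21, a22, ty]

# Points related by a pure translation of (5, -3).
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([5.0, -3.0])
p = solve_affine_lstsq(src, dst)
print(np.round(p, 6))                     # ~ [1, 0, 5, 0, 1, -3]
```

With noisy matches pooled from many frames, the least-squares fit averages out localization error, which is exactly the accuracy argument made above.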
To enhance the clarity and intelligibility of the target image in the railway scene, the present inventors perform pixel-level image fusion on the registered images, enhancing the visualization of the fused image and highlighting the object content.
According to another aspect of the invention, a foreign-object intrusion detection device is proposed, the device comprising: an infrared camera, an image acquisition and processing system, a laser light source and a visible-light camera. The infrared and visible-light cameras are installed closely adjacent to each other, either side by side or one above the other, keeping their optical centres as close as possible. The waveband of the laser light source lies within the sensitive band of the visible-light camera but outside the sensitive band of the infrared camera. The infrared camera is configured to acquire an infrared image of the monitored area. The image acquisition and processing system is connected to the infrared camera and configured to receive the infrared image from it and to determine from that image whether a suspected foreign object has appeared within the monitored area of the infrared camera. The laser light source is configured, in the case where a suspected foreign object appears, to focus on the suspected foreign object within the monitored area and to provide supplementary laser illumination of it. The visible-light camera is arranged to move in linkage with the laser light source, is connected to the image acquisition and processing system, and is configured to acquire a visible-light image of the suspected foreign object and transmit it to the image acquisition and processing system. The image acquisition and processing system is configured to register and fuse the visible-light image with the suspected-foreign-object region of the infrared image, to use the fused image to provide suspected-foreign-object information, and to perform feature extraction and classification on the suspected foreign object with that information, achieving automatic identification of, and alarming on, the suspected foreign object.
Further, the image acquisition and processing system is configured to: a) obtain the image point of the suspected foreign object; b) compute the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the computed azimuth and the position and attitude of the laser light source and the visible-light camera relative to the infrared camera, determine the rotation angle and pitch angle of the laser light source and the visible-light camera; the laser light source and the visible-light camera are configured to rotate and pitch according to those angles and so focus on the suspected foreign object.
Further, the image acquisition and processing system is configured to extract the background from the infrared image and extract the suspected foreign object as follows: a) background extraction based on the multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame difference images, the accumulation comprising: (1) differencing the video frame by frame and comparing each difference value against a fixed threshold, pixels whose difference is below the threshold corresponding to the background region and pixels above it to the foreground target region; (2) marking the state of each pixel of the input image according to the background and foreground target regions so obtained: pixels in the foreground target region are judged foreground pixels and do not participate in the background computation, while pixels in the background region are judged background pixels and do participate; (3) taking 100 consecutive image frames and distinguishing background from foreground pixels in each by the preceding method, an accumulator with initial value 0 counts per pixel position across all frames, remaining unchanged when the pixel is judged foreground and incrementing by 1 when it is judged background; finally, dividing each pixel's accumulated grey-level sum by the corresponding accumulator value yields the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence using background subtraction.
Further, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame image be fc(x, y, t), the background difference image is fd(x, y, t) = fc(x, y, t) - fb(x, y, t); binarizing the background difference image fd(x, y, t) with a suitable threshold T yields the binary foreground map of the suspected foreign object, i.e. the suspected foreign-object target region in the image.
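The difference-and-threshold step just described reduces to a few array operations; a minimal sketch (the threshold T = 30 is an illustrative choice, and the absolute difference is taken so that objects darker than the background are also caught — a detail the patent text leaves open):

```python
import numpy as np

def foreground_mask(fc, fb, T=30):
    """Binarize the background difference fd = fc - fb with threshold T,
    yielding the binary foreground map of suspected foreign objects
    (1 = suspected foreign-object pixel, 0 = background)."""
    fd = np.abs(fc.astype(np.float64) - fb.astype(np.float64))
    return (fd > T).astype(np.uint8)
```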
Further, the image acquisition and processing system is configured to register the visible-light image with the suspected foreign-object region image in the infrared image by the following steps, using local invariant features of the infrared image and the visible-light image, a local invariant feature being a feature of the image that remains stable under geometric change, illumination variation, and noise interference. The image registration comprises the steps of: 1. SURF-based feature-point extraction and initial matching: feature points of the infrared image and the visible-light image are detected and described with SURF, and initial feature-point pairs are then matched using the ratio of the Euclidean distances to the nearest and second-nearest neighbours; 2. rejection of mismatched point pairs: mismatched pairs are removed by a three-stage progressive method: first, screening with geometric constraints established for the images according to the camera mounting arrangement; then, further rejection by the similar-triangle matching principle; finally, fine matching realized with RANSAC; 3. solving the geometric transformation model from matching point pairs accumulated over multiple frames: a single pair of infrared and visible-light frames yields few correct matching point pairs, and fewer than 4 pairs is insufficient to solve for the transformation-model parameters; even when the number of pairs meets the computational requirement, an uneven distribution of the feature points biases the resulting geometric transformation model; accumulating enough correct matching point pairs over a multi-frame image sequence and solving the geometric transformation model by least squares overcomes both problems; 4. applying the obtained geometric transformation model to the visible-light image and then performing bilinear interpolation completes the registration of the infrared and visible-light images.
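The initial matching of step 1 — the ratio of nearest to second-nearest Euclidean distance — can be illustrated on raw descriptor arrays (a hedged NumPy sketch; the SURF detection and description stages are omitted here, and the 0.7 ratio is the customary value from the matching literature, not one the patent specifies):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Initial feature-point matching by the nearest / second-nearest
    neighbour distance ratio. desc_a, desc_b: (N, D) arrays of feature
    descriptors, one per row; distances are Euclidean. A match (i, j)
    is kept only when the nearest neighbour is unambiguously closer
    than the runner-up. Returns a list of (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:    # unambiguous match only
            matches.append((i, int(order[0])))
    return matches
```

The pairs this yields are exactly the "initial feature-point pairs" that the three-stage rejection of step 2 then prunes.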
Further, the image acquisition and processing system is configured to fuse the infrared image with the transformed visible-light image by a contourlet-transform image fusion method based on local energy, the image fusion comprising the steps of: 1. applying a multi-scale, multi-directional contourlet transform to the infrared image and to the registered visible-light image, obtaining the high-frequency and low-frequency coefficients after the transform; 2. determining the fusion rules by analysing the coefficients after the contourlet transform: taking into account the properties of infrared imagery and the algorithm running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based rule to the high-frequency coefficients; 3. applying the inverse transform to the fused contourlet coefficients to obtain the fused image.
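The fusion rules of step 2 — weighted averaging of the low-frequency sub-band and local-energy selection for each high-frequency sub-band — can be sketched independently of the contourlet transform itself, which is assumed here to be supplied by a separate decomposition/reconstruction stage (illustrative NumPy sketch; the 3x3 energy window and the weight w_ir = 0.5 are assumptions, not values from the patent):

```python
import numpy as np

def local_energy(c, radius=1):
    """Sum of squared coefficients in a (2*radius+1)^2 window."""
    p = np.pad(c.astype(np.float64) ** 2, radius, mode="edge")
    h, w = c.shape
    e = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            e += p[dy:dy + h, dx:dx + w]
    return e

def fuse_coefficients(low_ir, low_vis, highs_ir, highs_vis, w_ir=0.5):
    """Apply the two fusion rules: weighted average on the low-frequency
    coefficients, and per-pixel selection of the high-frequency
    coefficient with the larger local energy. highs_* are lists of
    same-shaped sub-band arrays, one per scale/direction."""
    low = w_ir * low_ir + (1.0 - w_ir) * low_vis
    highs = []
    for h_ir, h_vis in zip(highs_ir, highs_vis):
        pick_ir = local_energy(h_ir) >= local_energy(h_vis)
        highs.append(np.where(pick_ir, h_ir, h_vis))
    return low, highs
```

Selecting by local energy keeps whichever source has the stronger detail (edges, contours) at each position, which is why the high-frequency rule preserves both the infrared target signature and visible-light texture.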
With the apparatus and method according to the invention, rich and complete image information can be obtained day or night, even under adverse low-visibility conditions such as darkness, smoke, mist, or cloud. The apparatus and method yield images of higher definition and stronger descriptive power: not only can the temperature information of the foreign-object target be obtained, but also rich information on appearance, contour, and colour. On the basis of overcoming the difficulty of detecting and identifying foreign objects at night and in severe weather, the kind of suspected foreign-object target can additionally be determined, raising the alarm accuracy, improving the recognition precision of the system, and enhancing its reliability and safety.
The foreign-object intrusion detection apparatus according to the invention has the following characteristics:
The infrared camera and the laser source with the integrated visible-light camera are mounted closely adjacent on the same cabinet; they may be installed side by side or stacked, and once installed their relative positions are fixed;
because the infrared camera performs video surveillance of a large scene, its attitude is fixed once it is installed, and from its fixed angle and focal length together with an image-point coordinate the azimuth of a target in real space can be calculated;
the laser source and the integrated visible-light camera may be mounted on a pan-tilt head whose rotation and pitch angles can be controlled precisely; their initial position and attitude relative to the infrared camera are determined by prior calibration;
once the azimuth of a suspected foreign-object target in the actual scene has been obtained from the video image of the infrared camera, the pan-tilt head can be rotated and pitched so that the laser source locks onto the suspected target, and the visible-light camera obtains the video image of the suspected foreign-object target.
Fusing the visible-light and infrared images presents rich contour, texture, temperature, and colour information of the suspected foreign-object target, which facilitates its classification and identification and raises the alarm accuracy.
The whole system carries out foreign-object intrusion detection as follows:
round-the-clock foreign-object detection is performed on the large-scene video images obtained by the infrared camera: the image acquisition and processing system detects suspected foreign-object targets in the infrared video by the suspected-foreign-object detection algorithm and obtains the image point of each suspected object;
once a suspected foreign-object target is detected, its azimuth under the world coordinate system is calculated from the fixed angle and focal length of the installed infrared camera, the relation between the camera coordinate system of the infrared camera and the world coordinate system, and the pixel position of the suspected object in the image obtained by the infrared camera;
from the calculated azimuth of the suspected object under the world coordinate system, together with the relative position and relative attitude of the laser source and the visible-light camera with respect to the infrared camera, the rotation angle and pitch angle of the laser source and the visible-light camera are determined;
the laser source and the visible-light camera are rotated and pitched through the determined rotation and pitch angles so that they focus on the suspected foreign object, and video images are acquired;
after the visible-light image containing the suspected target has been obtained, it is registered and fused with the suspected foreign-object region image in the large-scene infrared image, and feature extraction and classification of the suspected target are performed using the fused target information, realizing automatic identification of, and alarm on, the foreign-object target.
Other features and advantages of the present invention will be set forth in the description that follows, and will in part become apparent from the description or be learned by practice of the invention. The objects and other advantages of the invention are realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings.
To make the above objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
In order to describe the specific embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required for the embodiments or for the prior-art description are briefly introduced below. The drawings described below are some embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 shows the positional structure of the infrared camera and of the laser source with the integrated visible-light camera in one embodiment of the invention.
Fig. 2 shows a schematic view of the pan-tilt head carrying the laser source and the integrated visible-light camera in one embodiment of the invention.
Fig. 3 shows a schematic view of the camera fields of view in one embodiment of the invention.
Fig. 4 shows a schematic view of the camera imaging model in another embodiment of the invention.
Fig. 5(a) shows an example of the infrared image obtained by the camera in one embodiment of the invention.
Fig. 5(b) shows an example of the visible-light image obtained by the camera in one embodiment of the invention.
Fig. 6 shows a flow chart of the infrared and visible-light image sequence registration according to the invention.
Fig. 7(a) shows a SIFT feature-point detection result, Fig. 7(b) a SURF feature-point detection result, and Fig. 7(c) an MSER feature-point detection result.
Fig. 8(a) shows a SURF feature-point detection result, Fig. 8(b) the candidate SURF matching point pairs, Fig. 8(c) the result of screening matching pairs by geometric constraints, Fig. 8(d) the result of rejecting mismatched pairs on the structural-similarity principle, and Fig. 8(e) the RANSAC fine-matching result.
Fig. 9 shows the image fusion framework based on the contourlet transform.
Fig. 10(a) shows the infrared source image, Fig. 10(b) the visible-light source image, Fig. 10(c) the fusion result based on grey-level weighted averaging, and Fig. 10(d) the fusion result based on the contourlet transform.
Embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the invention are described below clearly and completely with reference to the drawings. The embodiments described are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
The foreign-object intrusion detection apparatus according to the invention comprises: an infrared camera 1, an image acquisition and processing system, a laser source, and a visible-light camera 2. Referring to Fig. 1, which shows the positional structure of the infrared camera 1 and of the laser source with the integrated visible-light camera 2 in one embodiment of the invention, the laser source with the integrated visible-light camera 2 and the infrared camera 1 may be placed at the same height, or may be installed in parallel on a cabinet 4.
Referring to Fig. 2, which shows the pan-tilt head 3 carrying the laser source and the integrated visible-light camera 2 in one embodiment of the invention, the head 3 comprises a pitch mechanism 31 and a rotation mechanism 32 and may use stepper motors (or other electromechanical control elements) to control the rotation and pitch of the laser source and integrated visible-light camera precisely.
The infrared camera 1 obtains the infrared image of the large scene, while the laser source with the integrated visible-light camera 2 provides supplementary laser illumination of the suspected foreign-object target and focuses on it to obtain the visible-light image.
Referring to Fig. 3, which shows the camera fields of view in one embodiment of the invention, the foreign-object intrusion detection apparatus according to the invention may be mounted on a catenary mast above the railway or on a column beside the line. As shown in Fig. 3, the infrared camera 1 obtains the video image of the large scene, its field of view being A1-A2, while the laser source with the integrated visible-light camera 2 obtains the scene image containing the suspected foreign-object target, the laser source serving as supplementary illumination, especially at night and in severe weather; its field of view is B1-B2.
Referring to Figs. 1 to 3, according to one embodiment of the present invention, the foreign-object intrusion detection apparatus comprises: an infrared camera 1, an image acquisition and processing system, a laser source 22, and a visible-light camera 21. The infrared camera is installed closely adjacent to the visible-light camera, either in parallel or stacked, so that their optical centres are as close as possible; the waveband of the laser source lies within the sensitive band of the visible-light camera but outside the sensitive band of the infrared camera. The infrared camera 1 is configured to obtain the infrared image within the monitoring range. The image acquisition and processing system is connected to the infrared camera 1 and configured to receive the infrared image from the infrared camera 1 and to determine from that image whether a suspected foreign object appears within the monitoring range of the infrared camera 1. The laser source 22 is configured, when a suspected foreign object is present, to focus on the suspected foreign object within the monitoring range and to provide laser illumination of it. The visible-light camera 21 is arranged to move in linkage with the laser source 22 and is connected to the image acquisition and processing system, and is configured to obtain the visible-light image of the suspected foreign object and transmit it to the image acquisition and processing system. The image acquisition and processing system is further configured to register and fuse the visible-light image with the suspected foreign-object region image in the infrared image, to provide suspected foreign-object information from the fused image, and to perform feature extraction and classification of the suspected object using that information, realizing automatic identification of, and alarm on, the suspected foreign object.
Preferably, the image acquisition and processing system is configured to: a) obtain the image point of the suspected foreign object; b) calculate the azimuth of the suspected foreign object in real space under the world coordinate system; c) determine the rotation and pitch angles of the laser source 22 and the visible-light camera 21 from the calculated position and azimuth together with the relative position and relative attitude of the laser source and the visible-light camera with respect to the infrared camera. The laser source 22 and the visible-light camera 21 are configured to rotate and pitch through the determined rotation and pitch angles and thereby focus on the suspected foreign object.
The image acquisition and processing system is responsible for acquiring the images of the infrared camera and performing foreign-object detection, which divides into two steps: background extraction and updating, and foreign-object target extraction. Background extraction and updating obtain the background by accumulating multi-frame frame-difference images. Preferably, the image acquisition and processing system is configured to extract the background from the infrared image and to extract the suspected foreign object as follows: a) background extraction based on the multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation comprising the following steps: 1. performing frame-by-frame differencing on the video and comparing each difference value with a fixed threshold: pixel positions whose difference value is below the threshold correspond to the background region, while pixel positions whose difference value exceeds the threshold correspond to the foreground target region; 2. marking the state of each pixel of the input image according to the background region and foreground target region so obtained: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background calculation, while pixels in the background region are judged to be background pixels and do take part in it; 3. taking 100 consecutive image frames and distinguishing the background and foreground pixels in each frame by the preceding method, and introducing an accumulator with an initial value of 0: counting over the pixels at the same position in all frames, the accumulator value is left unchanged when a pixel is judged to be a foreground pixel and incremented by 1 when it is judged to be a background pixel; finally, dividing the accumulated image grey-level sum by the corresponding accumulator value gives the current initial background, which is the extracted background. b) Foreign-object extraction based on background difference: the suspected foreign object is extracted from every frame of the video sequence by background subtraction. The background so obtained is stable and reliable and effectively eliminates the influence of slowly varying factors such as daylight illumination, providing a sound basis for extracting and judging foreign objects.
Foreign-object target detection uses background subtraction, which detects the foreign-object target by subtracting the background image from the current frame image. In a frame containing a target or foreign object, the pixel values of the region at the target's position differ markedly from those at the corresponding positions of the background image, while elsewhere, in the background region, the difference is very small. Preferably, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame image be fc(x, y, t), the background difference image is
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
Binarizing the background difference image fd(x, y, t) with a suitable threshold T yields the binary foreground map of the suspected foreign object, i.e. the suspected foreign-object target region in the image.
Preferably, the image acquisition and processing system is configured to register the visible-light image with the suspected foreign-object region image in the infrared image by the following steps, using local invariant features of the infrared image and the visible-light image, a local invariant feature being a feature of the image that remains stable under geometric change, illumination variation, and noise interference. The image registration comprises the steps of: 1. SURF-based feature-point extraction and initial matching: feature points of the infrared image and the visible-light image are detected and described with SURF, and initial feature-point pairs are then matched using the ratio of the Euclidean distances to the nearest and second-nearest neighbours; 2. rejection of mismatched point pairs: mismatched pairs are removed by a three-stage progressive method: first, screening with geometric constraints established for the images according to the camera mounting arrangement; then, further rejection by the similar-triangle matching principle; finally, fine matching realized with RANSAC; 3. solving the geometric transformation model from matching point pairs accumulated over multiple frames: a single pair of infrared and visible-light frames yields few correct matching point pairs, and fewer than 4 pairs is insufficient to solve for the transformation-model parameters; even when the number of pairs meets the computational requirement, an uneven distribution of the feature points biases the resulting geometric transformation model; accumulating enough correct matching point pairs over a multi-frame image sequence and solving the geometric transformation model by least squares overcomes both problems; 4. applying the obtained geometric transformation model to the visible-light image and then performing bilinear interpolation completes the registration of the infrared and visible-light images.
Preferably, the image acquisition and processing system is configured to fuse the infrared image with the transformed visible-light image by a contourlet-transform image fusion method based on local energy, the image fusion comprising the steps of: 1. applying a multi-scale, multi-directional contourlet transform to the infrared image and to the registered visible-light image, obtaining the high-frequency and low-frequency coefficients after the transform; 2. determining the fusion rules by analysing the coefficients after the contourlet transform: taking into account the properties of infrared imagery and the algorithm running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based rule to the high-frequency coefficients; 3. applying the inverse transform to the fused contourlet coefficients to obtain the fused image.
According to an aspect of the invention, there is provided a foreign-object intrusion detection method, the method comprising the following steps: obtaining the infrared image within the monitoring range with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object appears within the monitoring range of the infrared camera; when a suspected foreign object is present, making a laser source and a visible-light camera focus on the suspected foreign object within the monitoring range and providing laser illumination of it with the laser source; obtaining the visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected foreign-object region image in the infrared image; and providing suspected foreign-object information from the fused image and performing feature extraction and classification of the suspected object using that information, realizing automatic identification of, and alarm on, the suspected foreign object.
Preferably, the step of making the laser source and the visible-light camera focus on the suspected foreign object within the monitoring range comprises: a) obtaining the image point of the suspected foreign object; b) calculating the azimuth of the suspected foreign object in real space under the world coordinate system from the fixed angle and focal length of the installed infrared camera, the relation between the camera coordinate system of the infrared camera and the world coordinate system, and the pixel position of the suspected object in the image obtained by the infrared camera; c) determining the rotation and pitch angles of the laser source and the visible-light camera from the calculated azimuth of the suspected object in real space under the world coordinate system, together with the relative position and relative attitude of the laser source and the visible-light camera with respect to the infrared camera; d) rotating and pitching the laser source and the visible-light camera through the determined rotation and pitch angles so that the laser source and the visible-light camera focus on the suspected foreign object.
Preferably, the step of performing feature extraction and classification of the suspected foreign object using the suspected foreign-object information comprises: providing the contour, texture, temperature, and colour information of the suspected object from the image, and, on the basis of that contour, texture, temperature, and colour information, extracting the features of the suspected object and classifying those features.
Preferably, the step of determining whether a suspected foreign object appears within the monitoring range of the infrared camera comprises: a) background extraction based on the multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation comprising the following steps: 1. performing frame-by-frame differencing on the video and comparing each difference value with a fixed threshold: pixel positions whose difference value is below the threshold correspond to the background region, while pixel positions whose difference value exceeds the threshold correspond to the foreground target region; 2. marking the state of each pixel of the input image according to the background region and foreground target region so obtained: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background calculation, while pixels in the background region are judged to be background pixels and do take part in it; 3. taking 100 consecutive image frames and distinguishing the background and foreground pixels in each frame by the preceding method, and introducing an accumulator with an initial value of 0: counting over the pixels at the same position in all frames, the accumulator value is left unchanged when a pixel is judged to be a foreground pixel and incremented by 1 when it is judged to be a background pixel; finally, dividing the accumulated image grey-level sum by the corresponding accumulator value gives the current initial background, which is the extracted background. b) Foreign-object extraction based on background difference: the suspected foreign object is extracted from every frame of the video sequence by background subtraction.
Preferably, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame image be fc(x, y, t), the background difference image is fd(x, y, t) = fc(x, y, t) - fb(x, y, t); binarizing the background difference image fd(x, y, t) with a suitable threshold T yields the binary foreground map of the suspected foreign object, i.e. the suspected foreign-object target region in the image.
Preferably, the image registration comprises: registering the infrared image with the visible-light image using local invariant features of the infrared image and the visible-light image, a local invariant feature being a feature of the image that remains stable under geometric change, illumination variation, and noise interference. The image registration further comprises the steps of: 1. SURF-based feature-point extraction and initial matching: feature points of the infrared image and the visible-light image are detected and described with SURF, and initial feature-point pairs are then matched using the ratio of the Euclidean distances to the nearest and second-nearest neighbours; 2. rejection of mismatched point pairs: mismatched pairs are removed by a three-stage progressive method: first, screening with geometric constraints established for the images according to the camera mounting arrangement; then, further rejection by the similar-triangle matching principle; finally, fine matching realized with RANSAC; 3. solving the geometric transformation model from matching point pairs accumulated over multiple frames: a single pair of infrared and visible-light frames yields few correct matching point pairs, and fewer than 4 pairs is insufficient to solve for the transformation-model parameters; even when the number of pairs meets the computational requirement, an uneven distribution of the feature points biases the resulting geometric transformation model; accumulating enough correct matching point pairs over a multi-frame image sequence and solving the geometric transformation model by least squares overcomes both problems; 4. applying the obtained geometric transformation model to the visible-light image and then performing bilinear interpolation completes the registration of the infrared and visible-light images. Image registration methods based on local invariant features mainly include SIFT, SURF, and MSER; these algorithms resist scaling, rotation, viewpoint change, and local deformation well. The apparatus according to the invention performs the feature extraction and matching of the images mainly according to the above algorithms, improves and optimizes on that basis, and carries out image fusion once registration is complete.
Preferably, the fusion of the infrared image with the transformed visible-light image is realized by the contourlet-transform image fusion method based on local energy, the image fusion comprising the steps of: 1. applying a multi-scale, multi-directional contourlet transform to the infrared image and to the registered visible-light image, obtaining the high-frequency and low-frequency coefficients after the transform; 2. determining the fusion rules by analysing the coefficients after the contourlet transform: taking into account the properties of infrared imagery and the algorithm running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based rule to the high-frequency coefficients; 3. applying the inverse transform to the fused contourlet coefficients to obtain the fused image.
Once the suspected-foreign-matter target region has been obtained, the pixel coordinates (x, y) of the moving target's centre point in the horizontal and vertical directions are available. From these pixel coordinates, the angles between the ray through the object and image point and the camera's optical axis — the zenith angle α and the azimuth angle φ — can be determined. Since the pitch angle γ of the infrared camera is fixed in advance, the attitude of the ray corresponding to the image point relative to the laser light source and the integrated visible-light camera can be computed from the zenith angle α, the azimuth angle φ and the pitch angle γ. The attitude values are passed to, for example, a pan-tilt head, which adjusts its pose to lock onto the suspected foreign-matter target; the video image is then captured while the target is supplementally illuminated.
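The mapping above can be sketched with an ideal pinhole-camera model. The focal length `f`, principal point `(cx, cy)` and the pan/tilt decomposition below are illustrative assumptions, not values or formulas taken from the patent:

```python
import math

def pixel_to_angles(x, y, cx, cy, f):
    """Zenith angle alpha and azimuth phi of the ray through pixel (x, y).

    Assumes an ideal pinhole camera: (cx, cy) is the principal point and
    f the focal length, both in pixel units.
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)       # radial offset from the optical axis
    alpha = math.atan2(r, f)     # zenith angle between ray and optical axis
    phi = math.atan2(dy, dx)     # azimuth of the ray around the axis
    return alpha, phi

def pan_tilt_command(x, y, cx, cy, f, gamma):
    """One plausible mapping from pixel offsets to pan/tilt angles.

    gamma is the pre-measured pitch of the infrared camera; this
    decomposition is a sketch, not the patent's exact attitude formula.
    """
    pan = math.atan2(x - cx, f)            # horizontal offset angle
    tilt = gamma + math.atan2(y - cy, f)   # vertical offset added to pitch
    return pan, tilt
```

A target at the image centre yields zero zenith angle and leaves the head at its preset pitch; off-centre targets produce small corrective pan/tilt increments.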
1. Improved registration algorithm based on SURF
Herein, SURF is used to extract and describe image feature points, its advantages and disadvantages relative to the more typical local invariant feature extraction algorithms having been established by experiment. In the mismatched-pair rejection stage, candidate pairs are first screened with geometric constraint conditions, outliers are then further rejected with a similar-triangle matching principle, and finally fine matching with the RANSAC algorithm guarantees the accuracy and reliability of the final matched pairs. Because the number of matched pairs remaining after fine matching is small — possibly too small to solve for the geometric transformation — and in order to improve the accuracy of the geometric transformation model, the invention builds, during the solution of the geometric transformation, a multi-frame feature-point set from the final correct matched pairs of several frames in which the suspected foreign-matter target occupies different positions, and then solves the transformation model by least squares.
1.1. Algorithm overview
The imaging model of the infrared and visible-light cameras under the railway scene of the invention is shown in Fig. 4, where the infrared camera 1 and the laser-source/visible-light integrated camera 2 are installed adjacent to each other. Translation, scale, rotation and perspective distortion exist between the two images, so a perspective transformation model is adopted.
In the system, infrared camera 1 is responsible for capturing the image of the large scene, while the laser-source/visible-light integrated camera 2 captures the image containing the suspected foreign-matter target. According to the invention, the acquired infrared large-scene image and the visible-light image of the suspected-target region are registered and fused, yielding a sharper and more descriptive image from which the kind of suspected foreign matter can be determined, improving the accuracy of foreign-matter alarms.
Fig. 5 shows an example of the infrared image and the visible-light image obtained by the cameras in one embodiment of the invention.
Because the infrared camera 1 and the laser-source/visible-light integrated camera 2 are installed at different positions and their fields of view differ in size, the two images differ in scale, rotation and other transformations, so the two classes of image must be registered before they are fused. Infrared/visible registration is a multi-modality registration problem: the grey-level difference between the two images is large, so registration must rely on local invariant image features. Common methods include Harris corner matching, SIFT feature matching and SURF feature matching.
Fig. 6 shows the flow chart of the infrared and visible-light image-sequence registration scheme according to the invention. Synchronized visible/infrared video images are taken as input, and the scheme mainly comprises three modules: generation of candidate matched pairs between the heterologous images, rejection of mismatched pairs, and solution of the transformation model.
An infrared image records the radiated energy of the target, whereas a visible-light image records the object's reflection of visible light; since the two imaging principles differ, the imaging results differ greatly. The present inventors noted that the negative of an infrared image is closer in appearance to a visible-light image, and therefore first invert every pixel of the infrared image, i.e. replace each pixel value by 255 minus that value.
The algorithm is implemented as follows:
1) first invert the acquired infrared image;
2) perform feature extraction and description on the processed infrared and visible images with the SURF algorithm, then match the feature descriptors using the ratio of nearest-neighbour to second-nearest-neighbour distances;
3) after the candidate matched pairs are obtained, screen them first with the geometric constraint conditions, reject further pairs by the similar-triangle matching principle, and finally perform fine matching with the RANSAC algorithm;
4) repeat the above for several frames to form the final multi-frame feature-point set, then solve the transformation model parameters by least squares; transform the visible image to be registered and apply bilinear interpolation, completing the image registration.
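Step 1) above — forming the infrared negative — can be sketched in a few lines. This assumes an 8-bit single-channel image, as the "255 minus pixel value" rule in the text implies:

```python
import numpy as np

def negate_infrared(ir):
    """Invert an 8-bit infrared image so it resembles a visible-light image.

    Each pixel value v is replaced by 255 - v, as described in the text.
    """
    ir = np.asarray(ir, dtype=np.uint8)
    return 255 - ir

# a toy 2x2 "infrared" patch: hot (bright) areas become dark after negation
patch = np.array([[0, 64], [128, 255]], dtype=np.uint8)
negative = negate_infrared(patch)
```

Feature extraction and matching would then run on `negative` rather than the raw infrared frame.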
1.2. Generation of candidate matched pairs between heterologous images
Under the railway scene the infrared and visible images differ not only in translation, rotation, scaling and scale, but also in field-of-view size, so local invariant feature extraction algorithms are used when determining correspondences between the heterologous images. The inventors considered SIFT, SURF and MSER. In the inventors' view, the SURF algorithm borrows the simplifying approximations of SIFT while introducing integral images and box filters, and outperforms SIFT in detection accuracy, computation speed and robustness; SURF is therefore adopted for feature-point extraction and description, and candidate matched pairs are generated from the ratio of nearest-neighbour to second-nearest-neighbour distances.
1) Feature-point extraction
Keeping the source image size unchanged, the SURF algorithm builds a scale pyramid by repeatedly enlarging the box-filter template and computing filter responses on the integral image, and detects and extracts feature points at the local maxima of the approximate Hessian matrix determinant.
For a point I(x, y) in image I, its Hessian matrix at scale σ is

H(x, σ) = [ L_xx(x, σ)  L_xy(x, σ) ; L_xy(x, σ)  L_yy(x, σ) ]   (1)

In formula (1), L_xx(x, σ) is the convolution of the second-order Gaussian derivative with image I at point I(x, y), and likewise for the other entries.
The SURF algorithm replaces the Gaussian second-derivative templates with box filters, so the Hessian determinant simplifies to

det(H) = D_xx·D_yy − (0.9·D_xy)²   (2)

In formula (2), D_xx, D_xy and D_yy are the results of convolving the respective box filters with image I.
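A minimal sketch of the determinant response of formula (2), using plain central finite differences in place of SURF's box filters on an integral image (a didactic stand-in, not the SURF filter bank):

```python
import numpy as np

def hessian_response(img):
    """Approximate det(H) = Dxx*Dyy - (0.9*Dxy)^2 at interior pixels.

    Second derivatives come from central differences; real SURF evaluates
    box filters on an integral image at multiple scales.
    """
    I = np.asarray(img, dtype=float)
    # second derivatives via central differences (valid on the interior)
    Dxx = I[1:-1, 2:] - 2 * I[1:-1, 1:-1] + I[1:-1, :-2]
    Dyy = I[2:, 1:-1] - 2 * I[1:-1, 1:-1] + I[:-2, 1:-1]
    Dxy = 0.25 * (I[2:, 2:] - I[2:, :-2] - I[:-2, 2:] + I[:-2, :-2])
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```

An isolated bright pixel produces a strong positive response at its location, which is why feature points are taken at local maxima of this determinant.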
The detection and extraction of feature points — their number, their distribution, and whether the key corner points are captured — is a crucial step in image registration. To verify the superiority of SURF in feature-point extraction, it is now compared with the SIFT and MSER algorithms. The test picture is an image of the railway scene, 576×960 pixels. The detection results are shown in Fig. 7: Fig. 7(a) shows the SIFT feature detections, Fig. 7(b) the SURF feature detections, and Fig. 7(c) the MSER feature detections.
The number and distribution of the feature points extracted by the three algorithms are compared in Table 1. The SIFT algorithm extracts too many visible-image feature points, increasing the probability of mismatches. The MSER algorithm extracts too few infrared feature points, with almost none on distant building contours, which is unfavourable for global registration. The feature points detected by SURF are evenly distributed, moderate in number, and present across the whole image. SURF is therefore advantageous over the other two algorithms and provides a sound basis for the subsequent registration process.
2) Feature-point description
To guarantee rotational invariance, a reference orientation must be derived from the local image structure around each detected feature point. After computing Haar-wavelet responses on the integral image in the feature point's neighbourhood, the SURF algorithm uses a histogram to accumulate the gradient directions and magnitudes of the neighbourhood pixels; the direction with the largest accumulated Haar response — the direction of the histogram's highest peak — becomes the feature point's principal direction.
Centred on the feature point, a 20σ × 20σ patch aligned with the principal direction is divided into 4 × 4 sub-blocks. Haar templates of size 2σ are applied to each sub-block to obtain the responses dy along the principal direction and dx perpendicular to it, and these are Gaussian-weighted to strengthen robustness to geometric transformation. Finally the responses of each sub-block are accumulated into the sub-block's feature vector, as in formula (3):

v = ( Σdx, Σdy, Σ|dx|, Σ|dy| )   (3)

Each feature point is thus described by a 4 × 4 × 4 = 64-dimensional feature vector. At this point the SURF descriptor has scale and rotational invariance; normalizing the feature vector additionally gives the SURF features illumination invariance.
3) Generating matched pairs by the nearest-to-second-nearest neighbour ratio
Matched pairs are generated by comparing the feature descriptors of the two images, with Euclidean distance as the similarity measure, as in formula (4):

d_NN / d_NND < ε   (4)

For a key point in the infrared image, the two key points in the visible image closest to it in Euclidean distance are found. If the nearest distance d_NN divided by the second-nearest distance d_NND is below some ratio threshold ε, the pair of matched points is accepted. When ε is too small, the number of SURF matched pairs drops, which is unfavourable for the subsequent solution of the transformation model; when ε is too large, the number of matched pairs grows but mismatched pairs are introduced. In this experiment ε is taken as 0.8.
1.3. Rejection of mismatched pairs
In the mismatched-pair rejection stage, the wholesale rejection of mismatched pairs together with the retention of accurate pairs is crucial: it directly conditions the solution of the geometric transformation and affects the precision of the registration. The inventors first screen the matched pairs with geometric constraint conditions, then reject further pairs using image structure similarity, and finally perform fine matching with the RANSAC algorithm, thereby producing the matched pairs that take part in solving the geometric transformation.
1) Screening matched pairs with geometric constraint conditions
Because the initial matched pairs obtained by the SURF algorithm contain some obviously erroneous pairs, and in order to reduce the computation of the subsequent rejection stages while raising the probability that correct pairs are retained, the invention proposes targeted geometric constraint conditions. They follow from the relative placement of the infrared and visible-light cameras in this experiment: the two cameras are stacked closely one above the other, as shown in Fig. 4, so over a long shooting distance the image centres obtained by the infrared and visible cameras may be regarded as approximately coincident; moreover the infrared camera shoots the large-scene image P1P2, while the coverage P3P4 of the visible image is contained within the infrared image. Denote the infrared image Ir(x, y) and the visible image Iv(x, y); their centres are Ir(x_or, y_or) and Iv(x_ov, y_ov). Any matched pair Ir(x_1r, y_1r), Iv(x_1v, y_1v) must then satisfy the following geometric constraint conditions simultaneously:
1. The inclinations of the lines joining the matched points to their respective image centres are approximately equal, as in formula (5), where T is the threshold on the inclination difference; when the inclination difference exceeds T the pair is rejected.

|arctan((x_or − x_1r)/(y_or − y_1r)) − arctan((x_ov − x_1v)/(y_ov − y_1v))| < T   (5)

2. The matched points lie in the same quadrant relative to their respective image centres, i.e. the signs of the abscissa differences from the centres agree, as do the signs of the ordinate differences, as in formula (6).

(x_or − x_1r)·(x_ov − x_1v) > 0 && (y_or − y_1r)·(y_ov − y_1v) > 0   (6)

3. The distance of the matched point from the image centre in the infrared image is smaller than the distance of the corresponding matched point from the image centre in the visible image, as in formula (7):

√((x_or − x_1r)² + (y_or − y_1r)²) < √((x_ov − x_1v)² + (y_ov − y_1v)²)   (7)

A matched pair that satisfies all three geometric constraint conditions simultaneously is retained for further screening; a pair that violates any one of them is rejected outright.
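The three constraints can be checked per candidate pair as below. The inclination is computed with `atan2` of the offsets rather than the ratio form of formula (5), and the threshold value is illustrative:

```python
import math

def passes_constraints(p_ir, p_vis, c_ir, c_vis, T=0.2):
    """Check the three geometric constraints on one candidate matched pair.

    p_ir/p_vis are the matched points, c_ir/c_vis the image centres;
    T is the inclination-difference threshold (value here is illustrative).
    """
    dx_r, dy_r = p_ir[0] - c_ir[0], p_ir[1] - c_ir[1]
    dx_v, dy_v = p_vis[0] - c_vis[0], p_vis[1] - c_vis[1]
    # 1. inclination to the respective centre must nearly agree (eq. 5)
    if abs(math.atan2(dx_r, dy_r) - math.atan2(dx_v, dy_v)) >= T:
        return False
    # 2. same quadrant relative to the centre (eq. 6)
    if dx_r * dx_v <= 0 or dy_r * dy_v <= 0:
        return False
    # 3. IR point closer to its centre (eq. 7): the IR view is wider
    return math.hypot(dx_r, dy_r) < math.hypot(dx_v, dy_v)
```

Because the infrared field of view contains the visible one, a correct pair always sits closer to the centre in the infrared image, which is what condition 3 encodes.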
2) Rejecting outliers by similar-triangle matching
In the invention, a similar-triangle matching principle is mainly used to further reject the matched pairs that survive the screening. Experiments show that after screening with the geometric constraint conditions, although the proportion of accurate matched pairs has increased, many-to-one matches and crossing matches still occur. In general, the relative positions of correct matched pairs are fixed in both the reference image and the image to be registered: the triangle formed by any three correct matched points in the reference image is (approximately) similar to the triangle formed by the corresponding points in the image to be registered. On this basis, a new similar-triangle matching principle for rejecting mismatched pairs is proposed herein.
Let P and Q be the matched point sets remaining after the geometric-constraint screening. Any three points in the infrared image form a triangle ΔP_iP_jP_k (i < j < k; P_i, P_j, P_k ∈ P), whose corresponding triangle in the visible image is ΔQ_iQ_jQ_k (i < j < k; Q_i, Q_j, Q_k ∈ Q).
By the property that the three corresponding sides of similar triangles are proportional,

|P_iP_j| / |Q_iQ_j| = |P_jP_k| / |Q_jQ_k| = |P_iP_k| / |Q_iQ_k|   (8)

In formula (9), dd1 and dd2 denote the ratios between the corresponding-side ratios of adjacent sides of the triangles; because of image noise and feature-point disturbance they are only approximately equal to 1, so the judgment requires |dd1 − 1| and |dd2 − 1| to fall below a threshold rather than demanding strict equality.
Even when dd1 and dd2 satisfy this judgment, cases arise in which one of dd1, dd2 is greater than 1 and the other smaller: the absolute differences from 1 of the longer-to-shorter side ratios may meet the threshold although the ordering of the three side lengths of the two triangles does not correspond. On this basis the side lengths of the similar triangles must additionally be sorted, and the triangles are retained only when the orderings agree.
Even after this treatment, the problem of crossing matched-pair lines may remain: the ordering of the three side lengths of the similar triangles corresponds, but the position relation is flipped. To reject this case, an additional condition is introduced: the unit vector formed by two vertices of one triangle should be approximately equal to the unit vector formed by the two corresponding vertices of the other triangle, and the vectors formed by the other vertex pairs should satisfy the same relation. This prevents flipped configurations from surviving.
A conventional similar-triangle check starts from the best-matched pair (the one with the smallest Euclidean distance), verifies whether each set of three adjacent matched pairs satisfies the similar-triangle condition, and if so takes two of those pairs as datum points from which the remaining pairs are judged. This method assumes that the best-matched pair is a correct pair, which carries a certain risk, since the best-matched pair is not guaranteed to be correct.
Herein, to guarantee the reliability of the retained correct pairs, a method of traversing all triangles with accumulators is used: from the point sets P and Q of n points, every selection of three matched pairs forms a pair of triangles, and an accumulator mechanism is attached to each matched pair. Whenever a pair of triangles satisfies the similar-triangle conditions above, the accumulators of its three matched pairs are each incremented by 1. After all triangles have been traversed, the accumulator values of the pairs are compared: the larger a pair's accumulator value, the more triangles containing that pair satisfied the similar-triangle condition, and the higher the pair's reliability. Each accumulator value is then compared with a preset threshold T2; a pair whose value exceeds T2 is retained, otherwise it is rejected.
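The triangle-traversal voting scheme can be sketched as follows. For brevity this sketch checks only the side-ratio condition (the length-ordering and flip checks described above are omitted), and the tolerance and vote threshold are illustrative:

```python
import itertools
import math

def triangle_vote(P, Q, ratio_tol=0.15, min_votes=1):
    """Vote for matched pairs whose triangles are similar across two images.

    P, Q: equal-length lists of matched (x, y) points. For every triple of
    pairs, corresponding side-length ratios are compared; if the two
    ratio-of-ratios dd1, dd2 lie within ratio_tol of 1, each member pair
    gets one vote. Pairs with more than min_votes survive.
    """
    votes = [0] * len(P)
    for i, j, k in itertools.combinations(range(len(P)), 3):
        s_p = [math.dist(P[i], P[j]), math.dist(P[j], P[k]), math.dist(P[i], P[k])]
        s_q = [math.dist(Q[i], Q[j]), math.dist(Q[j], Q[k]), math.dist(Q[i], Q[k])]
        if min(s_p) == 0 or min(s_q) == 0:
            continue  # degenerate triangle
        r = [sp / sq for sp, sq in zip(s_p, s_q)]  # corresponding-side ratios
        dd1, dd2 = r[0] / r[1], r[1] / r[2]        # ratio of adjacent ratios
        if abs(dd1 - 1) < ratio_tol and abs(dd2 - 1) < ratio_tol:
            for idx in (i, j, k):
                votes[idx] += 1                    # accumulator mechanism
    return [idx for idx, v in enumerate(votes) if v > min_votes]
```

Pairs that belong to many mutually similar triangles accumulate many votes, while an outlier pair breaks the similarity of every triangle it joins and collects none.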
3) Fine matching with the RANSAC algorithm
After the geometric-constraint screening of the matched pairs and the rejection based on structure-similarity theory, the remaining matched pairs are essentially correct, but the presence of isolated mismatched pairs cannot be excluded. The RANSAC algorithm can select suitable inlier pairs within a predetermined accuracy and can tolerate data containing a large share of outliers, making it the general choice for all manner of robust estimation problems. To increase the accuracy and reliability of the correct matched pairs, the invention therefore applies the RANSAC algorithm for the final fine matching.
1.4. Solving the transformation parameters from multi-frame matched feature points
After the mismatched pairs are rejected, the final matched pairs obtained take part in the computation of the geometric transformation.
Because very few fine-matched pairs per frame survive the screening process above, two problems arise: when the count is too small, the geometric transformation cannot be computed at all; and even when the count suffices, an uneven distribution of the matched pairs may bias the computed transformation. The invention therefore collects the matched pairs from several frames of the image sequence in which the suspected foreign-matter target occupies different positions, forms them into a set, and then solves the geometric transformation.
Since the mismatched-pair rejection stage finally yields at least 3 matched pairs per frame, while computing the geometric model requires at least 4 pairs, and in order to improve the model's accuracy, this experiment uses the matched feature points of several frames with the suspected foreign-matter target at different positions. This is developed in detail below.
The infrared/visible registration model of this problem is the more complex perspective projection model; the spatial coordinate transformation can be described in the matrix form of formula (12):

[x', y', 1]ᵀ ∝ M·[x, y, 1]ᵀ,  M = [ m11 m12 m13 ; m21 m22 m23 ; m31 m32 1 ]   (12)

In formula (12), (x, y) and (x', y') are the coordinates of corresponding points in the two images, and M is the parameter matrix, the effect of whose components is listed in Table 2. The 8 parameters of M determine the coordinate transformation between the two images, so only 4 point pairs are needed to determine these 8 parameters.
The invention selects several frames of the image sequence with the suspected foreign-matter target at different positions, picks feature point pairs pts from each, and forms the cumulative feature-point set Tpts, as in formula (13):

Tpts: {pts(1), ..., pts(i), ..., pts(n)}   (13)

where n is the number of frames chosen, pts(i) are the accurate matched pairs drawn from the i-th of the n frames, and Tpts is the set formed by the feature-point pairs of all chosen frames. The transformation model parameters are then solved by least squares.
Thus the transformation matrix obtained from the matched feature points of frames 1 to n can be applied directly, without repeating the feature-point detection and matching process for every frame, saving a certain amount of computation. Once the geometric transformation parameters are obtained, the visible image to be registered is interpolated and resampled with bilinear interpolation, completing the infrared/visible image registration.
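The least-squares solution of the 8-parameter perspective model can be sketched in the standard DLT style: each point pair contributes two linear equations after normalising the bottom-right entry of M to 1. This is a generic sketch of that technique, not the patent's exact solver:

```python
import numpy as np

def solve_perspective(src, dst):
    """Least-squares fit of the 8-parameter perspective model (eq. 12).

    src, dst: (n, 2) arrays of matched points, n >= 4. With the multi-frame
    set Tpts, src/dst simply contain the pooled pairs of all chosen frames.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u*(m31*x + m32*y + 1) = m11*x + m12*y + m13, and likewise for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(m, 1.0).reshape(3, 3)

def apply_perspective(M, pts):
    """Map (n, 2) points through M with the perspective divide."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    q = pts @ M.T
    return q[:, :2] / q[:, 2:3]
```

With more than 4 pooled pairs the system is overdetermined and `lstsq` returns the least-squares estimate, which is exactly why accumulating pairs over several frames stabilises the model.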
1.5. Experimental results
Infrared and visible images of the railway scene, of size 576×960, were chosen as the registration images. Experimental conditions: Windows 7, MATLAB 2012.
Fig. 8(a) shows the SURF feature points extracted on the infrared negative and the visible image — 553 and 2766 respectively, marked with "+". Fig. 8(b) shows the 72 matched pairs obtained by the initial SURF matching; Fig. 8(c) the 12 matched pairs remaining after screening by the geometric constraint conditions; Fig. 8(d) the result of rejecting mismatched pairs by the structure-similarity principle, retaining 8 pairs; and Fig. 8(e) the 3 matched pairs obtained by RANSAC fine matching. The process shows that the algorithm of the invention, through step-by-step screening and rejection of matched pairs, steadily increases the proportion of accurate pairs, and the final feature-point matching result is correct. This demonstrates the effectiveness of the algorithm and guarantees the accuracy of the finally retained matched pairs and the reliability of the result.
Table 3 compares the registration algorithm of the invention with the traditional SIFT and SURF registration algorithms. Although traditional SIFT and SURF are highly applicable to same-source image registration, their limitations appear as soon as they are applied to multi-source registration. The invention's improvements on the basis of the SURF algorithm make the registration of infrared and visible images feasible.
2. Image fusion algorithm based on the contourlet transform
The conventional intensity-weighted-average fusion method is in effect a smoothing of the pixels: while reducing image noise, it tends to blur the edges and contours in the image to some extent, and when the grey-level difference between the fused images is large, obvious stitching traces appear, hindering both visual recognition and subsequent target-identification processing. Therefore, on the basis of the accurate registration above, the images are fused with a fusion algorithm based on multi-resolution analysis. Contourlet-transform fusion not only possesses multi-scale behaviour and good time-frequency localization but is also multi-directional, which effectively reduces the influence of registration error on fusion performance. The invention uses a contourlet-transform fusion method to improve the infrared/visible fusion effect under the railway scene, bringing it closer to the effect of human observation.
2.1. Contourlet-transform image fusion method
The contourlet transform is a multi-directional, multi-scale computational framework for discrete images, in which multi-scale analysis and directional analysis are carried out separately. First, a Laplacian pyramid transform decomposes the image over multiple resolutions to "capture" the singular points; then a directional filter bank filters the high-frequency band of each pyramid level by direction, synthesizing the singular points distributed along each direction into a single coefficient.
The image fusion framework based on the contourlet transform is shown in Fig. 9; the steps are as follows:
1) Apply the multi-scale, multi-directional contourlet transform separately to the infrared image and the registered visible image. In the contourlet transform, the LP transform first decomposes the image over multiple resolutions to capture its singular points; the DFB then decomposes the high-frequency signal of each LP scale by direction, synthesizing the singular points distributed on each direction into one coefficient. After the contourlet transform, the coefficient layout depends on the parameter nlevels given at decomposition, which determines the number of coefficient direction vectors.
2) Determine the fusion rules by analysing the coefficients after the contourlet transform. The fusion rules chiefly concern the treatment of the image's low-frequency sub-band and high-frequency sub-bands after the transform. Considering the properties of the infrared and visible images and the algorithm running time, a weighted-average rule is designed for the low-frequency sub-band and a local-energy rule for the high-frequency sub-bands.
3) Apply the inverse contourlet transform to the fused coefficients to obtain the fused image.
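The LP stage of step 1) can be sketched with naive 2× down/upsampling. Real implementations use smoothing filters before decimation; this stripped-down version only keeps the residual structure, so that reconstruction is exact:

```python
import numpy as np

def lp_decompose(img, levels=2):
    """Single-channel Laplacian pyramid with naive 2x down/upsampling.

    Returns [high_0, ..., high_{levels-1}, coarsest]; each high band is the
    residual between a level and the upsampled next-coarser level.
    """
    g = np.asarray(img, float)
    bands = []
    for _ in range(levels):
        small = g[::2, ::2]  # decimate by 2
        up = np.repeat(np.repeat(small, 2, 0), 2, 1)[:g.shape[0], :g.shape[1]]
        bands.append(g - up)  # high-frequency residual ("singular points")
        g = small
    bands.append(g)           # coarsest approximation
    return bands

def lp_reconstruct(bands):
    """Invert lp_decompose exactly by upsampling and adding residuals."""
    g = bands[-1]
    for h in reversed(bands[:-1]):
        up = np.repeat(np.repeat(g, 2, 0), 2, 1)[:h.shape[0], :h.shape[1]]
        g = up + h
    return g
```

In the full contourlet scheme, each high band produced here would additionally be split by the DFB into directional coefficients before the fusion rules are applied.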
2.2. Fusion rule based on local energy
The main purpose of infrared/visible fusion is to combine the highlighted targets of the infrared image with the scene detail of the visible image. After decomposition, the low-frequency coefficients carry most of the image's energy and reflect the basic character of the source image. Here the infrared image is the large-scene image, while after registration the visible image occupies only a part of it; the larger the infrared image's share of the low-frequency part, the more clearly the fused image reflects the overall large-scene information, so a simple weighted-average rule is used for the low-frequency sub-band.
The high-frequency part after decomposition chiefly embodies the image's detail — key features such as edges and texture — which is particularly important for rendering target information. For the high-frequency sub-bands a region-energy fusion rule is therefore adopted: rather than considering each corresponding pixel of the fused image in isolation, the local neighbourhood of each pixel taking part in the fusion is also considered. Specifically:
1) Taking two images A and B as an example, compute the local region energies E_{l,A} and E_{l,B} of the region centred at (n, m) on the corresponding decomposition layer of each image:

E_l(n, m) = Σ_{n'∈J} Σ_{m'∈K} w'(n', m') · [LP_l(n + n', m + m')]²   (15)

In the formula, E_l(n, m) is the energy of the local region centred at (n, m) on layer l of the Laplacian pyramid; LP_l is the l-th layer of the Laplacian pyramid; w'(n', m') is the weight coefficient corresponding to LP_l; J and K define the size of the local region, and n', m' range over J, K.
2) Then compute the matching degree M_{AB} of the corresponding local regions of the two images:

M_{l,AB}(n, m) = 2·Σ_{n'∈J} Σ_{m'∈K} w'(n', m')·LP_{l,A}(n + n', m + m')·LP_{l,B}(n + n', m + m') / (E_{l,A}(n, m) + E_{l,B}(n, m))   (16)

where E_{l,A} and E_{l,B} are computed by formula (15).
3) Finally, take different fusion modes according to the size of the matching degree.
When M_{l,AB}(n, m) < α (α typically takes 0.85), the correlation between the source-image coefficients is low, so choosing the coefficient with the larger region energy as the fused coefficient is the more reasonable choice:

LP_{l,F}(n, m) = LP_{l,A}(n, m) if E_{l,A}(n, m) ≥ E_{l,B}(n, m), otherwise LP_{l,B}(n, m)   (17)

When M_{l,AB}(n, m) ≥ α, the correlation between the coefficients is high, and a weighted average is the more reasonable choice:

LP_{l,F}(n, m) = W_{l,max}(n, m)·LP_{l,A}(n, m) + W_{l,min}(n, m)·LP_{l,B}(n, m) when E_{l,A}(n, m) ≥ E_{l,B}(n, m), with the weights exchanged otherwise   (18)

where

W_{l,min}(n, m) = 1/2 − 1/2·(1 − M_{l,AB}(n, m))/(1 − α),
W_{l,max}(n, m) = 1 − W_{l,min}(n, m).   (19)

Because it accounts for the correlation between neighbouring pixels, the fusion rule based on region energy reduces the sensitivity to edges and effectively reduces the wrong selection of fused pixels, significantly improving the robustness of the fusion algorithm to a certain extent and thereby improving the fusion effect.
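The local-energy rule for one high-frequency band can be sketched as follows. Uniform window weights stand in for the unspecified w', and the window size and α are illustrative:

```python
import numpy as np

def _window_sum(X, win):
    """Sum of X over a win x win neighbourhood at each pixel (edge-padded)."""
    pad = win // 2
    Xp = np.pad(X, pad, mode="edge")
    S = np.zeros_like(X)
    for dy in range(win):
        for dx in range(win):
            S += Xp[dy:dy + X.shape[0], dx:dx + X.shape[1]]
    return S

def fuse_highband(A, B, alpha=0.85, win=3):
    """Local-energy fusion of one high-frequency band (eqs. 15-19 sketch)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    EA = _window_sum(A ** 2, win)          # local energy, eq. (15)
    EB = _window_sum(B ** 2, win)
    cross = _window_sum(A * B, win)
    M = 2 * cross / (EA + EB + 1e-12)      # matching degree, eq. (16)
    pick_A = EA >= EB
    choose = np.where(pick_A, A, B)        # low correlation: larger energy wins
    w_min = 0.5 - 0.5 * (1 - M) / (1 - alpha)
    w_max = 1 - w_min                      # eq. (19)
    weighted = np.where(pick_A, w_max * A + w_min * B,
                        w_min * A + w_max * B)
    return np.where(M < alpha, choose, weighted)
```

When the two bands disagree (low matching degree), the stronger coefficient is copied through; when they agree, a weighted average leaning toward the stronger band is used, exactly as in the mode selection of step 3).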
2.3. Analysis of fusion results
The images taking part in the fusion are the infrared source image and the registered visible image, of size 576×960; the LP decomposition used in the contourlet transform has 3 levels, and the DFB direction numbers are 8-4-4.
2.3.1. Comparison and analysis of single-frame and multi-frame fusion results
If the registration and fusion of the image sequence were carried out on every frame, the amount of computation would grow enormously; moreover, single-frame registration can encounter too few matched feature points to compute the geometric transformation matrix, interrupting the running of the algorithm and affecting the subsequent transformation and fusion processing. Solving the registration geometric model from the matched feature points of moving-target images of several frames at different positions fundamentally guarantees that the algorithm runs smoothly. And because moving-target images at different positions are used, the resulting geometric transformation can be applied to the corresponding image sequence, avoiding a large amount of repeated computation while preserving the applicability of the matrix.
2.3.2. Analysis of fusion results for night images
The advantage of fusing infrared with visible-light video images is most prominent at night and under other poor-visibility conditions. Compared with the infrared image alone, the fused image carries more texture and colour detail, making the scene and the target easier to understand. Compared with the visible image alone — in which, under poor night lighting, the target becomes dim and almost merges with the background — the fused image not only makes the target contour evident but also shows the temperature information. The fused image thus reflects the real scene while enriching the target information.
2.3.3. Quality evaluation and comparison of different fusion algorithms
To show that the fusion algorithm herein achieves good fusion quality, it is now compared with the traditional intensity-weighted-average algorithm and with a wavelet-transform fusion algorithm whose fusion rules are the same as those used herein. The fusion results above are evaluated with four indices: standard deviation, information entropy, cross entropy and definition. The results are listed in Table 4:
Table 4: Comparison of image fusion evaluation results

Fusion quality | Standard deviation | Information entropy | Cross entropy | Clarity |
---|---|---|---|---|
Intensity-weighted average | 20.36 | 5.43 | 1.57 | 1.44 |
Wavelet-transform fusion | 20.53 | 5.50 | 1.61 | 1.82 |
Proposed algorithm | 23.84 | 5.66 | 1.74 | 2.70 |
As Table 4 shows, the proposed algorithm is superior on every evaluation index: it highlights the thermal characteristics of the infrared image while also preserving the detail of the visible-light image well.
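The patent does not give formulas for the four indices, but they are standard fusion-quality measures. Below is a minimal sketch (not the authors' code) using common conventions: 8-bit gray images, histogram-based Shannon entropy, KL-style cross entropy between gray-level distributions, and average gradient as the "clarity"/definition measure. Function names are ours.

```python
import numpy as np

def std_dev(img):
    """Standard deviation: larger means higher gray-level contrast."""
    return float(np.std(np.asarray(img, dtype=np.float64)))

def info_entropy(img):
    """Shannon entropy of the 8-bit gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(ref, fused, eps=1e-12):
    """KL-style cross entropy between the two gray-level distributions."""
    p, _ = np.histogram(ref, bins=256, range=(0, 256))
    q, _ = np.histogram(fused, bins=256, range=(0, 256))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log2(p / q)))

def clarity(img):
    """Average gradient: larger means sharper edges."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

All four measures are "larger is better" except cross entropy, which measures distributional difference from a reference image.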
Therefore, the present invention proposes an automatic registration and fusion algorithm for infrared and visible-light video image sequences in railway scenes. To address the low accurate-alarm rate for foreign matter caused by the poor quality of nighttime images, it exploits the complementarity and redundancy of infrared and visible-light image information, proposing an improved SURF image registration algorithm together with a Contourlet-transform fusion algorithm based on local energy. During registration, mismatched point pairs are rejected using geometric constraints and the similar-triangle matching principle, followed by fine matching with RANSAC; the transformation parameters are then solved from matched point pairs accumulated over multiple frames, which improves their general applicability while avoiding a large amount of repeated computation. Compared with the traditional SIFT and SURF registration algorithms, the proposed method achieves a higher accuracy rate. Compared with the two classic fusion algorithms, the fused image obtained by the Contourlet-based fusion method improves the standard deviation, information entropy, mutual information, and clarity by at least 16.12%, 2.91%, 8.07%, and 48.35%, respectively, which benefits both human observation and subsequent target recognition. This opens a new path toward improving the accurate-alarm rate for foreign matter in railway scenes and is of great significance for developing railway operation safety systems.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that anyone familiar with the art may still, within the technical scope disclosed by the present invention, modify or readily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions, which do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, shall all fall within the scope of protection of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (11)
1. A foreign matter intrusion detection method, characterized by comprising the following steps:
acquiring an infrared image within a monitoring range using an infrared camera and transmitting it to an image acquisition and processing system;
the image acquisition and processing system determining, from the infrared image, whether suspected foreign matter appears within the monitoring range of the infrared camera;
when suspected foreign matter appears, aiming a laser light source and a visible-light camera at the suspected foreign matter within the monitoring range and using the laser light source to provide supplementary laser illumination of the suspected foreign matter;
acquiring a visible-light image of the suspected foreign matter and transmitting it to the image acquisition and processing system;
the image acquisition and processing system performing image registration and fusion of the visible-light image with the suspected-foreign-matter region of the infrared image;
obtaining suspected-foreign-matter information from the fused image, and using the suspected-foreign-matter information to perform feature extraction and classification of the suspected foreign matter, thereby achieving automatic identification of, and alarm for, the suspected foreign matter.
2. The foreign matter intrusion detection method according to claim 1, characterized in that the step of aiming the laser light source and the visible-light camera at the suspected foreign matter within the monitoring range comprises:
a) obtaining the image point of the suspected foreign matter;
b) using the fixed mounting angle and focal length of the infrared camera, the relation between the camera coordinate system of the infrared camera and the world coordinate system, and the pixel position of the suspected foreign matter in the image acquired by the infrared camera, calculating the azimuth of the suspected foreign matter in real space under the world coordinate system;
c) using the calculated azimuth of the suspected foreign matter in real space under the world coordinate system, together with the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determining the rotation angle and pitch angle of the laser light source and the visible-light camera;
d) rotating and pitching the laser light source and the visible-light camera according to the rotation angle and pitch angle, so that the laser light source and the visible-light camera are aimed at the suspected foreign matter.
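Claim 2 does not give the projection formulas. As an illustration of step b) only, here is a minimal pinhole-camera sketch (function and parameter names are ours, not from the patent) that computes the bearing of the pixel ray in the camera frame; converting to world coordinates and to pan/tilt commands for the laser and visible-light camera would additionally apply the known mounting rotations of steps c) and d).

```python
import math

def pixel_to_pan_tilt(u, v, fx, fy, cx, cy):
    """Bearing of the ray through pixel (u, v) under a pinhole model.

    (fx, fy) are focal lengths in pixels, (cx, cy) is the principal point.
    Returns (pan, tilt) in radians in the camera frame.
    """
    pan = math.atan2(u - cx, fx)        # positive to the right
    tilt = math.atan2(-(v - cy), fy)    # positive upward (image v grows downward)
    return pan, tilt
```

A pixel at the principal point maps to (0, 0), i.e., straight along the optical axis.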
3. The foreign matter intrusion detection method according to claim 1, characterized in that the step of performing feature extraction and classification of the suspected foreign matter using the suspected-foreign-matter information comprises:
obtaining contour, texture, temperature, and color information of the suspected foreign matter from the image, and, based on the contour, texture, temperature, and color information, extracting features of the suspected foreign matter and classifying those features.
4. The foreign matter intrusion detection method according to claim 1, characterized in that the step of determining whether suspected foreign matter appears within the monitoring range of the infrared camera comprises:
a) background extraction based on a multi-frame frame-difference method
The image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation comprising the steps of:
1. performing frame-by-frame differencing on the video and comparing each difference value with a fixed threshold: pixel positions whose difference value is below the threshold correspond to the background region, and pixel positions whose difference value exceeds the threshold correspond to the foreground target region;
2. according to the obtained background region and foreground target region, marking the state of each pixel of the input image: a pixel in the foreground target region is judged to be a foreground pixel and does not participate in the background calculation, while a pixel in the background region is judged to be a background pixel and does participate in the background calculation;
3. taking 100 consecutive image frames and distinguishing the background and foreground pixels of each image by the foregoing method; introducing an accumulator with initial value 0 and counting each pixel position across all frames, leaving the accumulator value unchanged when the pixel is judged to be foreground and incrementing it by 1 when the pixel is judged to be background; finally dividing the accumulated image gray-level sum by the corresponding accumulator value to obtain the current initial background, which is the extracted background;
b) foreign matter extraction based on background difference
The suspected foreign matter is extracted from every frame of the video sequence using background subtraction.
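The accumulator-based background extraction of step a) can be sketched as follows. This is a simplified illustration assuming 8-bit grayscale frames; the function name and threshold default are ours, not from the patent.

```python
import numpy as np

def estimate_background(frames, thresh=15):
    """Initial background from consecutive frames via frame differencing.

    A pixel whose frame-to-frame absolute difference is below `thresh` is
    treated as background: its gray level is added to a running sum and its
    per-pixel accumulator is incremented. Foreground pixels leave both
    untouched. The background is the accumulated sum / accumulator count.
    """
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    gray_sum = np.zeros(frames[0].shape, dtype=np.float64)
    counter = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        bg = np.abs(cur - prev) < thresh       # background mask for this pair
        gray_sum[bg] += cur[bg]
        counter[bg] += 1
    counter = np.maximum(counter, 1)           # avoid division by zero
    return (gray_sum / counter).astype(np.uint8)
```

On a fully static scene, every pixel is counted in every frame and the estimate converges to the scene itself; moving targets are simply excluded from the average.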
5. The foreign matter intrusion detection method according to claim 4, characterized in that the background subtraction comprises:
letting the background image at time t be fb(x, y, t) and the current frame image be fc(x, y, t), the background difference image is
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
and applying a suitable threshold T to binarize the background difference image fd(x, y, t) yields the binary foreground map of the suspected foreign matter, i.e., the suspected-foreign-matter target region in the image.
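A direct reading of claim 5 in code (taking the absolute difference before thresholding, a common convention the claim leaves implicit; the function name is ours):

```python
import numpy as np

def foreground_mask(frame, background, T=30):
    """Binary foreground map: |fc - fb| thresholded at T (claim 5)."""
    fd = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (fd > T).astype(np.uint8)
```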
6. The foreign matter intrusion detection method according to claim 1, characterized in that the image registration step comprises:
registering the infrared image and the visible-light image using their local invariant features, a local invariant feature being a feature of the image that remains stable under geometric change, illumination variation, and noise interference;
the image registration step further comprises:
1. SURF-based feature point extraction and initial matching: performing feature point detection and description on the infrared image and the visible-light image with SURF, and then performing initial feature-point matching based on the ratio of the nearest-neighbor to the second-nearest-neighbor Euclidean distance;
2. rejection of mismatched point pairs: rejecting mismatched point pairs with a three-stage progressive method, in which screening is first performed using geometric constraints on the images established from the camera mounting arrangement, further rejection is then performed using the similar-triangle matching principle, and fine matching is finally achieved with RANSAC;
3. solving the geometric transformation model from point pairs accumulated over multiple image frames: a single pair of infrared and visible-light frames yields few correct matched point pairs, and fewer than 4 pairs are insufficient to solve the transformation model parameters; even when the number of matched pairs meets the calculation requirement, unevenly distributed feature points can bias the resulting geometric transformation model; accumulating enough correct matched point pairs over a multi-frame image sequence and solving the geometric transformation model by least squares overcomes these problems;
4. applying the obtained geometric transformation model to the visible-light image and then performing bilinear interpolation, thereby completing the registration of the infrared and visible-light images.
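Step 3 is the key departure from single-frame registration: the transformation model is solved by least squares over point pairs pooled from several frames. As a sketch under stated assumptions (an affine model is shown for simplicity; the patent's transformation model, requiring at least 4 point pairs, may be a full homography; the function name is ours):

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine model from accumulated matched point pairs.

    Each pair contributes two equations, u = a*x + b*y + c and
    v = d*x + e*y + f. With pairs pooled across frames the system is
    overdetermined and solved by linear least squares.
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2], A[0::2, 2] = src, 1.0   # rows for the u equations
    A[1::2, 3:5], A[1::2, 5] = src, 1.0   # rows for the v equations
    b[0::2], b[1::2] = dst[:, 0], dst[:, 1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```

Once solved, the same model can be reused for the whole image sequence, which is exactly the repeated-computation saving claimed above.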
7. The foreign matter intrusion detection method according to claim 1, characterized in that fusion of the infrared image and the transformed visible-light image is achieved with a Contourlet-transform image fusion method based on local energy, the image fusion step comprising:
1. applying a multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image respectively, to obtain the high-frequency and low-frequency coefficients of the transform;
2. determining the fusion rules by analyzing the Contourlet-transform coefficients: taking into account the properties of the infrared image and the algorithm running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based rule to the high-frequency coefficients;
3. applying the inverse transform to the fused Contourlet coefficients to obtain the fused image.
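Contourlet transforms are not part of standard numeric libraries, so the sketch below shows only the two fusion rules of step 2, applied to generic subband coefficient arrays assumed to have been produced by some multi-scale decomposition. Function names and the window size are our assumptions.

```python
import numpy as np

def local_energy(coef, win=3):
    """Sum of squared coefficients over a sliding win x win window."""
    pad = win // 2
    sq = np.pad(np.asarray(coef, dtype=np.float64) ** 2, pad)
    h, w = coef.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += sq[dy:dy + h, dx:dx + w]
    return out

def fuse_low(ir_low, vis_low, w_ir=0.5):
    """Low-frequency rule: weighted average of the approximation bands."""
    return w_ir * ir_low + (1.0 - w_ir) * vis_low

def fuse_high(ir_high, vis_high):
    """High-frequency rule: keep the coefficient with larger local energy."""
    pick_ir = local_energy(ir_high) >= local_energy(vis_high)
    return np.where(pick_ir, ir_high, vis_high)
```

The local-energy rule keeps, at each position, the detail coefficient from whichever source image has stronger structure there, which is why fused images retain both infrared contours and visible-light texture.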
8. A foreign matter intrusion detection device, characterized by comprising:
an infrared camera, an image acquisition and processing system, a laser light source, and a visible-light camera, the infrared camera being mounted close to the visible-light camera, either side by side or one above the other, so that their optical centers are as close together as possible, and the waveband of the laser light source lying within the sensitive waveband of the visible-light camera but outside the sensitive waveband of the infrared camera;
the infrared camera being configured to acquire an infrared image within a monitoring range;
the image acquisition and processing system being connected to the infrared camera and configured to receive the infrared image from the infrared camera and to determine, from the infrared image, whether suspected foreign matter appears within the monitoring range of the infrared camera;
the laser light source being configured, when the suspected foreign matter appears, to be aimed at the suspected foreign matter within the monitoring range and to provide supplementary laser illumination of the suspected foreign matter;
the visible-light camera being arranged to move in linkage with the laser light source, being connected to the image acquisition and processing system, and being configured to acquire a visible-light image of the suspected foreign matter and transmit the visible-light image to the image acquisition and processing system;
wherein the image acquisition and processing system is configured to perform image registration and fusion of the visible-light image with the suspected-foreign-matter region of the infrared image, to obtain suspected-foreign-matter information from the fused image, and to use the suspected-foreign-matter information to perform feature extraction and classification of the suspected foreign matter, thereby achieving automatic identification of, and alarm for, the suspected foreign matter.
9. The foreign matter intrusion detection device according to claim 8, characterized in that:
the image acquisition and processing system is configured to:
a) obtain the image point of the suspected foreign matter;
b) calculate the azimuth of the suspected foreign matter in real space under the world coordinate system;
c) using the calculated azimuth and the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determine the rotation angle and pitch angle of the laser light source and the visible-light camera;
and the laser light source and the visible-light camera are configured to rotate and pitch according to the rotation angle and pitch angle so as to be aimed at the suspected foreign matter.
10. The foreign matter intrusion detection device according to claim 8, characterized in that:
the image acquisition and processing system is configured to extract the background from the infrared image and to extract the suspected foreign matter as follows:
a) background extraction based on a multi-frame frame-difference method
The image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation comprising the steps of:
1. performing frame-by-frame differencing on the video and comparing each difference value with a fixed threshold: pixel positions whose difference value is below the threshold correspond to the background region, and pixel positions whose difference value exceeds the threshold correspond to the foreground target region;
2. according to the obtained background region and foreground target region, marking the state of each pixel of the input image: a pixel in the foreground target region is judged to be a foreground pixel and does not participate in the background calculation, while a pixel in the background region is judged to be a background pixel and does participate in the background calculation;
3. taking 100 consecutive image frames and distinguishing the background and foreground pixels of each image by the foregoing method; introducing an accumulator with initial value 0 and counting each pixel position across all frames, leaving the accumulator value unchanged when the pixel is judged to be foreground and incrementing it by 1 when the pixel is judged to be background; finally dividing the accumulated image gray-level sum by the corresponding accumulator value to obtain the current initial background, which is the extracted background;
b) foreign matter extraction based on background difference
The suspected foreign matter is extracted from every frame of the video sequence using background subtraction, the background subtraction comprising:
letting the background image at time t be fb(x, y, t) and the current frame image be fc(x, y, t), the background difference image is
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
and applying a suitable threshold T to binarize the background difference image fd(x, y, t) yields the binary foreground map of the suspected foreign matter, i.e., the suspected-foreign-matter target region in the image.
11. The foreign matter intrusion detection device according to claim 8, characterized in that the image acquisition and processing system is configured to perform image registration of the visible-light image with the suspected-foreign-matter region of the infrared image by the following steps:
registering the infrared image and the visible-light image using their local invariant features, a local invariant feature being a feature of the image that remains stable under geometric change, illumination variation, and noise interference;
the image registration step comprises:
1. SURF-based feature point extraction and initial matching: performing feature point detection and description on the infrared image and the visible-light image with SURF, and then performing initial feature-point matching based on the ratio of the nearest-neighbor to the second-nearest-neighbor Euclidean distance;
2. rejection of mismatched point pairs: rejecting mismatched point pairs with a three-stage progressive method, in which screening is first performed using geometric constraints on the images established from the camera mounting arrangement, further rejection is then performed using the similar-triangle matching principle, and fine matching is finally achieved with RANSAC;
3. solving the geometric transformation model from point pairs accumulated over multiple image frames: a single pair of infrared and visible-light frames yields few correct matched point pairs, and fewer than 4 pairs are insufficient to solve the transformation model parameters; even when the number of matched pairs meets the calculation requirement, unevenly distributed feature points can bias the resulting geometric transformation model; accumulating enough correct matched point pairs over a multi-frame image sequence and solving the geometric transformation model by least squares overcomes these problems;
4. applying the obtained geometric transformation model to the visible-light image and then performing bilinear interpolation, thereby completing the registration of the infrared and visible-light images;
and the image acquisition and processing system is configured to achieve fusion of the infrared image and the transformed visible-light image with a Contourlet-transform image fusion method based on local energy, the image fusion step comprising:
1. applying a multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image respectively, to obtain the high-frequency and low-frequency coefficients of the transform;
2. determining the fusion rules by analyzing the Contourlet-transform coefficients: taking into account the properties of the infrared image and the algorithm running time, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based rule to the high-frequency coefficients;
3. applying the inverse transform to the fused Contourlet coefficients to obtain the fused image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710342757.2A CN107253485B (en) | 2017-05-16 | 2017-05-16 | Foreign matter invades detection method and foreign matter invades detection device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710342757.2A CN107253485B (en) | 2017-05-16 | 2017-05-16 | Foreign matter invades detection method and foreign matter invades detection device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107253485A true CN107253485A (en) | 2017-10-17 |
CN107253485B CN107253485B (en) | 2019-07-23 |
Family
ID=60027956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710342757.2A Active CN107253485B (en) | 2017-05-16 | 2017-05-16 | Foreign matter invades detection method and foreign matter invades detection device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107253485B (en) |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108007346A (en) * | 2017-12-07 | 2018-05-08 | 西南交通大学 | One kind visualization Metro Clearance Detection |
CN108163014A (en) * | 2017-12-26 | 2018-06-15 | 郑州畅想高科股份有限公司 | A kind of engine drivers in locomotive depot Fu Zhu lookout method for early warning and device |
CN108364287A (en) * | 2018-02-11 | 2018-08-03 | 北京简易科技有限公司 | A kind of monitoring foreign bodies method, apparatus and stamping system |
CN108427922A (en) * | 2018-03-06 | 2018-08-21 | 深圳市创艺工业技术有限公司 | A kind of efficient indoor environment regulating system |
CN108491073A (en) * | 2018-03-06 | 2018-09-04 | 深圳凯达通光电科技有限公司 | A kind of good man-machine interactive system of interaction effect |
CN108482427A (en) * | 2018-02-22 | 2018-09-04 | 中车长春轨道客车股份有限公司 | A kind of contactless rail vehicle obstacle detection system and method for controlling security |
CN108537764A (en) * | 2018-03-06 | 2018-09-14 | 深圳明创自控技术有限公司 | A kind of man-machine hybrid intelligent control loop |
CN109087411A (en) * | 2018-06-04 | 2018-12-25 | 上海灵纽智能科技有限公司 | A kind of recognition of face lock based on distributed camera array |
CN109116370A (en) * | 2018-07-17 | 2019-01-01 | 北京盖博瑞尔科技发展有限公司 | A kind of object detection method and system |
CN109253688A (en) * | 2018-10-30 | 2019-01-22 | 安徽合力股份有限公司 | A kind of door frame shaking detection method of reach truck |
CN109410161A (en) * | 2018-10-09 | 2019-03-01 | 湖南源信光电科技股份有限公司 | A kind of fusion method of the infrared polarization image separated based on YUV and multiple features |
CN109466588A (en) * | 2018-12-03 | 2019-03-15 | 大连维德轨道装备有限公司 | A kind of tunnel anti-collision system for trains and method based on 3D technology |
CN109727188A (en) * | 2017-10-31 | 2019-05-07 | 比亚迪股份有限公司 | Image processing method and its device, safe driving method and its device |
CN109729256A (en) * | 2017-10-31 | 2019-05-07 | 比亚迪股份有限公司 | The control method and device of double photographic devices in vehicle |
CN109881437A (en) * | 2019-02-25 | 2019-06-14 | 珠海格力电器股份有限公司 | Inner cylinder, washings processing equipment and foreign matter detecting method |
CN110136083A (en) * | 2019-05-14 | 2019-08-16 | 深圳大学 | A kind of the base map update method and device of combination interactive mode |
CN110264466A (en) * | 2019-06-28 | 2019-09-20 | 广州市颐创信息科技有限公司 | A kind of reinforcing bar detection method based on depth convolutional neural networks |
CN110335271A (en) * | 2019-07-10 | 2019-10-15 | 浙江铁素体智能科技有限公司 | A kind of infrared detection method and device of electrical component failures |
CN110458176A (en) * | 2019-07-11 | 2019-11-15 | 中科光绘(上海)科技有限公司 | Foreign body intrusion detection method for laser foreign matter remover |
CN110570454A (en) * | 2019-07-19 | 2019-12-13 | 华瑞新智科技(北京)有限公司 | Method and device for detecting foreign matter invasion |
CN110619293A (en) * | 2019-09-06 | 2019-12-27 | 沈阳天眼智云信息科技有限公司 | Flame detection method based on binocular vision |
CN110633682A (en) * | 2019-09-19 | 2019-12-31 | 合肥英睿系统技术有限公司 | Infrared image anomaly monitoring method, device and equipment based on double-light fusion |
CN110930375A (en) * | 2019-11-13 | 2020-03-27 | 广东国地规划科技股份有限公司 | Method, system and device for monitoring land coverage change and storage medium |
CN110942458A (en) * | 2019-12-06 | 2020-03-31 | 汕头大学 | Temperature anomaly defect detection and positioning method and system |
CN111063148A (en) * | 2019-12-30 | 2020-04-24 | 神思电子技术股份有限公司 | Remote night vision target video detection method |
CN111079546A (en) * | 2019-11-22 | 2020-04-28 | 重庆师范大学 | Unmanned aerial vehicle pest detection method |
CN111680537A (en) * | 2020-03-31 | 2020-09-18 | 上海航天控制技术研究所 | Target detection method and system based on laser infrared compounding |
CN111754477A (en) * | 2020-06-19 | 2020-10-09 | 北京交通大学 | Railway perimeter foreign matter intrusion detection method based on dynamic candidate area multi-scale images |
CN111765974A (en) * | 2020-07-07 | 2020-10-13 | 中国环境科学研究院 | Wild animal observation system and method based on miniature refrigeration thermal infrared imager |
CN111856436A (en) * | 2020-07-02 | 2020-10-30 | 大连理工大学 | Combined calibration device and calibration method for multi-line laser radar and infrared camera |
CN112317962A (en) * | 2020-10-16 | 2021-02-05 | 广州黑格智造信息科技有限公司 | Marking system and method for invisible appliance production |
US20210046959A1 (en) * | 2018-02-08 | 2021-02-18 | Mitsubishi Electric Corporation | Obstacle detection device and obstacle detection method |
CN112595730A (en) * | 2020-11-13 | 2021-04-02 | 深圳供电局有限公司 | Cable breakage identification method and device and computer equipment |
CN112686107A (en) * | 2020-12-21 | 2021-04-20 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | Tunnel invading object detection method and device |
CN112810669A (en) * | 2020-07-17 | 2021-05-18 | 周慧 | Intercity train operation control platform and method |
CN113033518A (en) * | 2021-05-25 | 2021-06-25 | 北京中科闻歌科技股份有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
CN113674319A (en) * | 2021-08-23 | 2021-11-19 | 浙江大华技术股份有限公司 | Target tracking method, system, equipment and computer storage medium |
CN113784026A (en) * | 2021-08-30 | 2021-12-10 | 鹏城实验室 | Method, apparatus, device and storage medium for calculating position information based on image |
CN113869159A (en) * | 2021-09-16 | 2021-12-31 | 泰州蝶金软件有限公司 | Cloud server data management system |
CN114056385A (en) * | 2020-07-31 | 2022-02-18 | 比亚迪股份有限公司 | Train control method and device and train |
CN115035412A (en) * | 2022-06-23 | 2022-09-09 | 郑州儒慧信息技术有限责任公司 | Method for identifying contact net foreign matter |
CN116309569A (en) * | 2023-05-18 | 2023-06-23 | 中国民用航空飞行学院 | Airport environment anomaly identification system based on infrared and visible light image registration |
CN117329929A (en) * | 2023-11-03 | 2024-01-02 | 连云港市公安局 | Automatic target reporting system based on active ultraviolet light acquisition and positioning |
CN112785587B (en) * | 2021-02-04 | 2024-05-31 | 上海电气集团股份有限公司 | Foreign matter detection method, system, equipment and medium in stacking production process |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009078687A (en) * | 2007-09-26 | 2009-04-16 | Railway Technical Res Inst | Method and device for checking visibility of railroad signal |
CN104590319A (en) * | 2014-06-11 | 2015-05-06 | 北京交通大学 | Device for foreign body invasion detection and method for foreign body invasion detection |
CN205890910U (en) * | 2016-06-29 | 2017-01-18 | 南京雅信科技集团有限公司 | Limit detecting device is invaded with track foreign matter that infrared light combines to visible light |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009078687A (en) * | 2007-09-26 | 2009-04-16 | Railway Technical Res Inst | Method and device for checking visibility of railroad signal |
CN104590319A (en) * | 2014-06-11 | 2015-05-06 | 北京交通大学 | Device for foreign body invasion detection and method for foreign body invasion detection |
CN205890910U (en) * | 2016-06-29 | 2017-01-18 | 南京雅信科技集团有限公司 | Limit detecting device is invaded with track foreign matter that infrared light combines to visible light |
Non-Patent Citations (4)
Title |
---|
ZHANG Lei et al.: "Infrared and visible image fusion using the non-subsampled Contourlet trans[…] and region classification", Optics and Precision Engineering *
LI Han et al.: "Infrared and visible image registration for electrical equipment based on gray-level redundancy and the SURF algorithm", Power System Protection and Control *
LI Yinghong et al.: "Road Traffic Information Detection Technology and Application", 31 August 2013 *
CHEN Jie et al.: "Infrared and visible image registration method based on similar-triangle matching", Laser & Infrared *
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109729256B (en) * | 2017-10-31 | 2020-10-23 | 比亚迪股份有限公司 | Control method and device for double camera devices in vehicle |
WO2019085930A1 (en) * | 2017-10-31 | 2019-05-09 | 比亚迪股份有限公司 | Method and apparatus for controlling dual-camera apparatus in vehicle |
CN109729256A (en) * | 2017-10-31 | 2019-05-07 | 比亚迪股份有限公司 | The control method and device of double photographic devices in vehicle |
CN109727188A (en) * | 2017-10-31 | 2019-05-07 | 比亚迪股份有限公司 | Image processing method and its device, safe driving method and its device |
CN108007346A (en) * | 2017-12-07 | 2018-05-08 | 西南交通大学 | One kind visualization Metro Clearance Detection |
CN108163014A (en) * | 2017-12-26 | 2018-06-15 | 郑州畅想高科股份有限公司 | A kind of engine drivers in locomotive depot Fu Zhu lookout method for early warning and device |
US11845482B2 (en) * | 2018-02-08 | 2023-12-19 | Mitsubishi Electric Corporation | Obstacle detection device and obstacle detection method |
US20210046959A1 (en) * | 2018-02-08 | 2021-02-18 | Mitsubishi Electric Corporation | Obstacle detection device and obstacle detection method |
CN108364287A (en) * | 2018-02-11 | 2018-08-03 | 北京简易科技有限公司 | A kind of monitoring foreign bodies method, apparatus and stamping system |
CN108482427A (en) * | 2018-02-22 | 2018-09-04 | 中车长春轨道客车股份有限公司 | A kind of contactless rail vehicle obstacle detection system and method for controlling security |
CN108491073A (en) * | 2018-03-06 | 2018-09-04 | 深圳凯达通光电科技有限公司 | A kind of good man-machine interactive system of interaction effect |
CN108537764A (en) * | 2018-03-06 | 2018-09-14 | 深圳明创自控技术有限公司 | A kind of man-machine hybrid intelligent control loop |
CN108427922A (en) * | 2018-03-06 | 2018-08-21 | 深圳市创艺工业技术有限公司 | A kind of efficient indoor environment regulating system |
CN109087411A (en) * | 2018-06-04 | 2018-12-25 | 上海灵纽智能科技有限公司 | A kind of recognition of face lock based on distributed camera array |
CN109116370B (en) * | 2018-07-17 | 2021-10-19 | 深圳市前海腾际创新科技有限公司 | Target detection method and system |
CN109116370A (en) * | 2018-07-17 | 2019-01-01 | 北京盖博瑞尔科技发展有限公司 | A kind of object detection method and system |
CN109410161A (en) * | 2018-10-09 | 2019-03-01 | 湖南源信光电科技股份有限公司 | A kind of fusion method of the infrared polarization image separated based on YUV and multiple features |
CN109410161B (en) * | 2018-10-09 | 2020-11-13 | 湖南源信光电科技股份有限公司 | Fusion method of infrared polarization images based on YUV and multi-feature separation |
CN109253688A (en) * | 2018-10-30 | 2019-01-22 | 安徽合力股份有限公司 | Mast sway detection method for a reach truck |
CN109466588A (en) * | 2018-12-03 | 2019-03-15 | 大连维德轨道装备有限公司 | Tunnel train anti-collision system and method based on 3D technology |
CN109881437A (en) * | 2019-02-25 | 2019-06-14 | 珠海格力电器股份有限公司 | Inner drum, laundry treatment device and foreign matter detection method |
CN110136083A (en) * | 2019-05-14 | 2019-08-16 | 深圳大学 | Interactive base map updating method and device |
CN110264466A (en) * | 2019-06-28 | 2019-09-20 | 广州市颐创信息科技有限公司 | Rebar detection method based on deep convolutional neural networks |
CN110335271B (en) * | 2019-07-10 | 2021-05-25 | 浙江铁素体智能科技有限公司 | Infrared detection method and device for electrical component fault |
CN110335271A (en) * | 2019-07-10 | 2019-10-15 | 浙江铁素体智能科技有限公司 | Infrared detection method and device for electrical component faults |
CN110458176B (en) * | 2019-07-11 | 2022-11-04 | 中科光绘(上海)科技有限公司 | Foreign body intrusion detection method for laser foreign body cleaner |
CN110458176A (en) * | 2019-07-11 | 2019-11-15 | 中科光绘(上海)科技有限公司 | Foreign body intrusion detection method for laser foreign matter remover |
CN110570454B (en) * | 2019-07-19 | 2022-03-22 | 华瑞新智科技(北京)有限公司 | Method and device for detecting foreign matter invasion |
CN110570454A (en) * | 2019-07-19 | 2019-12-13 | 华瑞新智科技(北京)有限公司 | Method and device for detecting foreign matter invasion |
CN110619293A (en) * | 2019-09-06 | 2019-12-27 | 沈阳天眼智云信息科技有限公司 | Flame detection method based on binocular vision |
CN110633682B (en) * | 2019-09-19 | 2022-07-12 | 合肥英睿系统技术有限公司 | Infrared image anomaly monitoring method, device and equipment based on double-light fusion |
CN110633682A (en) * | 2019-09-19 | 2019-12-31 | 合肥英睿系统技术有限公司 | Infrared image anomaly monitoring method, device and equipment based on double-light fusion |
CN110930375A (en) * | 2019-11-13 | 2020-03-27 | 广东国地规划科技股份有限公司 | Method, system and device for monitoring land coverage change and storage medium |
CN111079546A (en) * | 2019-11-22 | 2020-04-28 | 重庆师范大学 | Unmanned aerial vehicle pest detection method |
CN111079546B (en) * | 2019-11-22 | 2022-06-07 | 重庆师范大学 | Unmanned aerial vehicle pest detection method |
CN110942458B (en) * | 2019-12-06 | 2023-05-16 | 汕头大学 | Temperature anomaly defect detection and positioning method and system |
CN110942458A (en) * | 2019-12-06 | 2020-03-31 | 汕头大学 | Temperature anomaly defect detection and positioning method and system |
CN111063148A (en) * | 2019-12-30 | 2020-04-24 | 神思电子技术股份有限公司 | Remote night vision target video detection method |
CN111680537A (en) * | 2020-03-31 | 2020-09-18 | 上海航天控制技术研究所 | Target detection method and system based on combined laser and infrared sensing |
CN111754477B (en) * | 2020-06-19 | 2024-02-09 | 北京交通大学 | Railway perimeter foreign matter intrusion detection method based on dynamic candidate area multi-scale image |
CN111754477A (en) * | 2020-06-19 | 2020-10-09 | 北京交通大学 | Railway perimeter foreign matter intrusion detection method based on dynamic candidate area multi-scale images |
CN111856436A (en) * | 2020-07-02 | 2020-10-30 | 大连理工大学 | Combined calibration device and calibration method for multi-line laser radar and infrared camera |
CN111765974B (en) * | 2020-07-07 | 2021-04-13 | 中国环境科学研究院 | Wild animal observation system and method based on miniature refrigeration thermal infrared imager |
CN111765974A (en) * | 2020-07-07 | 2020-10-13 | 中国环境科学研究院 | Wild animal observation system and method based on miniature refrigeration thermal infrared imager |
CN112810669A (en) * | 2020-07-17 | 2021-05-18 | 周慧 | Intercity train operation control platform and method |
CN114056385A (en) * | 2020-07-31 | 2022-02-18 | 比亚迪股份有限公司 | Train control method and device and train |
CN112317962A (en) * | 2020-10-16 | 2021-02-05 | 广州黑格智造信息科技有限公司 | Marking system and method for invisible appliance production |
CN112595730A (en) * | 2020-11-13 | 2021-04-02 | 深圳供电局有限公司 | Cable breakage identification method and device and computer equipment |
CN112686107A (en) * | 2020-12-21 | 2021-04-20 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | Tunnel intruding object detection method and device |
CN112785587B (en) * | 2021-02-04 | 2024-05-31 | 上海电气集团股份有限公司 | Foreign matter detection method, system, equipment and medium in stacking production process |
CN113033518A (en) * | 2021-05-25 | 2021-06-25 | 北京中科闻歌科技股份有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
CN113033518B (en) * | 2021-05-25 | 2021-08-31 | 北京中科闻歌科技股份有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
CN113674319A (en) * | 2021-08-23 | 2021-11-19 | 浙江大华技术股份有限公司 | Target tracking method, system, equipment and computer storage medium |
CN113784026A (en) * | 2021-08-30 | 2021-12-10 | 鹏城实验室 | Method, apparatus, device and storage medium for calculating position information based on image |
CN113784026B (en) * | 2021-08-30 | 2023-04-18 | 鹏城实验室 | Method, apparatus, device and storage medium for calculating position information based on image |
CN113869159A (en) * | 2021-09-16 | 2021-12-31 | 泰州蝶金软件有限公司 | Cloud server data management system |
CN115035412A (en) * | 2022-06-23 | 2022-09-09 | 郑州儒慧信息技术有限责任公司 | Method for identifying contact net foreign matter |
CN115035412B (en) * | 2022-06-23 | 2024-04-12 | 郑州儒慧信息技术有限责任公司 | Method for identifying foreign matters of overhead contact system |
CN116309569B (en) * | 2023-05-18 | 2023-08-22 | 中国民用航空飞行学院 | Airport environment anomaly identification system based on infrared and visible light image registration |
CN116309569A (en) * | 2023-05-18 | 2023-06-23 | 中国民用航空飞行学院 | Airport environment anomaly identification system based on infrared and visible light image registration |
CN117329929A (en) * | 2023-11-03 | 2024-01-02 | 连云港市公安局 | Automatic target reporting system based on active ultraviolet light acquisition and positioning |
Also Published As
Publication number | Publication date |
---|---|
CN107253485B (en) | 2019-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107253485A (en) | Foreign matter intrusion detection method and foreign matter intrusion detection device | |
CN110956094B (en) | RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network | |
Kong et al. | Detecting abandoned objects with a moving camera | |
CN103824070B (en) | Rapid pedestrian detection method based on computer vision | |
WO2018028103A1 (en) | Unmanned aerial vehicle power line inspection method based on characteristics of human vision | |
CN104063702B (en) | Three-dimensional gait recognition based on shielding recovery and partial similarity matching | |
CN102932605B (en) | Method for selecting camera combination in visual perception network | |
CN106709436A (en) | Cross-camera suspicious pedestrian target tracking system for rail transit panoramic monitoring | |
CN110175576A (en) | Visual detection method for moving vehicles combining laser point cloud data | |
CN104091171B (en) | Vehicle-mounted far-infrared pedestrian detection system and method based on local features | |
CN107564062A (en) | Pose abnormality detection method and device | |
CN106845364B (en) | Rapid automatic target detection method | |
CN110232389A (en) | Stereoscopic vision navigation method based on green crop feature extraction invariance | |
WO2004042673A2 (en) | Automatic, real time and complete identification of vehicles | |
CN106686280A (en) | Image repairing system and method thereof | |
EP2659668A1 (en) | Calibration device and method for use in a surveillance system for event detection | |
CN117036641A (en) | Road scene three-dimensional reconstruction and defect detection method based on binocular vision | |
CN106778633B (en) | Pedestrian identification method based on region segmentation | |
CN111462128A (en) | Pixel-level image segmentation system and method based on multi-modal spectral image | |
Pollard et al. | A volumetric approach to change detection in satellite images | |
CN112184604A (en) | Color image enhancement method based on image fusion | |
CN114973028B (en) | Aerial video image real-time change detection method and system | |
CN111199556A (en) | Indoor pedestrian detection and tracking method based on camera | |
CN110189375A (en) | Image steganalysis method based on monocular vision measurement | |
CN109523583A (en) | Infrared and visible light image registration method for power equipment based on feedback mechanism | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant |