CN107253485B - Foreign matter intrusion detection method and foreign matter intrusion detection device - Google Patents


Info

Publication number
CN107253485B
CN107253485B (application CN201710342757.2A)
Authority
CN
China
Prior art keywords
image
foreign matter
infrared
suspected
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710342757.2A
Other languages
Chinese (zh)
Other versions
CN107253485A (en)
Inventor
王尧
余祖俊
郭保青
朱力强
宁滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN201710342757.2A
Publication of CN107253485A
Application granted
Publication of CN107253485B
Legal status: Active
Anticipated expiration


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00Control, warning, or like safety means along the route or between vehicles or vehicle trains
    • B61L23/04Control, warning, or like safety means along the route or between vehicles or vehicle trains for monitoring the mechanical state of the route
    • B61L23/041Obstacle detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing

Abstract

A foreign matter intrusion detection method and a foreign matter intrusion detection device. The method includes the steps of: acquiring an infrared image of the monitored area with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object has appeared within the monitoring range of the infrared camera; when a suspected foreign object appears, aiming a laser light source and a visible-light camera at the suspected foreign object within the monitoring range and illuminating it with supplementary laser light from the laser light source; acquiring a visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected-foreign-object region of the infrared image; and obtaining suspected-foreign-object information from the fused image, performing feature extraction and classification on the suspected foreign object using that information, and thereby automatically identifying the suspected foreign object and raising an alarm. Rich and complete image information can be obtained even in darkness, smoke, mist, cloud or low visibility.

Description

Foreign matter intrusion detection method and foreign matter intrusion detection device
Technical field
The present invention relates to the field of railway operation safety detection, and in particular to foreign matter intrusion detection methods and foreign matter intrusion detection devices.
Background art
With the continuing expansion of China's high-speed railway network and the rapid development of high-speed train manufacturing technology, attention to the operational safety of high-speed rail keeps increasing. Problems deserving attention have also been exposed during the long service life of high-speed railway infrastructure: any person or foreign object that intrudes into the railway clearance during operation seriously threatens the safe operation of the high-speed railway and can cause serious railway accidents. Accurate and timely detection of foreign objects intruding into the track clearance is therefore key to guaranteeing safe rail operation. By detection principle, existing approaches can be divided into contact and non-contact types.
Contact detection mainly uses protective nets and, depending on the type of net, can be divided into power-grid detection (e.g. application numbers 201210172059.X, 200910242554.1 and 201210282394.5) and optical-fiber detection (e.g. application numbers 201110406903.6 and 200910272765.X). Installing contact-type protective nets on a large scale is difficult in railway construction because construction periods and site conditions are complex; for example, work may be restricted to maintenance windows, intrusions onto the net are inconvenient to handle, and once the net is damaged it is difficult to repair promptly. This technique can only detect relatively large objects falling onto the net; very thin rebar, or objects that cross the net and fall onto the track plane, cannot be detected, nor can the size and position of the object be judged.
Non-contact detection methods include those based on infrared, laser, microwave and video; infrared and laser solutions mostly adopt a light-curtain scheme. For example, on sections of Spanish high-speed railway prone to foreign-object intrusion (such as rockfall near tunnel portals), debris monitoring systems based on infrared light curtains have been installed, and some lines also install ultrasonic detectors on both sides of the track to detect objects falling onto it. The invention patent No. 201010230606.6 discloses a non-contact railway foreign-object intrusion detection system that builds a laser curtain wall with two-dimensional laser sensors. Both methods can accurately detect objects passing through the detection curtain, but are helpless against objects already within the monitored space.
Video-based intrusion detection is widely used in the security field. Most such systems use a single kind of vision sensor, typically a visible-light camera: after a visible-light image is acquired, a region of interest is defined and image processing is used to determine whether an object lies inside or outside that region; tracking of the intruding object can also be achieved. Sehchan Oh et al., in "A Platform Surveillance Monitoring System using Image Processing for Passenger Safety in Railway Station", describe station-track foreign-object detection based on visible-light images and image processing, in which image differencing separates foreground from background and vehicles are distinguished from pedestrians by the size and shape of the foreign object. This method works well under experimental conditions, but because visible-light images are affected by ambient illumination, performance at night is poor, which degrades the accuracy and reliability of such detection systems; the systems above also do not consider adaptability in environments with poor visibility such as at night.
Summary of the invention
Therefore, in order to solve the above problems of contact and non-contact foreign matter intrusion detection methods and devices, and to achieve advantages over the prior art, the present invention is provided.
According to one aspect of the invention, a foreign matter intrusion detection method is provided. The method includes the following steps: acquiring an infrared image of the monitored area with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object has appeared within the monitoring range of the infrared camera; in the case that a suspected foreign object appears, aiming a laser light source and a visible-light camera at the suspected foreign object within the monitoring range and illuminating the suspected foreign object with supplementary laser light from the laser light source; acquiring a visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected-foreign-object region of the infrared image; and obtaining suspected-foreign-object information from the fused image, performing feature extraction and classification on the suspected foreign object using that information, and thereby automatically identifying the suspected foreign object and raising an alarm.
Further, the step of aiming the laser light source and the visible-light camera at the suspected foreign object within the monitoring range includes: a) obtaining the image point of the suspected foreign object; b) using the fixed mounting angle and focal length of the installed infrared camera, the relationship between the infrared camera's camera coordinate system and the world coordinate system, and the pixel position of the suspected foreign object in the image obtained by the infrared camera, calculating the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the calculated azimuth of the suspected foreign object in real space under the world coordinate system, together with the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determining the rotation angle and pitch angle of the laser light source and the visible-light camera; d) rotating and pitching the laser light source and the visible-light camera through the determined rotation angle and pitch angle so that they are aimed at the suspected foreign object (a sketch of this computation is given below).
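A minimal Python sketch of this aiming computation, assuming a simple pinhole model: the parameter names (fx, fy, cx, cy and the rotation matrices R_ir_to_world, R_world_to_ptz) are assumed calibration quantities rather than symbols from the patent, and the baseline between the two camera units is neglected relative to the target distance.

```python
# Sketch: estimate the pan/tilt commands that aim the laser/visible-light unit at a
# target first seen in the infrared image.  All parameters are assumed to come from
# a prior calibration of the installed cameras.
import numpy as np

def pixel_to_ray_world(u, v, fx, fy, cx, cy, R_ir_to_world):
    """Back-project an infrared pixel (u, v) to a unit direction in world coordinates."""
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray_world = R_ir_to_world @ ray_cam
    return ray_world / np.linalg.norm(ray_world)

def pan_tilt_from_ray(ray_world, R_world_to_ptz):
    """Convert a world-frame direction into pan (rotation) and tilt (pitch) angles
    for the pan-tilt head carrying the laser source and visible-light camera."""
    d = R_world_to_ptz @ ray_world                              # direction in the head's frame
    pan = np.degrees(np.arctan2(d[0], d[2]))                    # rotation about the vertical axis
    tilt = np.degrees(np.arctan2(-d[1], np.hypot(d[0], d[2])))  # pitch angle
    return pan, tilt
```

If the target range is known, the calibrated relative translation between the infrared camera and the pan-tilt unit can additionally be folded in before converting to pan and tilt angles.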
Further, the step of performing feature extraction and classification on the suspected foreign object using the suspected-foreign-object information includes: obtaining the contour, texture, temperature and color information of the suspected foreign object from the image, extracting features of the suspected foreign object based on that contour, texture, temperature and color information, and classifying those features (an illustrative sketch of such a feature vector follows).
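Purely as an illustration of how the contour, texture, temperature and colour cues might be turned into a feature vector for a conventional classifier, the following sketch uses OpenCV and scikit-learn; the particular features and the SVM are assumptions, not the classifier specified by the patent.

```python
# Illustrative only: build a feature vector from contour, texture, temperature and
# colour cues of the suspected-foreign-object region and feed it to a classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

def describe_target(fused_bgr, ir_gray, mask):
    """mask: binary image of the suspected-foreign-object region."""
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    shape = [area, w / float(h), area / float(w * h)]                    # contour / shape cues
    gray = cv2.cvtColor(fused_bgr, cv2.COLOR_BGR2GRAY)
    texture = [float(cv2.Laplacian(gray, cv2.CV_64F)[mask > 0].var())]   # texture cue
    temp = [float(cv2.mean(ir_gray, mask=mask)[0])]                      # temperature proxy (IR intensity)
    hist = cv2.calcHist([fused_bgr], [0, 1, 2], mask, [4, 4, 4],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist = hist / (hist.sum() + 1e-6)                                    # colour distribution
    return np.hstack([shape, texture, temp, hist])

# Training/prediction, e.g. pedestrian / rockfall / debris classes:
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
# label = clf.predict([describe_target(fused, ir, mask)])
```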
Further, the step of determining whether a suspected foreign object has appeared within the monitoring range of the infrared camera includes: a) background extraction based on a multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation proceeding as follows (see the code sketch after this paragraph): 1. the video is differenced frame by frame and the difference value is compared with a fixed threshold; pixel positions whose difference value is below the threshold belong to the background region, and those above the threshold belong to the foreground target region; 2. according to the background and foreground target regions obtained, the state of each pixel of the input image is marked: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background computation, while pixels in the background region are judged to be background pixels and do take part; 3. 100 consecutive image frames are taken and background and foreground pixels are distinguished in each image by the preceding method; an accumulator with initial value 0 is introduced and each pixel at the same position across all frames is counted, the accumulator being left unchanged when the pixel is judged foreground and incremented by 1 when it is judged background; finally, the accumulated image grey values are divided by the corresponding accumulator values to obtain the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence by background subtraction.
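The following sketch implements the multi-frame frame-difference accumulation described above under stated assumptions (grayscale frames, an example difference threshold): background-labelled pixels contribute to a per-pixel grey-value sum and an accumulator, and the initial background is their quotient.

```python
# Sketch of the multi-frame frame-difference background accumulation.  The 100-frame
# window comes from the text; the threshold value here is only an example.
import cv2
import numpy as np

def build_initial_background(frames, diff_threshold=15):
    """frames: list of consecutive grayscale frames (uint8), e.g. 100 frames."""
    h, w = frames[0].shape
    gray_sum = np.zeros((h, w), np.float64)   # accumulated grey values of background pixels
    counter = np.zeros((h, w), np.float64)    # accumulator: +1 each time a pixel is background
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = cv2.absdiff(curr, prev)
        background_mask = diff < diff_threshold        # small change -> background pixel
        gray_sum[background_mask] += curr[background_mask]
        counter[background_mask] += 1                  # foreground pixels leave the counter unchanged
    counter[counter == 0] = 1                          # avoid division by zero where always foreground
    return (gray_sum / counter).astype(np.uint8)       # extracted initial background
```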
Further, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image is fd(x, y, t) = fc(x, y, t) - fb(x, y, t); with a suitable threshold T, the background difference image fd(x, y, t) is binarized to obtain the two-value foreground map of the suspected foreign object, i.e. the suspected-foreign-object target region in the image.
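A direct transcription of this background-difference step in OpenCV; the threshold value used here is only an example.

```python
# Background differencing: fd = fc - fb, then binarise with threshold T to obtain the
# two-value foreground map of the suspected foreign object.
import cv2

def foreground_mask(current_gray, background_gray, T=30):
    fd = cv2.absdiff(current_gray, background_gray)           # fd(x, y, t)
    _, binary = cv2.threshold(fd, T, 255, cv2.THRESH_BINARY)  # two-value foreground map
    return binary
```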
Further, the image registration step includes: registering the infrared image and the visible-light image using local invariant features of the two images, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise. The image registration step further includes: 1. SURF-based feature point extraction and initial matching: SURF is used to detect and describe feature points in the infrared image and the visible-light image, and initial feature-point matching is then performed using the Euclidean distance and the ratio of nearest-neighbour to second-nearest-neighbour distances; 2. rejection of mismatched point pairs: mismatches are rejected by a three-stage progressive method, in which geometric constraints on the images are first established from the camera mounting arrangement and used for screening, further rejection is then performed with a similar-triangle matching criterion, and fine matching is finally achieved with RANSAC; 3. solving the geometric transformation model from match pairs accumulated over an image sequence: because a single frame yields few correct infrared/visible matching pairs, the transformation model cannot be solved when there are fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of feature points can bias the estimated model; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares resolves this; 4. the estimated geometric transformation model is applied to the visible-light image and bilinear interpolation is then performed, completing the registration of the infrared and visible-light images. Image registration methods based on local invariant features mainly include SIFT, SURF and MSER; these algorithms are robust to scaling, rotation, viewpoint change and local deformation. The device according to the invention performs image feature extraction and matching mainly with the above algorithms, improves and optimizes them, and performs image fusion after registration is complete (a condensed sketch of this pipeline appears below).
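A condensed OpenCV sketch of steps 1, 2 and 4 of this pipeline (SURF detection, the nearest/second-nearest ratio test, RANSAC-based outlier rejection, and warping with bilinear interpolation). The patent's additional geometric-constraint and similar-triangle rejection stages are omitted for brevity; SURF requires the opencv-contrib build (cv2.xfeatures2d), and ORB could stand in otherwise.

```python
# Sketch: SURF matching with ratio test and RANSAC, then warp the visible image into
# the infrared image's frame with bilinear interpolation.
import cv2
import numpy as np

def register_visible_to_infrared(ir_gray, vis_gray, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ir, des_ir = surf.detectAndCompute(ir_gray, None)
    kp_vis, des_vis = surf.detectAndCompute(vis_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)                       # Euclidean distance
    knn = matcher.knnMatch(des_vis, des_ir, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]   # nearest/second-nearest ratio test
    if len(good) < 4:
        return None, None                                      # not enough pairs for a model

    src = np.float32([kp_vis[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ir[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # fine matching / model fit
    warped = cv2.warpPerspective(vis_gray, H, ir_gray.shape[::-1],
                                 flags=cv2.INTER_LINEAR)        # bilinear interpolation
    return H, warped
```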
Image registration is the process of transforming two or more images acquired by different sensors, at different times or from different angles into the same coordinate system; it is the prerequisite of image fusion. At present there is no particularly mature algorithm, at home or abroad, for registering infrared and visible-light images. The inventors have noted that this is mainly because infrared and visible images lie in different wavebands, the correlation between the images is small, and different sensor images suffer different nonlinear distortions, so grey-level based registration methods can hardly meet the accuracy requirement. Feature-based registration methods usually extract salient features common to both kinds of images (such as edge points or the centres of closed regions) as the reference information for registration, and then establish the correspondence between the features of the two images for feature matching. However, because infrared images have low resolution and blurred edges, features common to the infrared and visible images are hard to obtain, and a generic feature-based registration method easily produces mismatches.
Further, the fusion of the infrared image and the transformed visible-light image is realized with a Contourlet-transform image fusion method based on local energy. The image fusion step includes: 1. applying a multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image, respectively, to obtain the transformed high-frequency and low-frequency coefficients; 2. determining the fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, a weighted average is used for the low-frequency coefficients and a local-energy based rule for the high-frequency coefficients (a sketch of these two rules is given below); 3. applying the inverse Contourlet transform to the fused coefficients to obtain the fused image.
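Since a Contourlet implementation is not part of the common Python toolkits, the sketch below only illustrates the two fusion rules applied to the decomposition coefficients, whichever multi-scale transform supplies them; the 3x3 energy window and the 0.5/0.5 weights are assumed example values.

```python
# Fusion rules only: weighted average of the low-frequency subbands, and a pixel-wise
# local-energy comparison for each pair of high-frequency subbands.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(low_ir, low_vis, w_ir=0.5):
    """Weighted average of the two low-frequency subbands."""
    return w_ir * low_ir + (1.0 - w_ir) * low_vis

def fuse_highpass(high_ir, high_vis, win=3):
    """Keep, pixel by pixel, the coefficient whose local energy is larger."""
    e_ir = uniform_filter(high_ir ** 2, size=win)     # local energy of the IR subband
    e_vis = uniform_filter(high_vis ** 2, size=win)   # local energy of the visible subband
    return np.where(e_ir >= e_vis, high_ir, high_vis)
```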
The inventors propose an improved image registration method based on SURF feature matching. Because the imaging principles of infrared and visible-light images differ, the infrared image is first pre-processed, then feature points are detected and matched: the nearest-neighbour/second-nearest-neighbour distance ratio is introduced to achieve preliminary feature-point matching, corresponding points are then screened with geometric constraints and further rejected with a similar-triangle matching criterion, and fine matching is finally achieved with RANSAC, guaranteeing the stability and accuracy of the final matching pairs. When solving the geometric transformation, matching pairs from multiple frames are pooled into one set to increase the accuracy of the solution; the transformation is finally solved by least squares and the registration is realized with bilinear interpolation (a sketch of the multi-frame accumulation follows).
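A small sketch of the multi-frame accumulation idea: inlier pairs from several frames are pooled and a single geometric model is fitted to the whole set by least squares (cv2.findHomography with method=0 performs an ordinary least-squares fit over all points).

```python
# Pool inlier match pairs over several frames, then solve one transformation model by
# least squares over the accumulated set.
import cv2
import numpy as np

class MatchAccumulator:
    def __init__(self):
        self.src, self.dst = [], []

    def add_frame(self, src_pts, dst_pts):
        """src_pts/dst_pts: (N, 2) arrays of inlier pairs from one frame."""
        self.src.append(np.asarray(src_pts, np.float32))
        self.dst.append(np.asarray(dst_pts, np.float32))

    def solve(self):
        src = np.concatenate(self.src).reshape(-1, 1, 2)
        dst = np.concatenate(self.dst).reshape(-1, 1, 2)
        if len(src) < 4:
            return None                                # still not enough pairs for a model
        H, _ = cv2.findHomography(src, dst, 0)         # least-squares fit over all pairs
        return H
```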
In order to enhance the clarity and intelligibility of target images in the railway scene, the inventors perform pixel-level image fusion on the registered images, enhancing the visualization of the fused image and highlighting the target content.
According to another aspect of the invention, a foreign matter intrusion detection device is proposed. The device includes an infrared camera, an image acquisition and processing system, a laser light source and a visible-light camera. The infrared and visible-light cameras are installed closely adjacent to each other, either side by side or one above the other, so that their optical centres are as close as possible; the waveband of the laser light source lies within the sensitive band of the visible-light camera but outside the sensitive band of the infrared camera. The infrared camera is configured to acquire an infrared image of the monitored area. The image acquisition and processing system is connected to the infrared camera and is configured to receive the infrared image from the infrared camera and to determine from it whether a suspected foreign object has appeared within the monitoring range of the infrared camera. The laser light source is configured, when a suspected foreign object appears, to be aimed at the suspected foreign object within the monitoring range and to provide supplementary laser illumination on it. The visible-light camera is arranged to move together with the laser light source and is connected to the image acquisition and processing system; it is configured to acquire a visible-light image of the suspected foreign object and to transmit that image to the image acquisition and processing system. The image acquisition and processing system is configured to register and fuse the visible-light image with the suspected-foreign-object region of the infrared image, to obtain suspected-foreign-object information from the fused image, to perform feature extraction and classification on the suspected foreign object using that information, and thereby to automatically identify the suspected foreign object and raise an alarm.
Further, the image acquisition and processing system is configured to: a) obtain the image point of the suspected foreign object; b) calculate the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the calculated azimuth and the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determine the rotation angle and pitch angle of the laser light source and the visible-light camera. The laser light source and the visible-light camera are configured to rotate and pitch through the determined rotation angle and pitch angle so as to be aimed at the suspected foreign object.
Further, the image acquisition and processing system is configured to extract the background from the infrared image and to extract the suspected foreign object: a) background extraction based on a multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation proceeding as follows: 1. the video is differenced frame by frame and the difference value is compared with a fixed threshold; pixel positions whose difference value is below the threshold belong to the background region, and those above the threshold belong to the foreground target region; 2. according to the background and foreground target regions obtained, the state of each pixel of the input image is marked: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background computation, while pixels in the background region are judged to be background pixels and do take part; 3. 100 consecutive image frames are taken and background and foreground pixels are distinguished in each image by the preceding method; an accumulator with initial value 0 is introduced and each pixel at the same position across all frames is counted, the accumulator being left unchanged when the pixel is judged foreground and incremented by 1 when it is judged background; finally, the accumulated image grey values are divided by the corresponding accumulator values to obtain the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence by background subtraction.
Further, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image is fd(x, y, t) = fc(x, y, t) - fb(x, y, t); with a suitable threshold T, the background difference image fd(x, y, t) is binarized to obtain the two-value foreground map of the suspected foreign object, i.e. the suspected-foreign-object target region in the image.
Further, the image acquisition and processing system is configured to register the visible-light image with the suspected-foreign-object region of the infrared image by the following steps: the infrared image and the visible-light image are registered using their local invariant features, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise. The registration steps are: 1. SURF-based feature point extraction and initial matching: SURF is used to detect and describe feature points in the infrared image and the visible-light image, and initial feature-point matching is then performed using the Euclidean distance and the ratio of nearest-neighbour to second-nearest-neighbour distances; 2. rejection of mismatched point pairs: mismatches are rejected by a three-stage progressive method, in which geometric constraints on the images are first established from the camera mounting arrangement and used for screening, further rejection is then performed with a similar-triangle matching criterion, and fine matching is finally achieved with RANSAC; 3. solving the geometric transformation model from match pairs accumulated over an image sequence: because a single frame yields few correct infrared/visible matching pairs, the transformation model cannot be solved when there are fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of feature points can bias the estimated model; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares resolves this; 4. the estimated geometric transformation model is applied to the visible-light image and bilinear interpolation is then performed, completing the registration of the infrared and visible-light images.
Further, the image acquisition and processing system is configured to realize the fusion of the infrared image and the transformed visible-light image with a Contourlet-transform image fusion method based on local energy. The image fusion step includes: 1. applying a multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image, respectively, to obtain the transformed high-frequency and low-frequency coefficients; 2. determining the fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, a weighted average is used for the low-frequency coefficients and a local-energy based rule for the high-frequency coefficients; 3. applying the inverse Contourlet transform to the fused coefficients to obtain the fused image.
With the device and method according to the invention, rich and complete image information can be obtained by day, by night, and under adverse conditions such as darkness, smoke, mist, cloud and low visibility. The device and method yield clearer and more informative images: not only the temperature information of the foreign-object target, but also rich information such as surface appearance, contour and colour. On the basis of solving the difficulty of detecting and identifying foreign objects at night and in severe weather, the type of the suspected foreign-object target can also be determined, improving the foreign-object alarm accuracy, improving the recognition precision of the system, and enhancing its reliability and security.
The characteristics of the foreign matter intrusion detection device according to the invention are:
The infrared camera and the laser-source/visible-light integrated camera are mounted closely adjacent in the same cabinet, either side by side or one above the other; once installed, their mutual positional relationship is fixed;
Since the infrared camera performs video surveillance over a large scene, its attitude is fixed after installation, and from its fixed angle, focal length and the image-point coordinates, the azimuth of a target in real space can be calculated;
The laser light source and visible-light integrated camera can be mounted on a pan-tilt head whose rotation and pitch angles can be controlled precisely; their initial position and attitude relative to the infrared camera are determined by prior calibration;
After the azimuth of a suspected foreign-object target in the actual scene has been obtained from the video image acquired by the infrared camera, the pan-tilt head can be rotated and pitched so that the laser light source is locked onto the suspected foreign-object target and the visible-light camera acquires video images of it.
Fusing the visible-light and infrared images reveals rich contour, texture, temperature and colour information of the suspected foreign-object target, which benefits its classification and identification and improves the alarm accuracy.
The working process by which the whole system realizes foreign matter intrusion detection is as follows:
All-weather foreign-object detection is performed on the large-scene video images acquired by the infrared camera; the image acquisition and processing system detects suspected foreign-object targets in the infrared video images with a suspected-foreign-object detection algorithm and obtains the image point of the suspected foreign object;
Once a suspected foreign-object target is detected, the fixed angle and focal length of the installed infrared camera, the relationship between the infrared camera's camera coordinate system and the world coordinate system, and the pixel position of the suspected foreign object in the image obtained by the infrared camera are used to calculate the azimuth of the suspected foreign object under the world coordinate system;
Using the calculated azimuth of the suspected foreign object under the world coordinate system and the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, the rotation angle and pitch angle of the laser light source and the visible-light camera are determined;
The laser light source and the visible-light camera rotate and pitch through the determined rotation angle and pitch angle so that they are aimed at the suspected foreign object, and video images are then acquired;
After the visible-light image containing the suspected foreign-object target has been obtained, it is registered and fused with the suspected-foreign-object region of the large-scene infrared image; feature extraction and classification are then performed on the suspected foreign-object target using the fused target information, thereby realizing automatic identification of the foreign-object target and alarming.
Other features and advantages of the invention will be set forth in the following description and will in part become apparent from the description or be understood by practising the invention. The objects and other advantages of the invention are realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
To make the above objects, features and advantages of the invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of the drawings
In order to explain the specific embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the specific embodiments or of the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the invention, and that a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows the positional structure of the infrared camera and the laser-source/visible-light integrated camera in one embodiment of the invention.
Fig. 2 shows a schematic diagram of the pan-tilt head carrying the laser light source and the visible-light integrated camera in one embodiment of the invention.
Fig. 3 shows a schematic diagram of the camera fields of view in one embodiment of the invention.
Fig. 4 shows a schematic diagram of the camera imaging model in another embodiment of the invention.
Fig. 5(a) shows an example of an infrared image acquired by the camera in one embodiment of the invention.
Fig. 5(b) shows an example of a visible-light image acquired by the camera in one embodiment of the invention.
Fig. 6 shows the flow chart of the infrared and visible-light image sequence registration scheme according to the invention.
Fig. 7(a) shows SIFT feature detection results, Fig. 7(b) shows SURF feature point detection results, and Fig. 7(c) shows MSER feature point detection results.
Fig. 8(a) shows SURF feature point detection results, Fig. 8(b) shows candidate SURF feature matching pairs, Fig. 8(c) shows the matching pairs screened by geometric constraints, Fig. 8(d) shows the result of mismatch rejection based on structural similarity, and Fig. 8(e) shows the RANSAC fine matching result.
Fig. 9 shows the image fusion framework based on the Contourlet transform.
Fig. 10(a) shows the infrared source image, Fig. 10(b) the visible-light source image, Fig. 10(c) the image fusion result based on intensity-weighted averaging, and Fig. 10(d) the image fusion result based on the Contourlet transform.
Specific embodiment
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the technical solutions of the invention are described clearly and completely below with reference to the drawings. It is evident that the described embodiments are some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
The foreign matter intrusion detection device according to the invention includes an infrared camera 1, an image acquisition and processing system, and a laser light source with visible-light camera 2. Referring to Fig. 1, which shows the positional structure of the infrared camera 1 and the laser-source/visible-light integrated camera 2 in one embodiment of the invention, the laser-source/visible-light integrated camera 2 can be placed at the same height as the infrared camera 1 or installed in parallel on the cabinet 4.
Referring to Fig. 2, which shows a schematic diagram of the pan-tilt head 3 carrying the laser light source and visible-light integrated camera 2 in one embodiment of the invention, the pan-tilt head 3 includes a pitching mechanism 31 and a rotating mechanism 32, and can use stepper motors (or other electromechanical control elements) to precisely control the rotation and pitch of the laser-source/visible-light integrated camera.
The infrared camera 1 acquires the infrared image of the large scene, while the laser-source/visible-light integrated camera 2 provides supplementary laser illumination on the suspected foreign-object target and focuses on it to acquire visible-light images.
Referring to Fig. 3, which shows a schematic diagram of the camera fields of view in one embodiment of the invention, the foreign matter intrusion detection device according to the invention can be mounted on a catenary mast above the railway or on a pole beside the line. As shown in Fig. 3, the infrared camera 1 acquires video images of the large scene with field of view A1-A2, while the laser-source/visible-light integrated camera 2 acquires scene images containing the suspected foreign-object target with field of view B1-B2, the laser light source providing supplementary illumination, especially at night and in severe weather.
Referring to Figs. 1 to 3, according to one embodiment of the invention, the foreign matter intrusion detection device includes an infrared camera 1, an image acquisition and processing system, a laser light source 22 and a visible-light camera 21. The infrared and visible-light cameras are installed closely adjacent to each other, either side by side or one above the other, so that their optical centres are as close as possible; the waveband of the laser light source lies within the sensitive band of the visible-light camera but outside the sensitive band of the infrared camera. The infrared camera 1 is configured to acquire an infrared image of the monitored area. The image acquisition and processing system is connected to the infrared camera 1 and is configured to receive the infrared image from it and to determine from that image whether a suspected foreign object has appeared within the monitoring range of the infrared camera 1. The laser light source 22 is configured, when a suspected foreign object appears, to be aimed at the suspected foreign object within the monitoring range and to provide supplementary laser illumination on it. The visible-light camera 21 is arranged to move together with the laser light source 22 and is connected to the image acquisition and processing system; it is configured to acquire a visible-light image of the suspected foreign object and to transmit that image to the image acquisition and processing system. The image acquisition and processing system is configured to register and fuse the visible-light image with the suspected-foreign-object region of the infrared image, to obtain suspected-foreign-object information from the fused image, to perform feature extraction and classification on the suspected foreign object using that information, and thereby to automatically identify the suspected foreign object and raise an alarm.
Preferably, the image acquisition and processing system is configured to: a) obtain the image point of the suspected foreign object; b) calculate the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the calculated azimuth and the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determine the rotation angle and pitch angle of the laser light source 22 and the visible-light camera 21. The laser light source 22 and the visible-light camera 21 are configured to rotate and pitch through the determined rotation angle and pitch angle so as to be aimed at the suspected foreign object.
The image acquisition and processing system is responsible for acquiring the images of the infrared camera and performing foreign-object detection, which can be divided into two steps: background extraction and updating, and foreign-object target extraction. Background extraction and updating obtain the background by accumulating multi-frame frame-difference images. Preferably, the image acquisition and processing system is configured to extract the background from the infrared image and to extract the suspected foreign object as follows: a) background extraction based on a multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation proceeding as follows: 1. the video is differenced frame by frame and the difference value is compared with a fixed threshold; pixel positions whose difference value is below the threshold belong to the background region, and those above the threshold belong to the foreground target region; 2. according to the background and foreground target regions obtained, the state of each pixel of the input image is marked: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background computation, while pixels in the background region are judged to be background pixels and do take part; 3. 100 consecutive image frames are taken and background and foreground pixels are distinguished in each image by the preceding method; an accumulator with initial value 0 is introduced and each pixel at the same position across all frames is counted, the accumulator being left unchanged when the pixel is judged foreground and incremented by 1 when it is judged background; finally, the accumulated image grey values are divided by the corresponding accumulator values to obtain the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence by background subtraction. The background obtained in this way is stable and reliable and effectively eliminates the influence of slowly varying factors such as daylight illumination, laying a good foundation for the extraction and judgement of foreign objects.
Foreign-object target detection uses background subtraction, which detects the foreign-object target by subtracting the background image from the current frame. In a frame containing a target or foreign object, the pixel values of the target region differ considerably from the corresponding positions of the background image, while the remaining background regions differ very little. Preferably, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image is
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
With a suitable threshold T, the background difference image fd(x, y, t) is binarized, yielding the two-value foreground map of the suspected foreign object, i.e. the suspected-foreign-object target region in the image.
Preferably, the image acquisition and processing system is configured to register the visible-light image with the suspected-foreign-object region of the infrared image by the following steps: the infrared image and the visible-light image are registered using their local invariant features, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise. The registration steps are: 1. SURF-based feature point extraction and initial matching: SURF is used to detect and describe feature points in the infrared image and the visible-light image, and initial feature-point matching is then performed using the Euclidean distance and the ratio of nearest-neighbour to second-nearest-neighbour distances; 2. rejection of mismatched point pairs: mismatches are rejected by a three-stage progressive method, in which geometric constraints on the images are first established from the camera mounting arrangement and used for screening, further rejection is then performed with a similar-triangle matching criterion, and fine matching is finally achieved with RANSAC; 3. solving the geometric transformation model from match pairs accumulated over an image sequence: because a single frame yields few correct infrared/visible matching pairs, the transformation model cannot be solved when there are fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of feature points can bias the estimated model; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares resolves this; 4. the estimated geometric transformation model is applied to the visible-light image and bilinear interpolation is then performed, completing the registration of the infrared and visible-light images.
Preferably, the image acquisition and processing system is configured to realize the fusion of the infrared image and the transformed visible-light image with a Contourlet-transform image fusion method based on local energy. The image fusion step includes: 1. applying a multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image, respectively, to obtain the transformed high-frequency and low-frequency coefficients; 2. determining the fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, a weighted average is used for the low-frequency coefficients and a local-energy based rule for the high-frequency coefficients; 3. applying the inverse Contourlet transform to the fused coefficients to obtain the fused image.
According to one aspect of the invention, a foreign matter intrusion detection method is provided. The method includes the following steps: acquiring an infrared image of the monitored area with an infrared camera and transmitting it to an image acquisition and processing system; the image acquisition and processing system determining from the infrared image whether a suspected foreign object has appeared within the monitoring range of the infrared camera; in the case that a suspected foreign object appears, aiming a laser light source and a visible-light camera at the suspected foreign object within the monitoring range and illuminating the suspected foreign object with supplementary laser light from the laser light source; acquiring a visible-light image of the suspected foreign object and transmitting it to the image acquisition and processing system; the image acquisition and processing system registering and fusing the visible-light image with the suspected-foreign-object region of the infrared image; and obtaining suspected-foreign-object information from the fused image, performing feature extraction and classification on the suspected foreign object using that information, and thereby automatically identifying the suspected foreign object and raising an alarm.
Preferably, the step of aiming the laser light source and the visible-light camera at the suspected foreign object within the monitoring range includes: a) obtaining the image point of the suspected foreign object; b) using the fixed mounting angle and focal length of the installed infrared camera, the relationship between the infrared camera's camera coordinate system and the world coordinate system, and the pixel position of the suspected foreign object in the image obtained by the infrared camera, calculating the azimuth of the suspected foreign object in real space under the world coordinate system; c) using the calculated azimuth of the suspected foreign object in real space under the world coordinate system, together with the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera, determining the rotation angle and pitch angle of the laser light source and the visible-light camera; d) rotating and pitching the laser light source and the visible-light camera through the determined rotation angle and pitch angle so that they are aimed at the suspected foreign object.
Preferably, the step of performing feature extraction and classification on the suspected foreign object using the suspected-foreign-object information includes: obtaining the contour, texture, temperature and color information of the suspected foreign object from the image, extracting features of the suspected foreign object based on that contour, texture, temperature and color information, and classifying those features.
Preferably, the step of determining whether a suspected foreign object has appeared within the monitoring range of the infrared camera includes: a) background extraction based on a multi-frame frame-difference method: the image acquisition and processing system extracts the background from the infrared image by accumulating multi-frame frame-difference images, the accumulation proceeding as follows: 1. the video is differenced frame by frame and the difference value is compared with a fixed threshold; pixel positions whose difference value is below the threshold belong to the background region, and those above the threshold belong to the foreground target region; 2. according to the background and foreground target regions obtained, the state of each pixel of the input image is marked: pixels in the foreground target region are judged to be foreground pixels and do not take part in the background computation, while pixels in the background region are judged to be background pixels and do take part; 3. 100 consecutive image frames are taken and background and foreground pixels are distinguished in each image by the preceding method; an accumulator with initial value 0 is introduced and each pixel at the same position across all frames is counted, the accumulator being left unchanged when the pixel is judged foreground and incremented by 1 when it is judged background; finally, the accumulated image grey values are divided by the corresponding accumulator values to obtain the current initial background, which is the extracted background; b) foreign-object extraction based on background differencing: the suspected foreign object is extracted from every frame of the video sequence by background subtraction.
Preferably, the background subtraction comprises: letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image is fd(x, y, t) = fc(x, y, t) - fb(x, y, t); with a suitable threshold T, the background difference image fd(x, y, t) is binarized to obtain the two-value foreground map of the suspected foreign object, i.e. the suspected-foreign-object target region in the image.
Preferably, the image registration step includes: registering the infrared image and the visible-light image using local invariant features of the two images, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise. The image registration step further includes: 1. SURF-based feature point extraction and initial matching: SURF is used to detect and describe feature points in the infrared image and the visible-light image, and initial feature-point matching is then performed using the Euclidean distance and the ratio of nearest-neighbour to second-nearest-neighbour distances; 2. rejection of mismatched point pairs: mismatches are rejected by a three-stage progressive method, in which geometric constraints on the images are first established from the camera mounting arrangement and used for screening, further rejection is then performed with a similar-triangle matching criterion, and fine matching is finally achieved with RANSAC; 3. solving the geometric transformation model from match pairs accumulated over an image sequence: because a single frame yields few correct infrared/visible matching pairs, the transformation model cannot be solved when there are fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of feature points can bias the estimated model; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares resolves this; 4. the estimated geometric transformation model is applied to the visible-light image and bilinear interpolation is then performed, completing the registration of the infrared and visible-light images. Image registration methods based on local invariant features mainly include SIFT, SURF and MSER; these algorithms are robust to scaling, rotation, viewpoint change and local deformation. The device according to the invention performs image feature extraction and matching mainly with the above algorithms, improves and optimizes them, and performs image fusion after registration is complete.
Preferably, the fusion of the infrared image and the transformed visible-light image is realized with a Contourlet-transform image fusion method based on local energy, and the step of image fusion includes: 1. applying the multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image, obtaining the transformed high-frequency and low-frequency coefficients; 2. determining fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, a weighted-average rule is applied to the low-frequency coefficients and a local-energy-based fusion rule to the high-frequency coefficients; 3. applying the inverse transform to the fused Contourlet coefficients to obtain the fused image.
After the doubtful foreign matter target region is obtained, the pixel coordinates (x, y) of the centre of the moving target in the horizontal and vertical directions can be obtained. From these pixel coordinates the angles of the ray passing through the object and its image point relative to the camera optical axis (including the zenith angle α and the azimuth) can be determined. Because the pitch angle γ of the infrared camera is fixed in advance, the pose of the ray corresponding to the image point relative to the laser light source and visible-light camera unit can be computed from the zenith angle α, the azimuth and the pitch angle γ. The pose values are passed, for example, to a pan-tilt head, which adjusts its pose to lock onto the doubtful foreign matter target, illuminates it and simultaneously acquires the video image (a sketch of this conversion follows below).
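As a rough illustration of the pixel-to-pose computation, the sketch below assumes a simple pinhole model with known infrared-camera intrinsics (fx, fy, cx, cy); these parameters and the function name are assumptions, since the patent only states that the fixed mounting parameters of the camera are used.

import numpy as np

def pixel_to_pan_tilt(x, y, fx, fy, cx, cy, camera_pitch_deg):
    """Convert the target centre's pixel coordinates in the infrared image into
    an approximate pan/tilt command for the laser/visible-light unit."""
    # Ray direction in the camera frame (pinhole model).
    dx = (x - cx) / fx
    dy = (y - cy) / fy
    ray = np.array([dx, dy, 1.0])
    ray /= np.linalg.norm(ray)

    azimuth = np.degrees(np.arctan2(dx, 1.0))             # left/right angle from the optical axis
    zenith = np.degrees(np.arccos(ray[2]))                # angle between the ray and the optical axis

    # Elevation relative to the horizon once the fixed camera pitch is added
    # (image y grows downward, hence the sign flip).
    elevation_in_camera = -np.degrees(np.arctan2(dy, 1.0))
    tilt = camera_pitch_deg + elevation_in_camera

    return azimuth, zenith, tilt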
1. Improved registration algorithm based on SURF
Here, SURF is used to extract and describe image feature points, and its advantages and disadvantages are compared experimentally with those of typical local invariant feature extraction algorithms. In the mismatched-pair rejection stage, geometric constraints are used first for screening, the similar-triangle matching principle is then used to reject further outliers, and fine matching is finally performed with the RANSAC algorithm to guarantee the accuracy and reliability of the final point pairs. Because the number of pairs remaining after fine matching may be too small to solve the geometric transformation matrix, and in order to improve the accuracy of the transformation model, the present invention builds, during the solution of the geometric transformation, a multi-frame feature point-pair set from the final correct matching pairs of several frames in which the doubtful foreign matter target occupies different positions, and then solves the transformation model by least squares.
1.1. Algorithm overview
The imaging model of the infrared and visible-light cameras under the railway scene in the present invention is shown in Fig. 4, where the infrared camera 1 and the laser light source and visible-light camera unit 2 can be installed close to each other. Translation, scale, rotation and deformation exist between the two images, so a perspective transformation model is adopted.
In this system the infrared camera 1 acquires the image of the large scene, while the laser light source and visible-light camera unit 2 captures the image containing the doubtful foreign matter target. According to the invention, the doubtful foreign matter target region of the visible-light image is registered with and fused into the infrared large-scene image, yielding a clearer and more informative image from which the type of the doubtful foreign matter target can be determined, improving the accuracy of the foreign matter alarm.
Fig. 5 shows an example of the infrared image and the visible-light image obtained by the cameras in one embodiment of the invention.
Because the installation positions of the infrared camera 1 and the laser light source and visible-light camera unit 2 differ and their fields of view are of different sizes, the two acquired images differ in scale, rotation and zoom, so the two classes of images must be registered before they are fused. Infrared/visible registration is a multi-modal registration problem; the grey-level difference between the two images is large, so registration has to rely on local invariant features of the images. Common methods include Harris corner matching, SIFT feature matching and SURF feature matching.
Fig. 6 shows the flow chart of the infrared and visible-light image sequence registration architecture according to the invention. Its input is the synchronous visible/infrared video image, and it mainly comprises three modules: generation of candidate matching point pairs between the heterologous images, rejection of mismatched pairs, and transformation model solving.
An infrared image is formed from the radiated energy of the target, whereas a visible-light image is formed from the reflection of visible light by the object; their imaging principles differ, so their appearances differ greatly. The inventors noted that the negative of the infrared image is closer to the visible-light image, so each pixel of the infrared image is first inverted, i.e. the original pixel value is replaced by 255 minus that value.
The algorithm is implemented in the following steps (a sketch of steps 1) and 2) is given after this list):
1) the acquired infrared image is first inverted;
2) SURF is applied to the processed infrared image and to the visible-light image for feature extraction and description, and the feature points are then matched using the ratio of the nearest neighbour to the second-nearest neighbour;
3) after the candidate matching pairs are obtained, they are first screened with the geometric constraints, then further rejected based on the similar-triangle matching principle, and finally fine-matched with the RANSAC algorithm;
4) the above operations are repeated for several frames to form the final multi-frame feature point-pair set, and the transformation model parameters are solved by least squares; the visible-light image to be registered is then transformed and bilinearly interpolated, completing the image registration.
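Steps 1) and 2) can be sketched with OpenCV as follows. This is only an illustrative sketch: SURF lives in the opencv-contrib package (cv2.xfeatures2d) and may require a build with OPENCV_ENABLE_NONFREE, and the Hessian threshold of 400 is an assumed value.

import cv2

def coarse_match(ir_img, vis_img, ratio=0.8):
    """Invert the infrared image, extract and describe SURF features, then keep
    nearest/second-nearest ratio matches (steps 1 and 2 of the pipeline)."""
    ir_neg = 255 - ir_img                                # negative of the infrared image

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ir, des_ir = surf.detectAndCompute(ir_neg, None)
    kp_vis, des_vis = surf.detectAndCompute(vis_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_ir, des_vis, k=2)         # two nearest neighbours per feature
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]          # nearest/second-nearest ratio test
    return kp_ir, kp_vis, good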
1.2. Generation of candidate matching point pairs between heterologous images
Because the infrared and visible-light images under the railway scene differ not only by translation, rotation, zoom and scale but also by field-of-view size, local invariant feature extraction algorithms are used to determine the features linking the heterologous images. The inventors considered SIFT, SURF and MSER. SURF borrows the idea of simplification and approximation from SIFT and introduces the integral image and box filters; it outperforms SIFT in feature detection accuracy, speed and robustness. SURF is therefore used for feature point extraction and description, and candidate matching pairs are generated from the ratio of the nearest neighbour to the second-nearest neighbour.
1) Feature point extraction
Keeping the size of the source image constant, SURF builds a scale pyramid by computing filter responses of the integral image with box filter templates of increasing size, and detects and extracts feature points from the local maxima of the determinant of the approximate Hessian matrix.
For a point I(x, y) in image I, the Hessian matrix at scale σ is

H(x, σ) = [ L_xx(x, σ)  L_xy(x, σ) ; L_xy(x, σ)  L_yy(x, σ) ]    (1)

In formula (1), L_xx(x, σ) is the convolution of the second-order Gaussian derivative with image I at the point I(x, y); L_xy(x, σ) and L_yy(x, σ) are defined analogously.
SURF replaces the Gaussian second-order derivative templates with box filters, so the simplified form of the Hessian determinant is

det(H) = D_xx · D_yy - (0.9 · D_xy)^2    (2)

In formula (2), D_xx, D_xy and D_yy are the results of convolving the corresponding box filters with image I.
The detection and extraction of feature points determine the number of feature points, their distribution and whether key corner points are captured, and is therefore a key step of image registration. To verify the superiority of SURF in feature point extraction, it is compared here with SIFT and MSER. The test picture is an image of the railway scene with a size of 576x960 pixels. The detection results are shown in Fig. 7: Fig. 7(a) shows the SIFT detection result, Fig. 7(b) the SURF detection result and Fig. 7(c) the MSER detection result.
The numbers and distributions of the feature points extracted by the three algorithms are compared in Table 1. The SIFT algorithm extracts too many feature points from the visible-light image, which increases the probability of mismatches. The MSER algorithm extracts too few feature points from the infrared image and almost none on the contours of distant buildings, which is unfavourable for global registration. The feature points detected by SURF are evenly distributed, moderate in number and present over the whole image. This shows that SURF is advantageous compared with the other two algorithms and provides a guarantee for the subsequent registration.
2) Feature point description
To ensure rotation invariance, a reference orientation must be obtained from the local image structure around each detected feature point. After computing the Haar wavelet responses of the integral image in the neighbourhood of the feature point, SURF uses a histogram to count the gradient directions and magnitudes of the pixels in the neighbourhood. The direction with the largest accumulated Haar response, i.e. the direction of the highest histogram peak, is taken as the dominant direction of the feature.
Centred on the feature point, an image patch of size 20σ x 20σ is divided along the dominant direction into 4 x 4 sub-blocks, and Haar templates of size 2σ are used to compute the responses of each sub-block, yielding dy along the dominant direction and dx perpendicular to it; Gaussian weighting is applied to strengthen robustness against geometric transformations. The responses of each sub-block are finally summed, giving the sub-block feature vector shown in formula (3):

V = (∑dx, ∑|dx|, ∑dy, ∑|dy|)    (3)

Each feature point is thus described by a 4 x 4 x 4 = 64-dimensional vector. At this point the SURF descriptor is scale- and rotation-invariant; normalizing the feature vector additionally makes the SURF feature invariant to illumination.
3) Generating matching pairs from the nearest/second-nearest neighbour ratio

Matching pairs are generated by comparing the feature descriptors of the two images with the Euclidean distance as similarity measure, as shown in formula (4):

d_NN / d_NNN < ε    (4)

For a key point in the infrared image, the two key points in the visible-light image with the smallest Euclidean distances are found; if the nearest distance d_NN divided by the second-nearest distance d_NNN is less than a proportional threshold ε, the pair is accepted as a match. When ε is too small, the number of SURF matching pairs decreases, which is unfavourable for the subsequent model solving; when ε is too large, the number of pairs increases but mismatched pairs are introduced. In this experiment ε is set to 0.8.
1.3. Rejection of mismatched point pairs
In the mismatched-pair rejection stage, removing as many mismatched pairs as possible while retaining the accurate ones is the key, since this directly affects the solution of the geometric transformation and the precision of the image registration. The inventors first screen the matching pairs with geometric constraints, then reject further outliers based on image structural similarity, and finally perform fine matching with the RANSAC algorithm, thereby generating the matching pairs that take part in solving the geometric transformation.
1) Screening matching pairs with geometric constraints

Because the initial matching pairs obtained by SURF contain a portion of obviously wrong pairs, targeted geometric constraints are proposed in order to reduce the subsequent computation for rejecting mismatches and to increase the probability of retaining correct pairs. The constraints are determined by the relative placement of the infrared camera and the visible-light camera in this experiment: the two cameras are closely stacked one above the other, as shown in Fig. 4, so for long-range shooting the image centres obtained by the infrared camera and the visible-light camera can be regarded as approximately coincident; moreover, the infrared camera captures the large scene P1P2 while the coverage of the visible-light image, P3P4, is contained in the infrared image. Denote the infrared image by Ir(x, y) and the visible-light image by Iv(x, y); their centres are Ir(x_or, y_or) and Iv(x_ov, y_ov). Any candidate matching pair Ir(x_1r, y_1r), Iv(x_1v, y_1v) must then satisfy all of the following geometric constraints:
1. The inclination angles of the lines joining the matched points to their respective image centres are approximately equal, as shown in formula (5), where T is the threshold on the angle difference; if the difference of the two inclination angles exceeds T, the pair is rejected:

|arctan((x_or - x_1r)/(y_or - y_1r)) - arctan((x_ov - x_1v)/(y_ov - y_1v))| < T    (5)
2. The matched points must lie in the same quadrant relative to their respective image centres, i.e. the signs of the horizontal offsets from the centres must agree, and likewise for the vertical offsets, as shown in formula (6):

(x_or - x_1r)·(x_ov - x_1v) > 0 && (y_or - y_1r)·(y_ov - y_1v) > 0    (6)
3. The distance between the match point and the image centre in the infrared image must be smaller than the distance between the corresponding match point and the image centre in the visible-light image, as shown in formula (7):

sqrt((x_or - x_1r)^2 + (y_or - y_1r)^2) < sqrt((x_ov - x_1v)^2 + (y_ov - y_1v)^2)    (7)

A matching pair that satisfies all three geometric constraints simultaneously is retained for further screening; a pair that violates any one of them is rejected directly (a sketch of this screening is given below).
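A minimal sketch of the three-constraint screening for a single candidate pair; the angle tolerance stands in for the threshold T of formula (5) and is an assumed value.

import numpy as np

def passes_geometric_constraints(p_ir, p_vis, c_ir, c_vis, angle_tol=0.2):
    """Check the three mounting-geometry constraints for one candidate pair.

    p_ir, p_vis: (x, y) of the matched points; c_ir, c_vis: image centres;
    angle_tol is in radians and is an illustrative value.
    """
    vx_ir, vy_ir = p_ir[0] - c_ir[0], p_ir[1] - c_ir[1]
    vx_vis, vy_vis = p_vis[0] - c_vis[0], p_vis[1] - c_vis[1]

    # 1. Similar inclination of the line to the respective image centre.
    if abs(np.arctan2(vx_ir, vy_ir) - np.arctan2(vx_vis, vy_vis)) > angle_tol:
        return False

    # 2. Same quadrant relative to the image centre (signs of both offsets agree).
    if vx_ir * vx_vis <= 0 or vy_ir * vy_vis <= 0:
        return False

    # 3. The point lies closer to the centre in the wide-angle infrared image
    #    than the corresponding point does in the visible image.
    return np.hypot(vx_ir, vy_ir) < np.hypot(vx_vis, vy_vis)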
2) Rejecting outliers by similar-triangle matching
In the present invention the pairs remaining after screening are further pruned with the similar-triangle matching principle. Experiments show that after the geometric constraint screening the proportion of accurate pairs has increased, but many-to-one matches and crossing matches still exist. Normally the relative positions of correct matching pairs in the reference image and in the image to be registered are fixed: the triangle formed by any three correct match points in the reference image is (approximately) similar to the triangle formed by the corresponding points in the image to be registered. Based on this property, a new similar-triangle matching principle is proposed here to reject mismatched pairs.
Let P and Q be the matching point sets remaining after the geometric constraint screening. The triangle formed by any three points in the infrared image is ΔP_iP_jP_k (i < j < k, P_i, P_j, P_k ∈ P), and the corresponding triangle in the visible-light image is ΔQ_iQ_jQ_k (i < j < k, Q_i, Q_j, Q_k ∈ Q).
Since the three pairs of corresponding sides of similar triangles are proportional, the ratios between the corresponding-side ratios of adjacent sides, denoted dd_1 and dd_2 in formula (9), are disturbed only by image noise and feature-point error and should be approximately equal to 1. The judgment is therefore that |dd_1 - 1| and |dd_2 - 1| must both lie below a threshold. Even when dd_1 and dd_2 satisfy this condition, cases occur in which dd_1 and dd_2 are respectively greater and less than 1, i.e. the deviations from 1 stay within the threshold although the ordering of the three side lengths of the two triangles does not correspond. The side lengths of the two triangles are therefore additionally sorted, and the triangles are retained as similar only when the orderings agree.
After this processing, crossing match lines may still remain: the ordering of the three side lengths corresponds, but the positional relationship is flipped. To reject such cases, an additional condition is introduced: the unit vector of the vector formed by two vertices of a triangle must be approximately equal to the unit vector of the vector formed by the two corresponding vertices of the other triangle, and the vectors formed by the other two groups of vertices must satisfy the same relation. This prevents flipped configurations from being accepted.
Starting the similar-triangle judgment from the best-matched pair (the one with the smallest Euclidean distance), verifying whether every three adjacent pairs satisfy the similar-triangle condition and, if so, taking two of them as reference points for judging the rest, assumes that the best-matched pair is correct; this carries a certain risk, because the best-matched pair cannot be guaranteed to be correct.

To guarantee the reliability of the retained correct pairs, a method of traversing all triangles and maintaining an accumulator is therefore used: for the point sets P and Q with n points, every choice of 3 matching pairs forms a pair of triangles, and an accumulator is attached to every matching pair. Whenever a pair of triangles satisfies the similar-triangle condition above, the accumulators of its vertices are increased by 1. After all triangles have been traversed, the accumulator value of each point is examined: the larger the value, the more triangles containing that pair satisfy the similar-triangle condition and the more reliable the pair is. The accumulator value is then compared with a preset threshold T2; pairs whose value exceeds T2 are retained, and the others are rejected (a sketch of this voting scheme follows below).
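A sketch of the accumulator-based similar-triangle voting, assuming the matched points are given as (n, 2) arrays. The ratio tolerance and the default vote threshold are illustrative assumptions; the side-length sorting stands in for the ordering check described above, and the unit-vector flip check is omitted for brevity.

import numpy as np
from itertools import combinations

def triangle_vote(P, Q, ratio_tol=0.15, vote_threshold=None):
    """Keep matching pairs whose triangles are similar in both images often enough."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    n = len(P)
    votes = np.zeros(n, dtype=int)

    def sides(a, b, c):
        pts = np.array([a, b, c])
        return np.sort([np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)])

    for i, j, k in combinations(range(n), 3):
        sp = sides(P[i], P[j], P[k])                 # sorted side lengths in image 1
        sq = sides(Q[i], Q[j], Q[k])                 # sorted side lengths in image 2
        if np.any(sq < 1e-9):
            continue                                 # skip degenerate triangles
        ratios = sp / sq                             # equal for similar triangles
        if np.all(np.abs(ratios / ratios.mean() - 1.0) < ratio_tol):
            votes[[i, j, k]] += 1                    # each vertex pair gets a vote

    if vote_threshold is None:
        vote_threshold = max(1, int(0.3 * votes.max()))   # assumed relative threshold T2
    keep = votes >= vote_threshold
    return P[keep], Q[keep]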
3) Fine matching with the RANSAC algorithm

After the geometric constraint screening and the rejection based on the structural-similarity principle, the remaining matching pairs are essentially correct, but isolated wrong pairs cannot be excluded. The RANSAC algorithm can select suitable inlier pairs within a prescribed accuracy range and tolerates a large proportion of outliers, which makes it a common choice for robust estimation problems. To increase the accuracy and reliability of the correct matching pairs, the invention therefore uses the RANSAC algorithm for the final fine matching (a sketch follows below).
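The final fine-matching step maps naturally onto OpenCV's RANSAC-based homography estimation; the sketch below is one way to do it, with an assumed reprojection threshold.

import cv2
import numpy as np

def ransac_refine(pts_ir, pts_vis, reproj_thresh=3.0):
    """Keep only the inliers of a RANSAC-estimated homography.

    pts_ir, pts_vis: (n, 2) float arrays of the pairs that survived the previous
    two stages; reproj_thresh is in pixels and is an assumed value.
    """
    if len(pts_ir) < 4:
        return pts_ir, pts_vis                        # too few pairs to run RANSAC
    H, inliers = cv2.findHomography(pts_vis, pts_ir, cv2.RANSAC, reproj_thresh)
    if H is None:
        return pts_ir, pts_vis
    inliers = inliers.ravel().astype(bool)
    return pts_ir[inliers], pts_vis[inliers]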
1.4. Transformation parameter solving based on multi-frame feature matching pairs
After the mismatched pairs have been rejected, the final matching pairs take part in the calculation of the geometric transformation.
The number of fine matching pairs that survive the above screening in a single frame is small. When it is too small, the geometric transformation cannot be computed; even when it is sufficient, an uneven distribution of the pairs can bias the solved transformation. The invention therefore collects the matching pairs of several frames of the image sequence in which the doubtful foreign matter target lies at different positions into one set and then solves the geometric transformation. Because each frame finally yields at least 3 matching pairs after the rejection stage while the geometric model requires at least 4 pairs, and in order to improve the accuracy of the model, the feature matching pairs of multiple frames with the doubtful foreign matter target at different positions participate in the calculation, as developed in detail below.
The registration model between the infrared and visible-light images of this project is the more general perspective projection model; the spatial coordinate transformation can be written in the matrix form of formula (12):

[x', y', 1]^T ∝ M · [x, y, 1]^T,  M = [ m_0 m_1 m_2 ; m_3 m_4 m_5 ; m_6 m_7 1 ]    (12)

In formula (12), (x, y) and (x', y') are the coordinates of corresponding points in the two images and M is the parameter matrix; the role of each component is described in Table 2. The 8 parameters of M determine the transformation between the coordinates of the two images, so 4 point pairs are sufficient to determine them.
The invention selects several frames of the image sequence in which the doubtful foreign matter target lies at different positions, chooses feature point pairs pts from each of them, and forms the accumulated feature point-pair set Tpts, as shown in formula (13):

Tpts: {pts(1), ..., pts(i), ..., pts(n)}    (13)

where n is the number of selected frames, pts(i) is the set of accurate matching pairs of the i-th selected frame (chosen from the 1st to the n-th frame), and Tpts is the union of the feature point pairs of all selected frames. The transformation model parameters are then solved by least squares.
From then on, the transformation matrix obtained from the feature matching pairs of the 1st to the n-th frame can be applied directly, without repeating the feature point detection and matching process for every frame, which saves a certain amount of computation. After the geometric transformation parameters have been obtained, the visible-light image to be registered is resampled by bilinear interpolation, completing the infrared/visible registration (a least-squares solving sketch follows below).
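A sketch of the least-squares solution of the 8 perspective parameters from the accumulated multi-frame point pairs, followed by warping with bilinear interpolation; the variable names for the accumulated sets are hypothetical.

import cv2
import numpy as np

def solve_homography_lstsq(src, dst):
    """Least-squares solution of formula (12) with m33 fixed to 1.

    src: (x, y) points in the visible image, dst: corresponding (u, v) points in
    the infrared image; at least 4 pairs are needed, more pairs over-determine
    the system and are resolved in the least-squares sense.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(m, 1.0).reshape(3, 3)

# Usage sketch (accumulated_vis / accumulated_ir are hypothetical names for the
# multi-frame point-pair set Tpts): solve once, then warp every visible frame
# with bilinear interpolation (cv2.warpPerspective's INTER_LINEAR flag).
# H = solve_homography_lstsq(accumulated_vis, accumulated_ir)
# registered = cv2.warpPerspective(vis_img, H, (ir_img.shape[1], ir_img.shape[0]),
#                                  flags=cv2.INTER_LINEAR)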
1.5. Experimental results
The images to be registered are infrared and visible-light images of the railway scene with a size of 576x960. Experimental conditions: Windows 7, MATLAB 2012.
Fig. 8(a) shows the SURF feature points extracted from the infrared negative and from the visible-light image, 553 and 2766 respectively, marked with "+"; Fig. 8(b) shows the 72 matching pairs obtained by the initial SURF matching; Fig. 8(c) shows the 12 pairs remaining after the geometric constraint judgment; Fig. 8(d) shows the result of rejecting mismatched pairs based on the structural-similarity principle, with 8 pairs remaining; Fig. 8(e) shows the 3 pairs obtained by RANSAC fine matching. This process shows that, by screening and rejecting matching pairs step by step, the proportion of accurate pairs retained by the proposed algorithm keeps increasing and the final feature point matches are correct, which demonstrates the validity of the algorithm as well as the accuracy of the finally retained pairs and the reliability of the result.
The proposed registration algorithm is compared with the traditional SIFT and SURF registration algorithms; the comparison results are shown in Table 3. Although traditional SIFT and SURF are highly applicable to mono-modal registration, their limitations appear in multi-modal registration. The present invention improves on the basis of SURF and is feasible for the registration of infrared and visible-light images.
2. Image fusion algorithm based on the Contourlet transform
The common intensity-weighted-average fusion method is in fact a smoothing of the pixels; while reducing image noise, it tends to blur edges and contours to some extent, and when the grey-level difference between the images to be fused is large, obvious stitching traces appear, which is unfavourable for visual recognition and the subsequent target recognition process. Therefore, on the basis of the accurate registration above, a multi-resolution fusion method is used. The Contourlet-transform fusion not only has good multi-scale and time-frequency localization properties but is also multi-directional, which effectively reduces the influence of registration errors on the fusion performance. The invention uses the Contourlet-transform fusion method to improve the fusion of infrared and visible-light images of the railway scene, bringing the result closer to what the human eye observes.
2.1. Contourlet-transform image fusion method
The Contourlet transform is a multi-directional, multi-scale computational framework for discrete images in which multi-scale analysis and directional analysis are carried out separately. The image is first decomposed into multiple resolutions by the Laplacian pyramid transform to "capture" point singularities; directional filtering is then applied to the high-frequency part of each pyramid level, and the anisotropic filter bank synthesizes the singular points distributed along the same direction into a single coefficient.
The image fusion framework based on the Contourlet transform is shown in Fig. 9; the specific steps are as follows (a structural skeleton, with a stand-in multiscale decomposition, is sketched after the list):
1) The multi-scale, multi-directional Contourlet transform is applied to the infrared image and to the registered visible-light image. In the Contourlet transform the image is first decomposed into multiple resolutions by the LP transform to capture the singular points; the high-frequency signal of each scale of the LP decomposition is then decomposed directionally by the DFB, which merges the singular points distributed along the same direction into one coefficient. After the Contourlet transform, the distribution of the coefficients depends on the parameter nlevels given for the decomposition, which determines the number of direction vectors.
2) The fusion rules are determined by analysing the Contourlet coefficients. The fusion rules concern the treatment of the low-frequency and high-frequency sub-bands of the images after the Contourlet transform. Taking the properties of the infrared and visible-light images and the running time of the algorithm into account, a weighted-average rule is designed for the low-frequency sub-band and a local-energy-based rule for the high-frequency sub-bands.
3) The inverse transform is applied to the fused Contourlet coefficients to obtain the fused image.
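Since a ready-made Contourlet implementation is not part of the common Python stack, the skeleton below substitutes a plain Laplacian pyramid for the LP+DFB decomposition to illustrate the per-band fusion flow; the directional filter bank is omitted and a simple max-absolute high-frequency rule stands in for the local-energy rule detailed in section 2.2, so this is a structural sketch only, for grayscale images.

import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Stand-in multiscale decomposition for the LP stage of the Contourlet transform."""
    img = img.astype(np.float32)
    gauss = [img]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    return lap + [gauss[-1]]             # high-frequency bands + low-frequency residual

def reconstruct(pyr):
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=band.shape[1::-1]) + band
    return np.clip(img, 0, 255).astype(np.uint8)

def fuse(ir_img, vis_registered, levels=3):
    """Decompose both images, average the low-frequency band, take the
    larger-magnitude coefficient in the high-frequency bands."""
    p_ir = laplacian_pyramid(ir_img, levels)
    p_vis = laplacian_pyramid(vis_registered, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)       # high-frequency bands
             for a, b in zip(p_ir[:-1], p_vis[:-1])]
    fused.append(0.5 * (p_ir[-1] + p_vis[-1]))             # weighted-average low band
    return reconstruct(fused)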
2.2. Fusion rule based on local energy
The main purpose of infrared/visible fusion is to combine the highlighted targets of the infrared image with the scene clarity of the visible-light image. After the decomposition, the low-frequency coefficients carry most of the energy of the image and reflect its essential characteristics. Because the infrared image here is the large-scene image while the registered visible-light image occupies only part of it, the infrared image dominates the low-frequency part and the overall large-scene information of the fused image is already apparent, so a simple weighted-average rule is used for the low-frequency sub-band.
The high-frequency part of the decomposition mainly carries the detail information of the image, corresponding to important features such as edges and texture, which are particularly important for conveying target information. A region-energy fusion rule is therefore adopted for the high-frequency sub-bands: not only the corresponding pixel of the fused image is considered, but also the local neighbourhood of the pixel taking part in the fusion. The rule is as follows:
1) Taking two images A and B as an example, the local region energies E_l,A and E_l,B centred at (n, m) on the corresponding decomposition level l are computed as in formula (15):

E_l(n, m) = Σ_{n'∈J, m'∈K} w'(n', m') · [LP_l(n + n', m + m')]^2    (15)

In the formula, E_l(n, m) is the local region energy centred at (n, m) on level l of the Laplacian pyramid; LP_l is the l-th level of the Laplacian pyramid; w'(n', m') is the weight coefficient corresponding to LP_l; J and K define the size of the local region over which n' and m' vary.

2) The matching degree M_AB of the corresponding local regions of the two images is then computed as in formula (16):

M_l,AB(n, m) = 2 · Σ_{n'∈J, m'∈K} w'(n', m') · LP_l,A(n + n', m + m') · LP_l,B(n + n', m + m') / (E_l,A(n, m) + E_l,B(n, m))    (16)

where E_l,A and E_l,B are computed by formula (15).
3) Different fusion modes are finally adopted according to the matching degree.

When M_l,AB(n, m) < α (α is usually taken as 0.85), the correlation between the source-image coefficients is low, so it is more reasonable to take the coefficient with the larger region energy as the fused coefficient:

LP_l,F(n, m) = LP_l,A(n, m) if E_l,A(n, m) ≥ E_l,B(n, m), otherwise LP_l,B(n, m)    (17)

When M_l,AB(n, m) ≥ α, the correlation between the coefficients is high and a weighted average is more reasonable:

LP_l,F(n, m) = W_l,max(n, m)·LP_l,A(n, m) + W_l,min(n, m)·LP_l,B(n, m) if E_l,A(n, m) ≥ E_l,B(n, m), with the weights exchanged otherwise    (18)

where

W_l,min(n, m) = 0.5 - 0.5·(1 - M_l,AB(n, m))/(1 - α),
W_l,max(n, m) = 1 - W_l,min(n, m)    (19)
Because the region-energy fusion rule takes the correlation between neighbouring pixels into account, it reduces the sensitivity to edges, effectively reduces wrong choices of fusion pixels and noticeably improves the robustness of the fusion algorithm, thereby improving the fusion result (a sketch of this rule is given below).
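A sketch of the region-energy rule of formulas (15)-(19) for one sub-band, using a uniform 3x3 window as the weight map w'; the explicit W_l,min expression is the standard match-measure weighting and is an assumption, as is the use of scipy for the windowed sums.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highband_local_energy(coef_a, coef_b, alpha=0.85, win=3):
    """Region-energy fusion rule for one high-frequency sub-band.

    coef_a, coef_b: sub-band coefficients of the two source images after the
    multiscale transform (same shape, float arrays).
    """
    e_a = uniform_filter(coef_a ** 2, size=win)            # local region energy of A, formula (15)
    e_b = uniform_filter(coef_b ** 2, size=win)            # local region energy of B
    cross = uniform_filter(coef_a * coef_b, size=win)
    match = 2.0 * cross / (e_a + e_b + 1e-12)              # matching degree, formula (16)

    w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - alpha)      # assumed standard weight formula
    w_max = 1.0 - w_min                                    # formula (19)

    select = np.where(e_a >= e_b, coef_a, coef_b)          # larger-energy coefficient, formula (17)
    weighted = np.where(e_a >= e_b,                        # weighted average, formula (18)
                        w_max * coef_a + w_min * coef_b,
                        w_min * coef_a + w_max * coef_b)
    return np.where(match < alpha, select, weighted)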
2.3. Analysis of fusion results
The images taking part in the fusion are the infrared source image and the registered visible-light image, both of size 576x960; the LP decomposition used in the Contourlet transform has 3 levels and the DFB direction numbers are 8-4-4.
3.3.1 Comparison and analysis of single-frame and multi-frame fusion results
If every frame of the image sequence were registered and fused, the computational load would increase greatly; moreover, a single frame may not provide enough feature matching pairs to compute the geometric transformation matrix, and the resulting interruption of the algorithm would affect the subsequent transformation and fusion. Solving the registration model from the feature matching pairs of moving-target images at different positions in several frames fundamentally guarantees that the algorithm keeps running. Because moving-target images at different positions are used, the resulting geometric transformation can also be applied to the whole corresponding image sequence, which avoids a large amount of repeated computation and ensures the applicability of the matrix.
3.3.2 Analysis of the fusion results for night-time images
Under the poor visibility conditions at night, the advantage of fusing infrared and visible-light video images is even more prominent. Compared with the infrared image alone, the fused image contains more texture and colour detail, which makes the scene and the target easier to understand; compared with the visible-light image alone, in which the target becomes dim at night and almost merges with the background, the target contour becomes obvious in the fused image and temperature information can also be read from it. The fused image reflects the real scene and enriches the target information.
3.3.3 Quality comparison of different fusion algorithms
To prove that the proposed fusion algorithm achieves good fusion quality, it is compared with the traditional intensity-weighted-average algorithm and with a wavelet-transform-based fusion algorithm, where the fusion rule of the wavelet-transform algorithm is the same as the one used here.
The fusion results are evaluated with four indices: standard deviation, information entropy, cross entropy and clarity. The results are shown in Table 4:
Table 4. Comparison of image fusion evaluation results

Fusion quality evaluation      Standard deviation   Information entropy   Cross entropy   Clarity
Intensity-weighted average           20.36                 5.43               1.57          1.44
Wavelet-transform fusion             20.53                 5.50               1.61          1.82
Proposed algorithm                   23.84                 5.66               1.74          2.70
As Table 4 shows, the proposed algorithm is superior in every evaluation index; it highlights the temperature characteristics of the infrared image while preserving the detail information of the visible-light image well.
The invention therefore proposes an automatic registration and fusion algorithm for infrared and visible-light video image sequences of the railway scene. Aiming at the low accurate-alarm rate of foreign matter detection caused by poor night-time image quality, it exploits the complementarity and redundancy of infrared and visible-light image information and proposes an improved SURF image registration algorithm together with a Contourlet-transform fusion algorithm based on local energy. During registration, mismatched pairs are rejected with the geometric constraint judgment and the similar-triangle matching principle, fine matching is done with the RANSAC algorithm, and the idea of a multi-frame matching point-pair set improves the applicability of the transformation model parameters while avoiding a large amount of repeated computation. Compared with the traditional SIFT and SURF registration algorithms, the proposed algorithm achieves higher accuracy. Compared with the two classic fusion algorithms, the fused image obtained with the Contourlet-transform fusion method improves the standard deviation, information entropy, cross entropy and clarity by at least 16.12%, 2.91%, 8.07% and 48.35% respectively, which is more favourable for visual observation and the subsequent target recognition, opens a new road to improving the accurate-alarm rate of foreign matter detection in the railway scene, and is of great significance for developing systems that guarantee the safety of railway operation.
Finally, it should be noted that the embodiments described above are only specific embodiments of the invention, intended to illustrate its technical solution rather than to limit it, and the scope of protection of the invention is not limited to them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed by the invention, modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or replace some of the technical features with equivalents; such modifications, variations or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the invention and shall all be covered by the scope of protection of the invention. Therefore, the scope of protection of the invention shall be subject to the scope of protection of the claims.

Claims (9)

1. A foreign matter intrusion detection method, characterized by comprising the following steps:
acquiring an infrared image of a monitoring range with an infrared camera and transmitting it to an image acquisition and processing system;
the image acquisition and processing system determining, according to the infrared image, whether a doubtful foreign matter appears in the monitoring range of the infrared camera;
when a doubtful foreign matter appears, focusing a laser light source and a visible-light camera onto the doubtful foreign matter in the monitoring range and using the laser light source to provide supplementary laser illumination of the doubtful foreign matter;
acquiring a visible-light image of the doubtful foreign matter and transmitting it to the image acquisition and processing system;
the image acquisition and processing system registering and fusing the doubtful foreign matter region image of the visible-light image with the infrared image;
providing doubtful foreign matter information from the fused image, performing feature extraction and classification of the doubtful foreign matter with the doubtful foreign matter information, and realizing automatic identification of the doubtful foreign matter and alarming;
the step of image registration comprising:
using the local invariant features of the infrared image and the visible-light image to register the infrared image with the visible-light image, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise interference,
the step of image registration further comprising:
1. SURF-based feature point extraction and coarse matching: detecting and describing feature points of the infrared image and the visible-light image with SURF, and then performing initial feature point-pair matching with the Euclidean distance based on the ratio of the nearest neighbour to the second-nearest neighbour;
2. rejection of mismatched point pairs: rejecting mismatched pairs with a three-stage progressive method, in which related geometric constraints of the images derived from the camera mounting arrangement are first used for screening, the similar-triangle matching principle is then used for further rejection, and fine matching is finally achieved with RANSAC;
3. geometric transformation model solving based on matching point pairs accumulated over a multi-frame image sequence: because the number of correct matching pairs of a single infrared/visible frame is small, the transformation model parameters cannot be solved with fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of the feature points causes the solved geometric transformation model to deviate; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares solves these problems;
4. applying the solved geometric transformation model to the visible-light image and then performing bilinear interpolation, completing the registration of the infrared and visible-light images.
2. The foreign matter intrusion detection method according to claim 1, characterized in that
the step of focusing the laser light source and the visible-light camera onto the doubtful foreign matter in the monitoring range comprises:
a) obtaining the image point of the doubtful foreign matter;
b) calculating the azimuth of the doubtful foreign matter in real space under the world coordinate system from the fixed focal length and mounting angle of the infrared camera, the relationship between the camera coordinate system of the infrared camera and the world coordinate system, and the pixel position of the doubtful foreign matter in the image obtained by the infrared camera;
c) determining the rotation angle and pitch angle of the laser light source and the visible-light camera from the calculated azimuth of the doubtful foreign matter in real space under the world coordinate system and from the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera;
d) the laser light source and the visible-light camera performing rotation and pitching motions according to the rotation angle and pitch angle, so that the laser light source and the visible-light camera focus onto the doubtful foreign matter.
3. The foreign matter intrusion detection method according to claim 1, characterized in that the step of performing feature extraction and classification of the doubtful foreign matter with the doubtful foreign matter information comprises:
providing contour, texture, temperature and colour information of the doubtful foreign matter from the image, extracting features of the doubtful foreign matter based on the contour, texture, temperature and colour information, and classifying the features.
4. The foreign matter intrusion detection method according to claim 1, characterized in that the step of determining whether a doubtful foreign matter appears in the monitoring range of the infrared camera comprises:
a) background extraction based on the multi-frame frame-difference method:
extracting a background from the infrared image with the image acquisition and processing system by accumulating multi-frame frame-difference images, the accumulation comprising:
1. performing frame-by-frame differencing on the video and comparing the difference values with a fixed threshold, pixel positions whose difference value is below the threshold being the background region and pixel positions whose difference value is above the threshold being the foreground target region;
2. marking the state of each pixel of the input image according to the obtained background and foreground target regions, pixels in the foreground target region being judged to be foreground pixels that do not take part in the background calculation and pixels in the background region being judged to be background pixels that take part in it;
3. taking 100 consecutive image frames, distinguishing background and foreground pixels in each image with the preceding method, introducing an accumulator with initial value 0, and counting every pixel of the same position over all frames, the accumulator value staying unchanged when the pixel is judged to be a foreground pixel and being increased by 1 when it is judged to be a background pixel; finally dividing the accumulated grey-level sum by the corresponding accumulator value to obtain the current initial background, the initial background being the extracted background;
b) foreign matter extraction based on background difference:
extracting the doubtful foreign matter from every frame of the video sequence by background subtraction.
5. The foreign matter intrusion detection method according to claim 4, characterized in that the background subtraction comprises:
letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image being
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
and binarizing the background difference image fd(x, y, t) with a suitable threshold T to obtain the binary foreground map of the doubtful foreign matter, i.e. the doubtful foreign matter target region in the image.
6. The foreign matter intrusion detection method according to claim 1, characterized in that the fusion of the infrared image and the transformed visible-light image is realized with a Contourlet-transform image fusion method based on local energy, the step of image fusion comprising:
1. applying the multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image to obtain the transformed high-frequency and low-frequency coefficients;
2. determining the fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, applying a weighted-average rule to the low-frequency coefficients and a local-energy-based fusion rule to the high-frequency coefficients;
3. applying the inverse transform to the fused Contourlet coefficients to obtain the fused image.
7. A foreign matter intrusion detection device, characterized by comprising:
an infrared camera, an image acquisition and processing system, a laser light source and a visible-light camera, the infrared camera and the visible-light camera being installed close to each other, either side by side or stacked, so that their optical centres are as close as possible, and the waveband of the laser light source being included in the sensitive band of the visible-light camera but not in the sensitive band of the infrared camera,
the infrared camera being configured to acquire an infrared image of a monitoring range;
the image acquisition and processing system being connected to the infrared camera and configured to receive the infrared image from the infrared camera and to determine, according to the infrared image, whether a doubtful foreign matter appears in the monitoring range of the infrared camera;
the laser light source being configured to focus, when the doubtful foreign matter appears, onto the doubtful foreign matter in the monitoring range and to provide supplementary laser illumination of the doubtful foreign matter;
the visible-light camera being arranged to move in linkage with the laser light source, being connected to the image acquisition and processing system, and being configured to acquire a visible-light image of the doubtful foreign matter and to transmit the visible-light image to the image acquisition and processing system;
wherein the image acquisition and processing system is configured to register and fuse the visible-light image with the doubtful foreign matter region image of the infrared image, to provide doubtful foreign matter information from the fused image, to perform feature extraction and classification of the doubtful foreign matter with the doubtful foreign matter information, and to realize automatic identification of the doubtful foreign matter and alarming;
the image acquisition and processing system being configured to register the visible-light image with the doubtful foreign matter region image of the infrared image with the following steps:
using the local invariant features of the infrared image and the visible-light image to register the infrared image with the visible-light image, a local invariant feature being a feature of an image that remains stable under geometric change, illumination change and noise interference,
the step of image registration comprising:
1. SURF-based feature point extraction and coarse matching: detecting and describing feature points of the infrared image and the visible-light image with SURF, and then performing initial feature point-pair matching with the Euclidean distance based on the ratio of the nearest neighbour to the second-nearest neighbour;
2. rejection of mismatched point pairs: rejecting mismatched pairs with a three-stage progressive method, in which related geometric constraints of the images derived from the camera mounting arrangement are first used for screening, the similar-triangle matching principle is then used for further rejection, and fine matching is finally achieved with RANSAC;
3. geometric transformation model solving based on matching point pairs accumulated over a multi-frame image sequence: because the number of correct matching pairs of a single infrared/visible frame is small, the transformation model parameters cannot be solved with fewer than 4 pairs, and even when the number of pairs is sufficient, an uneven distribution of the feature points causes the solved geometric transformation model to deviate; accumulating enough correct matching pairs over a multi-frame image sequence and solving the geometric transformation model by least squares solves these problems;
4. applying the solved geometric transformation model to the visible-light image and then performing bilinear interpolation, completing the registration of the infrared and visible-light images;
the image acquisition and processing system being configured to realize the fusion of the infrared image and the transformed visible-light image with a Contourlet-transform image fusion method based on local energy, the step of image fusion comprising:
1. applying the multi-scale, multi-directional Contourlet transform to the infrared image and to the registered visible-light image to obtain the transformed high-frequency and low-frequency coefficients;
2. determining the fusion rules by analysing the Contourlet coefficients: taking the properties of the infrared image and the algorithm running time into account, applying a weighted-average rule to the low-frequency coefficients and a local-energy-based fusion rule to the high-frequency coefficients;
3. applying the inverse transform to the fused Contourlet coefficients to obtain the fused image.
8. The foreign matter intrusion detection device according to claim 7, characterized in that
the image acquisition and processing system is configured to:
a) obtain the image point of the doubtful foreign matter;
b) calculate the azimuth of the doubtful foreign matter in real space under the world coordinate system;
c) determine the rotation angle and pitch angle of the laser light source and the visible-light camera from the calculated azimuth and from the relative position and relative attitude of the laser light source and the visible-light camera with respect to the infrared camera;
and the laser light source and the visible-light camera are configured to perform rotation and pitching motions according to the rotation angle and pitch angle so as to focus onto the doubtful foreign matter.
9. The foreign matter intrusion detection device according to claim 7, characterized in that
the image acquisition and processing system is configured to extract a background from the infrared image and to extract the doubtful foreign matter as follows:
a) background extraction based on the multi-frame frame-difference method:
extracting a background from the infrared image with the image acquisition and processing system by accumulating multi-frame frame-difference images, the accumulation comprising:
1. performing frame-by-frame differencing on the video and comparing the difference values with a fixed threshold, pixel positions whose difference value is below the threshold being the background region and pixel positions whose difference value is above the threshold being the foreground target region;
2. marking the state of each pixel of the input image according to the obtained background and foreground target regions, pixels in the foreground target region being judged to be foreground pixels that do not take part in the background calculation and pixels in the background region being judged to be background pixels that take part in it;
3. taking 100 consecutive image frames, distinguishing background and foreground pixels in each image with the preceding method, introducing an accumulator with initial value 0, and counting every pixel of the same position over all frames, the accumulator value staying unchanged when the pixel is judged to be a foreground pixel and being increased by 1 when it is judged to be a background pixel; finally dividing the accumulated grey-level sum by the corresponding accumulator value to obtain the current initial background, the initial background being the extracted background;
b) foreign matter extraction based on background difference:
extracting the doubtful foreign matter from every frame of the video sequence by background subtraction,
the background subtraction comprising:
letting the background image at time t be fb(x, y, t) and the current frame be fc(x, y, t), the background difference image being
fd(x, y, t) = fc(x, y, t) - fb(x, y, t)
and binarizing the background difference image fd(x, y, t) with a suitable threshold T to obtain the binary foreground map of the doubtful foreign matter, i.e. the doubtful foreign matter target region in the image.
CN205890910U (en) * 2016-06-29 2017-01-18 南京雅信科技集团有限公司 Limit detecting device is invaded with track foreign matter that infrared light combines to visible light

Also Published As

Publication number Publication date
CN107253485A (en) 2017-10-17

Similar Documents

Publication Publication Date Title
CN107253485B (en) Foreign matter invades detection method and foreign matter invades detection device
Kong et al. Detecting abandoned objects with a moving camera
CN106960179B (en) Rail line Environmental security intelligent monitoring method and device
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
Zhao et al. Road network extraction from airborne LiDAR data using scene context
CN115439424A (en) Intelligent detection method for aerial video image of unmanned aerial vehicle
Pollard et al. A volumetric approach to change detection in satellite images
CN109559324A (en) A kind of objective contour detection method in linear array images
Aschwanden et al. Comparison of five numerical codes for automated tracing of coronal loops
CN110084243A (en) It is a kind of based on the archives of two dimensional code and monocular camera identification and localization method
CN114973028B (en) Aerial video image real-time change detection method and system
CN108694349A (en) A kind of pantograph image extraction method and device based on line-scan digital camera
CN104182992B (en) Method for detecting small targets on the sea on the basis of panoramic vision
Chauvin et al. Cloud motion estimation using a sky imager
CN117036641A (en) Road scene three-dimensional reconstruction and defect detection method based on binocular vision
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN108510544A (en) A kind of striation localization method of feature based cluster
CN110134148A (en) A kind of transmission line of electricity helicopter make an inspection tour in tracking along transmission line of electricity
CN115690190B (en) Moving target detection and positioning method based on optical flow image and pinhole imaging
CN107730535A (en) A kind of cascaded infrared video tracing method of visible ray
CN115619623A (en) Parallel fisheye camera image splicing method based on moving least square transformation
CN115330832A (en) Computer vision-based transmission tower full-freedom displacement monitoring system and method
Tiefeng et al. Pseudo-color processing of gray images for human visual detection and recognition
CN107239754B (en) Automobile logo identification method based on sparse sampling intensity profile and gradient distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant