CN105718911A — Outdoor transformer capable of target identification

Info

Publication number: CN105718911A
Application number: CN201610049025.XA
Authority: CN (China)
Prior art keywords: image, target, submodule, feature, outdoor transformer
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张健敏
Current assignee: Individual
Original assignee: Individual
Application filed by Individual; priority to CN201610049025.XA
Publication of CN105718911A

Classifications

    • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213 — Pattern recognition: non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06V 10/20 — Image or video recognition or understanding: image preprocessing
    • G06V 20/52 — Scene-specific elements: surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention discloses an outdoor transformer capable of target identification. The outdoor transformer comprises an outdoor transformer body and a monitoring device mounted on it. The monitoring device comprises a preprocessing module, a detection and tracking module, and an identification output module. The preprocessing module comprises an image conversion submodule, an image filtering submodule and an image enhancement submodule; the detection and tracking module comprises a construction submodule, a loss discrimination submodule and an updating submodule. By applying video-image technology to the outdoor transformer, malicious damage can be monitored and recorded effectively, and the transformer further offers high real-time performance, accurate positioning, strong self-adaptation, complete preservation of image detail and high robustness.

Description

Outdoor transformer with a target recognition function
Technical field
The present invention relates to the field of outdoor transformers, and specifically to an outdoor transformer with a target recognition function.
Background art
A transformer is a device that changes an alternating voltage by the principle of electromagnetic induction; its main components are the primary winding, the secondary winding and the iron core, and its main functions include voltage transformation, current transformation, impedance transformation, isolation and voltage stabilisation (magnetic-saturation transformers). By installation site, transformers are divided into outdoor and indoor transformers; because outdoor transformers are installed in open outdoor environments, their protective measures are relatively weak compared with indoor transformers.
Moreover, the outdoor transformer is an important and expensive device that bears on power-supply safety, so its security is particularly important: malicious sabotage must be prevented and monitored.
Summary of the invention
In view of the above problems, the present invention provides an outdoor transformer with a target recognition function.
The purpose of the present invention is achieved by the following technical solution:
An outdoor transformer with a target recognition function comprises the outdoor transformer itself and a monitoring device mounted on it. The monitoring device performs video-image monitoring of activity near the outdoor transformer and comprises a preprocessing module, a detection and tracking module, and an identification output module.
(1) The preprocessing module preprocesses the received images and comprises an image conversion submodule, an image filtering submodule and an image enhancement submodule:
The image conversion submodule converts a colour image into a grayscale image:
$$H(x,y)=\frac{\max\big(R(x,y),G(x,y),B(x,y)\big)+\min\big(R(x,y),G(x,y),B(x,y)\big)}{2+2\big(\max\big(R(x,y),G(x,y),B(x,y)\big)-\min\big(R(x,y),G(x,y),B(x,y)\big)\big)}$$

where R(x,y), G(x,y) and B(x,y) are the red, green and blue intensity values of the pixel at (x,y), H(x,y) is the grayscale value at coordinate (x,y), and the image size is m × n.
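As an illustration, the conversion can be sketched in Python as below. The fraction layout of the original formula is read here as (max + min)/(2 + 2·(max − min)) with channels normalised to [0, 1]; the function name and the NumPy implementation are assumptions, not part of the patent.

```python
import numpy as np

def to_gray(rgb):
    """Colour-to-gray conversion per the reconstructed patent formula.

    Assumes an 8-bit H x W x 3 RGB array; channels are scaled to [0, 1]
    so the denominator 2 + 2*(max - min) stays in [2, 4].
    """
    c = rgb.astype(np.float64) / 255.0
    cmax = c.max(axis=2)                      # max(R, G, B) per pixel
    cmin = c.min(axis=2)                      # min(R, G, B) per pixel
    h = (cmax + cmin) / (2.0 + 2.0 * (cmax - cmin))
    return np.clip(h * 255.0, 0, 255).astype(np.uint8)
```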
The image filtering submodule filters the grayscale image:
First-stage denoising is performed with a Wiener filter, after which the SVLM image, denoted M_svlm(x,y), is defined by

$$M_{\mathrm{svlm}}(x,y)=a_1 J_1(x,y)+a_2 J_2(x,y)+a_3 J_3(x,y)+a_4 J_4(x,y)$$

where a_1, a_2, a_3, a_4 are variable weights for the filtered images J_i(x,y), i = 1, 2, 3, 4.
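A hedged sketch of this two-stage construction follows: SciPy's Wiener filter stands in for the unspecified first stage, and the four J_i are taken as Gaussian-smoothed copies at increasing scales, since the patent does not state how the four filtered images differ; the weights and scales are illustrative.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter

def svlm_image(gray, weights=(0.25, 0.25, 0.25, 0.25), scales=(1, 2, 4, 8)):
    """M_svlm(x, y) = a1*J1 + a2*J2 + a3*J3 + a4*J4 over filtered images J_i."""
    j0 = wiener(gray.astype(np.float64))           # first-stage denoising
    js = [gaussian_filter(j0, s) for s in scales]  # candidate J_1..J_4
    return sum(a * j for a, j in zip(weights, js))
```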
The image enhancement submodule enhances the filtered image:

When $|128-m|>\frac{|\omega-50|}{3}$,

$$L(x,y)=255\times\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}$$

where L(x,y) is the enhanced gray value, ψ(x,y) is a gamma-correction coefficient that incorporates local information, and α is a variable parameter ranging from 0 to 1.

When $|128-m|\le\frac{|\omega-50|}{3}$ and ω > 50,

$$L(x,y)=255\times\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}\times\left(1-\frac{\omega-50}{\omega^{2}}\right)$$

where ψ(x,y) = ψ_α(M_svlm(x,y)); m_H is the mean gray value of all pixels above 128, m_L is the mean gray value of all pixels below 128, and in this case m = min(m_H, m_L). Once α is known, 256 ψ correction coefficients are precomputed as a look-up table indexed by i; using the gray value of M_svlm(x,y) as the index, the gamma-correction coefficient ψ(x,y) = ψ_α(M_svlm(x,y)) of every pixel is obtained quickly. ω is the template correction factor.
(2) The detection and tracking module comprises a construction submodule, a loss discrimination submodule and an update submodule:
The construction submodule builds the visual dictionary:
The position and scale of the tracked target are obtained in the initial frame, positive and negative samples are selected around it to train the tracker, and the tracking results form the training set X = {x_1, x_2, …, x_N}^T. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm divides these features into K clusters, whose centres constitute the feature words; the total number of extractable features is F_N = Σ_t S_t, with K ≪ F_N. Once the visual dictionary is built, every training image is expressed as a bag of features that records how often each feature word of the dictionary occurs, written as a histogram h(x_t). h(x_t) is obtained as follows: each feature of training image X_t is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalised, yielding the feature histogram h(x_t) of training image X_t.
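A sketch of the dictionary construction with OpenCV: SIFT descriptors from the tracked patches are pooled and clustered with k-means, and each patch becomes a normalised word-frequency histogram. K = 64 and the use of cv2.kmeans are assumptions; the patent fixes neither the cluster count nor the clustering algorithm.

```python
import numpy as np
import cv2

def build_dictionary(target_patches, k=64):
    """Cluster 128-D SIFT features into K visual words; encode each patch
    as a normalised histogram h(x_t) of nearest-word frequencies."""
    sift = cv2.SIFT_create()
    per_patch = []
    for patch in target_patches:
        _, des = sift.detectAndCompute(patch, None)
        if des is not None:
            per_patch.append(des.astype(np.float32))
    data = np.vstack(per_patch)
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, _, words = cv2.kmeans(data, k, None, crit, 3, cv2.KMEANS_PP_CENTERS)

    def encode(des):
        dist = np.linalg.norm(des[:, None, :] - words[None, :, :], axis=2)
        hist = np.bincount(dist.argmin(axis=1), minlength=k).astype(np.float64)
        return hist / max(hist.sum(), 1.0)      # normalised feature histogram

    return words, [encode(des) for des in per_patch]
```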
The loss discrimination submodule determines whether the target has been lost:
When a new frame arrives, Z < K of the K histogram bins are selected at random, with Z = 4, to form a new sub-histogram h^(z)(x_t) of size Z; there are up to N_s such sub-histograms. The similarity Φ_{t,z} between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target area in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s, and the overall similarity is Φ_t = 1 − ∏_z (1 − Φ_{t,z}). The similarity between the candidate target region and the target is Φ = max_t{Φ_t}, and the track-loss decision is

$$u=\operatorname{sign}(\Phi)=\begin{cases}1, & \Phi\ge g_s\\ 0, & \Phi<g_s\end{cases}$$

where g_s is a manually set decision threshold. When u = 1 the target is tracked stably; when u = 0 the track is lost.
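The loss test might look like the following sketch. The per-sub-histogram similarity Φ_{t,z} is elided in the source, so histogram intersection is substituted here, and a fixed number of random bin subsets approximates the up-to-N_s possible sub-histograms.

```python
import numpy as np

def loss_flag(h_cand, train_hists, z=4, g_s=0.5, n_sub=50, seed=None):
    """u = 1 while Phi = max_t [1 - prod_z (1 - Phi_{t,z})] >= g_s, else 0."""
    rng = np.random.default_rng(seed)
    k = h_cand.shape[0]
    subsets = [rng.choice(k, size=z, replace=False) for _ in range(n_sub)]
    phi = 0.0
    for h_t in train_hists:                    # each training target area
        miss = 1.0
        for idx in subsets:                    # sub-histograms of size Z
            phi_tz = np.minimum(h_cand[idx], h_t[idx]).sum()  # intersection
            miss *= 1.0 - phi_tz
        phi = max(phi, 1.0 - miss)             # Phi_t, then Phi = max_t
    return 1 if phi >= g_s else 0              # 1: stable tracking, 0: lost
```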
When the track is lost, an affine transformation model is defined:

$$\begin{bmatrix}x_t\\ y_t\end{bmatrix}=\begin{bmatrix}s\cos(\mu_1\theta) & s\sin(\mu_1\theta)\\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta)\end{bmatrix}\begin{bmatrix}x_{t-1}\\ y_{t-1}\end{bmatrix}+\mu_2\begin{bmatrix}e\\ f\end{bmatrix}$$

where (x_t, y_t) and (x_{t−1}, y_{t−1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; s is the scale coefficient, θ is the rotation coefficient, and e and f are the translation coefficients. The temperature rotation correction coefficient μ_1 and the temperature translation correction coefficient μ_2 are both given by

$$\mu_1=\mu_2=\begin{cases}1-\dfrac{|T-T_0|}{1000\,T_0}, & T\ge T_0\\[4pt] 1+\dfrac{|T-T_0|}{1000\,T_0}, & T<T_0\end{cases}$$

and correct the image rotation and translation errors caused by ambient-temperature deviation; T_0 is a manually set standard temperature, taken as 20 °C, and T is the temperature measured in real time by a temperature sensor. The parameters of the affine transformation model are estimated with the RANSAC algorithm; finally, positive and negative samples are collected at the new scale s and rotation coefficient θ, and the classifier is updated.
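A sketch of this re-detection step: OpenCV's RANSAC-based partial-affine estimator stands in for the unspecified RANSAC procedure, and the temperature factors μ_1 = μ_2 from the formula above are then applied; all names are illustrative.

```python
import numpy as np
import cv2

def temperature_corrected_affine(pts_prev, pts_cur, temp, t0=20.0):
    """Estimate s, theta, (e, f) by RANSAC, then apply mu_1/mu_2 correction."""
    dev = abs(temp - t0) / (1000.0 * t0)
    mu = 1.0 - dev if temp >= t0 else 1.0 + dev       # mu_1 = mu_2 in the patent
    m, _ = cv2.estimateAffinePartial2D(pts_prev, pts_cur, method=cv2.RANSAC)
    s = float(np.hypot(m[0, 0], m[0, 1]))             # scale coefficient s
    theta = float(np.arctan2(m[1, 0], m[0, 0]))       # rotation coefficient
    e, f = m[0, 2], m[1, 2]                           # translation e, f
    c, sn = s * np.cos(mu * theta), s * np.sin(mu * theta)
    corrected = np.array([[c, sn, mu * e],
                          [-sn, c, mu * f]])          # patent's sign convention
    return corrected, s, theta
```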
The update submodule updates the visual dictionary:
After the target position has been obtained in each frame, all SIFT feature points that are consistent with the computed affine-transformation parameters are collected; after F = 3 frames a new feature-point set is obtained, S_{t−F} denoting the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving a new visual dictionary whose size remains unchanged. The forgetting factor φ expresses the proportion carried by the old dictionary: the smaller φ is, the more the new features contribute to the track-loss judgement.
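The update could be sketched as below. The weighted re-clustering formula is elided in the source, so the forgetting factor φ is approximated here by replicating the old words so that they carry roughly a φ share of the clustered data; φ = 0.18 is the value stated in embodiment 4, the only one given explicitly.

```python
import numpy as np
import cv2

def update_dictionary(old_words, new_feats, phi=0.18):
    """Re-cluster old words plus new SIFT points into K clusters (K fixed)."""
    k = len(old_words)
    reps = max(1, int(phi * len(new_feats) / k))     # per-word replication
    data = np.vstack([np.repeat(old_words, reps, axis=0),
                      new_feats]).astype(np.float32)
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, _, words = cv2.kmeans(data, k, None, crit, 3, cv2.KMEANS_PP_CENTERS)
    return words                                     # new dictionary, same size
```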
(3) The identification output module identifies the image and outputs the result: a tracking algorithm obtains the target region in the image sequence to be identified, the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to yield a similarity measure, the target class is determined, and the recognition result is output.
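Neither the subspace nor the distance is named in the source; the sketch below assumes a PCA subspace and nearest-neighbour Euclidean distance, with the similarity reported as 1/(1 + d).

```python
import numpy as np

def identify(target_vec, train_mat, train_labels, n_dims=16):
    """Project into the training-data subspace, measure distance, classify."""
    mean = train_mat.mean(axis=0)
    _, _, vt = np.linalg.svd(train_mat - mean, full_matrices=False)
    basis = vt[:n_dims]                              # assumed PCA subspace
    proj_train = (train_mat - mean) @ basis.T
    proj_tgt = (target_vec - mean) @ basis.T
    d = np.linalg.norm(proj_train - proj_tgt, axis=1)
    return train_labels[int(d.argmin())], 1.0 / (1.0 + float(d.min()))
```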
Preferably, since the image still contains residual noise after the first-stage Wiener filtering, the following second-stage filter is applied:

$$J(x,y)=\sum_{i=-m/2}^{m/2}\ \sum_{j=-n/2}^{n/2} H(x,y)\,P_g(x+i,\,y+j)$$

where J(x,y) is the filtered image and P_g is a function of scale m × n with P_g(x+i, y+j) = q · exp(−(x² + y²)/ω), q being the normalising coefficient such that ∬ q · exp(−(x² + y²)/ω) dx dy = 1.
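The second-stage filter amounts to convolution with a sum-normalised Gaussian-like kernel; a sketch, taking the m × n window as a small square neighbourhood:

```python
import numpy as np
from scipy.signal import fftconvolve

def second_stage_filter(h, size=7, omega=4.0):
    """Convolve with P_g(x, y) = q * exp(-(x^2 + y^2)/omega), sum-normalised."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    pg = np.exp(-(xx ** 2 + yy ** 2) / omega)
    pg /= pg.sum()                                   # q: kernel sums to one
    return fftconvolve(h.astype(np.float64), pg, mode="same")
```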
The beneficial effects of this outdoor transformer are as follows. In the image preprocessing stage, the enhancement adapts to the template size, improving the enhancement effect; the decision conditions are corrected automatically for different template sizes; and viewing habits and the nonlinear relationship between the human eye's sensitivity to different colours and colour intensity are taken into account. The m × n power-exponent computations are reduced to 256, improving computational efficiency. In the target detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; processed image details are clearer; the computational load is greatly reduced compared with traditional methods; target scale changes are handled effectively; target loss is judged accurately; and after the target returns to the field of view it is re-detected and tracked stably. In addition, this outdoor transformer offers good real-time performance, accurate positioning and strong robustness, and achieves good results in detecting and tracking fast, partially occluded targets.
Description of the drawings
The accompanying drawings further describe the invention, but the embodiments shown in them do not limit the invention in any way; those of ordinary skill in the art can derive other drawings from the following drawings without creative effort.
Fig. 1 is a structural block diagram of an outdoor transformer with a target recognition function;
Fig. 2 is an external schematic view of an outdoor transformer with a target recognition function.
Detailed description of the invention
The invention is further described with the following embodiments.
Embodiment 1: As shown in Figs. 1-2, an outdoor transformer with a target recognition function comprises the outdoor transformer 5 and a monitoring device 4 mounted on it. The monitoring device 4 performs video-image monitoring of activity near the outdoor transformer and comprises a preprocessing module 1, a detection and tracking module 2, and an identification output module 3. The preprocessing module 1 comprises the image conversion submodule 11, the image filtering submodule 12 and the image enhancement submodule 13; the detection and tracking module 2 comprises the construction submodule 21, the loss discrimination submodule 22 and the update submodule 23. All modules operate exactly as described in the technical solution above, including the preferred two-stage filtering, with Z = 4 and F = 3.
In the outdoor transformer of this embodiment, at the image preprocessing stage the enhancement adapts to the template size, the decision conditions are corrected automatically for different template sizes, viewing habits and the eye's nonlinear colour sensitivity are taken into account, and the local and global features of the image are fully exploited, giving adaptivity, suppressing over-enhancement and producing a marked enhancement effect under complex illumination. The m × n power-exponent computations are reduced to 256, improving computational efficiency; with Z = 4 and F = 3 the average processing rate is 15 FPS, with less computation than comparable dictionary algorithms. At the detection and tracking stage, temperature-induced image rotation and translation errors are eliminated, the recognition rate is improved, processed image details are clearer, computation is greatly reduced compared with traditional methods, target scale changes are handled effectively, target loss is judged accurately, and after the target returns to the field of view it is re-detected and tracked stably, remaining stably tracked after 110 frames. In addition, this outdoor transformer offers good real-time performance, accurate positioning and strong robustness, and performs unexpectedly well in detecting and tracking fast, partially occluded targets.
Embodiment 2: As shown in Figs. 1-2, this embodiment is identical in structure and operation to Embodiment 1, including the preferred two-stage filtering, except that Z = 5 and F = 4. The average processing rate is 16 FPS, with less computation than comparable dictionary algorithms, and the target remains stably tracked after 115 frames; the other beneficial effects of Embodiment 1 are likewise obtained.
Embodiment 3: As shown in Figs. 1-2, this embodiment is identical in structure and operation to Embodiment 1, including the preferred two-stage filtering, except that Z = 6 and F = 5. The average processing rate is 17 FPS, with less computation than comparable dictionary algorithms, and the target remains stably tracked after 120 frames; the other beneficial effects of Embodiment 1 are likewise obtained.
Embodiment 4: As shown in Figs. 1-2, this embodiment is identical in structure and operation to Embodiment 1, including the preferred two-stage filtering, except that Z = 7, F = 6 and the forgetting factor φ = 0.18. The average processing rate is 18 FPS, with less computation than comparable dictionary algorithms, and the target remains stably tracked after 125 frames; the other beneficial effects of Embodiment 1 are likewise obtained.
Embodiment 5: as shown in Figure 1-2, a kind of outdoor transformer with target recognition function, including outdoor transformer 5 and the monitoring device 4 being arranged on outdoor transformer 5, monitoring device 4 for carrying out video image monitoring to the activity near outdoor transformer 5, and monitoring device 4 includes pretreatment module 1, detecting and tracking module 2, identifies output module 3.
(1) pretreatment module 1, for the image received is carried out pretreatment, specifically includes image transformant module 11, image filtering submodule 12 and image enhaucament submodule 13:
Image transformant module 11, for coloured image is converted into gray level image:
H ( x , y ) = max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) + m i n ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) 2 + 2 ( max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) - m i n ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) )
Wherein, (x, y), (x, y), (x, (x, y) the intensity red green blue value at place, (x y) represents coordinate (x, y) grey scale pixel value at place to H to B to G to R y) to represent pixel respectively;Image is sized to m × n;
Image filtering submodule 12, for gray level image is filtered:
Adopt Wiener filtering to carry out after first-level filtering removes, define svlm image, be designated as Msvlm(x, y), being specifically defined formula is: Msvlm(x, y)=a1J1(x, y)+a2J2(x, y)+a3J3(x, y)+a4J4(x, y), wherein a1、a2、a3、a4For variable weight,I=1,2,3,4;(x, y) for the image after filtered for J;
Image enhaucament submodule 13:
When | 128 - m | > | &omega; - 50 | 3 Time, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) , Wherein, (x, y) for enhanced gray value for L;(x y) is the gamma correction coefficient including local message, now to ψα be range for 0 to 1 variable element,
When | 128 - m | &le; | &omega; - 50 | 3 And during ω > 50, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) &times; ( 1 - &omega; - 50 &omega; 2 ) , Wherein ψ (x, y)=ψα(Msvlm(x, y)),mHIt is the average of the gray value all pixels higher than 128, m in imageLIt is the average of the gray value all pixels lower than 128, and now m=min (mH, mL), when α value is known, calculates 256 ψ correction coefficients as look-up table, be designated asWherein i is index value, utilizes Msvlm(x, gray value y) is as index, according to ψ (x, y)=ψα(Msvlm(x, y)) quickly obtain each pixel in image gamma correction coefficient ψ (x, y);For template correction factor;
(2) detecting and tracking module 2, specifically includes structure submodule 21, loses differentiation submodule 22 and update submodule 23:
Build submodule 21, for the structure of visual dictionary:
Obtain the position and yardstick of following the tracks of target at initial frame, choosing positive and negative sample training tracker about, result will be followed the tracks of as training set X={x1, x2... xN}T;And the every width target image in training set is extracted the SIFT feature of 128 dimensionsWherein StThe number of SIFT feature in t width target image in expression training set;After following the tracks of N frame, by clustering algorithm, these features are divided into K bunch, the center constitutive characteristic word of each bunch, it is designated asThe feature total amount that can extractWherein K < < FN, andAfter visual dictionary builds, every width training image is expressed as the form of feature bag, for representing the frequency that in visual dictionary, feature word occurs, with rectangular histogram h (xt) represent, h (xt) obtain in the following manner: by a width training image XtIn each featureProjecting to visual dictionary, the feature word the shortest with projector distance represents this feature, after all Projection Characters, adds up the frequency of occurrences of each feature word, and normalization obtains training image XtFeature histogram h (xt);
The loss discrimination submodule 22 determines whether the target has been lost:
When a new frame arrives, $Z<K$ bins ($Z=8$) are randomly selected from the $K$ histogram bins, forming a new sub-histogram $h^{(z)}(x_t)$ of size $Z$; up to $N_s=C_K^Z$ sub-histograms can be formed. The similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where $t=1,2,\dots,N$ and $z=1,2,\dots,N_s$; the overall similarity is then $\Phi_t=1-\prod_z(1-\Phi_{t\_z})$. The similarity between the candidate target region and the target is expressed as $\Phi=\max_t\{\Phi_t\}$, and the track-loss judgement formula is
$$u=\operatorname{sign}(\Phi)=\begin{cases}1, & \Phi\ge g_s\\ 0, & \Phi<g_s\end{cases}$$
where $g_s$ is a manually set misjudgement threshold; when $u=1$ the target is stably tracked, and when $u=0$ the track is lost;
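A hedged sketch of this loss test follows. The per-sub-histogram similarity formula is lost in extraction, so histogram intersection is assumed here; the threshold g_s and the number of sampled sub-histograms are placeholders.

```python
import numpy as np
import random

def track_lost(cand_hist, train_hists, Z=8, g_s=0.5, n_sub=100):
    """Hedged sketch of the sub-histogram loss test; histogram
    intersection stands in for the unrecoverable similarity formula.
    """
    K = len(cand_hist)
    phi = 0.0
    for h_t in train_hists:
        phis = []
        for _ in range(n_sub):
            z = random.sample(range(K), Z)            # random Z of K bins
            phis.append(np.minimum(cand_hist[z], h_t[z]).sum()
                        / max(h_t[z].sum(), 1e-12))   # assumed similarity
        phi = max(phi, 1.0 - np.prod(1.0 - np.array(phis)))  # overall similarity
    return 1 if phi >= g_s else 0                     # u: 1 tracked, 0 lost
```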
When the track is lost, an affine transform model is defined:
$$\begin{pmatrix}x_t\\ y_t\end{pmatrix}=\begin{pmatrix}s\cos(\mu_1\theta) & s\sin(\mu_1\theta)\\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta)\end{pmatrix}\begin{pmatrix}x_{t-1}\\ y_{t-1}\end{pmatrix}+\mu_2\begin{pmatrix}e\\ f\end{pmatrix}$$
where $(x_t,y_t)$ and $(x_{t-1},y_{t-1})$ are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; $s$ is the scale coefficient, $\theta$ the rotation coefficient, and $e$ and $f$ the translation coefficients;
$$\mu_1=\mu_2=\begin{cases}1-\frac{|T-T_0|}{1000T_0}, & T\ge T_0\\ 1+\frac{|T-T_0|}{1000T_0}, & T<T_0\end{cases}$$
are the temperature rotation correction coefficient and temperature translation correction coefficient, which correct the image rotation and translation errors caused by ambient temperature deviation; $T_0$ is a manually set standard temperature, set to 20 degrees, and $T$ is the temperature measured in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transform model, and finally positive and negative samples are collected under the new scale $s$ and rotation coefficient $\theta$ to update the classifier;
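The parameter estimation step can be sketched with OpenCV's RANSAC affine estimator, extracting $s$, $\theta$, $e$ and $f$ from the estimated matrix and then applying the temperature corrections $\mu_1$ and $\mu_2$ defined above.

```python
import numpy as np
import cv2

def temperature_corrected_affine(pts_prev, pts_cur, T, T0=20.0):
    """Sketch: estimate scale s, rotation theta and translation (e, f) with
    RANSAC from matched SIFT points ((N, 2) float32 arrays), then apply
    the temperature corrections mu1 and mu2 from the text.
    """
    M, _ = cv2.estimateAffinePartial2D(pts_prev, pts_cur, method=cv2.RANSAC)
    s = np.hypot(M[0, 0], M[1, 0])                  # scale coefficient
    theta = np.arctan2(M[1, 0], M[0, 0])            # rotation coefficient
    e, f = M[0, 2], M[1, 2]                         # translation coefficients
    mu = 1.0 - abs(T - T0) / (1000.0 * T0) if T >= T0 \
        else 1.0 + abs(T - T0) / (1000.0 * T0)      # mu1 = mu2 per the text
    return s, mu * theta, mu * e, mu * f
```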
The update submodule 23 updates the visual dictionary:
After the target position is obtained in every frame, all SIFT feature points consistent with the computed affine transform parameters are collected; after $F=7$ frames a new feature point set is obtained, where $S_{t-F}$ denotes the total number of feature points gathered from the $F$ frames. The old and new feature points are then re-clustered into $K$ clusters, where $\hat{D}$ denotes the new visual dictionary, whose size remains unchanged; the forgetting factor indicates the proportion carried by the old dictionary, and the smaller it is, the more the new features contribute to the track-loss judgement.
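A sketch of the dictionary update follows. The re-clustering formula and the forgetting-factor value are lost in extraction, so an assumed factor $\beta$ is used and old words are replicated in proportion to it, standing in for a weighted re-clustering.

```python
import numpy as np
import cv2

def update_dictionary(old_words, new_feats, beta=0.5):
    """Hedged sketch: re-cluster old feature words and newly collected SIFT
    features into K clusters. beta (assumed) is the forgetting factor, the
    share of clustering mass kept by the old dictionary; old words are
    replicated accordingly because cv2.kmeans has no sample weights.
    """
    K = len(old_words)
    reps = max(1, int(round(beta * len(new_feats) / ((1.0 - beta) * K))))
    data = np.vstack([np.repeat(old_words, reps, axis=0), new_feats])
    data = data.astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, _, words = cv2.kmeans(data, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return words  # new visual dictionary, size unchanged
```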
(3) The recognition output module 3 recognizes the image and outputs the result: the tracking algorithm is used to obtain the target region in the image sequence to be recognized; the target region is mapped into the subspace formed by the known training data; the distance between the target region and the training data is computed in that subspace as a similarity measure; the target class is judged and the recognition result is output.
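A sketch of this subspace recognition step; PCA and nearest-neighbour matching are assumptions, as the text specifies only a subspace mapping and a distance-based similarity.

```python
import numpy as np

def classify(target_vec, train_vecs, train_labels, n_dims=20):
    """Sketch of the recognition step: project the target descriptor onto
    the PCA subspace of the training data and take the nearest neighbour.
    """
    mean = train_vecs.mean(axis=0)
    X = train_vecs - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_dims]                                  # subspace basis
    train_p = X @ P.T                                # training data in subspace
    target_p = (target_vec - mean) @ P.T             # target in subspace
    d = np.linalg.norm(train_p - target_p, axis=1)   # subspace distances
    return train_labels[int(d.argmin())], float(d.min())
```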
Preferably, after the first-level Wiener filtering the image still contains residual noise, so the following second-level filter is used for secondary filtering:
$$J(x,y)=\sum_{i=-m/2}^{m/2}\ \sum_{j=-n/2}^{n/2}H(x,y)\,P_g(x+i,y+j)$$
where $J(x,y)$ is the filtered image; $P_g(x+i,y+j)$ is a function of scale $m\times n$ with $P_g(x+i,y+j)=q\times\exp(-(x^{2}+y^{2})/\omega)$, where $q$ is the coefficient that normalizes the function, that is, $\iint q\times\exp(-(x^{2}+y^{2})/\omega)\,dx\,dy=1$.
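A sketch of this second-stage filter as a normalized convolution; the kernel size below stands in for the $m \times n$ template size, and omega is a placeholder smoothing parameter.

```python
import numpy as np
from scipy.signal import convolve2d

def second_stage_filter(img, ksize=5, omega=2.0):
    """Sketch of the second-stage filter: convolution with the normalized
    kernel q * exp(-(x^2 + y^2) / omega).
    """
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / omega)
    kernel /= kernel.sum()                      # q normalizes the kernel
    return convolve2d(img, kernel, mode='same', boundary='symm')
```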
The outdoor transformer of this embodiment has the following advantages. In the image preprocessing stage, the enhanced image adapts to the template size, improving the enhancement effect, and the judgement condition is corrected automatically for different template sizes; viewing habits and the nonlinear relation between the human eye's sensitivity to different colours and colour intensity are also taken into account. The method makes full use of both local and global image features, is adaptive, suppresses over-enhancement, and produces a clearly enhanced image under complex illumination. The number of power-exponent computations is reduced from $m \times n$ to 256, improving efficiency; with $Z=8$ and $F=7$, the average processing rate is 19 FPS, and the computational load is lower than that of visual-dictionary algorithms of the same type. In the target detection and tracking stage, the rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; processed image details are clearer; the computational load is greatly reduced compared with traditional methods; target scale changes are accommodated effectively; target loss is judged accurately; and a target that returns to the field of view is re-detected and stably tracked, remaining under stable tracking after 130 frames. In addition, this outdoor transformer offers good real-time performance, accurate positioning and strong robustness, performs well in fast detection and tracking of occluded targets, and achieves unexpectedly good results.

Claims (2)

1. An outdoor transformer with a target recognition function, comprising the outdoor transformer and a monitoring device arranged on the outdoor transformer, the monitoring device performing video monitoring of activity near the outdoor transformer, characterized in that the monitoring device comprises a pretreatment module, a detecting and tracking module and a recognition output module;
(1) the pretreatment module preprocesses the received image and comprises an image transform submodule, an image filtering submodule and an image enhancement submodule:
the image transform submodule converts the colour image into a grayscale image:
$$H(x,y)=\frac{\max\big(R(x,y),G(x,y),B(x,y)\big)+\min\big(R(x,y),G(x,y),B(x,y)\big)}{2+2\big(\max(R(x,y),G(x,y),B(x,y))-\min(R(x,y),G(x,y),B(x,y))\big)}$$
where $R(x,y)$, $G(x,y)$ and $B(x,y)$ are the red, green and blue intensity values of the pixel at coordinate $(x,y)$, and $H(x,y)$ is the gray value of the pixel at $(x,y)$; the image size is $m \times n$;
the image filtering submodule filters the grayscale image:
Wiener filtering is first applied as a first-level denoising stage; an svlm image is then defined, denoted $M_{svlm}(x,y)$ and given by $M_{svlm}(x,y)=a_1J_1(x,y)+a_2J_2(x,y)+a_3J_3(x,y)+a_4J_4(x,y)$, where $a_1$, $a_2$, $a_3$ and $a_4$ are variable weights, $i=1,2,3,4$, and $J_i(x,y)$ is the filtered image at the $i$-th template scale;
the image enhancement submodule:
when $|128-m| > \frac{|\omega-50|}{3}$, $L(x,y)=255\times\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}$, where $L(x,y)$ is the enhanced gray value, $\psi(x,y)$ is the gamma correction coefficient incorporating local information, $\alpha$ is a variable parameter ranging from 0 to 1, and $\omega$ is the template scale parameter: the larger the scale, the more neighbourhood pixel information the template contains, and passing the input image through templates of different scales $\omega_i$ yields images $J_i$ containing neighbourhood information of different ranges;
when $|128-m| \le \frac{|\omega-50|}{3}$ and $\omega > 50$, $L(x,y)=255\times\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}\times\left(1-\frac{\omega-50}{\omega^{2}}\right)$, where $\psi(x,y)=\psi_{\alpha}(M_{svlm}(x,y))$; $m_H$ is the mean gray value of all pixels above 128, $m_L$ is the mean gray value of all pixels below 128, and here $m=\min(m_H,m_L)$; when the value of $\alpha$ is known, 256 correction coefficients $\psi$ are precomputed as a look-up table, denoted $\psi_{\alpha}(i)$, where $i$ is the index value; using the gray value of $M_{svlm}(x,y)$ as the index, the gamma correction coefficient $\psi(x,y)=\psi_{\alpha}(M_{svlm}(x,y))$ of every pixel in the image is obtained quickly, the factor $\left(1-\frac{\omega-50}{\omega^{2}}\right)$ acting as the template correction factor;
(2) the detecting and tracking module comprises a construction submodule, a loss discrimination submodule and an update submodule:
the construction submodule builds the visual dictionary:
the position and scale of the tracked target are obtained in the initial frame, and positive and negative samples are selected around it to train the tracker; the tracking results form the training set $X=\{x_1,x_2,\dots,x_N\}^{T}$; from every target image in the training set, 128-dimensional SIFT features $f_s^{(t)}$ are extracted, where $S_t$ is the number of SIFT features in the $t$-th target image; after $N$ frames have been tracked, the features are divided into $K$ clusters by a clustering algorithm, and the centre of each cluster constitutes a feature word; the total number of features available for extraction is $F_N=\sum_{t=1}^{N}S_t$, with $K \ll F_N$; once the visual dictionary is built, every training image is expressed as a bag of features that records the occurrence frequency of each feature word in the dictionary, written as the histogram $h(x_t)$; $h(x_t)$ is obtained as follows: every feature $f_s^{(t)}$ in a training image $X_t$ is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalized, yielding the feature histogram $h(x_t)$ of the training image $X_t$;
the loss discrimination submodule determines whether the target has been lost:
when a new frame arrives, $Z<K$ bins ($Z=4$) are randomly selected from the $K$ histogram bins, forming a new sub-histogram $h^{(z)}(x_t)$ of size $Z$; up to $N_s=C_K^Z$ sub-histograms can be formed; the similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where $t=1,2,\dots,N$ and $z=1,2,\dots,N_s$, and the overall similarity is $\Phi_t=1-\prod_z(1-\Phi_{t\_z})$; the similarity between the candidate target region and the target is expressed as $\Phi=\max_t\{\Phi_t\}$, and the track-loss judgement formula is
$$u=\operatorname{sign}(\Phi)=\begin{cases}1, & \Phi\ge g_s\\ 0, & \Phi<g_s\end{cases}$$
where $g_s$ is a manually set misjudgement threshold; when $u=1$ the target is stably tracked, and when $u=0$ the track is lost; when the track is lost, an affine transform model is defined:
$$\begin{pmatrix}x_t\\ y_t\end{pmatrix}=\begin{pmatrix}s\cos(\mu_1\theta) & s\sin(\mu_1\theta)\\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta)\end{pmatrix}\begin{pmatrix}x_{t-1}\\ y_{t-1}\end{pmatrix}+\mu_2\begin{pmatrix}e\\ f\end{pmatrix}$$
where $(x_t,y_t)$ and $(x_{t-1},y_{t-1})$ are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; $s$ is the scale coefficient, $\theta$ the rotation coefficient, and $e$ and $f$ the translation coefficients;
$$\mu_1=\mu_2=\begin{cases}1-\frac{|T-T_0|}{1000T_0}, & T\ge T_0\\ 1+\frac{|T-T_0|}{1000T_0}, & T<T_0\end{cases}$$
are the temperature rotation correction coefficient and temperature translation correction coefficient, which correct the image rotation and translation errors caused by ambient temperature deviation; $T_0$ is a manually set standard temperature, set to 20 degrees, and $T$ is the temperature measured in real time by a temperature sensor; the RANSAC estimation algorithm is used to solve for the parameters of the affine transform model, and finally positive and negative samples are collected under the new scale $s$ and rotation coefficient $\theta$ to update the classifier;
the update submodule updates the visual dictionary:
after the target position is obtained in every frame, all SIFT feature points consistent with the computed affine transform parameters are collected; after $F=3$ frames a new feature point set is obtained, where $S_{t-F}$ denotes the total number of feature points gathered from the $F$ frames; the old and new feature points are then re-clustered into $K$ clusters, where $\hat{D}$ denotes the new visual dictionary, whose size remains unchanged; the forgetting factor indicates the proportion carried by the old dictionary, and the smaller it is, the more the new features contribute to the track-loss judgement;
(3) the recognition output module recognizes the image and outputs the result: the tracking algorithm is used to obtain the target region in the image sequence to be recognized; the target region is mapped into the subspace formed by the known training data; the distance between the target region and the training data is computed in that subspace as a similarity measure; the target class is judged and the recognition result is output.
2. The outdoor transformer with a target recognition function according to claim 1, characterized in that, after the first-level Wiener filtering the image still contains residual noise, and the following second-level filter is used for secondary filtering:
$$J(x,y)=\sum_{i=-m/2}^{m/2}\ \sum_{j=-n/2}^{n/2}H(x,y)\,P_g(x+i,y+j)$$
where $J(x,y)$ is the filtered image; $P_g(x+i,y+j)$ is a function of scale $m\times n$ with $P_g(x+i,y+j)=q\times\exp(-(x^{2}+y^{2})/\omega)$, where $q$ is the coefficient that normalizes the function, that is, $\iint q\times\exp(-(x^{2}+y^{2})/\omega)\,dx\,dy=1$.
CN201610049025.XA 2016-01-22 2016-01-22 Outdoor transformer capable of target identification Pending CN105718911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610049025.XA CN105718911A (en) 2016-01-22 2016-01-22 Outdoor transformer capable of target identification

Publications (1)

Publication Number Publication Date
CN105718911A true CN105718911A (en) 2016-06-29

Family

ID=56154068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610049025.XA Pending CN105718911A (en) 2016-01-22 2016-01-22 Outdoor transformer capable of target identification

Country Status (1)

Country Link
CN (1) CN105718911A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810723A (en) * 2014-02-27 2014-05-21 西安电子科技大学 Target tracking method based on inter-frame constraint super-pixel encoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴京辉 (Wu Jinghui): "Research on Tracking and Recognition of Video Surveillance Targets", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977959A (en) * 2017-11-21 2018-05-01 武汉中元华电科技股份有限公司 A kind of respirator state identification method suitable for electric operating robot
CN107977959B (en) * 2017-11-21 2021-10-12 武汉中元华电科技股份有限公司 Respirator state identification method suitable for electric power robot

Similar Documents

Publication Publication Date Title
CN110363047B (en) Face recognition method and device, electronic equipment and storage medium
CN112149761B (en) Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm
CN104408406B (en) Personnel based on frame difference method and background subtraction leave the post detection method
CN105718896A (en) Intelligent robot with target recognition function
CN107301376B (en) Pedestrian detection method based on deep learning multi-layer stimulation
CN107895157B (en) Method for accurately positioning iris center of low-resolution image
CN111914761A (en) Thermal infrared face recognition method and system
CN103473564A (en) Front human face detection method based on sensitive area
CN107944403A Pedestrian's attribute detection method and device in a kind of image
CN113344475A (en) Transformer bushing defect identification method and system based on sequence modal decomposition
CN109117855A (en) Abnormal power equipment image identification system
CN109583295A (en) A kind of notch of switch machine automatic testing method based on convolutional neural networks
CN107832721A (en) Method and apparatus for output information
CN105718895A (en) Unmanned aerial vehicle based on visual characteristics
CN104123714A (en) Optimal target detection scale generation method in people flow statistics
CN109948570B (en) Real-time detection method for unmanned aerial vehicle in dynamic environment
CN104915951A (en) Stippled DPM two-dimensional code area positioning method
CN105740768A (en) Unmanned forklift device based on combination of global and local features
CN105631410B (en) A kind of classroom detection method based on intelligent video processing technique
CN105718911A (en) Outdoor transformer capable of target identification
CN115937793B (en) Student behavior abnormality detection method based on image processing
CN105718897A (en) Numerical control lathe based on visual characteristics
CN105574517A (en) Electric vehicle charging pile with stable tracking function
CN114022468B (en) Method for detecting article left-over and lost in security monitoring
CN112288019B (en) Cook cap detection method based on key point positioning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160629