CN105740865A - Bridge construction apparatus with local and global features combined - Google Patents

Bridge construction apparatus with local and global features combined

Info

Publication number
CN105740865A
Authority
CN
China
Prior art keywords
image
target
module
feature
bridge construction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610045943.5A
Other languages
Chinese (zh)
Inventor
张健敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201610045943.5A
Publication of CN105740865A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a bridge construction apparatus combining local and global features. The apparatus comprises a bridge construction vehicle and a monitoring device mounted on the vehicle. The monitoring device comprises a pre-processing module, a detection-and-tracking module and an identification output module. The pre-processing module comprises three sub-modules: an image conversion sub-module, an image filtering sub-module and an image enhancement sub-module; the detection-and-tracking module likewise comprises three sub-modules: a construction sub-module, a loss discrimination sub-module and an update sub-module. By applying video image technology to the bridge construction vehicle, the apparatus can effectively monitor and record malicious damage, and offers good real-time performance, accurate positioning, strong adaptive ability, complete preservation of image detail and high robustness.

Description

A bridge construction apparatus combining local and global features
Technical field
The present invention relates to the field of bridge construction, and in particular to a bridge construction apparatus combining local and global features.
Background technology
Bridge construction refers to the process of building a bridge according to a design; it mainly covers bridge construction technology, construction organization, construction management and construction quality. In the field of bridge construction, bridge construction vehicles such as bridge cranes are essential construction equipment.
In addition, a bridge construction apparatus is an important and expensive piece of equipment, so its safety is particularly important: malicious damage must be prevented and monitored.
Summary of the invention
In view of the above problems, the present invention provides a bridge construction apparatus combining local and global features.
The purpose of the present invention is achieved by the following technical solution:
A bridge construction apparatus combining local and global features comprises a bridge construction vehicle and a monitoring device mounted on the vehicle. The monitoring device performs video image monitoring of activity near the vehicle and comprises a pre-processing module, a detection-and-tracking module and an identification output module.
(1) The pre-processing module pre-processes received images and comprises an image conversion sub-module, an image filtering sub-module and an image enhancement sub-module:
The image conversion sub-module converts a color image into a gray-level image:
H(x, y) = (max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))) / 2 + 2 · (max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y)))
where R(x, y), G(x, y) and B(x, y) are the red, green and blue intensity values of the pixel at (x, y), and H(x, y) is the gray value at coordinate (x, y); the image size is m × n.
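For illustration, a minimal NumPy sketch of this conversion, under the sum-form reading of the reconstructed formula (the fraction layout in the original is ambiguous); the clip to [0, 255] is an added safeguard:

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Gray conversion H = (max + min)/2 + 2*(max - min) over the channels."""
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=2)              # max(R, G, B) at each pixel
    cmin = rgb.min(axis=2)              # min(R, G, B) at each pixel
    h = (cmax + cmin) / 2.0 + 2.0 * (cmax - cmin)
    return np.clip(h, 0.0, 255.0)       # safeguard: raw value can exceed 255
```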
The image filtering sub-module filters the gray-level image:
Wiener filtering is applied first as a first-level filter. An SVLM image, denoted Msvlm(x, y), is then defined by: Msvlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3 and a4 are variable weights with ai = Ji / (J1 + J2 + J3 + J4), i = 1, 2, 3, 4, and J(x, y) is the filtered image.
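A sketch of this step, assuming (the text does not say) that J1 through J4 are the Wiener-filtered image smoothed at four illustrative scales; scipy's wiener filter stands in for the first-level filter:

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import uniform_filter

def svlm_image(gray: np.ndarray) -> np.ndarray:
    """Msvlm = a1*J1 + a2*J2 + a3*J3 + a4*J4 with ai = Ji / (J1+J2+J3+J4)."""
    j = wiener(gray.astype(np.float64), mysize=5)    # first-level Wiener filter
    scales = [3, 7, 15, 31]                          # assumed scales for J1..J4
    js = [uniform_filter(j, size=s) for s in scales]
    total = js[0] + js[1] + js[2] + js[3] + 1e-12    # denominator J1+J2+J3+J4
    # Per-pixel variable weights ai, then the weighted combination.
    return sum((ji / total) * ji for ji in js)
```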
The image enhancement sub-module:
When |128 - m| > |ω - 50| / 3, L(x, y) = 255 × (H(x, y)/255)^ψ(x, y), where L(x, y) is the enhanced gray value, ψ(x, y) is the gamma correction coefficient incorporating local information, and α is a variable parameter ranging from 0 to 1.
When |128 - m| ≤ |ω - 50| / 3 and ω > 50, L(x, y) = 255 × (H(x, y)/255)^ψ(x, y) × (1 - (ω - 50)/ω²), where ψ(x, y) = ψα(Msvlm(x, y)) and α = 1 - |(128 - min(mL, mH)) / 128|; mH is the mean gray value of the pixels above 128, mL is the mean gray value of the pixels below 128, and in this case m = min(mH, mL). Once α is known, the 256 ψ correction coefficients are computed as a look-up table ψα(i), where i is the index value; using the gray value of Msvlm(x, y) as the index, the gamma correction coefficient ψ(x, y) = ψα(Msvlm(x, y)) of each pixel is obtained quickly. ω is the template correction factor.
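The look-up table is what reduces the M × N per-pixel computations of ψ to 256. A sketch of the mechanism, with psi_alpha left as a caller-supplied function since its closed form did not survive extraction:

```python
import numpy as np

def enhance(gray: np.ndarray, msvlm: np.ndarray, psi_alpha, alpha: float):
    """L = 255 * (H/255)**psi(x, y), with psi read from a 256-entry table."""
    # Precompute the 256 correction coefficients once alpha is known;
    # psi_alpha is caller-supplied, since its closed form is not given.
    lut = np.array([psi_alpha(i, alpha) for i in range(256)])
    idx = np.clip(msvlm, 0, 255).astype(np.uint8)    # Msvlm gray value as index
    psi = lut[idx]                                   # psi(x, y) for every pixel
    return 255.0 * (gray.astype(np.float64) / 255.0) ** psi
```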
(2) The detection-and-tracking module comprises a construction sub-module, a loss discrimination sub-module and an update sub-module:
The construction sub-module builds the visual dictionary:
The position and scale of the tracked target are obtained in the initial frame, positive and negative samples are selected around it to train the tracker, and the tracking results form the training set X = {x1, x2, ..., xN}^T. From each target image in the training set, 128-dimensional SIFT features f_s^(t), s = 1, ..., St, are extracted, where St is the number of SIFT features in the t-th target image of the training set. After N frames have been tracked, these features are divided into K clusters by a clustering algorithm, and the cluster centers constitute the visual words. The total number of extractable features is FN = S1 + S2 + ... + SN, with K << FN. Once the visual dictionary is built, each training image is expressed as a bag of features recording the frequency with which each visual word occurs, represented by a histogram h(xt). h(xt) is obtained as follows: each feature f_s^(t) of a training image Xt is projected onto the visual dictionary and represented by the visual word with the shortest projection distance; after all features have been projected, the occurrence frequency of each word is counted, and normalization yields the feature histogram h(xt) of the training image Xt.
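A sketch of dictionary construction and feature-bag encoding, assuming K-means clustering (consistent with the G06F 18/23213 classification above) and Euclidean projection distance:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(features: np.ndarray, k: int) -> np.ndarray:
    """Cluster the 128-D SIFT features from N frames into K visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(features).cluster_centers_

def bow_histogram(image_features: np.ndarray, words: np.ndarray) -> np.ndarray:
    """Encode one target image as a normalized word-frequency histogram h(xt)."""
    # Represent each feature by the visual word with the shortest distance.
    d = np.linalg.norm(image_features[:, None, :] - words[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(words))
    return counts / max(counts.sum(), 1)             # normalize to frequencies
```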
The loss discrimination sub-module judges whether the target has been lost:
When a new frame arrives, Z < K bins are randomly selected from the K-bin histogram (Z = 4), forming sub-histograms h^(z)(xt) of size Z; there are at most Ns such sub-histograms. The similarity Фt_z between each sub-histogram of the candidate target region and the corresponding sub-histogram of target region t in the training set is computed for t = 1, 2, ..., N and z = 1, 2, ..., Ns, and the overall similarity is Фt = 1 - Π_z(1 - Фt_z). The similarity between the candidate region and the target is Ф = max_t{Фt}, and the track-loss judgment is: u = sign(Ф) = 1 if Ф ≥ gs, 0 if Ф < gs, where gs is a manually set misjudgment threshold. When u = 1 the target is stably tracked; when u = 0 the track is lost.
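A sketch of the loss test; histogram intersection is assumed for the per-sub-histogram similarity Фt_z, and the sub-histogram count and threshold gs are illustrative, since the text does not fix them:

```python
import numpy as np

def track_lost(candidate_h, training_hs, k, z=4, n_sub=32, gs=0.5, rng=None):
    """Patent-style loss test: returns u = 1 (stable) or u = 0 (lost)."""
    if rng is None:
        rng = np.random.default_rng()
    phi = 0.0
    for h_t in training_hs:                     # each training-set histogram
        prod = 1.0
        for _ in range(n_sub):                  # random Z-bin sub-histograms
            bins = rng.choice(k, size=z, replace=False)
            # Assumed similarity: histogram intersection on the chosen bins.
            phi_tz = np.minimum(candidate_h[bins], h_t[bins]).sum()
            prod *= 1.0 - phi_tz
        phi = max(phi, 1.0 - prod)              # phi_t, then max over t
    return 1 if phi >= gs else 0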
When the track is lost, an affine transform model is defined:
[xt; yt] = s · [cos(μ1·θ), sin(μ1·θ); -sin(μ1·θ), cos(μ1·θ)] · [xt-1; yt-1] + μ2 · [e; f]
where (xt, yt) and (xt-1, yt-1) are the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. The temperature rotation correction coefficient is μ1 = 1 - |T - T0| / (1000·T0) when T ≥ T0 and 1 + |T - T0| / (1000·T0) when T < T0, and the temperature translation correction coefficient μ2 is given by the same expression; μ1 and μ2 correct the image rotation and translation errors caused by ambient-temperature deviation. T0 is a manually set standard temperature, taken as 20 degrees, and T is the temperature measured in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transform model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ, and the classifier is updated.
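A sketch of the temperature-corrected affine model; OpenCV's RANSAC-based estimator is one way to obtain (s, θ, e, f) from the matched SIFT points:

```python
import numpy as np

def temp_coeff(t: float, t0: float = 20.0) -> float:
    """mu1 = mu2: above-standard temperatures shrink, below-standard enlarge."""
    d = abs(t - t0) / (1000.0 * t0)
    return 1.0 - d if t >= t0 else 1.0 + d

def apply_model(pts_prev: np.ndarray, s, theta, e, f, t, t0=20.0):
    """Map previous-frame points (n x 2) through the corrected affine model."""
    mu = temp_coeff(t, t0)                      # mu1 = mu2 by definition
    c, sn = np.cos(mu * theta), np.sin(mu * theta)
    a = s * np.array([[c, sn], [-sn, c]])       # scale + corrected rotation
    return pts_prev @ a.T + mu * np.array([e, f])

# One way to estimate the raw model parameters from matched points with RANSAC:
# m, inliers = cv2.estimateAffinePartial2D(pts_prev, pts_curr, method=cv2.RANSAC)
```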
The update sub-module updates the visual dictionary:
After the target location is obtained in each frame, all SIFT feature points consistent with the estimated affine transform parameters are collected. After F = 3 frames, a new feature point set is obtained, where St-F denotes the total number of features gathered from the F frames. The new and old feature points are then re-clustered into K clusters, giving a new visual dictionary whose size remains unchanged. φ is the forgetting factor, indicating the proportion carried by the old dictionary: the smaller φ is, the more the new features contribute to the track-loss judgment.
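The re-clustering formula did not survive extraction; the sketch below assumes the forgetting factor φ acts as the sample weight of the old visual words in a weighted K-means, which matches the stated behavior (smaller φ, more influence for new features):

```python
import numpy as np
from sklearn.cluster import KMeans

def update_dictionary(old_words: np.ndarray, new_feats: np.ndarray, phi: float):
    """Re-cluster old visual words and newly collected SIFT features.

    Assumption: phi weights the old words and (1 - phi) the new features;
    K (the dictionary size) is unchanged.
    """
    k = len(old_words)
    pts = np.vstack([old_words, new_feats])
    w = np.concatenate([np.full(len(old_words), phi),
                        np.full(len(new_feats), 1.0 - phi)])
    return KMeans(n_clusters=k, n_init=10).fit(pts, sample_weight=w).cluster_centers_
```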
(3) The identification output module identifies images and outputs the result: a tracking algorithm obtains the target region in the image sequence to be identified, the target region is mapped into the subspace formed by the known training data, the distances between the target region and the training data are computed in that subspace to obtain a similarity measure, the target class is judged, and the recognition result is output.
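A sketch of the identification step, assuming a PCA subspace and nearest-neighbor distance as the similarity measure (the text names neither):

```python
import numpy as np
from sklearn.decomposition import PCA

def identify(region_vec, train_vecs, train_labels, n_components=16):
    """Project the target region into the training-data subspace and
    classify it by the nearest training sample (smallest distance)."""
    pca = PCA(n_components=n_components).fit(train_vecs)
    z = pca.transform(region_vec.reshape(1, -1))
    zs = pca.transform(train_vecs)
    d = np.linalg.norm(zs - z, axis=1)          # distances in the subspace
    return train_labels[int(d.argmin())]
```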
Preferably, after the first-level Wiener filtering the image still contains residual noise, so the following second-level filter is applied:
J(x, y) = Σ_{i = -m/2..m/2} Σ_{j = -n/2..n/2} H(x, y) · Pg(x + i, y + j)
where J(x, y) is the filtered image and Pg is a function of scale m × n with Pg(x + i, y + j) = q · exp(-(x² + y²)/ω), q being the normalization coefficient such that ∫∫ q · exp(-(x² + y²)/ω) dx dy = 1.
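A sketch of the second-level filter, reading the double sum as a convolution of H with the normalized kernel Pg (taken literally, H(x, y) would factor out of the sum); the window size and ω are illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

def second_level_filter(h: np.ndarray, size: int = 5, omega: float = 4.0):
    """Smooth H with the normalized kernel Pg = q * exp(-(x^2 + y^2)/omega)."""
    r = size // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    pg = np.exp(-(x ** 2 + y ** 2) / omega)
    pg /= pg.sum()                              # q: normalize kernel to sum 1
    return convolve2d(h, pg, mode="same", boundary="symm")
```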
The beneficial effects of this bridge construction vehicle are as follows. At the image pre-processing stage, the enhancement adapts to the template size, improving the enhancement effect; the judgment conditions are corrected automatically for different template sizes; and visual habits and the nonlinear relation between the human eye's perception of different colors and color intensity are taken into account. M × N power-exponent operations are reduced to 256, improving computational efficiency. At the target detection and tracking stage, the rotation and translation errors caused by temperature differences are eliminated and the recognition rate is improved; processed image details are clearer; the computation load is greatly reduced compared with traditional methods; target scale changes are handled effectively; target loss is judged accurately; and the target can be re-detected and stably tracked after returning to the field of view. In addition, the vehicle offers good real-time performance, accurate positioning and strong robustness, and performs well in fast target detection and tracking under occlusion.
Brief description of the drawings
The accompanying drawings are used to further illustrate the invention, but the embodiments in the drawings do not constitute any limitation of it; for those of ordinary skill in the art, other drawings can be obtained from the following drawings without creative work.
Fig. 1 is a structural block diagram of the bridge construction apparatus combining local and global features;
Fig. 2 is an external schematic view of the bridge construction apparatus combining local and global features.
Detailed description of the invention
The invention is further described with reference to the following embodiments.
Embodiment 1: As shown in Figs. 1-2, a bridge construction apparatus combining local and global features comprises a bridge construction vehicle 5 and a monitoring device 4 mounted on the vehicle 5. The monitoring device 4 performs video image monitoring of activity near the vehicle and comprises a pre-processing module 1, a detection-and-tracking module 2 and an identification output module 3.
The pre-processing module 1 (with image conversion sub-module 11, image filtering sub-module 12 and image enhancement sub-module 13), the detection-and-tracking module 2 (with construction sub-module 21, loss discrimination sub-module 22 and update sub-module 23), the identification output module 3 and the preferred two-level filtering are as described in the Summary of the invention above, with the parameters Z = 4 and F = 3.
In this embodiment, at the image pre-processing stage, the enhancement adapts to the template size, the judgment conditions are corrected automatically for different template sizes, and visual habits and the nonlinear relation between the human eye's perception of different colors and color intensity are taken into account; the local and global features of the image are fully exploited, the method is adaptive, over-enhancement is suppressed, and the enhancement effect remains clear under complex illumination. M × N power-exponent operations are reduced to 256, improving computational efficiency; with Z = 4 and F = 3, the average frame rate is 15 FPS, and the computation load is lower than that of visual-dictionary algorithms of the same type. At the target detection and tracking stage, rotation and translation errors caused by temperature differences are eliminated, the recognition rate is improved, processed image details are clearer, the computation load is greatly reduced compared with traditional methods, target scale changes are handled effectively, target loss is judged accurately, and the target is re-detected and stably tracked after returning to the field of view, remaining stably tracked after 110 frames. The apparatus also offers good real-time performance, accurate positioning and strong robustness, and performs well in fast target detection and tracking under occlusion.
Embodiment 2: As shown in Figs. 1-2, the apparatus is identical in structure and processing to that of Embodiment 1, except that Z = 5 and F = 4. With these parameters the average frame rate is 16 FPS, the computation load is lower than that of visual-dictionary algorithms of the same type, and the target remains stably tracked after 115 frames; the other beneficial effects are as in Embodiment 1.
Embodiment 3: As shown in Figs. 1-2, the apparatus is identical in structure and processing to that of Embodiment 1, except that Z = 6 and F = 5. With these parameters the average frame rate is 17 FPS, the computation load is lower than that of visual-dictionary algorithms of the same type, and the target remains stably tracked after 120 frames; the other beneficial effects are as in Embodiment 1.
Embodiment 4: As shown in Figs. 1-2, the apparatus is identical in structure and processing to that of Embodiment 1, except that Z = 7, F = 6 and φ = 0.18. With these parameters the average frame rate is 18 FPS, the computation load is lower than that of visual-dictionary algorithms of the same type, and the target remains stably tracked after 125 frames; the other beneficial effects are as in Embodiment 1.
Embodiment 5: as shown in Figure 1-2, the bridge construction device that a kind of local and global characteristics combine, including bridge construction car 5 and the monitoring device 4 being arranged on bridge construction car 5, monitoring device 4 for carrying out video image monitoring to the activity near bridge construction car 5, and monitoring device 4 includes pretreatment module 1, detecting and tracking module 2, identifies output module 3.
(1) pretreatment module 1, for the image received is carried out pretreatment, specifically includes image transformant module 11, image filtering submodule 12 and image enhaucament submodule 13:
Image transformant module 11, for coloured image is converted into gray level image:
H ( x , y ) = max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) + min ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) 2 + 2 ( max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) - min ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) )
Wherein, (x, y), (x, y), (x, (x, y) the intensity red green blue value at place, (x y) represents coordinate (x, y) grey scale pixel value at place to H to B to G to R y) to represent pixel respectively;Image is sized to m × n;
Image filtering submodule 12, for gray level image is filtered:
Adopt Wiener filtering to carry out after first-level filtering removes, define svlm image, be designated as Msvlm(x, y), being specifically defined formula is: Msvlm(x, y)=a1J1(x,y)+a2J2(x,y)+a3J3(x,y)+a4J4(x, y), wherein a1、a2、a3、a4For variable weight, a i = Ji J 1 + J 2 + J 3 + J 4 , i = 1,2,3,4 ; (x, y) for the image after filtered for J;
Image enhaucament submodule 13:
When | 128 - m | > | &omega; - 50 | 3 Time, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) , Wherein, (x, y) for enhanced gray value for L;(x y) is the gamma correction coefficient including local message, now to ψα be range for 0 to 1 variable element,
When | 128 - m | &le; | &omega; - 50 | 3 And during ω > 50, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) &times; ( 1 - &omega; - 50 &omega; 2 ) , Wherein &psi; ( x , y ) = &psi; &alpha; ( M s v l m ( x , y ) ) , &alpha; = 1 - | 128 - min ( m L , m H ) 128 | , mHIt is the average of the gray value all pixels higher than 128, m in imageLIt is the average of the gray value all pixels lower than 128, and now m=min (mH, mL), when α value is known, calculates 256 ψ correction coefficients as look-up table, be designated asWherein i is index value, utilizes Msvlm(x, gray value y) is as index, according to ψ (x, y)=ψα(Msvlm(x, y)) quickly obtain each pixel in image gamma correction coefficient ψ (x, y);For template correction factor;
(2) detecting and tracking module 2, specifically includes structure submodule 21, loses differentiation submodule 22 and update submodule 23:
The construction submodule 21 builds the visual dictionary:
The position and scale of the tracked target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set $X = \{x_1, x_2, \ldots, x_N\}^T$. From every target image in the training set, 128-dimensional SIFT features $\{f_s^{(t)}\}$ are extracted, where $S_t$ is the number of SIFT features in the t-th target image of the training set. After N frames have been tracked, a clustering algorithm divides these features into K clusters, whose centers constitute the feature words; the total number of extractable features is $F_N = \sum_{t=1}^{N} S_t$, with $K \ll F_N$. Once the visual dictionary is built, every training image is expressed as a bag of features: the frequency with which each feature word of the visual dictionary occurs is represented by a histogram $h(x_t)$, obtained as follows: each feature $f_s^{(t)}$ of a training image $X_t$ is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalized, giving the feature histogram $h(x_t)$ of training image $X_t$;
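A minimal sketch of the dictionary construction and the feature-bag histogram, assuming k-means as the clustering algorithm (the text says only "a clustering algorithm") and OpenCV's SIFT implementation:

```python
import numpy as np
import cv2

def build_dictionary(target_images, K=50):
    """Extract 128-D SIFT descriptors from each tracked target image,
    pool them, and cluster into K feature words with k-means."""
    sift = cv2.SIFT_create()
    descs = []
    for img in target_images:                 # img: grayscale uint8 array
        _, d = sift.detectAndCompute(img, None)
        if d is not None:
            descs.append(d)
    all_descs = np.vstack(descs).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, words = cv2.kmeans(all_descs, K, None, criteria, 5,
                             cv2.KMEANS_PP_CENTERS)
    return words                              # K x 128 array of feature words

def bow_histogram(img, words):
    """Represent one image as a normalized bag-of-words histogram h(x_t):
    each descriptor is assigned to its nearest feature word."""
    sift = cv2.SIFT_create()
    _, d = sift.detectAndCompute(img, None)
    hist = np.zeros(len(words))
    if d is not None and len(d) > 0:
        dists = np.linalg.norm(d[:, None, :] - words[None, :, :], axis=2)
        for w in dists.argmin(axis=1):
            hist[w] += 1
    if hist.sum() > 0:
        hist /= hist.sum()                    # normalize the frequencies
    return hist
```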
The loss-determination submodule 22 judges whether the target is lost:
When a new frame arrives, Z < K histogram bins are selected at random, with Z = 8, to form a new sub-histogram $h^{(z)}(x_t)$ of size Z; up to $N_s = C_K^Z$ such sub-histograms exist. The similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where $t = 1, 2, \ldots, N$ and $z = 1, 2, \ldots, N_s$; the overall similarity is then $\Phi_t = 1 - \prod_z (1 - \Phi_{t\_z})$. The similarity of the candidate target region to the target is expressed as $\Phi = \max\{\Phi_t\}$, and the track-loss decision is $u = \operatorname{sign}(\Phi) = \begin{cases} 1, & \Phi \ge g_s \\ 0, & \Phi < g_s \end{cases}$, where $g_s$ is a manually set misjudgment threshold; u = 1 means the target is stably tracked, and u = 0 means the track is lost;
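A sketch of the loss test follows. The per-bin similarity measure appears only as an image in the source, so histogram intersection is assumed here, and sampling a fixed number of random sub-histograms stands in for enumerating all $C_K^Z$ of them:

```python
import numpy as np

def track_loss_flag(h_cand, train_hists, Z=8, n_sub=30, gs=0.5, rng=None):
    """Return u = 1 (stably tracked) or u = 0 (track lost) from
    random Z-bin sub-histogram similarities."""
    rng = rng or np.random.default_rng()
    K = len(h_cand)
    best = 0.0
    for h_train in train_hists:
        phi_z = []
        for _ in range(n_sub):                 # up to C(K, Z) sub-histograms
            idx = rng.choice(K, size=Z, replace=False)
            # assumed per-sub-histogram similarity: histogram intersection
            phi_z.append(np.minimum(h_cand[idx], h_train[idx]).sum())
        phi_t = 1.0 - np.prod(1.0 - np.asarray(phi_z))
        best = max(best, phi_t)                # Phi = max over training set
    return 1 if best >= gs else 0
```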
When the track is lost, an affine transform model is defined: $\begin{pmatrix} x_t \\ y_t \end{pmatrix} = s\begin{pmatrix} \cos(\mu_1\theta) & \sin(\mu_1\theta) \\ -\sin(\mu_1\theta) & \cos(\mu_1\theta) \end{pmatrix}\begin{pmatrix} x_{t-1} \\ y_{t-1} \end{pmatrix} + \mu_2\begin{pmatrix} e \\ f \end{pmatrix}$, where $(x_t, y_t)$ and $(x_{t-1}, y_{t-1})$ are the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. The temperature rotation correction coefficient is $\mu_1 = \begin{cases} 1 - \frac{|T - T_0|}{1000\,T_0}, & T \ge T_0 \\ 1 + \frac{|T - T_0|}{1000\,T_0}, & T < T_0 \end{cases}$ and the temperature translation correction coefficient $\mu_2$ takes the same form; $\mu_1$ and $\mu_2$ correct the image rotation and translation errors caused by ambient temperature deviation. $T_0$ is the manually set standard temperature, here 20 degrees, and T is the temperature measured in real time by a temperature sensor. The parameters of the affine transform model are estimated with the RANSAC algorithm; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ, and the classifier is updated;
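A sketch of the motion estimate, using OpenCV's RANSAC-based similarity fit to recover s, θ, e and f from matched SIFT points and then applying the temperature corrections $\mu_1$ and $\mu_2$; the function name `estimate_motion` is ours:

```python
import numpy as np
import cv2

def estimate_motion(prev_pts, curr_pts, T, T0=20.0):
    """prev_pts, curr_pts: N x 2 float32 arrays of matched SIFT point
    coordinates in the previous and current frames."""
    # RANSAC fit of a 2x3 similarity transform (scale + rotation + shift)
    M, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts, method=cv2.RANSAC)
    s = np.hypot(M[0, 0], M[0, 1])          # scale from the 2x2 block
    theta = np.arctan2(M[0, 1], M[0, 0])    # rotation angle
    e, f = M[0, 2], M[1, 2]                 # translation components
    # mu1 and mu2 share the same form in the patent
    mu = 1.0 - abs(T - T0) / (1000.0 * T0) if T >= T0 \
        else 1.0 + abs(T - T0) / (1000.0 * T0)
    return s, mu * theta, mu * e, mu * f    # temperature-corrected parameters
```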
The update submodule 23 updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 7 frames, a new feature point set is obtained, where $S_{t-F}$ is the total number of features gathered from the F frames. The old and new feature points are then re-clustered into K clusters, yielding the new visual dictionary; the size of the visual dictionary remains unchanged. The forgetting factor indicates the proportion carried over from the old dictionary: the smaller it is, the more the new features contribute to the track-loss decision.
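A sketch of the dictionary update. The source's weighting formula for old versus new points is shown only as an image, so replicating old feature words in proportion to the forgetting factor before re-clustering is purely our assumption:

```python
import numpy as np
import cv2

def update_dictionary(old_words, new_descs, phi=0.2):
    """Re-cluster a mix of old feature words (weighted by forgetting
    factor phi, the proportion kept from the old dictionary) and the
    SIFT descriptors gathered over the last F frames."""
    # replicate old words so they form roughly a fraction phi of the data
    n_old = max(1, int(phi * len(new_descs) / max(1.0 - phi, 1e-9)))
    reps = max(1, n_old // len(old_words))
    data = np.vstack([np.repeat(old_words, reps, axis=0),
                      new_descs]).astype(np.float32)
    K = len(old_words)                       # dictionary size stays unchanged
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, words = cv2.kmeans(data, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return words
```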
(3) The recognition output module 3 recognizes the image and outputs the result: a tracking algorithm obtains the target region in the image sequence to be recognized; the target region is mapped into the subspace formed by the known training data; the distance between the target region and the training data is computed in that subspace to obtain a similarity measure; the target class is judged and the recognition result is output.
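A sketch of the subspace matching, assuming a PCA subspace (the text says only "subspace") and nearest-neighbor labeling; all names here are ours:

```python
import numpy as np

def classify(target_vec, train_vecs, train_labels, n_dims=16):
    """Project the tracked target region into the subspace spanned by
    the training data and label it by the nearest training sample."""
    mean = train_vecs.mean(axis=0)
    X = train_vecs - mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_dims].T                    # top principal directions
    train_proj = X @ basis                   # training data in the subspace
    t_proj = (target_vec - mean) @ basis     # target region in the subspace
    d = np.linalg.norm(train_proj - t_proj, axis=1)
    # distance converted to a similarity measure; label of nearest sample
    return train_labels[d.argmin()], 1.0 / (1.0 + d.min())
```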
Preferably, after the first-level noise removal by Wiener filtering, the image still contains residual noise, and the following second-level filter is applied for secondary filtering:
$J(x,y) = \sum_{i=-m/2}^{m/2} \sum_{j=-n/2}^{n/2} H(x,y)\, P_g(x+i, y+j)$
where J(x, y) is the filtered image; $P_g(x+i, y+j)$ is a function of scale m × n with $P_g(x+i, y+j) = q \times \exp(-(x^2+y^2)/\omega)$, where q is the coefficient that normalizes the function, i.e. $\iint q \times \exp(-(x^2+y^2)/\omega)\, dx\, dy = 1$.
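A sketch of the two-stage filtering of this preferred embodiment, with illustrative kernel size and ω values (neither is fixed by the text); the second stage is implemented as a convolution with the normalized Gaussian-style kernel:

```python
import numpy as np
from scipy.signal import wiener, convolve2d

def two_stage_filter(gray: np.ndarray, ksize=7, omega=25.0) -> np.ndarray:
    """Wiener filtering followed by convolution with the normalized
    kernel P_g = q * exp(-(x^2 + y^2) / omega)."""
    j1 = wiener(gray.astype(float), mysize=5)        # first-level filtering
    r = ksize // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    pg = np.exp(-(x**2 + y**2) / omega)
    pg /= pg.sum()                                   # q normalizes the kernel
    return convolve2d(j1, pg, mode='same', boundary='symm')
```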
The bridge construction device of this embodiment, at the image pre-processing stage, adapts the enhanced image to the template size, improving the enhancement effect; the decision condition is corrected automatically for different template sizes; and human visual habits and the non-linear relation between the eye's sensitivity to different colors and color intensity are taken into account. Both the local and the global features of the image are fully exploited, giving the method adaptivity and suppressing over-enhancement, and the enhancement obtained under complex illumination is pronounced. The m × n power-exponent operations are reduced to 256, improving computational efficiency; with Z = 8 and F = 7, the average frame rate is 19 FPS, and the computational load is lower than that of dictionary algorithms of the same type. At the target detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; the processed image details are clearer; the computational load is greatly reduced relative to traditional methods; target scale changes are handled effectively; whether the target is lost is judged accurately, and a target that returns to the field of view is re-detected and stably tracked, the target still being stably tracked after 130 frames. In addition, the bridge construction vehicle offers good real-time performance, accurate positioning and strong robustness, performs well in detecting and tracking fast-moving and occluded targets, and achieves unexpectedly good results.

Claims (2)

1. A bridge construction device combining local and global features, comprising a bridge construction vehicle and a monitoring device mounted on the bridge construction vehicle, the monitoring device performing video-image monitoring of activity near the bridge construction vehicle, characterized in that the monitoring device includes a pretreatment module, a detecting and tracking module and a recognition output module;
(1) the pretreatment module pre-processes the received image and specifically includes an image conversion submodule, an image filtering submodule and an image enhancement submodule:
the image conversion submodule converts the color image into a gray-level image:
$H(x,y) = \dfrac{\max(R(x,y),G(x,y),B(x,y)) + \min(R(x,y),G(x,y),B(x,y))}{2} + 2\bigl(\max(R(x,y),G(x,y),B(x,y)) - \min(R(x,y),G(x,y),B(x,y))\bigr)$
where R(x, y), G(x, y) and B(x, y) are the red, green and blue intensities of the pixel at coordinate (x, y), and H(x, y) is the gray value of the pixel at (x, y); the image size is m × n;
the image filtering submodule filters the gray-level image:
Wiener filtering is adopted for first-level noise removal; an SVLM image, denoted $M_{svlm}(x,y)$, is then defined as $M_{svlm}(x,y) = a_1 J_1(x,y) + a_2 J_2(x,y) + a_3 J_3(x,y) + a_4 J_4(x,y)$, where $a_1, a_2, a_3, a_4$ are variable weights with $a_i = \dfrac{J_i}{J_1 + J_2 + J_3 + J_4},\ i = 1,2,3,4$, and $J(x,y)$ is the filtered image;
the image enhancement submodule:
when $|128 - m| > \dfrac{|\omega - 50|}{3}$, $L(x,y) = 255 \times \left(\dfrac{H(x,y)}{255}\right)^{\psi(x,y)}$, where L(x, y) is the enhanced gray value and ψ(x, y) is the gamma correction coefficient incorporating local information, with α a variable parameter ranging from 0 to 1; ω is the template scale parameter: the larger the scale, the more neighborhood pixel information the template contains, and passing the input image through templates of different scales $\omega_i$ yields images $J_i$ containing neighborhood information of different ranges;
when $|128 - m| \le \dfrac{|\omega - 50|}{3}$ and ω > 50, $L(x,y) = 255 \times \left(\dfrac{H(x,y)}{255}\right)^{\psi(x,y)} \times \left(1 - \dfrac{\omega - 50}{\omega^2}\right)$, where $\psi(x,y) = \psi_\alpha(M_{svlm}(x,y))$ and $\alpha = 1 - \left|\dfrac{128 - \min(m_L, m_H)}{128}\right|$; $m_H$ is the mean of all pixel gray values above 128 in the image, $m_L$ is the mean of all pixel gray values below 128, and here $m = \min(m_H, m_L)$; once α is known, the 256 ψ correction coefficients are pre-computed as a look-up table indexed by i; using the gray value of $M_{svlm}(x,y)$ as the index, the gamma correction coefficient ψ(x, y) of each pixel is obtained quickly from $\psi(x,y) = \psi_\alpha(M_{svlm}(x,y))$; the factor $\left(1 - \dfrac{\omega - 50}{\omega^2}\right)$ is the template correction factor;
(2) the detecting and tracking module specifically includes a construction submodule, a loss-determination submodule and an update submodule:
the construction submodule builds the visual dictionary:
the position and scale of the tracked target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set $X = \{x_1, x_2, \ldots, x_N\}^T$; from every target image in the training set, 128-dimensional SIFT features $\{f_s^{(t)}\}$ are extracted, where $S_t$ is the number of SIFT features in the t-th target image of the training set; after N frames have been tracked, a clustering algorithm divides these features into K clusters, whose centers constitute the feature words; the total number of extractable features is $F_N = \sum_{t=1}^{N} S_t$, with $K \ll F_N$; once the visual dictionary is built, every training image is expressed as a bag of features: the frequency with which each feature word of the visual dictionary occurs is represented by a histogram $h(x_t)$, obtained as follows: each feature $f_s^{(t)}$ of a training image $X_t$ is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalized, giving the feature histogram $h(x_t)$ of training image $X_t$;
the loss-determination submodule judges whether the target is lost:
when a new frame arrives, Z < K histogram bins are selected at random, with Z = 4, to form a new sub-histogram $h^{(z)}(x_t)$ of size Z; up to $N_s = C_K^Z$ such sub-histograms exist; the similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where $t = 1, 2, \ldots, N$ and $z = 1, 2, \ldots, N_s$; the overall similarity is then $\Phi_t = 1 - \prod_z (1 - \Phi_{t\_z})$; the similarity of the candidate target region to the target is expressed as $\Phi = \max\{\Phi_t\}$, and the track-loss decision is $u = \operatorname{sign}(\Phi) = \begin{cases} 1, & \Phi \ge g_s \\ 0, & \Phi < g_s \end{cases}$, where $g_s$ is a manually set misjudgment threshold; u = 1 means the target is stably tracked, and u = 0 means the track is lost; when the track is lost, an affine transform model is defined: $\begin{pmatrix} x_t \\ y_t \end{pmatrix} = s\begin{pmatrix} \cos(\mu_1\theta) & \sin(\mu_1\theta) \\ -\sin(\mu_1\theta) & \cos(\mu_1\theta) \end{pmatrix}\begin{pmatrix} x_{t-1} \\ y_{t-1} \end{pmatrix} + \mu_2\begin{pmatrix} e \\ f \end{pmatrix}$, where $(x_t, y_t)$ and $(x_{t-1}, y_{t-1})$ are the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients; the temperature rotation correction coefficient is $\mu_1 = \begin{cases} 1 - \frac{|T - T_0|}{1000\,T_0}, & T \ge T_0 \\ 1 + \frac{|T - T_0|}{1000\,T_0}, & T < T_0 \end{cases}$ and the temperature translation correction coefficient $\mu_2$ takes the same form; $\mu_1$ and $\mu_2$ correct the image rotation and translation errors caused by ambient temperature deviation; $T_0$ is the manually set standard temperature, here 20 degrees, and T is the temperature measured in real time by a temperature sensor; the parameters of the affine transform model are estimated with the RANSAC algorithm; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ, and the classifier is updated;
the update submodule updates the visual dictionary:
after the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 3 frames, a new feature point set is obtained, where $S_{t-F}$ is the total number of features gathered from the F frames; the old and new feature points are then re-clustered into K clusters, yielding the new visual dictionary; the size of the visual dictionary remains unchanged; the forgetting factor indicates the proportion carried over from the old dictionary: the smaller it is, the more the new features contribute to the track-loss decision;
(3) the recognition output module recognizes the image and outputs the result: a tracking algorithm obtains the target region in the image sequence to be recognized; the target region is mapped into the subspace formed by the known training data; the distance between the target region and the training data is computed in that subspace to obtain a similarity measure; the target class is judged and the recognition result is output.
2. The bridge construction device combining local and global features according to claim 1, characterized in that, after the first-level noise removal by Wiener filtering, the image still contains residual noise, and the following second-level filter is applied for secondary filtering:
$J(x,y) = \sum_{i=-m/2}^{m/2} \sum_{j=-n/2}^{n/2} H(x,y)\, P_g(x+i, y+j)$
where J(x, y) is the filtered image; $P_g(x+i, y+j)$ is a function of scale m × n with $P_g(x+i, y+j) = q \times \exp(-(x^2+y^2)/\omega)$, where q is the coefficient that normalizes the function, i.e. $\iint q \times \exp(-(x^2+y^2)/\omega)\, dx\, dy = 1$.
CN201610045943.5A 2016-01-22 2016-01-22 Bridge construction apparatus with local and global features combined Pending CN105740865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610045943.5A CN105740865A (en) 2016-01-22 2016-01-22 Bridge construction apparatus with local and global features combined

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610045943.5A CN105740865A (en) 2016-01-22 2016-01-22 Bridge construction apparatus with local and global features combined

Publications (1)

Publication Number Publication Date
CN105740865A true CN105740865A (en) 2016-07-06

Family

ID=56246582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610045943.5A Pending CN105740865A (en) 2016-01-22 2016-01-22 Bridge construction apparatus with local and global features combined

Country Status (1)

Country Link
CN (1) CN105740865A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810723A (en) * 2014-02-27 2014-05-21 西安电子科技大学 Target tracking method based on inter-frame constraint super-pixel encoding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810723A (en) * 2014-02-27 2014-05-21 西安电子科技大学 Target tracking method based on inter-frame constraint super-pixel encoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Jinghui (吴京辉): "Research on Tracking and Recognition of Targets in Video Surveillance", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Similar Documents

Publication Publication Date Title
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN103559507B (en) The method for traffic sign detection being combined based on color with shape facility
CN108509902B (en) Method for detecting call behavior of handheld phone in driving process of driver
CN109635758B (en) Intelligent building site video-based safety belt wearing detection method for aerial work personnel
TW201120814A (en) Method for determining if an input image is a foggy image, method for determining a foggy level of an input image and cleaning method for foggy images
CN105718896A (en) Intelligent robot with target recognition function
CN104732227A (en) Rapid license-plate positioning method based on definition and luminance evaluation
CN103679656B (en) A kind of Automated sharpening of images method
CN110728185B (en) Detection method for judging existence of handheld mobile phone conversation behavior of driver
CN107273884A (en) A kind of License Plate Identification method based on mobile terminal camera
CN105718895A (en) Unmanned aerial vehicle based on visual characteristics
CN115082776A (en) Electric energy meter automatic detection system and method based on image recognition
CN105740768A (en) Unmanned forklift device based on combination of global and local features
CN105718897A (en) Numerical control lathe based on visual characteristics
Chen et al. Real-time vehicle color identification using symmetrical SURFs and chromatic strength
CN111553217A (en) Driver call monitoring method and system
CN114022468B (en) Method for detecting article left-over and lost in security monitoring
CN105740865A (en) Bridge construction apparatus with local and global features combined
CN105574517A (en) Electric vehicle charging pile with stable tracking function
CN106778675B (en) A kind of recognition methods of target in video image object and device
CN105718911A (en) Outdoor transformer capable of target identification
CN114973215A (en) Fatigue driving determination method and device and electronic equipment
CN114067186A (en) Pedestrian detection method and device, electronic equipment and storage medium
CN105718910A (en) Battery room with combination of local and global characteristics
CN105740770A (en) Vacuum packaging apparatus with stable tracking function

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160706