CN105740770A - Vacuum packaging apparatus with stable tracking function

Info

Publication number: CN105740770A
Application number: CN201610045942.0A
Authority: CN (China)
Prior art keywords: image, target, module, feature, submodule
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 孟玲
Current assignee: Individual
Original assignee: Individual
Events: application filed by Individual; priority to CN201610045942.0A; publication of CN105740770A; legal status pending

Classifications

    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • B65B 31/00 - Packaging articles or materials under special atmospheric or gaseous conditions; adding propellants to aerosol containers
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213 - Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G06V 10/20 - Image preprocessing
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The present invention discloses a vacuum packaging apparatus with a stable tracking function. The apparatus comprises a vacuum packaging machine and a monitoring apparatus mounted on it. The monitoring apparatus comprises a pre-processing module, a detection and tracking module, and an identification output module; the pre-processing module contains three sub-modules (image conversion, image filtering and image enhancement), and the detection and tracking module contains three sub-modules (construction, loss discrimination and update). By applying video image technology to the vacuum packaging machine, the apparatus can effectively monitor and record malicious sabotage of the machine, and offers good real-time performance, accurate positioning, strong adaptivity, complete preservation of image detail and high robustness.

Description

Vacuum packaging apparatus with stable tracking function
Technical field
The present invention relates to the field of vacuum packaging, and in particular to a vacuum packaging apparatus with a stable tracking function.
Background art
Vacuum packaging equipment, chiefly the vacuum packing machine, automatically extracts the air from a packaging bag and seals it once a predetermined vacuum level is reached; it may also back-fill nitrogen or another gas mixture before sealing. Vacuum packing machines are widely used in the food industry, because vacuum-packed food resists oxidation and therefore keeps for long periods.
In addition, the vacuum packing machine is an important and expensive piece of equipment, so its safety is particularly important: malicious sabotage must be both prevented and monitored.
Summary of the invention
In view of the above problems, the present invention provides a vacuum packaging apparatus with a stable tracking function.
The object of the invention is achieved through the following technical solution:
A vacuum packaging apparatus with a stable tracking function comprises a vacuum packing machine and a monitoring device mounted on it. The monitoring device performs video-image monitoring of activity near the vacuum packing machine and comprises a pre-processing module, a detection and tracking module, and an identification output module.
(1) The pre-processing module pre-processes the received images and comprises an image conversion sub-module, an image filtering sub-module and an image enhancement sub-module.
The image conversion sub-module converts a colour image into a grayscale image:
$$H(x,y)=\frac{\max\big(R(x,y),G(x,y),B(x,y)\big)+\min\big(R(x,y),G(x,y),B(x,y)\big)}{2+2\big(\max\big(R(x,y),G(x,y),B(x,y)\big)-\min\big(R(x,y),G(x,y),B(x,y)\big)\big)}$$
where $R(x,y)$, $G(x,y)$ and $B(x,y)$ are the red, green and blue intensity values of the pixel at coordinate $(x,y)$, and $H(x,y)$ is the gray value at $(x,y)$; the image size is $m \times n$.
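A minimal NumPy sketch of this conversion, assuming an H x W x 3 array with values in 0-255 and reading the garbled layout as the fraction reconstructed above:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion per the reconstructed formula:
    (max + min) / (2 + 2 * (max - min)), per pixel."""
    rgb = rgb.astype(float)
    mx = rgb.max(axis=2)  # max(R(x,y), G(x,y), B(x,y))
    mn = rgb.min(axis=2)  # min(R(x,y), G(x,y), B(x,y))
    return (mx + mn) / (2.0 + 2.0 * (mx - mn))
```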
The image filtering sub-module filters the grayscale image:
Wiener filtering is adopted as first-stage denoising. An SVLM image is then defined, denoted $M_{svlm}(x,y)$, by $M_{svlm}(x,y)=a_1J_1(x,y)+a_2J_2(x,y)+a_3J_3(x,y)+a_4J_4(x,y)$, where $a_1,a_2,a_3,a_4$ are variable weights, $i=1,2,3,4$, and the $J_i(x,y)$ are the filtered images.
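A sketch of this composition under stated assumptions: the text fixes only the weighted-sum form, so the four filter window sizes and the equal weights below are illustrative choices.

```python
import numpy as np
from scipy.signal import wiener

def svlm_image(gray, weights=(0.25, 0.25, 0.25, 0.25), sizes=(3, 5, 9, 17)):
    """Wiener-filter the grayscale image at four window sizes to obtain
    J_1..J_4, then blend them with the variable weights a_1..a_4."""
    J = [wiener(gray.astype(float), mysize=s) for s in sizes]
    return sum(a * Ji for a, Ji in zip(weights, J))
```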
The image enhancement sub-module enhances the image:
When $|128-m|>\frac{|\omega-50|}{3}$:
$$L(x,y)=255\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}$$
where $L(x,y)$ is the enhanced gray value and $\psi(x,y)$ is a gamma-correction coefficient incorporating local information; $\alpha$ is a variable parameter ranging from 0 to 1.
When $|128-m|\le\frac{|\omega-50|}{3}$ and $\omega>50$:
$$L(x,y)=255\left(\frac{H(x,y)}{255}\right)^{\psi(x,y)}\left(1-\frac{\omega-50}{\omega^{2}}\right)$$
where $\psi(x,y)=\psi_{\alpha}(M_{svlm}(x,y))$; $m_H$ is the mean gray value of all pixels above 128, $m_L$ the mean gray value of all pixels below 128, and here $m=\min(m_H,m_L)$. Once the value of $\alpha$ is known, 256 correction coefficients $\psi$ are precomputed as a look-up table indexed by $i$; the gray value of $M_{svlm}(x,y)$ serves as the index, so $\psi(x,y)=\psi_{\alpha}(M_{svlm}(x,y))$ is obtained quickly for every pixel. $\omega$ is the template correction factor.
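A sketch of the branch logic and the 256-entry look-up table. How the table $\psi_\alpha[i]$ is built from $\alpha$ is not specified in the text, so it is passed in as an assumption; the case $\omega \le 50$ with $|128-m|\le|\omega-50|/3$ is likewise unspecified and falls through to the first branch.

```python
import numpy as np

def enhance(H, M_svlm, psi_lut, omega):
    """LUT-based gamma enhancement: psi(x,y) = psi_alpha[M_svlm(x,y)]."""
    m_H = H[H > 128].mean()                # mean gray value above 128
    m_L = H[H < 128].mean()                # mean gray value below 128
    m = min(m_H, m_L)
    psi = psi_lut[M_svlm.astype(np.uint8)]           # look-up, per pixel
    L = 255.0 * (H / 255.0) ** psi                   # first branch
    if abs(128 - m) <= abs(omega - 50) / 3 and omega > 50:
        L *= 1.0 - (omega - 50) / omega ** 2         # second-branch factor
    return np.clip(L, 0, 255).astype(np.uint8)
```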
(2) The detection and tracking module comprises a construction sub-module, a loss-discrimination sub-module and an update sub-module.
The construction sub-module builds the visual dictionary:
At the initial frame, the position and scale of the tracked target are obtained and positive and negative samples are chosen around it to train the tracker; the tracking results form the training set $X=\{x_1,x_2,\dots,x_N\}^T$. From every target image in the training set, 128-dimensional SIFT features $f_s^{(t)}$ are extracted, where $S_t$ is the number of SIFT features in the $t$-th target image. After $N$ frames have been tracked, a clustering algorithm partitions the features into $K$ clusters, and the centre of each cluster constitutes a feature word; the total number of extractable features is $F_N$, with $K \ll F_N$. Once the visual dictionary is built, every training image is expressed in bag-of-features form, representing the frequency with which the dictionary's feature words occur, as a histogram $h(x_t)$. $h(x_t)$ is obtained as follows: every feature $f_s^{(t)}$ of training image $X_t$ is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalised, giving the feature histogram $h(x_t)$ of $X_t$.
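A sketch of the dictionary construction and the bag-of-features histogram, assuming OpenCV's SIFT and k-means as the (unnamed) clustering algorithm; K = 64 is an illustrative value.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def build_dictionary(target_images, K=64):
    """Pool 128-D SIFT features from the tracked-target images and
    cluster them into K feature words (the visual dictionary)."""
    sift = cv2.SIFT_create()
    feats = []
    for img in target_images:
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            feats.append(desc)
    return KMeans(n_clusters=K, n_init=10).fit(np.vstack(feats))

def feature_histogram(img, km):
    """h(x_t): map each SIFT feature to its nearest feature word,
    count word occurrences and normalise."""
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    words = km.predict(desc)
    h = np.bincount(words, minlength=km.n_clusters).astype(float)
    return h / h.sum()
```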
The loss-discrimination sub-module determines whether the target has been lost:
When a new frame arrives, $Z<K$ of the $K$ histogram bins are randomly selected, with $Z=4$, forming a new sub-histogram $h^{(z)}(x_t)$ of size $Z$; up to $N_s=\binom{K}{Z}$ sub-histograms can be formed. The similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, for $t=1,2,\dots,N$ and $z=1,2,\dots,N_s$, and the overall similarity is $\Phi_t=1-\prod_z(1-\Phi_{t\_z})$. The similarity between the candidate region and the target is $\Phi=\max_t\{\Phi_t\}$, and the track-loss judgment is
$$u=\operatorname{sign}(\Phi)=\begin{cases}1, & \Phi\ge g_s\\ 0, & \Phi<g_s\end{cases}$$
where $g_s$ is a manually set misjudgment threshold. When $u=1$ the target is stably tracked; when $u=0$ the track is lost.
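A sketch of this test. The per-subset similarity formula is garbled in the source, so histogram intersection is used below as one plausible stand-in; only a random sample of the $\binom{K}{Z}$ subsets is drawn, and g_s = 0.5 is illustrative.

```python
import numpy as np

def overall_similarity(h_cand, h_train, Z=4, n_subsets=50, rng=None):
    """Phi_t = 1 - prod_z (1 - Phi_{t_z}) over random Z-bin sub-histograms."""
    rng = rng or np.random.default_rng()
    K = len(h_cand)
    phis = []
    for _ in range(n_subsets):
        idx = rng.choice(K, size=Z, replace=False)       # pick Z of K bins
        phis.append(np.minimum(h_cand[idx], h_train[idx]).sum())
    return 1.0 - np.prod(1.0 - np.asarray(phis))

def track_loss_flag(h_cand, train_hists, g_s=0.5):
    """u = 1 (stably tracked) if Phi = max_t Phi_t >= g_s, else 0 (lost)."""
    phi = max(overall_similarity(h_cand, h) for h in train_hists)
    return 1 if phi >= g_s else 0
```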
When the track is lost, an affine transformation model is defined:
$$\begin{bmatrix}x_t\\ y_t\end{bmatrix}=\begin{bmatrix}s\cos(\mu_1\theta) & s\sin(\mu_1\theta)\\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta)\end{bmatrix}\begin{bmatrix}x_{t-1}\\ y_{t-1}\end{bmatrix}+\mu_2\begin{bmatrix}e\\ f\end{bmatrix}$$
where $(x_t,y_t)$ and $(x_{t-1},y_{t-1})$ are, respectively, the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; $s$ is the scale coefficient, $\theta$ the rotation coefficient, and $e$ and $f$ the translation coefficients. The temperature rotation correction coefficient $\mu_1$ and the temperature translation correction coefficient $\mu_2$ are both given by
$$\mu_{1}=\mu_{2}=\begin{cases}1-\dfrac{|T-T_0|}{1000\,T_0}, & T\ge T_0\\[4pt] 1+\dfrac{|T-T_0|}{1000\,T_0}, & T<T_0\end{cases}$$
and correct the image rotation and translation errors caused by ambient-temperature deviation; $T_0$ is a manually set standard temperature, set to 20 °C, and $T$ is the temperature measured in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transformation model; finally, positive and negative samples are collected under the new scale $s$ and rotation coefficient $\theta$, and the classifier is updated.
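A sketch of this step, assuming OpenCV's RANSAC similarity-transform estimator stands in for the unnamed RANSAC procedure; reading $s$, $\theta$, $e$, $f$ out of the 2x3 matrix is an assumption about the intended use.

```python
import numpy as np
import cv2

def temperature_coeff(T, T0=20.0):
    """mu_1 = mu_2 per the piecewise definition: 1 -/+ |T-T0|/(1000*T0)."""
    d = abs(T - T0) / (1000.0 * T0)
    return 1.0 - d if T >= T0 else 1.0 + d

def corrected_affine(prev_pts, curr_pts, T):
    """Estimate scale s, rotation theta and translation (e, f) between
    matched SIFT points with RANSAC, then apply the temperature
    correction coefficients from the model above."""
    M, _ = cv2.estimateAffinePartial2D(
        np.float32(prev_pts), np.float32(curr_pts), method=cv2.RANSAC)
    s = np.hypot(M[0, 0], M[1, 0])            # scale coefficient s
    theta = np.arctan2(M[1, 0], M[0, 0])      # rotation coefficient theta
    e, f = M[0, 2], M[1, 2]                   # translation coefficients
    mu = temperature_coeff(T)                 # mu_1 = mu_2
    c, sn = np.cos(mu * theta), np.sin(mu * theta)
    A = np.array([[ s * c, s * sn],
                  [-s * sn, s * c]])          # rotation block of the model
    return A, mu * np.array([e, f])
```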
The update sub-module updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine-transformation parameters are collected; after $F=3$ frames a new feature-point set is obtained, where $S_{t-F}$ is the total number of feature points gathered from the $F$ frames. The old and new feature points are then re-clustered into $K$ clusters, yielding a new visual dictionary whose size remains unchanged. The forgetting factor $\varphi$ expresses the proportion carried by the old dictionary: the smaller $\varphi$, the more the new features contribute to the track-loss judgment.
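The re-clustering formula itself is garbled in the source, so the sketch below implements the stated intent (old words weighted by the forgetting factor, dictionary size unchanged) as weighted k-means; the weighting scheme and phi = 0.2 are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def update_dictionary(old_words, new_feats, phi=0.2):
    """Re-cluster old feature words and new SIFT features into the same
    number K of clusters, weighting old words by phi (old-dictionary
    proportion) and new features by 1 - phi."""
    K = len(old_words)
    pts = np.vstack([old_words, new_feats])
    w = np.concatenate([np.full(len(old_words), phi),
                        np.full(len(new_feats), 1.0 - phi)])
    km = KMeans(n_clusters=K, n_init=10).fit(pts, sample_weight=w)
    return km.cluster_centers_        # new dictionary, size K unchanged
```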
(3) The identification output module identifies the image and outputs the result: a tracking algorithm obtains the target region in the image sequence to be identified; the target region is mapped into the subspace formed by the known training data; the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, from which the target class is judged; and the recognition result is output.
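A sketch of this step; the text does not name the subspace method or distance, so PCA and Euclidean nearest-neighbour are used as stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

def classify_region(region_vec, train_vecs, train_labels, n_components=20):
    """Project into the training-data subspace and label the target
    region by its nearest training sample in that subspace."""
    pca = PCA(n_components=n_components).fit(train_vecs)
    Z = pca.transform(train_vecs)
    z = pca.transform(region_vec.reshape(1, -1))
    d = np.linalg.norm(Z - z, axis=1)   # distances in the subspace
    i = int(np.argmin(d))
    return train_labels[i], d[i]        # class judgment, similarity measure
```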
Preferably, after the first-stage Wiener filtering the image still contains residual noise, so the following second-stage filter is applied:
$$J(x,y)=\sum_{i=-m/2}^{m/2}\;\sum_{j=-n/2}^{n/2}H(x,y)\,P_g(x+i,y+j)$$
where $J(x,y)$ is the filtered image and $P_g$ is a function of scale $m\times n$ with $P_g(x+i,y+j)=q\exp\!\big(-(x^{2}+y^{2})/\omega\big)$, $q$ being the coefficient that normalises the function, i.e. $\iint q\exp\!\big(-(x^{2}+y^{2})/\omega\big)\,dx\,dy=1$.
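A sketch of this second-stage filter, reading the double sum as a convolution with the normalised kernel; the window size is an illustrative assumption.

```python
import numpy as np
from scipy.signal import convolve2d

def second_stage_filter(H, ksize=5, omega=2.0):
    """Smooth with P_g = q * exp(-(x^2 + y^2) / omega), where q
    normalises the kernel so that its entries sum to 1."""
    r = ksize // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    Pg = np.exp(-(x ** 2 + y ** 2) / omega)
    Pg /= Pg.sum()                       # q normalisation
    return convolve2d(H.astype(float), Pg, mode="same", boundary="symm")
```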
The beneficial effects of this vacuum packaging apparatus are as follows. In the image pre-processing stage, the enhanced image adapts to the template size, improving the enhancement effect; the judgment condition is corrected automatically for different template sizes; and viewing habits and the non-linear relation between human perception and colour intensity are taken into account. The m × n per-pixel power-exponent operations are reduced to 256, improving computational efficiency. In the target detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, improving the recognition rate; processed image detail is clearer; the computational load is greatly reduced compared with traditional methods; target scale changes are accommodated effectively; target loss is judged accurately; and a target that returns to the field of view is re-detected and stably tracked. In addition, the vacuum packing machine offers good real-time performance, accurate positioning and strong robustness, and performs well in detecting and tracking fast, partially occluded targets.
Brief description of the drawings
The accompanying drawings further illustrate the invention, but the embodiments shown in them do not limit the invention in any way; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is the structural block diagram of a vacuum packaging apparatus with a stable tracking function;
Fig. 2 is an external schematic view of a vacuum packaging apparatus with a stable tracking function.
Detailed description of the invention
The invention is further described through the following embodiments.
Embodiment 1: As shown in Figs. 1-2, a vacuum packaging apparatus with a stable tracking function comprises a vacuum packing machine 5 and a monitoring device 4 mounted on it; the monitoring device 4 performs video-image monitoring of activity near the vacuum packing machine 5 and comprises a pre-processing module 1, a detection and tracking module 2 and an identification output module 3.
The pre-processing module 1 comprises the image conversion sub-module 11, the image filtering sub-module 12 and the image enhancement sub-module 13; the detection and tracking module 2 comprises the construction sub-module 21, the loss-discrimination sub-module 22 and the update sub-module 23. These sub-modules, the identification output module 3 and the preferred two-stage filtering operate exactly as described in the summary above, with Z = 4 and F = 3.
In the image pre-processing stage of this embodiment, the enhanced image adapts to the template size, improving the enhancement effect; the judgment condition is corrected automatically for different template sizes; and viewing habits and the non-linear relation between human perception and colour intensity are taken into account. Local and global image features are fully exploited, giving adaptivity, suppressing over-enhancement and producing a marked enhancement effect under complex illumination. The m × n per-pixel power-exponent operations are reduced to 256, improving computational efficiency; with Z = 4 and F = 3 the average frame rate is 15 FPS, and the computational load is lower than that of dictionary algorithms of the same type. In the target detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, improving the recognition rate; processed image detail is clearer; the computational load is greatly reduced compared with traditional methods; target scale changes are accommodated effectively; target loss is judged accurately; and a target that returns to the field of view is re-detected and stably tracked, remaining stably tracked after 110 frames. In addition, this vacuum packing machine offers good real-time performance, accurate positioning and strong robustness, and performs well in detecting and tracking fast, partially occluded targets.
Embodiment 2: As shown in Figs. 1-2, the apparatus is identical to Embodiment 1 except that Z = 5 and F = 4. With these settings the average frame rate is 16 FPS, the computational load remains lower than that of dictionary algorithms of the same type, and the target remains stably tracked after 115 frames; all other beneficial effects of Embodiment 1 are retained.
Embodiment 3: As shown in Figs. 1-2, the apparatus is identical to Embodiment 1 except that Z = 6 and F = 5. With these settings the average frame rate is 17 FPS, the computational load remains lower than that of dictionary algorithms of the same type, and the target remains stably tracked after 120 frames; all other beneficial effects of Embodiment 1 are retained.
Embodiment 4: As shown in Figs. 1-2, the apparatus is identical to Embodiment 1 except that Z = 7, F = 6 and the forgetting factor φ = 0.18. With these settings the average frame rate is 18 FPS, the computational load remains lower than that of dictionary algorithms of the same type, and the target remains stably tracked after 125 frames; all other beneficial effects of Embodiment 1 are retained.
Embodiment 5: as shown in Figure 1-2, a kind of vacuum packaging equipment with tenacious tracking function, including vacuum packing machine 5 and the monitoring device 4 being arranged on vacuum packing machine 5, monitoring device 4 for carrying out video image monitoring to the activity near vacuum packing machine 5, and monitoring device 4 includes pretreatment module 1, detecting and tracking module 2, identifies output module 3.
(1) pretreatment module 1, for the image received is carried out pretreatment, specifically includes image transformant module 11, image filtering submodule 12 and image enhaucament submodule 13:
Image transformant module 11, for coloured image is converted into gray level image:
H ( x , y ) = max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) + min ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) 2 + 2 ( max ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) - min ( R ( x , y ) , G ( x , y ) , B ( x , y ) ) )
Wherein, (x, y), (x, y), (x, (x, y) the intensity red green blue value at place, (x y) represents coordinate (x, y) grey scale pixel value at place to H to B to G to R y) to represent pixel respectively;Image is sized to m × n;
Image filtering submodule 12, for gray level image is filtered:
Adopt Wiener filtering to carry out after first-level filtering removes, define svlm image, be designated as Msvlm(x, y), being specifically defined formula is: Msvlm(x, y)=a1J1(x, y)+a2J2(x, y)+a3J3(x, y)+a4J4(x, y), wherein a1、a2、a3、a4For variable weight,I=1,2,3,4;(x, y) for the image after filtered for J;
Image enhaucament submodule 13:
When | 128 - m | > | &omega; - 50 | 3 Time, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) , Wherein, (x, y) for enhanced gray value for L;(x y) is the gamma correction coefficient including local message, now to ψα be range for 0 to 1 variable element,
When | 128 - m | &le; | &omega; - 50 | 3 And during ω > 50, L ( x , y ) = 255 &times; ( H ( x , y ) 255 ) &psi; ( x , y ) &times; ( 1 - &omega; - 50 &omega; 2 ) , Wherein ψ (x, y)=ψα(Msvlm(x, y)),mHIt is the average of the gray value all pixels higher than 128, m in imageLIt is the average of the gray value all pixels lower than 128, and now m=min (mH, mL), when α value is known, calculates 256 ψ correction coefficients as look-up table, be designated asWherein i is index value, utilizes Msvlm(x, gray value y) is as index, according to ψ (x, y)=ψα(Msvlm(x, y)) quickly obtain each pixel in image gamma correction coefficient ψ (x, y);For template correction factor;
(2) The detection and tracking module 2 specifically includes a construction submodule 21, a loss discrimination submodule 22 and an update submodule 23:
The construction submodule 21 builds the visual dictionary:
The position and scale of the tracked target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set $X = \{x_1, x_2, \ldots, x_N\}^T$. From each target image in the training set, 128-dimensional SIFT features $f_s^{(t)}$ are extracted, where $S_t$ denotes the number of SIFT features in the t-th target image. After N frames have been tracked, these features are divided into K clusters by a clustering algorithm, and the center of each cluster constitutes a feature word of the dictionary. The total number of extractable features is $F_N = \sum_{t=1}^{N} S_t$, with $K \ll F_N$. Once the visual dictionary is built, each training image is expressed in bag-of-features form, i.e. by the frequency with which the dictionary's feature words occur, represented by a histogram $h(x_t)$ obtained as follows: each feature $f_s^{(t)}$ of a training image $x_t$ is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalized, yielding the feature histogram $h(x_t)$ of training image $x_t$;
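A compact sketch of the dictionary construction and histogram step, with plain k-means standing in for the unspecified clustering algorithm; SIFT extraction itself is assumed to happen elsewhere:

```python
import numpy as np

def build_dictionary(features, K, iters=20, seed=0):
    """Cluster 128-D SIFT descriptors (rows of `features`) into K clusters;
    the cluster centers are the feature words of the visual dictionary."""
    feats = np.asarray(features, dtype=np.float64)
    rng = np.random.default_rng(seed)
    words = feats[rng.choice(len(feats), K, replace=False)].copy()
    for _ in range(iters):
        dist = np.linalg.norm(feats[:, None, :] - words[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                words[k] = feats[labels == k].mean(axis=0)
    return words

def bow_histogram(features, words):
    """Project each feature onto its nearest word and return the
    normalized word-frequency histogram h(x_t)."""
    feats = np.asarray(features, dtype=np.float64)
    dist = np.linalg.norm(feats[:, None, :] - words[None, :, :], axis=2)
    h = np.bincount(dist.argmin(axis=1), minlength=len(words)).astype(float)
    return h / h.sum()
```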
The loss discrimination submodule 22 determines whether the target has been lost:
When a new frame arrives, Z < K histogram bins are randomly selected, with Z = 8, forming new sub-histograms $h^{(z)}(x_t)$ of size Z; the number of possible sub-histograms is up to $N_s = \binom{K}{Z}$. The similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where t = 1, 2, ..., N and z = 1, 2, ..., $N_s$; the overall similarity is then $\Phi_t = 1 - \prod_z (1 - \Phi_{t\_z})$. The similarity between the candidate target region and the target is written $\Phi = \max_t\{\Phi_t\}$, and the track-loss judgment formula is

$$u = \operatorname{sign}(\Phi) = \begin{cases} 1, & \Phi \ge g_s \\ 0, & \Phi < g_s \end{cases}$$

where $g_s$ is a manually set misjudgment threshold. When u = 1 the target is stably tracked; when u = 0 the track is lost;
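The combination rule and loss test can be sketched directly; since the per-sub-histogram similarity $\Phi_{t\_z}$ is garbled in the source, histogram intersection is assumed as a placeholder, and a fixed number of random Z-bin subsets is sampled rather than enumerating all $\binom{K}{Z}$:

```python
import numpy as np

def track_lost(h_cand, train_hists, K, Z, g_s, n_sub=50, seed=0):
    """Return u = 1 (stably tracked) or 0 (lost) per the judgment formula."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for h_train in train_hists:              # t = 1..N training histograms
        miss = 1.0
        for _ in range(n_sub):               # sampled Z-bin sub-histograms
            idx = rng.choice(K, Z, replace=False)
            phi_tz = np.minimum(h_cand[idx], h_train[idx]).sum()  # assumed
            miss *= 1.0 - phi_tz
        best = max(best, 1.0 - miss)         # Phi_t = 1 - prod(1 - Phi_t_z)
    return 1 if best >= g_s else 0           # Phi = max_t Phi_t vs threshold
```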
When the track is lost, an affine transformation model is defined:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} s\cos(\mu_1\theta) & s\sin(\mu_1\theta) \\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta) \end{bmatrix} \begin{bmatrix} x_{t-1} \\ y_{t-1} \end{bmatrix} + \mu_2 \begin{bmatrix} e \\ f \end{bmatrix},$$

where $(x_t, y_t)$ and $(x_{t-1}, y_{t-1})$ are the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. The temperature rotation correction coefficient is

$$\mu_1 = \begin{cases} 1 - \dfrac{|T - T_0|}{1000\,T_0}, & T \ge T_0 \\ 1 + \dfrac{|T - T_0|}{1000\,T_0}, & T < T_0 \end{cases}$$

and the temperature translation correction coefficient $\mu_2$ is given by the same expression; $\mu_1$ and $\mu_2$ correct the image rotation and translation errors caused by deviation of the ambient temperature. $T_0$ is a manually set standard temperature, set to 20 degrees, and T is the temperature value monitored in real time by a temperature sensor. The RANSAC estimation algorithm is adopted to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ, and the classifier is updated;
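A sketch of the temperature correction and of pushing matched points through the corrected model; estimating (s, θ, e, f) with RANSAC is assumed to be done separately:

```python
import numpy as np

def temp_coeff(T, T0=20.0):
    """mu = 1 -/+ |T - T0| / (1000 * T0); the text gives the same form
    for both the rotation (mu1) and translation (mu2) corrections."""
    d = abs(T - T0) / (1000.0 * T0)
    return 1.0 - d if T >= T0 else 1.0 + d

def affine_predict(pts_prev, s, theta, e, f, T, T0=20.0):
    """Map previous-frame SIFT points (N x 2) through the corrected model."""
    mu1 = temp_coeff(T, T0)                  # rotation correction
    mu2 = temp_coeff(T, T0)                  # translation correction
    a = mu1 * theta
    R = np.array([[ s * np.cos(a), s * np.sin(a)],
                  [-s * np.sin(a), s * np.cos(a)]])
    return pts_prev @ R.T + mu2 * np.array([e, f])
```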
The update submodule 23 updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 7 frames, a new feature point set is obtained, where $S_{t-F}$ denotes the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving a new visual dictionary whose size remains unchanged. The forgetting factor φ expresses the proportion carried by the old dictionary: the smaller φ is, the more the new features contribute to the track-loss judgment;
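Since the exact re-clustering formula is garbled in the source, one plausible reading is sketched here: the old words are weighted by the forgetting factor before re-clustering, reusing `build_dictionary` from the earlier sketch (φ = 0.18 is the value quoted for the previous embodiment, used only as a default):

```python
import numpy as np

def update_dictionary(old_words, new_feats, K, phi=0.18):
    """Re-cluster old words and new features into K words; the old
    dictionary's influence is approximated by phi-proportional replication."""
    n_rep = max(1, int(phi * len(new_feats) / max(1, len(old_words))))
    stacked = np.vstack([np.repeat(old_words, n_rep, axis=0), new_feats])
    return build_dictionary(stacked, K)      # dictionary size unchanged
```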
(3) The recognition output module 3 performs image recognition and output: the tracking algorithm locates the target region in the image sequence to be recognized, the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, the target class is judged, and the recognition result is output.
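A minimal sketch of the recognition step, with a PCA subspace assumed for the "subspace formed by the known training data" (the source does not name the projection method):

```python
import numpy as np

def fit_subspace(train, dim):
    """PCA: return the mean and the top-`dim` principal axes of the rows."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:dim]

def recognize(target, train, labels, mean, axes):
    """Project target and training data into the subspace and return the
    label of the nearest training sample (distance as similarity measure)."""
    t = (np.asarray(target, dtype=float) - mean) @ axes.T
    z = (np.asarray(train, dtype=float) - mean) @ axes.T
    return labels[int(np.linalg.norm(z - t, axis=1).argmin())]
```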
Preferably, after the first-level Wiener filtering has been applied, the image still contains residual noise, so the following second-level filter is adopted for secondary filtering:
$$J(x,y) = \sum_{i=-m/2}^{m/2} \; \sum_{j=-n/2}^{n/2} H(x,y)\, P_g(x+i, y+j)$$
where J(x,y) is the image after filtering; $P_g$ is a function of scale m × n with $P_g(x+i, y+j) = q \times \exp\!\big(-(x^2 + y^2)/\omega\big)$, where q is the coefficient that normalizes the function, i.e. $\iint q \times \exp\!\big(-(x^2+y^2)/\omega\big)\, dx\, dy = 1$.
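As printed, the double sum multiplies H(x,y) by a sum of kernel values; the conventional reading is a Gaussian-weighted neighborhood average, which is what this sketch implements (the kernel size and ω below are illustrative defaults, not values from the source):

```python
import numpy as np

def gaussian_kernel(m, n, omega):
    """P_g(i, j) = q * exp(-(i^2 + j^2) / omega), normalized to sum to 1."""
    i = np.arange(m) - m // 2
    j = np.arange(n) - n // 2
    g = np.exp(-(i[:, None] ** 2 + j[None, :] ** 2) / omega)
    return g / g.sum()

def second_stage_filter(H, m=5, n=5, omega=10.0):
    """Second-level filtering of H by the normalized Gaussian template."""
    k = gaussian_kernel(m, n, omega)
    pad = np.pad(H.astype(np.float64),
                 ((m // 2, m // 2), (n // 2, n // 2)), mode="edge")
    out = np.zeros(H.shape, dtype=np.float64)
    for di in range(m):
        for dj in range(n):
            out += k[di, dj] * pad[di:di + H.shape[0], dj:dj + H.shape[1]]
    return out
```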
The vacuum packing machine of this embodiment, at the image preprocessing stage, adapts the enhanced image to the template size, improving the enhancement effect, and automatically revises the judgment condition for different template sizes, while taking into account viewing habits and the nonlinear relationship between the human eye's sensitivity to different colors and color intensity. It makes full use of both the local and the global features of the image, is adaptive, and suppresses over-enhancement, giving a pronounced enhancement effect under complex illumination. The m × n power-exponent operations are reduced to 256, improving computational efficiency; with Z = 8 and F = 7, the average frame rate is 19 FPS, and the computational load is lower than that of dictionary algorithms of the same type. At the target detection and tracking stage, the rotation and translation errors of the image caused by temperature differences are eliminated, raising the recognition rate; image details after processing are clearer, and the computational load is greatly reduced compared with traditional methods. The tracker adapts effectively to changes in target scale, judges accurately whether the target has been lost, and re-detects and stably tracks the target once it returns to the field of view, still tracking it stably after 130 frames. In addition, this vacuum packing machine has the advantages of good real-time performance, accurate positioning and strong robustness, performs well in detecting and tracking fast and partially occluded targets, and achieves unexpectedly good results.

Claims (2)

1. A vacuum packaging apparatus with a stable tracking function, comprising a vacuum packing machine and a monitoring device mounted on the vacuum packing machine, the monitoring device performing video monitoring of activity near the vacuum packing machine, characterized in that the monitoring device comprises a preprocessing module, a detection and tracking module and a recognition output module;
(1) the preprocessing module preprocesses the received image and specifically includes an image transformation submodule, an image filtering submodule and an image enhancement submodule:
the image transformation submodule converts the color image into a grayscale image:
$$H(x,y) = \frac{\max\big(R(x,y),G(x,y),B(x,y)\big) + \min\big(R(x,y),G(x,y),B(x,y)\big)}{2} + 2\Big(\max\big(R(x,y),G(x,y),B(x,y)\big) - \min\big(R(x,y),G(x,y),B(x,y)\big)\Big)$$
where R(x,y), G(x,y) and B(x,y) denote the red, green and blue intensity values of the pixel at coordinate (x,y), and H(x,y) denotes the grayscale pixel value at (x,y); the image size is m × n;
the image filtering submodule filters the grayscale image:
Wiener filtering is adopted for first-level noise removal. An svlm image is then defined, denoted $M_{svlm}(x,y)$, by the formula

$$M_{svlm}(x,y) = a_1 J_1(x,y) + a_2 J_2(x,y) + a_3 J_3(x,y) + a_4 J_4(x,y),$$

where $a_1, a_2, a_3, a_4$ are variable weights for the scales i = 1, 2, 3, 4, and $J_i(x,y)$ is the image after filtering at the corresponding template scale;
the image enhancement submodule:
When $|128 - m| > \frac{|\omega - 50|}{3}$,

$$L(x,y) = 255 \times \left(\frac{H(x,y)}{255}\right)^{\psi(x,y)},$$

where L(x,y) is the enhanced gray value, $\psi(x,y)$ is the gamma correction coefficient incorporating local information, and α is a variable parameter ranging from 0 to 1; ω is the template scale parameter: the larger the scale, the more neighborhood pixel information the template contains, and passing the input image through templates of different scales $\omega_i$ yields images $J_i$ containing neighborhood information of different ranges;
When $|128 - m| \le \frac{|\omega - 50|}{3}$ and ω > 50,

$$L(x,y) = 255 \times \left(\frac{H(x,y)}{255}\right)^{\psi(x,y)} \times \left(1 - \frac{\omega - 50}{\omega^2}\right),$$

where $\psi(x,y) = \psi_\alpha(M_{svlm}(x,y))$; $m_H$ is the mean of the gray values of all pixels above 128 in the image, $m_L$ is the mean of the gray values of all pixels below 128, and here $m = \min(m_H, m_L)$. When the value of α is known, the 256 ψ correction coefficients are computed once and stored as a lookup table indexed by i; using the gray value of $M_{svlm}(x,y)$ as the index, the gamma correction coefficient $\psi(x,y) = \psi_\alpha(M_{svlm}(x,y))$ of each pixel is obtained quickly. The factor $\left(1 - \frac{\omega - 50}{\omega^2}\right)$ is the template correction factor;
(2) the detection and tracking module specifically includes a construction submodule, a loss discrimination submodule and an update submodule:
the construction submodule builds the visual dictionary:
The position and scale of the tracked target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set $X = \{x_1, x_2, \ldots, x_N\}^T$. From each target image in the training set, 128-dimensional SIFT features $f_s^{(t)}$ are extracted, where $S_t$ denotes the number of SIFT features in the t-th target image. After N frames have been tracked, these features are divided into K clusters by a clustering algorithm, and the center of each cluster constitutes a feature word of the dictionary. The total number of extractable features is $F_N = \sum_{t=1}^{N} S_t$, with $K \ll F_N$. Once the visual dictionary is built, each training image is expressed in bag-of-features form, i.e. by the frequency with which the dictionary's feature words occur, represented by a histogram $h(x_t)$ obtained as follows: each feature $f_s^{(t)}$ of a training image $x_t$ is projected onto the visual dictionary and represented by the feature word with the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalized, yielding the feature histogram $h(x_t)$ of training image $x_t$;
the loss discrimination submodule determines whether the target has been lost:
When a new frame arrives, Z < K histogram bins are randomly selected, with Z = 4, forming new sub-histograms $h^{(z)}(x_t)$ of size Z; the number of possible sub-histograms is up to $N_s = \binom{K}{Z}$. The similarity $\Phi_{t\_z}$ between a sub-histogram of the candidate target region and the corresponding sub-histogram of a target region in the training set is computed, where t = 1, 2, ..., N and z = 1, 2, ..., $N_s$; the overall similarity is then $\Phi_t = 1 - \prod_z (1 - \Phi_{t\_z})$. The similarity between the candidate target region and the target is written $\Phi = \max_t\{\Phi_t\}$, and the track-loss judgment formula is

$$u = \operatorname{sign}(\Phi) = \begin{cases} 1, & \Phi \ge g_s \\ 0, & \Phi < g_s \end{cases}$$

where $g_s$ is a manually set misjudgment threshold. When u = 1 the target is stably tracked; when u = 0 the track is lost. When the track is lost, an affine transformation model is defined:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} s\cos(\mu_1\theta) & s\sin(\mu_1\theta) \\ -s\sin(\mu_1\theta) & s\cos(\mu_1\theta) \end{bmatrix} \begin{bmatrix} x_{t-1} \\ y_{t-1} \end{bmatrix} + \mu_2 \begin{bmatrix} e \\ f \end{bmatrix},$$

where $(x_t, y_t)$ and $(x_{t-1}, y_{t-1})$ are the position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target, both known quantities; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. The temperature rotation correction coefficient is

$$\mu_1 = \begin{cases} 1 - \dfrac{|T - T_0|}{1000\,T_0}, & T \ge T_0 \\ 1 + \dfrac{|T - T_0|}{1000\,T_0}, & T < T_0 \end{cases}$$

and the temperature translation correction coefficient $\mu_2$ is given by the same expression; $\mu_1$ and $\mu_2$ correct the image rotation and translation errors caused by deviation of the ambient temperature. $T_0$ is a manually set standard temperature, set to 20 degrees, and T is the temperature value monitored in real time by a temperature sensor. The RANSAC estimation algorithm is adopted to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ, and the classifier is updated;
the update submodule updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 3 frames, a new feature point set is obtained, where $S_{t-F}$ denotes the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving a new visual dictionary whose size remains unchanged. The forgetting factor φ expresses the proportion carried by the old dictionary: the smaller φ is, the more the new features contribute to the track-loss judgment;
(3) the recognition output module performs image recognition and output: the tracking algorithm locates the target region in the image sequence to be recognized, the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, the target class is judged, and the recognition result is output.
2. The vacuum packaging apparatus with a stable tracking function according to claim 1, characterized in that Wiener filtering is adopted for first-level noise removal, after which the image still contains residual noise, and the following second-level filter is adopted for secondary filtering:
$$J(x,y) = \sum_{i=-m/2}^{m/2} \; \sum_{j=-n/2}^{n/2} H(x,y)\, P_g(x+i, y+j)$$
where J(x,y) is the image after filtering; $P_g$ is a function of scale m × n with $P_g(x+i, y+j) = q \times \exp\!\big(-(x^2 + y^2)/\omega\big)$, where q is the coefficient that normalizes the function, i.e. $\iint q \times \exp\!\big(-(x^2+y^2)/\omega\big)\, dx\, dy = 1$.
