CN105574517A - Electric vehicle charging pile with stable tracking function - Google Patents
- Publication number
- CN105574517A (application CN201610046346.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- module
- charging pile
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F15/00—Coin-freed apparatus with meter-controlled dispensing of liquid, gas or electricity
- G07F15/003—Coin-freed apparatus with meter-controlled dispensing of liquid, gas or electricity for electricity
- G07F15/005—Coin-freed apparatus with meter-controlled dispensing of liquid, gas or electricity for electricity dispensed for the electrical charging of vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an electric vehicle charging pile with a stable tracking function. The charging pile comprises a charging pile body and a monitoring device mounted on the body. The monitoring device comprises a preprocessing module, a detection-and-tracking module and a recognition-output module. The preprocessing module comprises an image conversion sub-module, an image filtering sub-module and an image enhancement sub-module; the detection-and-tracking module comprises a construction sub-module, a loss-determination sub-module and an update sub-module. By applying video image technology to the charging pile, malicious vandalism of the charging pile can be effectively monitored. The charging pile offers high real-time performance, high positioning accuracy, strong self-adaptive capability, good retention of image detail, and strong robustness.
Description
Technical field
The present invention relates to the field of electric vehicle charging piles, and in particular to an electric vehicle charging pile with a stable tracking function.
Background technology
A charging pile is functionally similar to the fuel dispenser in a filling station. It can be fixed to the ground or to a wall, installed in public buildings (office buildings, shopping malls, public car parks, etc.), residential car parks or charging stations, and can charge electric vehicles of various models at different voltage levels. The input of the charging pile is connected directly to the AC grid, and the output is fitted with a charging plug for charging electric vehicles. Charging piles generally provide two charging modes, normal and fast. Using a dedicated charging card on the pile's human-machine interface, users can select the charging mode and charging duration and print cost data; the display screen shows the amount of charge, the cost, the charging time and other data.
In addition, as an important and expensive piece of equipment, the security of an electric vehicle charging pile is particularly important, and malicious vandalism must be prevented and monitored.
Summary of the invention
In view of the above problems, the invention provides an electric vehicle charging pile with a stable tracking function.
The object of the invention is achieved by the following technical solution:
An electric vehicle charging pile with a stable tracking function comprises a charging pile and a monitoring device mounted on the charging pile. The monitoring device performs video image monitoring of activity near the charging pile and comprises a preprocessing module, a detection-and-tracking module and a recognition-output module.
(1) The preprocessing module preprocesses the received images and comprises an image conversion sub-module, an image filtering sub-module and an image enhancement sub-module.
The image conversion sub-module converts each colour image into a greyscale image, where R(x, y), G(x, y) and B(x, y) denote the red, green and blue intensity values at pixel (x, y), H(x, y) denotes the greyscale value at coordinate (x, y), and the image size is m × n.
The image filtering sub-module filters the greyscale image. Wiener filtering is used as the first filtering level, and an SVLM image M_svlm(x, y) is then defined as
M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y),
where a1, a2, a3 and a4 are variable weights and J(x, y) is the filtered image.
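The formulas for J1–J4 appear only as images in the original, so the sketch below assumes the four terms are the filtered greyscale frame smoothed at four scales, with the weights a1–a4 summing to 1; this is an illustrative assumption, not the patented formula.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur, standing in for the patent's unspecified
    per-scale filters J1..J4."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def svlm_image(gray, weights=(0.25, 0.25, 0.25, 0.25), radii=(1, 2, 4, 8)):
    """M_svlm(x,y) = a1*J1 + a2*J2 + a3*J3 + a4*J4, weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    scales = [box_blur(gray.astype(float), r) for r in radii]
    return sum(a * j for a, j in zip(weights, scales))
```

Because the weights sum to 1, a flat region keeps its grey level while local contrast is pooled over several scales.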
The image enhancement sub-module computes the enhanced greyscale value L(x, y) by gamma correction. ψ(x, y) is a gamma-correction coefficient that incorporates local information, with ψ(x, y) = ψ_α(M_svlm(x, y)), where α is a variable parameter ranging from 0 to 1; when ω > 50, a template correction factor is applied instead. Here m_h is the mean of all pixels in the image whose greyscale value is above 128, m_l is the mean of all pixels whose greyscale value is below 128, and m = min(m_h, m_l). Once the value of α is known, the 256 possible ψ correction coefficients are computed once as a look-up table ψ_α[i], where i is the index value; using the greyscale value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) of every pixel in the image is then obtained quickly from ψ(x, y) = ψ_α(M_svlm(x, y)).
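The look-up-table step above can be sketched as follows. The exact form of ψ_α is not reproduced in the text, so a simple power law ψ_α(i) = (i/255)^α stands in for it; that substitution is an assumption made only so the table-indexing mechanism can be shown.

```python
import numpy as np

def build_psi_lut(alpha):
    """Precompute the 256 correction coefficients psi_alpha[i] once.
    The true formula is not given in the text; (i/255)**alpha is a
    stand-in so the LUT mechanism can be demonstrated."""
    i = np.arange(256)
    return (i / 255.0) ** alpha

def enhance(gray, m_svlm, alpha=0.5):
    """L(x,y): per-pixel gamma correction where the exponent psi(x,y)
    is fetched from the LUT using the SVLM grey value as index -- one
    table look-up per pixel instead of a per-pixel power computation."""
    lut = build_psi_lut(alpha)
    psi = lut[np.clip(m_svlm, 0, 255).astype(np.uint8)]
    return 255.0 * (gray / 255.0) ** psi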
(2) The detection-and-tracking module comprises a construction sub-module, a loss-determination sub-module and an update sub-module.
The construction sub-module builds the visual dictionary. The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are selected around it to train the tracker, and the tracking results form the training set X = {x1, x2, … xN}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm partitions these features into K clusters, where K ≪ F_N, F_N being the total number of extractable features; the centre of each cluster constitutes a feature word. Once the visual dictionary has been built, each training image is expressed as a bag of features, i.e. a histogram h(x_t) of the occurrence frequencies of the feature words in the dictionary. h(x_t) is obtained as follows: each feature f_s(t) of training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalised, yielding the feature histogram h(x_t) of training image X_t.
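A minimal sketch of the dictionary construction and bag-of-features step. The patent names neither the clustering algorithm nor the SIFT implementation, so a toy K-means operates on plain descriptor vectors; both choices are assumptions for illustration.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal K-means; the final cluster centres are the 'feature words'."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def bag_of_features(image_features, dictionary):
    """Assign each descriptor to its nearest feature word and return the
    normalised occurrence histogram h(x_t)."""
    d = np.linalg.norm(image_features[:, None] - dictionary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

In the patent's setting the descriptors would be 128-dimensional SIFT vectors and k = K ≪ F_N.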
The loss-determination sub-module decides whether the target has been lost. When a new frame arrives, Z < K bins are selected at random from the K histogram bins, with Z = 4, to form a new sub-histogram h^(z)(x_t) of size Z; at most N_s such sub-histograms exist. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of target region t in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s. The overall similarity is then
Φ_t = 1 − ∏_z (1 − Φ_t_z),
and the similarity between the candidate region and the target is Φ = max_t {Φ_t}. The loss criterion compares Φ with gs, a manually set misjudgement threshold: when u = 1 the target is being tracked stably, and when u = 0 the target has been lost.
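The combination rule Φ_t = 1 − ∏_z(1 − Φ_t_z) can be sketched as follows. The text does not name the per-sub-histogram similarity measure, so histogram intersection is used here as an assumed stand-in.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """One common similarity for normalised histograms (an assumption:
    the patent does not specify the per-sub-histogram measure)."""
    return np.minimum(h1, h2).sum()

def overall_similarity(candidate, train_hist, z_bins_list):
    """Phi_t = 1 - prod_z(1 - Phi_t_z) over the random bin subsets."""
    prod = 1.0
    for bins in z_bins_list:
        phi_tz = histogram_intersection(candidate[bins], train_hist[bins])
        prod *= (1.0 - phi_tz)
    return 1.0 - prod

def is_lost(candidate, train_hists, z_bins_list, gs):
    """u = 1 (stably tracked) when max_t Phi_t exceeds the threshold gs,
    else u = 0 (target lost)."""
    best = max(overall_similarity(candidate, h, z_bins_list) for h in train_hists)
    return 1 if best > gs else 0
```

The product form means a candidate only needs to match well on some sub-histograms for Φ_t to approach 1, which makes the test tolerant of partial occlusion.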
When the target has been lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t−1}, y_{t−1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ1 is the temperature rotation correction coefficient and μ2 the temperature translation correction factor; both correct the image rotation and translation errors caused by ambient-temperature deviation. T0 is a manually set standard temperature, fixed at 20 °C, and T is the temperature value monitored in real time by a temperature sensor. The parameters of the affine transformation model are estimated with the RANSAC algorithm; finally, positive and negative samples are collected at the new scale s and rotation coefficient θ, and the classifier is updated.
The update sub-module updates the visual dictionary. After the target position has been obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 3 frames a new feature point set is obtained, S_{t−F} denoting the total number of features collected from the F frames. The old and new feature points are then re-clustered into K clusters, producing the new visual dictionary; the size of the visual dictionary remains unchanged. A forgetting factor expresses the proportion retained by the old dictionary: the smaller it is, the more the new features contribute to the loss decision.
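A sketch of re-clustering old feature words with newly collected features under a forgetting factor. The exact update formula is an image in the original, so the weighted-mean update below (old word weighted by beta) is an assumption.

```python
import numpy as np

def update_dictionary(old_words, new_features, beta=0.7, iters=10):
    """Re-cluster: assign new features to their nearest word, then move
    each word to a weighted mean. beta is the forgetting factor, i.e.
    the proportion kept from the old dictionary (smaller beta -> new
    features matter more). K, the dictionary size, stays unchanged."""
    k = len(old_words)
    centres = old_words.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(new_features[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = new_features[labels == j]
            if len(pts):
                # old word keeps proportion beta of its position
                centres[j] = beta * old_words[j] + (1 - beta) * pts.mean(axis=0)
    return centres
```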
(3) The recognition-output module recognises the target and outputs the result. In the image sequence to be recognised, the tracking algorithm locates the target region; the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, the target class is decided, and the recognition result is output.
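The subspace mapping and distance computation can be sketched with a PCA subspace; the text does not say how the subspace spanned by the training data is formed, so PCA and nearest-sample labelling are assumptions.

```python
import numpy as np

def fit_subspace(train, dim):
    """PCA subspace spanned by the training data."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:dim]

def classify(target, train, labels, dim=2):
    """Project the target and the training samples into the subspace and
    label the target by the nearest training sample there."""
    mean, basis = fit_subspace(train, dim)
    t = (target - mean) @ basis.T
    x = (train - mean) @ basis.T
    dists = np.linalg.norm(x - t, axis=1)
    return labels[dists.argmin()]
```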
Preferably, since the image still contains residual noise after the first-level Wiener filtering, the following second-level filter is applied, where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n with
P_g(x + i, y + j) = q × exp(−(x² + y²)/ω),
where q is the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
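The second-level filter can be sketched directly from the kernel definition, normalising the discrete kernel so it sums to 1 (the discrete analogue of the integral constraint):

```python
import numpy as np

def exp_kernel(radius, omega):
    """P_g proportional to exp(-(x^2 + y^2)/omega); q is chosen so the
    discrete kernel sums to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / omega)
    return k / k.sum()

def second_level_filter(img, radius=2, omega=4.0):
    """Convolve the (already Wiener-filtered) image with the kernel,
    padding edges by replication."""
    k = exp_kernel(radius, omega)
    pad = np.pad(img.astype(float), radius, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = (pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1] * k).sum()
    return out
```

Because the kernel is normalised, flat regions pass through unchanged while high-frequency residual noise is attenuated.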
The beneficial effects of this electric vehicle charging pile are as follows. In the image preprocessing stage, the enhanced image adapts to the template size, improving the enhancement effect; the decision conditions adjust automatically for different template sizes, and the nonlinear relationship between viewing habits and the human eye's sensitivity to different colours and intensities is taken into account. M × N power-exponent computations are reduced to 256, improving computational efficiency. In the detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; image details are clearer after processing, the computational load is markedly lower than in classical methods, scale changes of the target are handled effectively, target loss is judged accurately, and the target can be re-detected and stably tracked after returning to the field of view. In addition, this charging pile offers good real-time performance, accurate positioning and strong robustness, and performs well in detecting and tracking targets under fast occlusion.
Brief description of the drawings
The accompanying drawings further illustrate the invention, but the embodiments shown in them do not limit the invention in any way; for those of ordinary skill in the art, other drawings can be derived from the following drawings without creative effort.
Fig. 1 is a structural block diagram of an electric vehicle charging pile with a stable tracking function;
Fig. 2 is an external schematic diagram of an electric vehicle charging pile with a stable tracking function.
Detailed description
The invention is further described with the following embodiments.
Embodiment 1: As shown in Figs. 1-2, an electric vehicle charging pile with a stable tracking function comprises a charging pile 5 and a monitoring device 4 mounted on the charging pile 5. The monitoring device 4 performs video image monitoring of activity near the charging pile 5 and comprises a preprocessing module 1, a detection-and-tracking module 2 and a recognition-output module 3.
(1) The preprocessing module 1 preprocesses the received images and comprises an image conversion sub-module 11, an image filtering sub-module 12 and an image enhancement sub-module 13.
The image conversion sub-module 11 converts each colour image into a greyscale image, where R(x, y), G(x, y) and B(x, y) denote the red, green and blue intensity values at pixel (x, y), H(x, y) denotes the greyscale value at coordinate (x, y), and the image size is m × n.
The image filtering sub-module 12 filters the greyscale image. Wiener filtering is used as the first filtering level, and an SVLM image M_svlm(x, y) is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3 and a4 are variable weights and J(x, y) is the filtered image.
The image enhancement sub-module 13 computes the enhanced greyscale value L(x, y) by gamma correction. ψ(x, y) is a gamma-correction coefficient that incorporates local information, with ψ(x, y) = ψ_α(M_svlm(x, y)), where α is a variable parameter ranging from 0 to 1; when ω > 50, a template correction factor is applied instead. m_h is the mean of all pixels in the image whose greyscale value is above 128, m_l is the mean of all pixels whose greyscale value is below 128, and m = min(m_h, m_l). Once the value of α is known, the 256 possible ψ correction coefficients are computed once as a look-up table ψ_α[i], where i is the index value; using the greyscale value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) of every pixel in the image is then obtained quickly from ψ(x, y) = ψ_α(M_svlm(x, y)).
(2) The detection-and-tracking module 2 comprises a construction sub-module 21, a loss-determination sub-module 22 and an update sub-module 23.
The construction sub-module 21 builds the visual dictionary. The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are selected around it to train the tracker, and the tracking results form the training set X = {x1, x2, … xN}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm partitions these features into K clusters, where K ≪ F_N, F_N being the total number of extractable features; the centre of each cluster constitutes a feature word. Once the visual dictionary has been built, each training image is expressed as a bag of features, i.e. a histogram h(x_t) of the occurrence frequencies of the feature words in the dictionary. h(x_t) is obtained as follows: each feature f_s(t) of training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalised, yielding the feature histogram h(x_t) of training image X_t.
The loss-determination sub-module 22 decides whether the target has been lost. When a new frame arrives, Z < K bins are selected at random from the K histogram bins, with Z = 4, to form a new sub-histogram h^(z)(x_t) of size Z; at most N_s such sub-histograms exist. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of target region t in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s; the overall similarity is then Φ_t = 1 − ∏_z (1 − Φ_t_z), and the similarity between the candidate region and the target is Φ = max_t {Φ_t}. The loss criterion compares Φ with gs, a manually set misjudgement threshold: when u = 1 the target is being tracked stably, and when u = 0 the target has been lost.
When the target has been lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t−1}, y_{t−1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ1 is the temperature rotation correction coefficient and μ2 the temperature translation correction factor; both correct the image rotation and translation errors caused by ambient-temperature deviation. T0 is a manually set standard temperature, fixed at 20 °C, and T is the temperature value monitored in real time by a temperature sensor. The parameters of the affine transformation model are estimated with the RANSAC algorithm; finally, positive and negative samples are collected at the new scale s and rotation coefficient θ, and the classifier is updated.
The update sub-module 23 updates the visual dictionary. After the target position has been obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 3 frames a new feature point set is obtained, S_{t−F} denoting the total number of features collected from the F frames. The old and new feature points are then re-clustered into K clusters, producing the new visual dictionary; the size of the visual dictionary remains unchanged. A forgetting factor expresses the proportion retained by the old dictionary: the smaller it is, the more the new features contribute to the loss decision.
(3) The recognition-output module 3 recognises the target and outputs the result. In the image sequence to be recognised, the tracking algorithm locates the target region; the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, the target class is decided, and the recognition result is output.
Preferably, since the image still contains residual noise after the first-level Wiener filtering, the following second-level filter is applied, where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n with P_g(x + i, y + j) = q × exp(−(x² + y²)/ω), q being the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
In the image preprocessing stage of this embodiment, the enhanced image adapts to the template size, improving the enhancement effect; the decision conditions adjust automatically for different template sizes, and the nonlinear relationship between viewing habits and the human eye's sensitivity to different colours and intensities is taken into account. The local and global features of the image are fully exploited; the method is adaptive, suppresses over-enhancement, and gives a pronounced enhancement effect for images captured under complex illumination. M × N power-exponent computations are reduced to 256, improving computational efficiency; with Z = 4 and F = 3, the measured average frame rate is 15 FPS and the computational load is lower than that of comparable dictionary algorithms. In the detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; image details are clearer after processing, the computational load is markedly lower than in classical methods, scale changes of the target are handled effectively, target loss is judged accurately, and the target can be re-detected and stably tracked after returning to the field of view, remaining stably tracked even after 110 frames. In addition, this charging pile offers good real-time performance, accurate positioning and strong robustness, performs well in detecting and tracking targets under fast occlusion, and achieves an unexpectedly good effect.
Embodiment 2: As shown in Figs. 1-2, an electric vehicle charging pile with a stable tracking function comprises a charging pile 5 and a monitoring device 4 mounted on the charging pile 5. The monitoring device 4 performs video image monitoring of activity near the charging pile 5 and comprises a preprocessing module 1, a detection-and-tracking module 2 and a recognition-output module 3.
(1) The preprocessing module 1 preprocesses the received images and comprises an image conversion sub-module 11, an image filtering sub-module 12 and an image enhancement sub-module 13.
The image conversion sub-module 11 converts each colour image into a greyscale image, where R(x, y), G(x, y) and B(x, y) denote the red, green and blue intensity values at pixel (x, y), H(x, y) denotes the greyscale value at coordinate (x, y), and the image size is m × n.
The image filtering sub-module 12 filters the greyscale image. Wiener filtering is used as the first filtering level, and an SVLM image M_svlm(x, y) is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3 and a4 are variable weights and J(x, y) is the filtered image.
The image enhancement sub-module 13 computes the enhanced greyscale value L(x, y) by gamma correction. ψ(x, y) is a gamma-correction coefficient that incorporates local information, with ψ(x, y) = ψ_α(M_svlm(x, y)), where α is a variable parameter ranging from 0 to 1; when ω > 50, a template correction factor is applied instead. m_h is the mean of all pixels in the image whose greyscale value is above 128, m_l is the mean of all pixels whose greyscale value is below 128, and m = min(m_h, m_l). Once the value of α is known, the 256 possible ψ correction coefficients are computed once as a look-up table ψ_α[i], where i is the index value; using the greyscale value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) of every pixel in the image is then obtained quickly from ψ(x, y) = ψ_α(M_svlm(x, y)).
(2) The detection-and-tracking module 2 comprises a construction sub-module 21, a loss-determination sub-module 22 and an update sub-module 23.
The construction sub-module 21 builds the visual dictionary. The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are selected around it to train the tracker, and the tracking results form the training set X = {x1, x2, … xN}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm partitions these features into K clusters, where K ≪ F_N, F_N being the total number of extractable features; the centre of each cluster constitutes a feature word. Once the visual dictionary has been built, each training image is expressed as a bag of features, i.e. a histogram h(x_t) of the occurrence frequencies of the feature words in the dictionary. h(x_t) is obtained as follows: each feature f_s(t) of training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features have been projected, the occurrence frequency of each feature word is counted and normalised, yielding the feature histogram h(x_t) of training image X_t.
The loss-determination sub-module 22 decides whether the target has been lost. When a new frame arrives, Z < K bins are selected at random from the K histogram bins, with Z = 5, to form a new sub-histogram h^(z)(x_t) of size Z; at most N_s such sub-histograms exist. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of target region t in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s; the overall similarity is then Φ_t = 1 − ∏_z (1 − Φ_t_z), and the similarity between the candidate region and the target is Φ = max_t {Φ_t}. The loss criterion compares Φ with gs, a manually set misjudgement threshold: when u = 1 the target is being tracked stably, and when u = 0 the target has been lost.
When the target has been lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t−1}, y_{t−1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matching feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ1 is the temperature rotation correction coefficient and μ2 the temperature translation correction factor; both correct the image rotation and translation errors caused by ambient-temperature deviation. T0 is a manually set standard temperature, fixed at 20 °C, and T is the temperature value monitored in real time by a temperature sensor. The parameters of the affine transformation model are estimated with the RANSAC algorithm; finally, positive and negative samples are collected at the new scale s and rotation coefficient θ, and the classifier is updated.
The update sub-module 23 updates the visual dictionary. After the target position has been obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 4 frames a new feature point set is obtained, S_{t−F} denoting the total number of features collected from the F frames. The old and new feature points are then re-clustered into K clusters, producing the new visual dictionary; the size of the visual dictionary remains unchanged. A forgetting factor expresses the proportion retained by the old dictionary: the smaller it is, the more the new features contribute to the loss decision.
(3) The recognition-output module 3 recognises the target and outputs the result. In the image sequence to be recognised, the tracking algorithm locates the target region; the target region is mapped into the subspace formed by the known training data, the distance between the target region and the training data is computed in that subspace to obtain a similarity measure, the target class is decided, and the recognition result is output.
Preferably, since the image still contains residual noise after the first-level Wiener filtering, the following second-level filter is applied, where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n with P_g(x + i, y + j) = q × exp(−(x² + y²)/ω), q being the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
In the image preprocessing stage of this embodiment, the enhanced image adapts to the template size, improving the enhancement effect; the decision conditions adjust automatically for different template sizes, and the nonlinear relationship between viewing habits and the human eye's sensitivity to different colours and intensities is taken into account. The local and global features of the image are fully exploited; the method is adaptive, suppresses over-enhancement, and gives a pronounced enhancement effect for images captured under complex illumination. M × N power-exponent computations are reduced to 256, improving computational efficiency; with Z = 5 and F = 4, the measured average frame rate is 16 FPS and the computational load is lower than that of comparable dictionary algorithms. In the detection and tracking stage, the image rotation and translation errors caused by temperature differences are eliminated, raising the recognition rate; image details are clearer after processing, the computational load is markedly lower than in classical methods, scale changes of the target are handled effectively, target loss is judged accurately, and the target can be re-detected and stably tracked after returning to the field of view, remaining stably tracked even after 115 frames. In addition, this charging pile offers good real-time performance, accurate positioning and strong robustness, performs well in detecting and tracking targets under fast occlusion, and achieves an unexpectedly good effect.
Embodiment 3: as shown in Figure 1-2, a kind of electric automobile charging pile with tenacious tracking function, the monitoring device 4 comprising electric automobile charging pile 5 and be arranged on electric automobile charging pile 5, monitoring device 4 is for carrying out video image monitoring to the activity near electric automobile charging pile 5, and monitoring device 4 comprises pretreatment module 1, detecting and tracking module 2, identifies output module 3.
(1) The pre-processing module 1 pre-processes the received image and comprises an image transformation sub-module 11, an image filtering sub-module 12, and an image enhancement sub-module 13:
The image transformation sub-module 11 converts the color image into a gray-level image:
where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), H(x, y) is the gray value at coordinate (x, y), and the image size is m × n;
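A minimal sketch of the gray-level conversion. The exact conversion coefficients appear only as a figure in the source and are not reproduced in this excerpt, so the standard ITU-R BT.601 luminance weights are assumed here:

```python
import numpy as np

def to_gray(rgb):
    """Convert an m x n color image to a gray-level image H(x, y).
    The BT.601 weights 0.299/0.587/0.114 are an assumption; the patent's
    exact formula is not reproduced in the text."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * R + 0.587 * G + 0.114 * B

# tiny 2 x 2 test image: red, green / blue, white
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
gray = to_gray(img)
```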
The image filtering sub-module 12 filters the gray-level image:
Wiener filtering is applied first as the first-stage denoising. An SVLM image is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3, a4 are variable weights and J(x, y) is the filtered image;
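The weighted multi-scale combination can be sketched as follows. The filter family applied at each template scale is not fixed by the excerpt, so a simple mean (box) filter and equal weights a1 = a2 = a3 = a4 = 0.25 are assumed:

```python
import numpy as np

def box_filter(img, size):
    """Mean filter of the given template size, standing in for the
    unspecified scale-omega template of the patent."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

def svlm(gray, weights=(0.25, 0.25, 0.25, 0.25), scales=(1, 3, 5, 7)):
    """M_svlm(x, y) = a1*J1 + a2*J2 + a3*J3 + a4*J4: the weighted sum of
    the images J_i filtered at four template scales (weights and scales
    here are assumed values)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(a * box_filter(gray, s) for a, s in zip(weights, scales))

g = np.full((6, 6), 100.0)   # uniform test image
m = svlm(g)                  # a uniform image is left unchanged
```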
The image enhancement sub-module 13:
When the first condition holds, L(x, y) is the enhanced gray value and ψ(x, y) is the gamma-correction coefficient incorporating local information; α is a variable parameter ranging from 0 to 1.
When the second condition holds and ω > 50, ψ(x, y) = ψ_α(M_svlm(x, y)), where m_h is the mean of all pixels whose gray value is above 128, m_l is the mean of all pixels whose gray value is below 128, and m = min(m_h, m_l). For a known α, the 256 correction coefficients ψ are pre-computed as a look-up table indexed by i; using the gray value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) = ψ_α(M_svlm(x, y)) of every pixel is obtained quickly. A template correction factor is also applied;
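The look-up-table idea (tabulating ψ_α once and indexing it by the SVLM gray value, so only 256 exponent evaluations are tabulated rather than m × n) can be sketched as below. The exact form of ψ_α is not reproduced in the excerpt, so a simple monotone map from index to exponent is assumed:

```python
import numpy as np

def build_psi_lut(alpha):
    """256-entry correction table psi_alpha[i], i = 0..255.
    The linear form below is a hypothetical stand-in for the patent's
    (unreproduced) psi formula."""
    i = np.arange(256) / 255.0
    return (1.0 - alpha) + alpha * i

def enhance(gray, m_svlm, alpha=0.5):
    """Per-pixel gamma correction: the SVLM gray value indexes the LUT,
    then L(x, y) = 255 * (H(x, y)/255) ** psi(x, y)."""
    lut = build_psi_lut(alpha)
    psi = lut[np.clip(m_svlm, 0, 255).astype(np.uint8)]
    return 255.0 * (gray / 255.0) ** psi

out = enhance(np.full((4, 4), 128.0), np.full((4, 4), 128.0))
```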
(2) The detection and tracking module 2 comprises a construction sub-module 21, a loss discrimination sub-module 22, and an update sub-module 23:
The construction sub-module 21 builds the visual dictionary:
The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set X = {x_1, x_2, …, x_N}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm divides these features into K clusters, the centre of each cluster constituting a feature word; of the total number of extractable features, K << F_N. Once the visual dictionary is built, every training image is expressed as a bag of features, the frequency with which the dictionary's feature words occur being represented by the histogram h(x_t). The histogram h(x_t) is obtained as follows: each feature f_s(t) of a training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalised, giving the feature histogram h(x_t) of training image X_t;
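A sketch of the dictionary construction and bag-of-features histogram, with a minimal K-means standing in for the unspecified clustering algorithm and random vectors standing in for 128-dimensional SIFT descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(feats, K, iters=20):
    """Minimal K-means: returns the K cluster centres, i.e. the feature
    words of the visual dictionary."""
    centres = feats[rng.choice(len(feats), K, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None, :] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centres[k] = feats[labels == k].mean(axis=0)
    return centres

def feature_histogram(feats, dictionary):
    """Project each descriptor onto its nearest feature word and return
    the normalised frequency histogram h(x_t)."""
    d = np.linalg.norm(feats[:, None, :] - dictionary[None], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(dictionary))
    return counts / counts.sum()

# stand-in for the 128-dim SIFT descriptors of one target image
feats = rng.normal(size=(40, 128))
dictionary = kmeans(feats, K=5)
h = feature_histogram(feats, dictionary)
```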
The loss discrimination sub-module 22 judges whether the target is lost:
When a new frame arrives, Z < K histograms are selected at random from the K histograms, with Z = 6, forming new sub-histograms h^(z)(x_t) of size Z; the number of such sub-histograms has a fixed upper bound. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of a target region in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s, and the overall similarity is Φ_t = 1 − ∏_z (1 − Φ_t_z). The similarity between the candidate region and the target is Φ = max{Φ_t}, and the track-loss judgment formula compares it with gs, the manually set misjudgment threshold: when u = 1 the target is stably tracked, and when u = 0 the track is lost;
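The loss test can be sketched as follows. Histogram intersection is assumed as the per-element similarity Φ_t_z, since the excerpt does not fix it:

```python
import numpy as np

rng = np.random.default_rng(1)

def track_lost(candidate_hist, training_hists, Z=6, gs=0.5):
    """Loss discrimination: draw Z of the K histogram bins at random,
    combine the per-bin similarities as Phi_t = 1 - prod_z(1 - Phi_t_z),
    take Phi = max over training images, and threshold by gs.
    Histogram intersection min(a, b) is an assumed similarity.
    Returns u = 1 (stably tracked) or u = 0 (lost)."""
    K = len(candidate_hist)
    idx = rng.choice(K, size=Z, replace=False)
    best = 0.0
    for h_train in training_hists:
        phi_z = np.minimum(candidate_hist[idx], h_train[idx])
        phi_t = 1.0 - np.prod(1.0 - phi_z)
        best = max(best, phi_t)
    return 1 if best >= gs else 0

h = np.full(10, 0.1)                       # uniform 10-bin histogram
u = track_lost(h, [h], Z=6, gs=0.3)        # identical histograms: tracked
```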
When the track is lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t-1}, y_{t-1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matched feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ_1 is the temperature rotation correction coefficient and μ_2 the temperature translation correction factor; they correct the image rotation and translation errors caused by ambient temperature deviation. T_0 is the manually set reference temperature, set to 20 degrees, and T is the temperature monitored in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ to update the classifier;
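A sketch of the transform estimation: a least-squares fit of the 4-parameter similarity model (the patent wraps such a fit in RANSAC; the robust sampling loop is omitted here), followed by hypothetical linear temperature corrections μ1, μ2 applied to the deviation T − T0, since the excerpt gives no correction formula:

```python
import numpy as np

def estimate_similarity(prev_pts, cur_pts):
    """Least-squares estimate of [x'; y'] = s*R(theta)*[x; y] + [e; f]
    from matched SIFT feature points."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(prev_pts, cur_pts):
        A.append([x, -y, 1, 0]); b.append(xp)   # x' = a*x - c*y + e
        A.append([y,  x, 0, 1]); b.append(yp)   # y' = c*x + a*y + f
    (a, c, e, f), *_ = np.linalg.lstsq(np.array(A, float),
                                       np.array(b, float), rcond=None)
    s = np.hypot(a, c)
    theta = np.arctan2(c, a)
    return s, theta, e, f

def temperature_correct(theta, e, f, T, T0=20.0, mu1=1e-4, mu2=1e-3):
    """Hypothetical linear compensation of rotation/translation drift
    against the ambient-temperature deviation T - T0; mu values assumed."""
    dT = T - T0
    return theta - mu1 * dT, e - mu2 * dT, f - mu2 * dT

# points related by s = 2, theta = 0, e = 1, f = -1
prev = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
cur = [(1.0, -1.0), (3.0, -1.0), (1.0, 1.0), (5.0, 5.0)]
s, theta, e, f = estimate_similarity(prev, cur)
```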
The update sub-module 23 updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 5 frames a new feature point set is obtained, S_{t-F} denoting the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving the new visual dictionary, whose size remains unchanged. The forgetting factor indicates the proportion carried by the old dictionary: the smaller it is, the more the new features contribute to the track-loss judgment;
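The dictionary update can be sketched as a weighted re-clustering in which the forgetting factor φ weights the old feature words against the newly collected points. The weighted-mean update is one natural reading of the excerpt's (unreproduced) formula and is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def update_dictionary(old_words, new_feats, phi=0.2, iters=20):
    """Re-cluster old feature words (weight phi) and new SIFT points
    (weight 1 - phi) into K clusters; the dictionary size K is kept
    unchanged, as the patent requires."""
    K = len(old_words)
    pts = np.vstack([old_words, new_feats])
    w = np.concatenate([np.full(len(old_words), phi),
                        np.full(len(new_feats), 1.0 - phi)])
    centres = old_words.copy()
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            m = labels == k
            if np.any(m):
                centres[k] = np.average(pts[m], axis=0, weights=w[m])
    return centres

old_words = rng.normal(size=(4, 8))
new_feats = rng.normal(size=(20, 8))
new_dict = update_dictionary(old_words, new_feats, phi=0.2)
```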
(3) The recognition output module 3 recognises the image and outputs the result: in the image sequence to be recognised, the tracking algorithm obtains the target region, the region is mapped into the subspace spanned by the known training data, the distance between the target region and the training data is computed in that subspace as a similarity measure, the target class is judged, and the recognition result is output.
Preferably, after the first-stage Wiener filtering the image still contains residual noise, so the following second-stage filter is applied:
where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n, with P_g(x + i, y + j) = q × exp(−(x² + y²)/ω), q being the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
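A discrete sketch of the second-stage template: the normalisation coefficient q is realised by dividing the sampled kernel by its sum, the discrete analogue of the unit-integral condition:

```python
import numpy as np

def gaussian_template(m, n, omega):
    """Discrete version of P_g = q * exp(-(x^2 + y^2)/omega), with q
    chosen so the template sums to 1."""
    y, x = np.mgrid[-(m // 2):m // 2 + 1, -(n // 2):n // 2 + 1]
    k = np.exp(-(x ** 2 + y ** 2) / omega)
    return k / k.sum()

def secondary_filter(img, omega=2.0, size=3):
    """Convolve the Wiener-prefiltered image with the normalised template
    to suppress the residual noise left by the first stage."""
    k = gaussian_template(size, size, omega)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out
```

Because the kernel sums to 1, a constant image passes through unchanged, which is a quick sanity check on the normalisation.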
In this embodiment's electric vehicle charging pile, at the image pre-processing stage the enhanced image adapts to the template size, improving the enhancement effect; the judgment condition is modified automatically for different template sizes, and viewing habits and the nonlinear relationship between the human eye's sensitivity to different colors and color intensity are taken into account. Local and global image features are both exploited, giving adaptivity, suppressing over-enhancement, and producing clearly better enhancement of images captured under complex illumination. The m × n power-exponent computations are reduced to 256, improving efficiency; with Z = 6 and F = 5 the measured average frame rate is 17 FPS, with less computation than comparable dictionary algorithms. At the detection and tracking stage, the rotation and translation errors caused by temperature differences are eliminated and the recognition rate is improved; processed image detail is clearer; computation is greatly reduced relative to classical methods; the method adapts effectively to target scale changes, accurately judges whether the target is lost, and re-detects and stably tracks the target after it re-enters the field of view, even after 120 frames. In addition, this charging pile offers good real-time performance, accurate positioning, and strong robustness, performs well in detecting and tracking fast, occluded targets, and achieves unexpectedly good results.
Embodiment 4: As shown in Figures 1-2, an electric vehicle charging pile with stable tracking comprises the charging pile 5 and a monitoring device 4 mounted on it. The monitoring device 4 performs video monitoring of activity near the charging pile 5 and comprises a pre-processing module 1, a detection and tracking module 2, and a recognition output module 3.
(1) The pre-processing module 1 pre-processes the received image and comprises an image transformation sub-module 11, an image filtering sub-module 12, and an image enhancement sub-module 13:
The image transformation sub-module 11 converts the color image into a gray-level image:
where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), H(x, y) is the gray value at coordinate (x, y), and the image size is m × n;
The image filtering sub-module 12 filters the gray-level image:
Wiener filtering is applied first as the first-stage denoising. An SVLM image is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3, a4 are variable weights and J(x, y) is the filtered image;
The image enhancement sub-module 13:
When the first condition holds, L(x, y) is the enhanced gray value and ψ(x, y) is the gamma-correction coefficient incorporating local information; α is a variable parameter ranging from 0 to 1.
When the second condition holds and ω > 50, ψ(x, y) = ψ_α(M_svlm(x, y)), where m_h is the mean of all pixels whose gray value is above 128, m_l is the mean of all pixels whose gray value is below 128, and m = min(m_h, m_l). For a known α, the 256 correction coefficients ψ are pre-computed as a look-up table indexed by i; using the gray value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) = ψ_α(M_svlm(x, y)) of every pixel is obtained quickly. A template correction factor is also applied;
(2) The detection and tracking module 2 comprises a construction sub-module 21, a loss discrimination sub-module 22, and an update sub-module 23:
The construction sub-module 21 builds the visual dictionary:
The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set X = {x_1, x_2, …, x_N}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm divides these features into K clusters, the centre of each cluster constituting a feature word; of the total number of extractable features, K << F_N. Once the visual dictionary is built, every training image is expressed as a bag of features, the frequency with which the dictionary's feature words occur being represented by the histogram h(x_t). The histogram h(x_t) is obtained as follows: each feature f_s(t) of a training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalised, giving the feature histogram h(x_t) of training image X_t;
The loss discrimination sub-module 22 judges whether the target is lost:
When a new frame arrives, Z < K histograms are selected at random from the K histograms, with Z = 7, forming new sub-histograms h^(z)(x_t) of size Z; the number of such sub-histograms has a fixed upper bound. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of a target region in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s, and the overall similarity is Φ_t = 1 − ∏_z (1 − Φ_t_z). The similarity between the candidate region and the target is Φ = max{Φ_t}, and the track-loss judgment formula compares it with gs, the manually set misjudgment threshold: when u = 1 the target is stably tracked, and when u = 0 the track is lost;
When the track is lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t-1}, y_{t-1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matched feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ_1 is the temperature rotation correction coefficient and μ_2 the temperature translation correction factor; they correct the image rotation and translation errors caused by ambient temperature deviation. T_0 is the manually set reference temperature, set to 20 degrees, and T is the temperature monitored in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ to update the classifier;
The update sub-module 23 updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 6 frames a new feature point set is obtained, S_{t-F} denoting the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving the new visual dictionary, whose size remains unchanged. The forgetting factor indicates the proportion carried by the old dictionary: the smaller it is, the more the new features contribute to the track-loss judgment;
(3) The recognition output module 3 recognises the image and outputs the result: in the image sequence to be recognised, the tracking algorithm obtains the target region, the region is mapped into the subspace spanned by the known training data, the distance between the target region and the training data is computed in that subspace as a similarity measure, the target class is judged, and the recognition result is output.
Preferably, after the first-stage Wiener filtering the image still contains residual noise, so the following second-stage filter is applied:
where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n, with P_g(x + i, y + j) = q × exp(−(x² + y²)/ω), q being the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
In this embodiment's electric vehicle charging pile, at the image pre-processing stage the enhanced image adapts to the template size, improving the enhancement effect; the judgment condition is modified automatically for different template sizes, and viewing habits and the nonlinear relationship between the human eye's sensitivity to different colors and color intensity are taken into account. Local and global image features are both exploited, giving adaptivity, suppressing over-enhancement, and producing clearly better enhancement of images captured under complex illumination. The m × n power-exponent computations are reduced to 256, improving efficiency; with Z = 7, F = 6, and forgetting factor φ = 0.18, the measured average frame rate is 18 FPS, with less computation than comparable dictionary algorithms. At the detection and tracking stage, the rotation and translation errors caused by temperature differences are eliminated and the recognition rate is improved; processed image detail is clearer; computation is greatly reduced relative to classical methods; the method adapts effectively to target scale changes, accurately judges whether the target is lost, and re-detects and stably tracks the target after it re-enters the field of view, even after 125 frames. In addition, this charging pile offers good real-time performance, accurate positioning, and strong robustness, performs well in detecting and tracking fast, occluded targets, and achieves unexpectedly good results.
Embodiment 5: As shown in Figures 1-2, an electric vehicle charging pile with stable tracking comprises the charging pile 5 and a monitoring device 4 mounted on it. The monitoring device 4 performs video monitoring of activity near the charging pile 5 and comprises a pre-processing module 1, a detection and tracking module 2, and a recognition output module 3.
(1) The pre-processing module 1 pre-processes the received image and comprises an image transformation sub-module 11, an image filtering sub-module 12, and an image enhancement sub-module 13:
The image transformation sub-module 11 converts the color image into a gray-level image:
where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), H(x, y) is the gray value at coordinate (x, y), and the image size is m × n;
The image filtering sub-module 12 filters the gray-level image:
Wiener filtering is applied first as the first-stage denoising. An SVLM image is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a1, a2, a3, a4 are variable weights and J(x, y) is the filtered image;
The image enhancement sub-module 13:
When the first condition holds, L(x, y) is the enhanced gray value and ψ(x, y) is the gamma-correction coefficient incorporating local information; α is a variable parameter ranging from 0 to 1.
When the second condition holds and ω > 50, ψ(x, y) = ψ_α(M_svlm(x, y)), where m_h is the mean of all pixels whose gray value is above 128, m_l is the mean of all pixels whose gray value is below 128, and m = min(m_h, m_l). For a known α, the 256 correction coefficients ψ are pre-computed as a look-up table indexed by i; using the gray value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) = ψ_α(M_svlm(x, y)) of every pixel is obtained quickly. A template correction factor is also applied;
(2) The detection and tracking module 2 comprises a construction sub-module 21, a loss discrimination sub-module 22, and an update sub-module 23:
The construction sub-module 21 builds the visual dictionary:
The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set X = {x_1, x_2, …, x_N}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm divides these features into K clusters, the centre of each cluster constituting a feature word; of the total number of extractable features, K << F_N. Once the visual dictionary is built, every training image is expressed as a bag of features, the frequency with which the dictionary's feature words occur being represented by the histogram h(x_t). The histogram h(x_t) is obtained as follows: each feature f_s(t) of a training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalised, giving the feature histogram h(x_t) of training image X_t;
The loss discrimination sub-module 22 judges whether the target is lost:
When a new frame arrives, Z < K histograms are selected at random from the K histograms, with Z = 8, forming new sub-histograms h^(z)(x_t) of size Z; the number of such sub-histograms has a fixed upper bound. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of a target region in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s, and the overall similarity is Φ_t = 1 − ∏_z (1 − Φ_t_z). The similarity between the candidate region and the target is Φ = max{Φ_t}, and the track-loss judgment formula compares it with gs, the manually set misjudgment threshold: when u = 1 the target is stably tracked, and when u = 0 the track is lost;
When the track is lost, an affine transformation model is defined, in which (x_t, y_t) and (x_{t-1}, y_{t-1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matched feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ_1 is the temperature rotation correction coefficient and μ_2 the temperature translation correction factor; they correct the image rotation and translation errors caused by ambient temperature deviation. T_0 is the manually set reference temperature, set to 20 degrees, and T is the temperature monitored in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ to update the classifier;
The update sub-module 23 updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 7 frames a new feature point set is obtained, S_{t-F} denoting the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving the new visual dictionary, whose size remains unchanged. The forgetting factor indicates the proportion carried by the old dictionary: the smaller it is, the more the new features contribute to the track-loss judgment;
(3) The recognition output module 3 recognises the image and outputs the result: in the image sequence to be recognised, the tracking algorithm obtains the target region, the region is mapped into the subspace spanned by the known training data, the distance between the target region and the training data is computed in that subspace as a similarity measure, the target class is judged, and the recognition result is output.
Preferably, after the first-stage Wiener filtering the image still contains residual noise, so the following second-stage filter is applied:
where J(x, y) is the filtered image and P_g(x + i, y + j) is a function of scale m × n, with P_g(x + i, y + j) = q × exp(−(x² + y²)/ω), q being the coefficient that normalises the function, i.e. ∫∫ q × exp(−(x² + y²)/ω) dx dy = 1.
In this embodiment's electric vehicle charging pile, at the image pre-processing stage the enhanced image adapts to the template size, improving the enhancement effect; the judgment condition is modified automatically for different template sizes, and viewing habits and the nonlinear relationship between the human eye's sensitivity to different colors and color intensity are taken into account. Local and global image features are both exploited, giving adaptivity, suppressing over-enhancement, and producing clearly better enhancement of images captured under complex illumination. The m × n power-exponent computations are reduced to 256, improving efficiency; with Z = 8 and F = 7 the measured average frame rate is 19 FPS, with less computation than comparable dictionary algorithms. At the detection and tracking stage, the rotation and translation errors caused by temperature differences are eliminated and the recognition rate is improved; processed image detail is clearer; computation is greatly reduced relative to classical methods; the method adapts effectively to target scale changes, accurately judges whether the target is lost, and re-detects and stably tracks the target after it re-enters the field of view, even after 130 frames. In addition, this charging pile offers good real-time performance, accurate positioning, and strong robustness, performs well in detecting and tracking fast, occluded targets, and achieves unexpectedly good results.
Claims (2)
1. An electric vehicle charging pile with stable tracking, comprising the charging pile and a monitoring device mounted on it, the monitoring device performing video monitoring of activity near the charging pile, characterized in that the monitoring device comprises a pre-processing module, a detection and tracking module, and a recognition output module;
(1) The pre-processing module pre-processes the received image and comprises an image transformation sub-module, an image filtering sub-module, and an image enhancement sub-module:
The image transformation sub-module converts the color image into a gray-level image:
where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), H(x, y) is the gray value at coordinate (x, y), and the image size is m × n;
The image filtering sub-module filters the gray-level image:
Wiener filtering is applied first as the first-stage denoising. An SVLM image is then defined as M_svlm(x, y) = a1·J1(x, y) + a2·J2(x, y) + a3·J3(x, y) + a4·J4(x, y), where a_i, i = 1, 2, 3, 4, are variable weights and J(x, y) is the filtered image;
The image enhancement sub-module:
When the first condition holds, L(x, y) is the enhanced gray value and ψ(x, y) is the gamma-correction coefficient incorporating local information; α is a variable parameter ranging from 0 to 1. ω is the template scale parameter: the larger the scale, the more neighborhood pixel information the template contains, and passing the input image through templates of different scales ω_i yields images J_i containing neighborhood information of different ranges.
When the second condition holds and ω > 50, ψ(x, y) = ψ_α(M_svlm(x, y)), where m_h is the mean of all pixels whose gray value is above 128, m_l is the mean of all pixels whose gray value is below 128, and m = min(m_h, m_l). For a known α, the 256 correction coefficients ψ are pre-computed as a look-up table indexed by i; using the gray value of M_svlm(x, y) as the index, the gamma-correction coefficient ψ(x, y) = ψ_α(M_svlm(x, y)) of every pixel is obtained quickly. A template correction factor is also applied;
(2) The detection and tracking module comprises a construction sub-module, a loss discrimination sub-module, and an update sub-module:
The construction sub-module builds the visual dictionary:
The position and scale of the tracking target are obtained in the initial frame, positive and negative samples are chosen around it to train the tracker, and the tracking results form the training set X = {x_1, x_2, …, x_N}_t. From every target image in the training set, 128-dimensional SIFT features are extracted, S_t denoting the number of SIFT features in the t-th target image. After N frames have been tracked, a clustering algorithm divides these features into K clusters, the centre of each cluster constituting a feature word; of the total number of extractable features, K << F_N. Once the visual dictionary is built, every training image is expressed as a bag of features, the frequency with which the dictionary's feature words occur being represented by the histogram h(x_t). The histogram h(x_t) is obtained as follows: each feature f_s(t) of a training image X_t is projected onto the visual dictionary and represented by the feature word at the shortest projection distance; after all features are projected, the occurrence frequency of each feature word is counted and normalised, giving the feature histogram h(x_t) of training image X_t;
The loss discrimination sub-module judges whether the target is lost:
When a new frame arrives, Z < K histograms are selected at random from the K histograms, with Z = 4, forming new sub-histograms h^(z)(x_t) of size Z; the number of such sub-histograms has a fixed upper bound. The similarity Φ_t_z between the candidate target region and the corresponding sub-histogram of a target region in the training set is computed for t = 1, 2, …, N and z = 1, 2, …, N_s, and the overall similarity is Φ_t = 1 − ∏_z (1 − Φ_t_z). The similarity between the candidate region and the target is Φ = max{Φ_t}, and the track-loss judgment formula compares it with gs, the manually set misjudgment threshold: when u = 1 the target is stably tracked, and when u = 0 the track is lost. When the track is lost, an affine transformation model is defined:
Here (x_t, y_t) and (x_{t-1}, y_{t-1}) are the known position coordinates of a SIFT feature point in the current-frame target and of its matched feature point in the previous-frame target; s is the scale coefficient, θ the rotation coefficient, and e and f the translation coefficients. μ_1 is the temperature rotation correction coefficient and μ_2 the temperature translation correction factor; they correct the image rotation and translation errors caused by ambient temperature deviation. T_0 is the manually set reference temperature, set to 20 degrees, and T is the temperature monitored in real time by a temperature sensor. The RANSAC estimation algorithm is used to solve for the parameters of the affine transformation model; finally, positive and negative samples are gathered under the new scale s and rotation coefficient θ to update the classifier;
The update sub-module updates the visual dictionary:
After the target position is obtained in each frame, all SIFT feature points consistent with the computed affine transformation parameters are collected; after F = 3 frames a new feature point set is obtained, S_{t-F} denoting the total number of feature points gathered from the F frames. The old and new feature points are then re-clustered into K clusters, giving the new visual dictionary, whose size remains unchanged. The forgetting factor indicates the proportion carried by the old dictionary: the smaller it is, the more the new features contribute to the track-loss judgment;
(3) The recognition output module recognises the image and outputs the result: in the image sequence to be recognised, the tracking algorithm obtains the target region, the region is mapped into the subspace spanned by the known training data, the distance between the target region and the training data is computed in that subspace as a similarity measure, the target class is judged, and the recognition result is output.
2. The electric vehicle charging pile with stable tracking function according to claim 1, characterized in that, after first-stage denoising with Wiener filtering, the image still contains residual noise, and the following second-stage filter is applied:
Where J(x, y) is the filtered image; P_g(x+i, y+j) denotes a kernel of size m × n, with P_g(x+i, y+j) = q · exp(−(x² + y²)/ω), where q is the coefficient that normalizes the function, that is: ∬ q · exp(−(x² + y²)/ω) dx dy = 1.
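The two-stage filtering of claim 2 can be sketched as below. This is a minimal assumption-laden illustration: the first stage uses the classical adaptive local Wiener formula (the patent does not give its Wiener variant), the kernel is normalized discretely (summing to 1) in place of the integral constraint, and all function names and default sizes are hypothetical.

```python
import numpy as np

def _local_mean(img, m, n):
    """Box-filter local mean with edge padding."""
    pad = np.pad(img, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros_like(img)
    for a in range(m):
        for b in range(n):
            out += pad[a:a + img.shape[0], b:b + img.shape[1]]
    return out / (m * n)

def wiener_stage(img, m=3, n=3):
    """First-stage denoising: adaptive local Wiener filter."""
    mu = _local_mean(img, m, n)
    var = _local_mean(img ** 2, m, n) - mu ** 2
    noise = var.mean()                          # noise-power estimate
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, noise)
    return mu + gain * (img - mu)

def gaussian_stage(img, m=3, n=3, omega=2.0):
    """Second stage: kernel P_g = q * exp(-(x^2 + y^2) / omega), with q
    chosen so the kernel sums to 1 (discrete analogue of the integral
    constraint), applied as a convolution over an m x n window."""
    i = np.arange(m) - m // 2
    j = np.arange(n) - n // 2
    ii, jj = np.meshgrid(i, j, indexing="ij")
    k = np.exp(-(ii ** 2 + jj ** 2) / omega)
    k /= k.sum()                                # q = 1 / sum of exponentials
    pad = np.pad(img, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros_like(img)
    for a in range(m):
        for b in range(n):
            out += k[a, b] * pad[a:a + img.shape[0], b:b + img.shape[1]]
    return out

def two_stage_filter(img, m=3, n=3, omega=2.0):
    """Wiener first stage followed by the normalized Gaussian second stage."""
    return gaussian_stage(wiener_stage(img.astype(float), m, n), m, n, omega)
```

The design point is that the Wiener stage suppresses most of the noise adaptively, and the Gaussian second stage smooths whatever residual noise survives, as the claim states.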
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610046346.4A CN105574517A (en) | 2016-01-22 | 2016-01-22 | Electric vehicle charging pile with stable tracking function |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105574517A true CN105574517A (en) | 2016-05-11 |
Family
ID=55884625
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108024010A (en) * | 2017-11-07 | 2018-05-11 | 秦广民 | Cellphone monitoring system based on electrical measurement |
CN111775760A (en) * | 2020-07-10 | 2020-10-16 | 郭开华 | Intelligent management system for solar charging piles |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254224A (en) * | 2011-07-06 | 2011-11-23 | 无锡泛太科技有限公司 | Internet of things electric automobile charging station system based on image identification of rough set neural network |
US20120154580A1 (en) * | 2010-12-20 | 2012-06-21 | Huang tai-hui | Moving object detection method and image processing system for moving object detection |
CN103136526A (en) * | 2013-03-01 | 2013-06-05 | 西北工业大学 | Online target tracking method based on multi-source image feature fusion |
Non-Patent Citations (1)
Title |
---|
Wu Jinghui: "Research on Tracking and Recognition of Video Surveillance Targets", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10223597B2 (en) | Method and system for calculating passenger crowdedness degree | |
CN103034836B (en) | Road sign detection method and road sign checkout equipment | |
CN104392212B (en) | The road information detection and front vehicles recognition methods of a kind of view-based access control model | |
CN109190444B (en) | Method for realizing video-based toll lane vehicle feature recognition system | |
CN111178272B (en) | Method, device and equipment for identifying driver behavior | |
CN103208185A (en) | Method and system for nighttime vehicle detection on basis of vehicle light identification | |
CN101142584A (en) | Method for facial features detection | |
CN104156731A (en) | License plate recognition system based on artificial neural network and method | |
CN114973207B (en) | Road sign identification method based on target detection | |
WO2023124442A1 (en) | Method and device for measuring depth of accumulated water | |
CN109670449B (en) | Vehicle illegal judgment method based on vertical snapshot mode | |
CN105718896A (en) | Intelligent robot with target recognition function | |
CN106845359A (en) | Tunnel portal driving prompt apparatus and method based on infrared emission | |
CN104834932A (en) | Matlab algorithm of automobile license plate identification | |
CN107545244A (en) | Speed(-)limit sign detection method based on image processing techniques | |
CN105574517A (en) | Electric vehicle charging pile with stable tracking function | |
CN104298988A (en) | Method for property protection based on video image local feature matching | |
CN105740768A (en) | Unmanned forklift device based on combination of global and local features | |
CN104123553A (en) | License plate positioning method and system based on cascading morphological transformation | |
CN107506739A (en) | To vehicle detection and distance-finding method before a kind of night | |
CN105718897A (en) | Numerical control lathe based on visual characteristics | |
CN105718911A (en) | Outdoor transformer capable of target identification | |
CN105718910A (en) | Battery room with combination of local and global characteristics | |
WO2022241807A1 (en) | Method for recognizing color of vehicle body of vehicle, and storage medium and terminal | |
CN105718899A (en) | Solar water heater based on visual characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20160511 |