CN103489013A - Image recognition method for electrical equipment monitoring - Google Patents


Info

Publication number: CN103489013A
Application number: CN201310430340.3A
Authority: CN (China)
Prior art keywords: DCT, image, template, pixels, sum
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 张卓阳
Current assignee / original assignee: Aerospace Science and Industry Shenzhen Group Co Ltd
Application filed 2013-09-18 by Aerospace Science and Industry Shenzhen Group Co Ltd
Priority date / filing date: 2013-09-18
Publication date: 2014-01-01

Abstract

The invention provides an image recognition method for electrical equipment monitoring. The method comprises the following steps: S100, partitioning an image I into overlapping sub-blocks according to a matching template T; S200, extracting the image features of the overlapping sub-blocks and of the template T; S300, matching each image sub-block against the template T a first time; S400, judging whether each match value D is less than or equal to a first preset value q, and marking the image sub-blocks whose D value is less than or equal to q; S500, matching each marked image sub-block against the template T a second time; S600, defining the image sub-blocks whose second match value D' is less than or equal to a second preset value q' as matched blocks; S700, outputting the result of the region of the image I that matches the template T. With this technical scheme, blurred images captured by an electric power video monitoring system under external interference can be effectively recognized, and the execution efficiency of image recognition is effectively improved.

Description

An image recognition method for electrical equipment monitoring
Technical field
The present invention relates to the field of electric power monitoring, and in particular to an image recognition method for electrical equipment monitoring.
Background technology
The power industry is a foundation industry of national economic development, and the safe, stable and efficient operation of power equipment is a prerequisite for providing a high-quality power supply. To ensure the efficient operation of power equipment, China's power grid makes extensive use of video monitoring systems. Such a system records the working condition of power equipment in real time and monitors the equipment effectively, but it cannot automatically recognize and analyze the captured video images; identifying the images and determining which specific equipment is involved in an accident still depends on manual work.
Given the intrinsic features of power equipment, template matching is the most commonly used recognition method in image identification for monitoring. Template matching means using a small image of a known object as a template and matching it against the image to be recognized; if a region of the image to be recognized is consistent with the template in size, orientation and pixels, that region is marked and considered to match the template. From the way existing template matching is implemented, it can be seen that such algorithms require a large amount of computation and storage and are relatively inefficient, which does not meet the real-time and efficiency requirements of electric power systems. Moreover, during video capture, interference such as noise and the shooting environment easily produces local blur in the image, and existing template matching methods cannot recognize the blurred regions of the image.
Summary of the invention
The present invention aims to solve the technical problems of existing image recognition methods, namely large computation, large storage, low efficiency and the inability to recognize blurred image regions, and provides an image recognition method for electrical equipment monitoring that can recognize blurred image regions and has high efficiency.
The invention provides an image recognition method for electrical equipment monitoring, comprising the following steps:
Step S100: according to a matching template T of size m*n, partition the image I of size M*N into overlapping sub-blocks, where M >= m and N >= n;
Step S200: extract the image features of the overlapping sub-blocks and of the template T, and represent each of them by a vector v(s1, s2, s3, s4, s5, s6, s7), where s1, s2, s3 record the red, green and blue component means of the corresponding image, and s4, s5, s6, s7 record the feature values of the corresponding image's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions;
Step S300: match each image sub-block against the template T a first time, with the matching formula D = |v_I(s1, s2, s3) - v_T(s1, s2, s3)|, where v_I(s1, s2, s3) are the red, green and blue component means of the image sub-block's feature vector and v_T(s1, s2, s3) are those of the template T;
Step S400: judge whether each D value is less than or equal to a first preset value q, and mark the image sub-blocks whose D value is less than or equal to q;
Step S500: match each marked image sub-block against the template T a second time, with the matching formula D' = |v_I(s4, s5, s6, s7) - v_T(s4, s5, s6, s7)|, where v_I(s4, s5, s6, s7) are the feature values of the image sub-block's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions, and v_T(s4, s5, s6, s7) are those of the template T;
Step S600: define the image sub-blocks whose D' value is less than or equal to the second preset value q' as match blocks;
Step S700: label the match blocks as the recognized region, output the result of the region of image I that matches the template T, and end the procedure.
Preferably, in step S100, the specific method of partitioning the image I into overlapping sub-blocks is: using the template T as a sliding window, slide one pixel at a time from the upper-left corner of the image I toward the lower-right corner and scan, so that the image I is divided into (M-m+1)*(N-n+1) image sub-blocks.
Preferably, in step S200,
s4 = sum(DCT(1)) / sum(DCT(1) + DCT(2)), where DCT(1) denotes the DCT coefficients of all pixels in the left half region of the image sub-block or template T, DCT(2) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(1)) is the sum of the DCT coefficients of all pixels in the left half region of the image sub-block or template T, and sum(DCT(1) + DCT(2)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s5 = sum(DCT(3)) / sum(DCT(3) + DCT(4)), where DCT(3) denotes the DCT coefficients of all pixels in the upper half region of the image sub-block or template T, DCT(4) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(3)) is the sum of the DCT coefficients of all pixels in the upper half region, and sum(DCT(3) + DCT(4)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s6 = sum(DCT(5)) / sum(DCT(5) + DCT(6)), where DCT(5) denotes the DCT coefficients of all pixels in the upper-left half region of the image sub-block or template T, DCT(6) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(5)) is the sum of the DCT coefficients of all pixels in the upper-left half region, and sum(DCT(5) + DCT(6)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s7 = sum(DCT(7)) / sum(DCT(7) + DCT(8)), where DCT(7) denotes the DCT coefficients of all pixels in the upper-right half region of the image sub-block or template T, DCT(8) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(7)) is the sum of the DCT coefficients of all pixels in the upper-right half region, and sum(DCT(7) + DCT(8)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
Preferably, the DCT coefficient of the pixel with coordinates (k, l) in the image sub-block is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} I(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
The DCT coefficient of the pixel with coordinates (k, l) in the template T is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} T(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
I(x, y) denotes the pixel value at position (x, y) in the image sub-block, and T(x, y) denotes the pixel value at position (x, y) in the template T.
Preferably, a step S120 is further included before step S200: divide the same image sub-block or the template T equally along the horizontal, vertical, diagonal and anti-diagonal directions, respectively forming a left half region and a right half region, an upper half region and a lower half region, an upper-left half region and a lower-right half region, and an upper-right half region and a lower-left half region.
Preferably, in step S400, if all the D values are greater than the first preset value q, the procedure ends and the information that image I has no region matching the template T is output.
Preferably, in step S600, if all the D' values are greater than the second preset value q', the procedure ends and the information that image I has no region matching the template T is output.
Preferably, the size m*n is 8*8.
Preferably, the first preset value q is greater than the second preset value q'.
In the above technical scheme, the image I to be recognized is partitioned into overlapping sub-blocks according to the template T, and the overlapping sub-blocks are matched against the template T in two passes, so that the region of image I that matches the template T can be found. This image recognition method effectively overcomes the technical disadvantages of existing image recognition methods, namely large computation, large storage and low efficiency; it can effectively recognize blurred images captured by an electric power video monitoring system under external interference, and effectively improves the execution efficiency of image recognition.
Brief description of the drawings
Fig. 1 is a flow chart of the image recognition method for electrical equipment monitoring according to an embodiment of the present invention;
Fig. 2 shows the four direction models of an image sub-block or the template T in the present invention.
Embodiment
In order to make the technical problems to be solved, the technical scheme and the beneficial effects of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The image recognition method for electrical equipment monitoring provided by the present invention is based on a block-matching algorithm. The algorithm first partitions the image I to be recognized into overlapping sub-blocks according to the size of the template T, then extracts features from the resulting image sub-blocks and from the template T, performs a similarity analysis on the extracted features, and finally marks and determines the matching region according to the result of the similarity analysis.
To keep the algorithm effective for blurred images produced by external factors such as noise and illumination, blur-resistant image features must be extracted for the similarity analysis. At the same time, to improve the execution efficiency of the algorithm, a two-pass matching method is proposed: the algorithm matches the image sub-blocks against the template T twice. The first pass is a fuzzy match, in which simple image features of the template and of the image sub-blocks are chosen for similarity analysis; all image sub-blocks that fall within the similarity threshold are then matched exactly against the template T in the second pass; finally, the regions of image I similar to the matching template T are located according to the result of the exact match.
As shown in Fig. 1, the image recognition method for electrical equipment monitoring provided by the embodiment of the present invention specifically comprises:
Step S100: according to a matching template T of size m*n, partition the image I of size M*N into overlapping sub-blocks, where M >= m and N >= n. Specifically, a sliding window of the same size m*n as the template T is used; it slides one pixel at a time from the upper-left corner of the image I toward the lower-right corner and scans the image, so that the image I is divided into (M-m+1)*(N-n+1) image sub-blocks.
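For illustration only (this sketch is not part of the patent text), the overlapping partition of step S100 could be implemented as follows in Python; the function name overlap_partition and the use of NumPy arrays are assumptions made here.

```python
import numpy as np

def overlap_partition(image, m, n):
    """Slide an m*n window one pixel at a time from the upper-left corner of an
    M*N image toward the lower-right corner, yielding the (M-m+1)*(N-n+1)
    overlapping sub-blocks described in step S100."""
    M, N = image.shape[:2]
    assert M >= m and N >= n, "the template must not be larger than the image"
    blocks = []
    for top in range(M - m + 1):          # window moves down one pixel per step
        for left in range(N - n + 1):     # and right one pixel per step
            blocks.append(((top, left), image[top:top + m, left:left + n]))
    return blocks  # list of ((row, col), sub-block) pairs
```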
Step S200: extract the image features of the overlapping sub-blocks and of the template T, and represent each of them by a vector v(s1, s2, s3, s4, s5, s6, s7), where s1, s2, s3 respectively record the red, green and blue component means of the corresponding image, and s4, s5, s6, s7 respectively record the feature values of the corresponding image's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions.
As shown in Fig. 2, the same image sub-block or the template T is divided equally along the horizontal, vertical, diagonal and anti-diagonal directions, forming respectively a left half region 1 and a right half region 2, an upper half region 3 and a lower half region 4, an upper-left half region 5 and a lower-right half region 6, and an upper-right half region 7 and a lower-left half region 8.
Further, every pixel in an image sub-block or in the template T has a discrete cosine transform (DCT) coefficient. The DCT coefficient of the pixel with coordinates (k, l) in the image sub-block is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} I(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
The DCT coefficient of the pixel with coordinates (k, l) in the template T is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} T(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
I(x, y) denotes the pixel value at position (x, y) in the image sub-block, and T(x, y) denotes the pixel value at position (x, y) in the template T.
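As a minimal sketch (not from the patent), the per-block DCT coefficients defined above could be computed as follows, assuming the normalization 2/sqrt(mn) and c(0) = 1/sqrt(2) as reconstructed in the formulas; the function name is illustrative.

```python
import numpy as np

def dct_coefficients(block):
    """Compute the 2D DCT coefficients D(k, l) of an m*n block of pixel values,
    following the formula given in the description (orthonormal 2D DCT-II)."""
    m, n = block.shape
    x = np.arange(m).reshape(-1, 1)   # pixel row indices
    y = np.arange(n).reshape(1, -1)   # pixel column indices
    D = np.zeros((m, n))
    for k in range(m):
        for l in range(n):
            c_k = 1.0 / np.sqrt(2.0) if k == 0 else 1.0
            c_l = 1.0 / np.sqrt(2.0) if l == 0 else 1.0
            basis = (np.cos((2 * x + 1) * k * np.pi / (2 * m))
                     * np.cos((2 * y + 1) * l * np.pi / (2 * n)))
            D[k, l] = (2.0 / np.sqrt(m * n)) * c_k * c_l * np.sum(block * basis)
    return D
```

For an 8*8 block the direct computation above is inexpensive; an equivalent result should also be obtainable from a library routine for the orthonormal DCT, though that equivalence is not verified here.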
Further, according to the above scheme, s4 = sum(DCT(1)) / sum(DCT(1) + DCT(2)). As shown in Fig. 2(a), DCT(1) denotes the DCT coefficients of all pixels in the left half region 1 of the image sub-block or template T, and DCT(2) denotes the DCT coefficients of all pixels in the remaining region, i.e. the right half region 2; sum(DCT(1)) is the sum of the DCT coefficients of all pixels in the left half region 1, and sum(DCT(1) + DCT(2)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
s5 = sum(DCT(3)) / sum(DCT(3) + DCT(4)). As shown in Fig. 2(b), DCT(3) denotes the DCT coefficients of all pixels in the upper half region 3 of the image sub-block or template T, and DCT(4) denotes the DCT coefficients of all pixels in the remaining region, i.e. the lower half region 4; sum(DCT(3)) is the sum of the DCT coefficients of all pixels in the upper half region 3, and sum(DCT(3) + DCT(4)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
s6 = sum(DCT(5)) / sum(DCT(5) + DCT(6)). As shown in Fig. 2(c), DCT(5) denotes the DCT coefficients of all pixels in the upper-left half region 5 of the image sub-block or template T, and DCT(6) denotes the DCT coefficients of all pixels in the remaining region, i.e. the lower-right half region 6; sum(DCT(5)) is the sum of the DCT coefficients of all pixels in the upper-left half region 5, and sum(DCT(5) + DCT(6)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
s7 = sum(DCT(7)) / sum(DCT(7) + DCT(8)). As shown in Fig. 2(d), DCT(7) denotes the DCT coefficients of all pixels in the upper-right half region 7 of the image sub-block or template T, and DCT(8) denotes the DCT coefficients of all pixels in the remaining region, i.e. the lower-left half region 8; sum(DCT(7)) is the sum of the DCT coefficients of all pixels in the upper-right half region 7, and sum(DCT(7) + DCT(8)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
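The four directional features could then be computed from a block's DCT coefficient matrix roughly as sketched below. The exact half-region boundaries of Fig. 2 are not reproduced in this text, so the masks used here (left/right, upper/lower, and the two triangular halves) are assumptions for illustration.

```python
import numpy as np

def directional_features(dct):
    """Compute s4..s7 as the ratio of the DCT-coefficient sum over one half-region
    to the DCT-coefficient sum over the whole block, as described for step S200."""
    m, n = dct.shape
    rows, cols = np.indices((m, n))
    total = dct.sum()

    left_half   = cols < n / 2               # Fig. 2(a): left half region
    upper_half  = rows < m / 2               # Fig. 2(b): upper half region
    upper_left  = rows / m + cols / n < 1.0  # Fig. 2(c): upper-left half, cut along the anti-diagonal
    upper_right = cols / n > rows / m        # Fig. 2(d): upper-right half, cut along the main diagonal

    s4 = dct[left_half].sum() / total
    s5 = dct[upper_half].sum() / total
    s6 = dct[upper_left].sum() / total
    s7 = dct[upper_right].sum() / total
    return s4, s5, s6, s7
```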
After the vectors v(s1, s2, s3, s4, s5, s6, s7) of each image sub-block and of the template T have been obtained, the matching of the image sub-blocks against the template T begins.
Step S300: match each image sub-block against the template T a first time, with the matching formula D = |v_I(s1, s2, s3) - v_T(s1, s2, s3)|, where v_I(s1, s2, s3) are the red, green and blue component means of the image sub-block's feature vector and v_T(s1, s2, s3) are those of the template T. This first match is a fuzzy match.
Step S400: judge whether each D value is less than or equal to a first preset value q, and mark the image sub-blocks whose D value is less than or equal to q. The first preset value q here is an empirical similarity threshold. In the matching process, if there are image sub-blocks whose D value is less than or equal to the first preset value q, those sub-blocks approximately match the template T, and the marked sub-blocks need to be matched exactly against the template T in a second pass.
Step S500: match each marked image sub-block against the template T a second time, with the matching formula D' = |v_I(s4, s5, s6, s7) - v_T(s4, s5, s6, s7)|, where v_I(s4, s5, s6, s7) are the feature values of the image sub-block's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions, and v_T(s4, s5, s6, s7) are those of the template T.
Step S600: define the image sub-blocks whose D' value is less than or equal to a second preset value q' as match blocks. The second preset value q' is also an empirical similarity threshold. In this embodiment, the first preset value q is greater than the second preset value q'.
Step S700: label the match blocks as the recognized region, output the result of the region of image I that matches the template T, and end the procedure.
Through the above two-pass matching, blurred images of power equipment caused by external factors such as camera noise and illumination can be effectively matched against the template. This not only accommodates the recognition of blurred images but also improves execution efficiency.
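Putting the pieces together, the two-pass matching of steps S300 to S700 could look like the following sketch, which reuses the illustrative helpers overlap_partition, dct_coefficients and directional_features defined above. The patent writes the match values as absolute values of vector differences; they are interpreted here as Euclidean norms, and the DCT features are computed on a simple grayscale intensity, both of which are assumptions.

```python
import numpy as np

def rgb_means(block_rgb):
    """s1..s3: the red, green and blue component means of a block."""
    return block_rgb.reshape(-1, 3).mean(axis=0)

def dct_features(block_rgb):
    """s4..s7: directional DCT features, computed here on the mean of the three
    channels as a grayscale stand-in (the patent does not specify the channel)."""
    gray = block_rgb.mean(axis=2)
    return np.array(directional_features(dct_coefficients(gray)))

def match_template(image_rgb, template_rgb, q, q_prime):
    """Two-pass matching: a coarse pass on the RGB means (D <= q, steps S300-S400),
    then a fine pass on the DCT features (D' <= q', steps S500-S600) only for the
    sub-blocks marked in the first pass; returns the matched positions (step S700)."""
    m, n = template_rgb.shape[:2]
    t_rgb, t_dct = rgb_means(template_rgb), dct_features(template_rgb)
    matches = []
    for (top, left), block in overlap_partition(image_rgb, m, n):
        D = np.linalg.norm(rgb_means(block) - t_rgb)         # first, fuzzy match
        if D <= q:                                           # step S400: mark candidates
            D2 = np.linalg.norm(dct_features(block) - t_dct) # second, exact match
            if D2 <= q_prime:                                # step S600: match block
                matches.append((top, left))                  # upper-left corner of the block
    return matches
```

Computing the DCT features only for the sub-blocks that pass the cheap first pass reflects the efficiency argument of the two-pass scheme: most sub-blocks are discarded before the more expensive DCT features are needed.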
Preferably, in the above embodiment, the template T (and hence each match block) has a pixel size of 8*8.
Further, in step S400, if all the D values are greater than the first preset value q, the procedure ends and the information that image I has no region matching the template T is output.
Further, in step S600, if all the D' values are greater than the second preset value q', the procedure ends and the information that image I has no region matching the template T is output.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. An image recognition method for electrical equipment monitoring, characterized by comprising the following steps:
Step S100: according to a matching template T of size m*n, partition the image I of size M*N into overlapping sub-blocks, where M >= m and N >= n;
Step S200: extract the image features of the overlapping sub-blocks and of the template T, and represent each of them by a vector v(s1, s2, s3, s4, s5, s6, s7), where s1, s2, s3 record the red, green and blue component means of the corresponding image, and s4, s5, s6, s7 record the feature values of the corresponding image's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions;
Step S300: match each image sub-block against the template T a first time, with the matching formula D = |v_I(s1, s2, s3) - v_T(s1, s2, s3)|, where v_I(s1, s2, s3) are the red, green and blue component means of the image sub-block's feature vector and v_T(s1, s2, s3) are those of the template T;
Step S400: judge whether each D value is less than or equal to a first preset value q, and mark the image sub-blocks whose D value is less than or equal to q;
Step S500: match each marked image sub-block against the template T a second time, with the matching formula D' = |v_I(s4, s5, s6, s7) - v_T(s4, s5, s6, s7)|, where v_I(s4, s5, s6, s7) are the feature values of the image sub-block's DCT coefficients in the horizontal, vertical, diagonal and anti-diagonal directions, and v_T(s4, s5, s6, s7) are those of the template T;
Step S600: define the image sub-blocks whose D' value is less than or equal to the second preset value q' as match blocks;
Step S700: label the match blocks as the recognized region, output the result of the region of image I that matches the template T, and end the procedure.
2. The image recognition method according to claim 1, characterized in that, in step S100, the specific method of partitioning the image I into overlapping sub-blocks is: using the template T as a sliding window, slide one pixel at a time from the upper-left corner of the image I toward the lower-right corner and scan, so that the image I is divided into (M-m+1)*(N-n+1) image sub-blocks.
3. The image recognition method according to claim 2, characterized in that, in step S200,
s4 = sum(DCT(1)) / sum(DCT(1) + DCT(2)), where DCT(1) denotes the DCT coefficients of all pixels in the left half region of the image sub-block or template T, DCT(2) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(1)) is the sum of the DCT coefficients of all pixels in the left half region of the image sub-block or template T, and sum(DCT(1) + DCT(2)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s5 = sum(DCT(3)) / sum(DCT(3) + DCT(4)), where DCT(3) denotes the DCT coefficients of all pixels in the upper half region of the image sub-block or template T, DCT(4) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(3)) is the sum of the DCT coefficients of all pixels in the upper half region, and sum(DCT(3) + DCT(4)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s6 = sum(DCT(5)) / sum(DCT(5) + DCT(6)), where DCT(5) denotes the DCT coefficients of all pixels in the upper-left half region of the image sub-block or template T, DCT(6) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(5)) is the sum of the DCT coefficients of all pixels in the upper-left half region, and sum(DCT(5) + DCT(6)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T;
s7 = sum(DCT(7)) / sum(DCT(7) + DCT(8)), where DCT(7) denotes the DCT coefficients of all pixels in the upper-right half region of the image sub-block or template T, DCT(8) denotes the DCT coefficients of all pixels in the remaining region, sum(DCT(7)) is the sum of the DCT coefficients of all pixels in the upper-right half region, and sum(DCT(7) + DCT(8)) is the sum of the DCT coefficients of all pixels in the image sub-block or template T.
4. The image recognition method according to claim 3, characterized in that
the DCT coefficient of the pixel with coordinates (k, l) in the image sub-block is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} I(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
the DCT coefficient of the pixel with coordinates (k, l) in the template T is:
D(k, l) = \frac{2}{\sqrt{mn}}\, c(k)\, c(l) \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} T(x, y) \cos\frac{(2x+1)k\pi}{2m} \cos\frac{(2y+1)l\pi}{2n}
where
c(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k = 1, 2, \ldots, m-1 \end{cases}
c(l) = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1, & l = 1, 2, \ldots, n-1 \end{cases}
and I(x, y) denotes the pixel value at position (x, y) in the image sub-block, while T(x, y) denotes the pixel value at position (x, y) in the template T.
5. The image recognition method according to claim 2, characterized in that a step S120 is further included before step S200: divide the same image sub-block or the template T equally along the horizontal, vertical, diagonal and anti-diagonal directions, respectively forming a left half region and a right half region, an upper half region and a lower half region, an upper-left half region and a lower-right half region, and an upper-right half region and a lower-left half region.
6. The image recognition method according to claim 1, characterized in that, in step S400, if all the D values are greater than the first preset value q, the procedure ends and the information that image I has no region matching the template T is output.
7. The image recognition method according to claim 1, characterized in that, in step S600, if all the D' values are greater than the second preset value q', the procedure ends and the information that image I has no region matching the template T is output.
8. The image recognition method according to claim 1, characterized in that the size m*n is 8*8.
9. The image recognition method according to claim 1, characterized in that the first preset value q is greater than the second preset value q'.
CN201310430340.3A 2013-09-18 2013-09-18 Image recognition method for electrical equipment monitoring Pending CN103489013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310430340.3A CN103489013A (en) 2013-09-18 2013-09-18 Image recognition method for electrical equipment monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310430340.3A CN103489013A (en) 2013-09-18 2013-09-18 Image recognition method for electrical equipment monitoring

Publications (1)

Publication Number Publication Date
CN103489013A true CN103489013A (en) 2014-01-01

Family

ID=49829219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310430340.3A Pending CN103489013A (en) 2013-09-18 2013-09-18 Image recognition method for electrical equipment monitoring

Country Status (1)

Country Link
CN (1) CN103489013A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090304231A1 (en) * 2008-06-09 2009-12-10 Arcsoft, Inc. Method of automatically detecting and tracking successive frames in a region of interesting by an electronic imaging device
CN102567740A (en) * 2010-12-14 2012-07-11 苏州大学 Image recognition method and system
CN102184537A (en) * 2011-04-22 2011-09-14 西安理工大学 Image region tamper detection method based on wavelet transform and principal component analysis
CN102867383A (en) * 2011-07-07 2013-01-09 哈尔滨工业大学深圳研究生院 Robbery monitoring alarm method and system
CN103247059A (en) * 2013-05-27 2013-08-14 北京师范大学 Remote sensing image region of interest detection method based on integer wavelets and visual features

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681524A (en) * 2015-11-10 2017-05-17 阿里巴巴集团控股有限公司 Method and device for processing information
CN109035166A (en) * 2018-07-16 2018-12-18 国网四川省电力公司巴中供电公司 Electrical equipment infrared image enhancing method based on non-lower sampling shearing wave conversion
CN109035166B (en) * 2018-07-16 2022-02-01 国网四川省电力公司巴中供电公司 Electrical equipment infrared image enhancement method based on non-subsampled shear wave transformation
CN113064373A (en) * 2021-04-07 2021-07-02 四川中鼎智能技术有限公司 Industrial hydroelectric equipment logic signal control method, system, terminal and storage medium based on video image recognition
CN113064373B (en) * 2021-04-07 2022-04-15 四川中鼎智能技术有限公司 Industrial hydroelectric equipment logic signal control method, system, terminal and storage medium based on video image recognition

Similar Documents

Publication Publication Date Title
CN106407928B (en) Transformer composite insulator casing monitoring method and system based on raindrop identification
CN104361314B (en) Based on infrared and transformer localization method and device of visual image fusion
CN111179232A (en) Steel bar size detection system and method based on image processing
JP6904614B2 (en) Object detection device, prediction model creation device, object detection method and program
CN107133592B (en) Human body target feature detection algorithm for power substation by fusing infrared thermal imaging and visible light imaging technologies
CN110991448A (en) Text detection method and device for nameplate image of power equipment
CN105631455A (en) Image main body extraction method and system
CN109687382B (en) Relay protection pressing plate switching state identification method based on color template matching
CN103810696B (en) Method for detecting image of target object and device thereof
CN103996203A (en) Method and device for detecting whether face in image is sheltered
CN103971524A (en) Traffic flow detection method based on machine vision
CN103489013A (en) Image recognition method for electrical equipment monitoring
CN105096305A (en) Method and device for analyzing state of insulators
CN104217425A (en) Superpixel-based electric transmission and transformation equipment infrared fault image segmentation method
CN107067595A (en) State identification method, device and the electronic equipment of a kind of indicator lamp
CN108664886A (en) A kind of fast face recognition method adapting to substation's disengaging monitoring demand
CN103533332B (en) A kind of 2D video turns the image processing method of 3D video
CN103824074A (en) Crowd density estimation method based on background subtraction and texture features and system
CN105631868A (en) Depth information extraction method based on image classification
CN110705432B (en) Pedestrian detection device and method based on color and depth cameras
CN108805890A (en) A kind of arc hammer measurement method based on power transmission line image characteristic point
CN116681664A (en) Detection method and device for operation of stamping equipment
CN108022000A (en) A kind of metro passenger flow Early-Warning System and method
CN105550669A (en) Intelligent accident survey method based on image identification
CN109558881A (en) A kind of crag avalanche monitoring method based on computer vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140101