CN113297971A - Intelligent management method for unattended field operation of transformer substation integrating video analysis technology


Info

Publication number
CN113297971A
CN113297971A (application CN202110569312.4A)
Authority
CN
China
Prior art keywords
face
image
representing
row
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110569312.4A
Other languages
Chinese (zh)
Inventor
常荣
党军朋
乔连留
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Original Assignee
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority to CN202110569312.4A
Publication of CN113297971A

Classifications

    • G - PHYSICS
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 - Individual registration on entry or exit
    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • A - HUMAN NECESSITIES
    • A42 - HEADWEAR
    • A42B - HATS; HEAD COVERINGS
    • A42B 3/00 - Helmets; Helmet covers; Other protective head coverings
    • A42B 3/04 - Parts, details or accessories of helmets
    • A42B 3/0406 - Accessories for helmets
    • A42B 3/0433 - Detecting, signalling or lighting devices
    • A42B 3/30 - Mounting radio sets or communication systems
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 - Electricity, gas or water supply
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Abstract

The invention provides an intelligent management method, integrating video analysis technology, for unattended field operation of a transformer substation. The method involves a protective helmet and comprises the following steps. S1: a face image acquisition module on the access control system acquires the face information of a technician waiting to enter the transformer substation; the acquired face information is subjected to face data processing, and the face information after face data processing is the collected face. S2: the access control system compares whether the comparison face is consistent with the collected face; if the comparison face is consistent with the collected face, the access control system opens the access control; if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database. The invention ensures the safety of personnel entering the transformer substation and records the operation process in full.

Description

Intelligent management method for unattended field operation of transformer substation integrating video analysis technology
Technical Field
The invention relates to the technical field of transformer substations, in particular to an intelligent management method for unattended field operation of a transformer substation, which integrates a video analysis technology.
Background
The transformer substation is the place in an electric power system where voltage and current are converted and electric energy is received and distributed. The substations in a power plant are step-up substations, which step up the electric energy generated by the generators and feed it into the high-voltage network. Chinese patent application No. 2019112524424, entitled "Transformer substation insulator live working platform control system and method", discloses a control system comprising a core control unit connected in cascade with at least two sub-control units. The sub-control units are configured to receive and execute the live working instructions issued by the core control unit and to feed back the execution results to the core control unit; the core control unit is configured to judge, from the execution results, the completion degree of the live working and the path planning, so as to determine whether the corresponding sub-control unit continues to execute the live working instruction and autonomously avoids obstacles. The insulation safety protection performance of live working is improved by a distributed cascade control strategy.
Disclosure of Invention
The invention aims to solve at least the above technical problems in the prior art, and in particular provides an intelligent management method, integrating video analysis technology, for the unattended field operation of a transformer substation.
In order to achieve the purpose, the invention provides an intelligent management method for the unattended field operation of a transformer substation integrating video analysis technology, which comprises the following steps:
s1, a face image acquisition module on the access control system acquires the face information of a technician waiting to enter the transformer substation; the acquired face information is subjected to face data processing, and the face information after face data processing is the collected face;
s2, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
In a preferred embodiment of the present invention, step S1 includes the following steps:
s11, the access control system judges whether the collected face image is a gray image:
if the face image collected by the access control system is a gray image, executing step S12;
if the face image collected by the access control system is not a gray image, executing the following steps:
s111, counting the total number of face images collected by the access control system and recording it as a; the images are A1, A2, A3, ……, Aa, where A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
s112, converting the RGB face image into a gray image through the following calculation formula:

    Ā_i = [ I_11  I_12  I_13  …  I_1N
            I_21  I_22  I_23  …  I_2N
            I_31  I_32  I_33  …  I_3N
             ⋮     ⋮     ⋮    ⋱    ⋮
            I_M1  I_M2  I_M3  …  I_MN ],

wherein Ā_i represents the i-th gray face image, i = 1, 2, 3, ……, a; I_mn represents the gray value of the pixel point at the m-th row and n-th column position in the gray image Ā_i, with m = 1, 2, 3, ……, M and n = 1, 2, 3, ……, N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image. Thus I_11 represents the gray value of the pixel point at the 1st row, 1st column position in Ā_i, I_12 the gray value at the 1st row, 2nd column position, and so on up to I_MN, the gray value at the M-th row, N-th column position.

Each gray value is obtained by fusing the three color channels:

    I_mn = λ_R · R_mn + λ_G · G_mn + λ_B · B_mn,

wherein R_mn represents the red channel value of the pixel point at the m-th row, n-th column position in the RGB image; G_mn represents the green channel value of the pixel point at the m-th row, n-th column position in the RGB image; B_mn represents the blue channel value of the pixel point at the m-th row, n-th column position in the RGB image; λ_R represents the fusion parameter of the red channel value R_mn; λ_G represents the fusion parameter of the green channel value G_mn; λ_B represents the fusion parameter of the blue channel value B_mn;
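The channel-fusion conversion above can be sketched in Python as follows; the concrete fusion parameters are not reproduced in this text, so the widely used ITU-R BT.601 luma weights (0.299, 0.587, 0.114) are assumed here purely for illustration:

```python
# Convert an RGB face image (nested lists of (R, G, B) tuples) to a gray
# image by fusing the three channels with per-channel fusion parameters.
# The patent's actual fusion parameters are not reproduced in the text;
# the ITU-R BT.601 luma weights are assumed here only for illustration.
LAMBDA_R, LAMBDA_G, LAMBDA_B = 0.299, 0.587, 0.114

def rgb_to_gray(rgb_image):
    """Return an M x N matrix of gray values I_mn from an RGB image."""
    gray = []
    for row in rgb_image:
        gray.append([
            round(LAMBDA_R * r + LAMBDA_G * g + LAMBDA_B * b)
            for (r, g, b) in row
        ])
    return gray

# A 2 x 2 toy "face image": pure red, green, blue and white pixels.
toy = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(rgb_to_gray(toy))  # white fuses to 255 since the weights sum to 1
```

Because the three weights sum to one, a neutral pixel keeps its brightness after fusion, which is the usual design choice for gray conversion.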
and S12, screening the gray face image.
In conclusion, due to the adoption of the technical scheme, the safety of personnel entering the transformer substation can be ensured, and the operation process can be recorded in the whole process.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of the process of the present invention.
Fig. 2 is a schematic circuit diagram of the audio input unit according to the present invention.
Fig. 3 is a schematic circuit diagram of the audio output unit according to the present invention.
Fig. 4 is a schematic view of the structure of the protective helmet of the present invention.
Fig. 5 is a schematic view of another perspective structure of the protective helmet of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides an intelligent management method, integrating video analysis technology, for unattended field operation of a transformer substation; the method involves a protective helmet and, as shown in figure 1, comprises the following steps:
s1, a face image acquisition module on the access control system acquires the face information of a technician waiting to enter the transformer substation; the acquired face information is subjected to face data processing, and the face information after face data processing is the collected face;
s2, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
And S3, after the technician enters the transformer substation, the camera on the protective helmet transmits the video images of the field operation process to the cloud platform for management.
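A minimal sketch of the S1/S2 gate decision; the function and variable names here are illustrative, not from the filing:

```python
# Sketch of the S1/S2 decision: if the comparison face (looked up from
# the helmet's attribute information in step S0) matches the collected
# face, open the gate; otherwise store the capture in the warning
# face storage database. Face values are stand-ins for real features.
def gate_decision(comparison_face, collected_face, warning_db):
    """Return True (open the access control) or False (log a warning)."""
    if comparison_face == collected_face:
        return True                        # S2: consistent -> open gate
    warning_db.append(collected_face)      # S2: inconsistent -> warn
    return False

warning_db = []
assert gate_decision("feat-123", "feat-123", warning_db) is True
assert gate_decision("feat-123", "feat-999", warning_db) is False
assert warning_db == ["feat-999"]
```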
In a preferred embodiment of the present invention, before step S1 the method further includes step S0: when a technician is to enter the substation, the access control system communicates, via a wireless connection signal, with the wireless data transmission connection module in the protective helmet, obtains the attribute information of the wireless data transmission connection module, and queries the corresponding face information according to the attribute information query value; the face information found according to the attribute information query value is the comparison face;
the step S0 includes the following steps:
s01, the access control system communicates with the protective helmet and requests from it the attribute information of the wireless data transmission connection module; after the controller in the protective helmet receives the request from the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s02, after receiving the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value = <Attribute information, Algorithm type>,
wherein Property information query value represents the attribute information query value;
Attribute information represents the attribute information of the wireless data transmission connection module; the attribute information of the wireless data transmission connection module comprises the physical address of one of the wireless data transmission connection WiFi module, the wireless data transmission connection 3G module, the wireless data transmission connection 4G module, the wireless data transmission connection 5G module and the wireless data transmission connection Bluetooth module;
Algorithm type represents the algorithm operation type; the algorithm operation type employs the MD5 hash algorithm;
<Attribute information, Algorithm type> denotes applying the algorithm operation of type Algorithm type to the attribute information of the wireless data transmission connection module;
s03, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, screening out the face feature value corresponding to the attribute information query value and executing step S04;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the protective helmet of the user, the prompt information being that the data of the protective helmet has not been recorded in the transformer substation system;
and S04, obtaining the face information associated with the face feature value according to the face feature value obtained in step S03.
In a preferred embodiment of the present invention, in step S03, the face feature value is calculated by:
Face feature value = <Face information, Algorithm type>,
wherein Face information represents the face information, namely the face image;
Algorithm type represents the algorithm operation type;
<Face information, Algorithm type> denotes applying the algorithm operation of type Algorithm type to the face information;
Face feature value represents the face feature value.
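The <information, algorithm type> operations above map naturally onto Python's hashlib; a sketch, in which the module physical address below is a made-up example value:

```python
import hashlib

def query_value(attribute_info: str, algorithm: str = "md5") -> str:
    """Apply the named hash algorithm to a piece of information, e.g. a
    module's physical (MAC) address, to form a database lookup key.
    The patent names MD5 as the algorithm operation type."""
    h = hashlib.new(algorithm)
    h.update(attribute_info.encode("utf-8"))
    return h.hexdigest()

# Hypothetical physical address of a helmet's Bluetooth module.
mac = "00:1A:7D:DA:71:13"
key = query_value(mac)
print(key)  # 32-hex-digit MD5 digest used as the attribute query value

# The same construction yields a face feature value from face data:
face_feature = query_value("<face image bytes would go here>")
```

Keying the face database by a digest of the helmet's hardware address lets the gate fetch exactly one candidate face per helmet instead of searching the whole database.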
In a preferred embodiment of the present invention, step S1 includes the following steps:
s11, the access control system judges whether the collected face image is a gray image:
if the face image collected by the access control system is a gray image, executing step S12;
if the face image collected by the access control system is not a gray image, executing the following steps:
s111, counting the total number of face images collected by the access control system and recording it as a; the images are A1, A2, A3, ……, Aa, where A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
s112, converting the RGB face image into a gray image through the following calculation formula:

    Ā_i = [ I_11  I_12  I_13  …  I_1N
            I_21  I_22  I_23  …  I_2N
            I_31  I_32  I_33  …  I_3N
             ⋮     ⋮     ⋮    ⋱    ⋮
            I_M1  I_M2  I_M3  …  I_MN ],

wherein Ā_i represents the i-th gray face image, i = 1, 2, 3, ……, a; I_mn represents the gray value of the pixel point at the m-th row and n-th column position in the gray image Ā_i, with m = 1, 2, 3, ……, M and n = 1, 2, 3, ……, N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image. Thus I_11 represents the gray value of the pixel point at the 1st row, 1st column position in Ā_i, I_12 the gray value at the 1st row, 2nd column position, and so on up to I_MN, the gray value at the M-th row, N-th column position.

Each gray value is obtained by fusing the three color channels:

    I_mn = λ_R · R_mn + λ_G · G_mn + λ_B · B_mn,

wherein R_mn represents the red channel value of the pixel point at the m-th row, n-th column position in the RGB image; G_mn represents the green channel value of the pixel point at the m-th row, n-th column position in the RGB image; B_mn represents the blue channel value of the pixel point at the m-th row, n-th column position in the RGB image; λ_R represents the fusion parameter of the red channel value R_mn; λ_G represents the fusion parameter of the green channel value G_mn; λ_B represents the fusion parameter of the blue channel value B_mn;
and S12, screening the gray face image.
In a preferred embodiment of the present invention, step S12 includes the following steps:
S121, the gray level face image is processed
Figure BDA0003082030250000089
Dividing the image into M gray level face unit images, wherein M is a positive integer greater than or equal to 1 and is the 1 st unit image of the gray level face
Figure BDA00030820302500000810
Grayscale face
2 nd unit image
Figure BDA00030820302500000811
Grayscale face No. 3 unit image
Figure BDA00030820302500000812
… … grayscale human face M unit image
Figure BDA00030820302500000813
Wherein the content of the first and second substances,
Figure BDA00030820302500000814
&representing an image mosaic symbol;
s122, for the m-th gray face unit image Ā_i^m, calculating a first screening value T1^m, a second screening value T2^m and a third screening value T3^m, m being a positive integer less than or equal to M;
s123, if the screening value of the m-th gray face unit image Ā_i^m is greater than or equal to the preset screening threshold, executing step S124;
if the screening value of the m-th gray face unit image Ā_i^m is less than the preset screening threshold, executing step S125;
s124, judging the relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m and the first operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m is greater than or equal to the first operation threshold of the image, setting pixel_ζ = 0;
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m is less than the first operation threshold of the image, setting pixel_ζ = 255;
s125, judging the relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m and the second operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m is greater than or equal to the second operation threshold of the image, setting pixel_ζ = 255;
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image Ā_i^m is less than the second operation threshold of the image, setting pixel_ζ = 0.
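Steps S121 to S125 can be sketched as follows; since the filed formulas are not reproduced in this text, the screening value is assumed here to be the mean gray of the unit image, and the operation thresholds are illustrative constants:

```python
# Sketch of S121-S125: split a gray image into unit images, then
# binarize each unit against the first or second operation threshold
# depending on its screening value. The screening value is ASSUMED to
# be the mean gray; all threshold constants are illustrative only.
SCREENING_THRESHOLD = 128
FIRST_OP_THRESHOLD = 100    # used by S124 (screening value high)
SECOND_OP_THRESHOLD = 160   # used by S125 (screening value low)

def split_units(image, m_units):
    """S121: split the image row-wise into m_units unit images."""
    step = max(1, len(image) // m_units)
    return [image[k:k + step] for k in range(0, len(image), step)]

def mean_gray(unit):
    pixels = [p for row in unit for p in row]
    return sum(pixels) / len(pixels)

def binarize_unit(unit):
    if mean_gray(unit) >= SCREENING_THRESHOLD:      # S123 -> S124
        return [[0 if p >= FIRST_OP_THRESHOLD else 255 for p in row]
                for row in unit]
    return [[255 if p >= SECOND_OP_THRESHOLD else 0 for p in row]  # S125
            for row in unit]

gray = [[200, 90], [30, 170]]
units = split_units(gray, 2)          # two 1-row unit images
print([binarize_unit(u) for u in units])
```

Note the inverted polarity between S124 and S125: bright unit images map high gray to 0, while dark unit images map high gray to 255, which evens out lighting differences across the face before matching.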
In a preferred embodiment of the present invention, in step S122, the first screening value T1^m of the m-th gray face unit image Ā_i^m is calculated from the number of pixel points in Ā_i^m, denoted S^m, and the gray values pixel_ζ of its pixel points, ζ = 1, 2, ……, S^m;
or/and the second screening value T2^m of the m-th gray face unit image Ā_i^m is calculated from the number of pixel points S^m, the gray values pixel_ζ and pixel_ξ of the ζ-th and ξ-th pixel points of Ā_i^m, a first selection number and a second selection number;
or/and the third screening value T3^m of the m-th gray face unit image Ā_i^m is calculated from the number of pixel points S^m and the gray values pixel_ζ of the pixel points of Ā_i^m.
In a preferred embodiment of the present invention, in step S123, the screening value of the m-th gray face unit image Ā_i^m is calculated from the number of pixel points S^m of Ā_i^m and the gray values pixel_ζ of its pixel points.
In a preferred embodiment of the present invention, in step S124, the first operation threshold of the image is calculated from the first screening value T1^m, the second screening value T2^m and the third screening value T3^m of the m-th gray face unit image Ā_i^m, together with a first screening adjustment coefficient a, a second screening adjustment coefficient b and a third screening adjustment coefficient c satisfying a + b + c = 1; here S^m denotes the number of pixel points in Ā_i^m, and pixel_ζ and pixel_ξ denote the gray values of the ζ-th and ξ-th pixel points in Ā_i^m.
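Given the constraint a + b + c = 1, the operation threshold can be read as a convex combination of the three screening values; the sketch below assumes that form, with illustrative coefficients and screening values:

```python
# ASSUMED form of the operation threshold: a convex combination of the
# three screening values T1, T2, T3 with adjustment coefficients
# satisfying a + b + c = 1. Coefficient values are illustrative only.
def operation_threshold(t1, t2, t3, a=0.5, b=0.3, c=0.2):
    assert abs(a + b + c - 1.0) < 1e-9, "coefficients must sum to 1"
    return a * t1 + b * t2 + c * t3

# e.g. mean gray 120, contrast-style value 40, peak gray 210:
print(operation_threshold(120, 40, 210))
```

Because the coefficients sum to one, the threshold always lies between the smallest and largest of the three screening values, so the binarization cut adapts to each unit image rather than using one global level.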
In a preferred embodiment of the present invention, in step S125, the second operation threshold of the image is calculated by the formula given as an image in the original publication, wherein: the quantity on the left-hand side represents the second operation threshold of the image; three symbols (given as images) represent, respectively, the first, second and third screening values of the m-th unit image of the gray face; a represents the first screening adjustment coefficient, b represents the second screening adjustment coefficient, and c represents the third screening adjustment coefficient, with a + b + c = 1; a further symbol represents the number of pixel points in the m-th unit image of the gray face; pixelζ represents the gray value of the ζ-th pixel point in the m-th unit image of the gray face; and pixelξ represents the gray value of the ξ-th pixel point in the m-th unit image of the gray face.
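Both operation thresholds above follow the same pattern: a weighted combination of the three screening values of a gray-face unit image, with screening adjustment coefficients a, b and c summing to 1. A minimal sketch of that pattern follows; the default weight values and the function name are assumptions for illustration, since the patent's exact formulas are given only as images.

```python
def operation_threshold(s1, s2, s3, a=0.5, b=0.3, c=0.2):
    """Combine the first, second and third screening values (s1, s2, s3)
    of a unit image into an operation threshold. The screening
    adjustment coefficients a, b, c must sum to 1, as the patent states.
    """
    if abs(a + b + c - 1.0) > 1e-9:
        raise ValueError("screening adjustment coefficients must sum to 1")
    return a * s1 + b * s2 + c * s3
```

Because the coefficients sum to 1, equal screening values pass through unchanged, so the threshold stays on the same gray-value scale as the screening values themselves.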
In a preferred embodiment of the present invention, as shown in fig. 4 and 5, the protective helmet comprises a helmet body 8 made of an insulating, electricity-proof material. The front surface of the helmet body 8 is provided with an illuminating lamp fixing mounting seat 1 on which an illuminating lamp 2 is fixedly mounted; an illuminating lamp PCB fixing mounting seat is arranged inside the mounting seat 1, an illuminating lamp PCB is fixedly mounted on it, and an illuminating lamp driving module for driving the illuminating lamp 2 is arranged on that PCB. The front side of the helmet body 8 is further provided with a brim 3; an arc-shaped supporting block 7 is arranged at the bottom of the brim 3, and the front side of the supporting block 7 carries a fixing mounting seat on which an image-audio acquisition module 4 is fixedly mounted; the image-audio acquisition module comprises a camera 5, an audio input unit 6 and an audio output unit. The inner side of the helmet body 8 is provided with fixing mounting seats for an infrared detection module 10 and a temperature detection module 9, on which those modules are fixedly mounted. The brim 3 shields hard light, preventing sunlight from degrading image acquisition and keeping rainwater off the lens on rainy days so that the lens does not blur. The infrared detection module 10 detects whether a technician is wearing the protective helmet, and the helmet operates only when worn. The temperature detection module 9 detects the wearer's body temperature; when the collected temperature value is greater than or equal to a preset temperature threshold, an alarm prompt is sent so that working at a high body temperature is avoided.
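The wear-detection and temperature-alarm behaviour described above can be sketched as a small state function. The 37.3 °C default threshold and the function name are assumptions for illustration; the patent only states that an alarm is raised when the collected value meets or exceeds a preset threshold.

```python
def helmet_status(is_worn, temp_c, temp_threshold_c=37.3):
    """Mirror the infrared module 10 / temperature module 9 logic:
    the helmet only operates when worn, and an alarm prompt is sent
    when body temperature reaches the preset threshold."""
    if not is_worn:
        return "idle"      # infrared module: helmet not being worn
    if temp_c >= temp_threshold_c:
        return "alarm"     # temperature module: avoid high-temperature work
    return "working"
```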
A PCB circuit board fixing mounting seat is arranged in the protective helmet, a PCB circuit board is fixedly mounted on it, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board. The wireless data transmission end of the controller is connected to the data transmission end of the wireless data transmission connection module; the lamp control end of the controller is connected to the drive control end of the lamp driving module; the image data output of the camera 5 is connected to the image data input of the controller; the audio data output of the audio input unit 6 is connected to the audio data input of the controller; the audio data input of the audio output unit is connected to the audio data output of the controller; the temperature data output of the temperature detection module 9 is connected to the temperature data input of the controller; and the infrared detection data output of the infrared detection module 10 is connected to the infrared data input of the controller. The wireless data transmission connection module comprises one of, or any combination of, a WiFi module, a 3G module, a 4G module, a 5G module and a Bluetooth module; the WiFi, 3G, 4G, 5G and Bluetooth ends of the controller are connected to the data transmission ends of the corresponding modules.
In a preferred embodiment of the present invention, the audio input unit 6 includes: as shown in fig. 2, the selection control terminal SEL of the audio collector MIC5 is connected to the first terminal of the resistor R64 and to the audio selection control terminal of the controller, and the second terminal of the resistor R64 is connected to the power supply voltage VDD_1.8V; the clock terminal CLK of the audio collector MIC5 is connected to the audio input clock terminal of the controller; the DATA terminal of the audio collector MIC5 is connected to the audio data input terminal of the controller; the power ground terminal of the audio collector MIC5 is connected to power ground; the power supply voltage terminal VDD of the audio collector MIC5 is connected to the first terminal of the capacitor C50 and to the power supply voltage VDD_1.8V, and the second terminal of the capacitor C50 is connected to power ground. In this embodiment, the resistance of the resistor R64 is 10K, the capacitance of the capacitor C50 is 0.1uF, and the model of the audio collector MIC5 is ZTS6032.
The audio output unit includes: as shown in fig. 3, the left channel terminal INL-of the audio driver chip U2 is connected to the first terminal of the capacitor C4, the second terminal of the capacitor C4 is connected to the first terminal of the capacitor C2 and the first terminal of the resistor R24, the second terminal of the resistor R24 is connected to the negative left channel terminal of the driver interface J4, the left channel terminal INL + of the audio driver chip U2 is connected to the first terminal of the capacitor C5, the second terminal of the capacitor C5 is connected to the second terminal of the capacitor C2 and the first terminal of the resistor R25, and the second terminal of the resistor R25 is connected to the positive left channel terminal of the driver interface J4; a right channel end INR + of the audio driving chip U2 is connected to a first end of a capacitor C6, a second end of a capacitor C6 is connected to a first end of a capacitor C3 and a first end of a resistor R26, respectively, a second end of a resistor R26 is connected to a right channel positive end of the driving interface J4, a right channel end INR-of the audio driving chip U2 is connected to a first end of a capacitor C7, a second end of a capacitor C7 is connected to a second end of a capacitor C3 and a first end of a resistor R27, respectively, and a second end of the resistor R27 is connected to a right channel negative end of the driving interface J4; a left channel grounding first end of the driving interface J4 is connected with a first end of the transient suppression diode TVS26, a second end of the transient suppression diode TVS26 is connected with a power ground, a left channel grounding second end of the driving interface J4 is connected with a first end of the transient suppression diode TVS27, a second end of the transient suppression diode TVS27 is connected with the power ground, a right channel grounding first end of the driving interface J4 is connected 
with a first end of the transient suppression diode TVS28, a second end of the transient suppression diode TVS28 is connected with the power ground, a right channel grounding second end of the driving interface J4 is connected with a first end of the transient suppression diode TVS29, and a second end of the transient suppression diode TVS29 is connected with the power ground; the digital ground end of the driving interface J4 is connected with the first end of the resistor R89, and the power ground end of the driving interface J4 is connected with the second end of the resistor R89; the audio data output end of the controller is connected with the driving interface J4;
a selection terminal G0 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R28 and a first terminal of a resistor R31, a second terminal of the resistor R28 is connected with a power supply voltage AVDD _3V3, a second terminal of a resistor R31 is connected with a digital ground, a selection terminal G1 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R29 and a first terminal of a resistor R30, a second terminal of a resistor R30 is connected with a power supply voltage AVDD _3V3, a second terminal of a resistor R29 is connected with a digital ground, a power supply ground terminal HPVSS of the audio driver chip U2 is connected with a first terminal of a capacitor C9, and a second terminal of a capacitor C9 is connected with the digital ground;
a charge pump terminal CPN of the audio driver chip U2 is connected to a first terminal of the capacitor C11, a charge pump terminal CPP of the audio driver chip U2 is connected to a second terminal of the capacitor C11, a power ground terminal PGND of the audio driver chip U2 is connected to digital ground, a power ground terminal HPVDD of the audio driver chip U2 is connected to a first terminal of the capacitor C10, and a second terminal of the capacitor C10 is connected to power ground;
a power supply terminal VDD of the audio driving chip U2 is respectively connected with a power supply voltage AVDD _3V3, a first terminal of a capacitor C8 and a first terminal of a capacitor C54, and a power supply ground terminal SGND of the audio driving chip U2 is respectively connected with a digital ground, a second terminal of a capacitor C8 and a second terminal of a capacitor C54;
an enable terminal EN of the audio driver chip U2 is connected to a first terminal of the resistor R32 and a first terminal of the resistor R33, a second terminal of the resistor R33 is connected to a power ground, a second terminal of the resistor R32 is connected to the audio driver chip enable terminal of the controller, a left channel audio output terminal OUTL of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS15 and a left channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS15 is connected to a digital ground, a right channel audio output terminal OUTR of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS19 and a right channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS19 is connected to a digital ground, and a ground terminal of the speaker interface J5 is connected to a digital ground; the loudspeaker interface J5 is connected with the left loudspeaker and the right loudspeaker; real-time audio input and output are thus provided for technicians, remote conversation is realized, and problems are solved quickly. In this embodiment, the resistances of the resistor R24, the resistor R25, the resistor R26, and the resistor R27 are 560 Ω, the capacitances of the capacitor C2, the capacitor C3, and the capacitor C8 are 4.7uF, the capacitances of the capacitor C4, the capacitor C5, the capacitor C6, and the capacitor C7 are 220nF, the capacitance of the capacitor C54 is 10uF, the capacitance of the capacitor C10 is 10uF, the capacitances of the capacitor C9 and the capacitor C11 are 1uF, the resistances of the resistor R32, the resistor R31, and the resistor R29 are 1K, and the resistances of the resistor R33, the resistor R28, and the resistor R30 are 130 Ω.
The illumination lamp driving module includes: the base of the first triode is connected to the first end of the first resistor, and the second end of the first resistor is connected to the lamp control end of the controller; the collector of the first triode is connected to the first end of the second resistor and to the cathode of the first diode, and the second end of the second resistor is connected to the power supply voltage AVDD_3V3; the emitter of the first triode is connected to the first end of the first normally-open relay input loop, and the second end of the first normally-open relay input loop is connected to the first end of the third resistor and to the first end of the fourth resistor; the second end of the third resistor is connected to the anode of the first diode, and the second end of the fourth resistor is connected to power ground. The first normally-open relay output loop is connected in series in the lighting lamp power supply loop. When illumination is needed, the lamp control end of the controller outputs a conducting level, the first triode conducts, the first normally-open relay output loop switches from the normally open state to the closed state, the lamp power supply loop closes, and the lamp lights.
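The switching chain in the lamp driving module (controller level, first triode, first normally-open relay, lamp supply loop) can be modelled as a simple boolean chain. This is an illustrative abstraction of the circuit's behaviour, not driver code; the function name is an assumption.

```python
def lamp_state(controller_level_on):
    """Model the drive chain: a conducting level from the controller's
    lamp-control end makes the first triode conduct, which closes the
    normally-open relay output loop and thereby the lamp supply loop;
    otherwise every stage stays in its resting (open, lamp off) state."""
    triode_conducts = controller_level_on
    relay_closed = triode_conducts   # normally open -> closed when driven
    return relay_closed              # lamp is lit iff supply loop is closed
```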
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (2)

1. The intelligent management method for the unattended field operation of the transformer substation integrating the video analysis technology comprises a protective helmet and is characterized by comprising the following steps of:
s1, a face image acquisition module on the access control system acquires the face information of a technician who is to enter the transformer substation; face data processing is carried out on the acquired face information, and the result of that processing is recorded as the collected face;
s2, the access control system compares the comparison face with the collected face to determine whether they are consistent:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
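Steps S1–S2 of claim 1 amount to the following decision flow. This is a sketch under the assumption that face comparison yields a boolean match; `faces_match` and `warning_db` are hypothetical stand-ins for the comparison routine and the warning face storage database.

```python
def access_control(collected_face, comparison_face, faces_match, warning_db):
    """Claim 1, step S2: open the access control on a consistent match,
    otherwise upload the collected face to the warning face database."""
    if faces_match(comparison_face, collected_face):
        return "open"
    warning_db.append(collected_face)  # record the unrecognized face
    return "denied"
```

In a deployment the `faces_match` callable would wrap an actual face-recognition model; here a simple equality check suffices to exercise the flow.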
2. The intelligent management method for the unattended field operation of the transformer substation integrating the video analysis technology according to claim 1, wherein step S1 comprises the following steps:
s11, the access control system judges whether the collected face image is a gray image:
if the face image collected by the access control system is a gray image, executing step S12;
if the face image collected by the access control system is not a gray image, executing the following steps:
s111, counting the total number of face images collected by the access control system and recording it as a; the images are denoted A1, A2, A3, ……, Aa, where A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
s112, converting the RGB face image into a gray image through the following calculation formula:
(conversion formula given as an image in the original publication)
wherein the symbol on the left-hand side (given as an image) represents the i-th gray face image, i = 1, 2, 3, ……, a; Imn represents the gray value of the pixel point at the m-th row, n-th column position of that grayscale image, with m = 1, 2, 3, ……, M and n = 1, 2, 3, ……, N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image. Accordingly, I11 represents the gray value of the pixel point at the 1st row, 1st column position of the grayscale image, I12 that at the 1st row, 2nd column, I13 that at the 1st row, 3rd column, and I1N that at the 1st row, N-th column; I21, I22, I23 and I2N the corresponding positions of the 2nd row; I31, I32, I33 and I3N the corresponding positions of the 3rd row; and IM1, IM2, IM3 and IMN the corresponding positions of the M-th row;
Figure FDA0003082030240000032
wherein R ismnExpressing the red channel value of a pixel point at the nth row position of the mth line in the RGB image;
Gmnrepresenting the green channel value of a pixel point at the nth row position of the mth line in the RGB image;
Bmnrepresenting a blue channel value of a pixel point at the nth row position of the mth line in the RGB image;
Figure FDA0003082030240000033
represents the red channel value GmnThe fusion parameters of (1);
Figure FDA0003082030240000034
represents the green channel value BmnThe fusion parameters of (1);
Figure FDA0003082030240000035
represents the blue channel value BmnThe fusion parameters of (1);
and S12, screening the gray face image.
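The RGB-to-grayscale conversion of step S112 is a per-pixel weighted fusion of the three channel values. A minimal sketch follows; the patent's fusion parameters are given only as images, so the widely used ITU-R BT.601 luma weights (0.299, 0.587, 0.114, summing to 1) are used here as stand-ins.

```python
def rgb_to_gray(rgb_image, w_r=0.299, w_g=0.587, w_b=0.114):
    """Convert an M x N RGB image, given as rows of (R, G, B) tuples,
    into a grayscale image: I_mn = w_r*R_mn + w_g*G_mn + w_b*B_mn,
    rounded to the nearest integer gray value."""
    return [[round(w_r * r + w_g * g + w_b * b) for (r, g, b) in row]
            for row in rgb_image]
```

Because the weights sum to 1, pure white (255, 255, 255) maps to gray value 255 and pure black to 0, preserving the 0–255 gray scale used in the screening steps.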
CN202110569312.4A 2021-05-25 2021-05-25 Intelligent management method for unattended field operation of transformer substation integrating video analysis technology Pending CN113297971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110569312.4A CN113297971A (en) 2021-05-25 2021-05-25 Intelligent management method for unattended field operation of transformer substation integrating video analysis technology


Publications (1)

Publication Number Publication Date
CN113297971A true CN113297971A (en) 2021-08-24

Family

ID=77324641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110569312.4A Pending CN113297971A (en) 2021-05-25 2021-05-25 Intelligent management method for unattended field operation of transformer substation integrating video analysis technology

Country Status (1)

Country Link
CN (1) CN113297971A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481347A (en) * 2017-09-30 2017-12-15 四川民工加网络科技有限公司 Attendance checking system and equipment for construction site
CN107633209A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Electronic installation, the method and storage medium of dynamic video recognition of face
CN109389729A (en) * 2018-12-03 2019-02-26 广东电网有限责任公司 A kind of more scene recognition of face monitoring systems of smart grid
CN109393624A (en) * 2018-11-28 2019-03-01 安徽清新互联信息科技有限公司 Multifunctional protection safety cap and its control method
CN110633623A (en) * 2019-07-23 2019-12-31 国网浙江省电力有限公司杭州供电公司 Management and control method for operation process of transformer substation worker
CN112465742A (en) * 2020-10-16 2021-03-09 重庆恢恢信息技术有限公司 Method for identifying and judging construction site reinforcement bar installation abnormity by fusing big data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zheng Shuquan et al.: "Industrial Intelligence Technology and Applications" (《工业智能技术与应用》), Shanghai Scientific and Technical Publishers, pages: 225 - 226 *

Similar Documents

Publication Publication Date Title
KR101942491B1 (en) Hybrid ai cctv mediation module device consisting of road traffic situation monitoring and real time traffic information analysis
CN110784511B (en) Intelligent street lamp system based on edge internet of things agent
CN103606277B (en) Intersection temporary traffic signal lamp and red-light running snapshot device
EP2296104A1 (en) Dynamic camera color correction device, and video search device using the same
JP4708192B2 (en) Dynamic camera color correction device and video search device using the same
CN104050679B (en) Illegal parking automatic evidence obtaining method
CN103337176A (en) Traffic violation snapshotting system and traffic violation snapshotting method
CN203422846U (en) Traffic violation snapshooting system
CN114785960B (en) 360 degree panorama vehicle event data recorder system based on wireless transmission technology
CN115661337A (en) Binocular vision-based three-dimensional reconstruction method for transformer substation operating personnel
Cohen et al. CCTV operational requirements manual 2009
KR102392822B1 (en) Device of object detecting and tracking using day type camera and night type camera and method of detecting and tracking object
CN107123242A (en) A kind of intelligent building Video Surveillance Alarm System
CN112001208A (en) Target detection method and device for vehicle blind area and electronic equipment
KR102169211B1 (en) apparatus and method for automatically detecting bird&#39;s cast
CN113297971A (en) Intelligent management method for unattended field operation of transformer substation integrating video analysis technology
CN105141860A (en) Infrared imaging system and method
KR101676444B1 (en) System and method for road-side automatic number plate recognition of multi-lane
CN102340628A (en) Camera and control method thereof
CN113822119A (en) Method and device for adjusting air quality in vehicle, storage medium and electronic equipment
CN105163030A (en) Field camera system based on infrared induction switches and working method thereof
CN113297970A (en) Intelligent control method for substation unattended field operation based on video analysis technology
CN105631425B (en) License plate recognition method and system based on video stream and intelligent digital camera
CN110666790B (en) Orchard live broadcast robot and orchard live broadcast system
CN113538967B (en) Vehicle-road cooperation device and method under crossroad scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination