CN113297970A - Intelligent control method for substation unattended field operation based on video analysis technology - Google Patents


Info

Publication number
CN113297970A
Authority
CN
China
Prior art keywords
face
image
value
gray
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110569307.3A
Other languages
Chinese (zh)
Inventor
李松霖
吕金生
付翠莲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Original Assignee
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority to CN202110569307.3A
Publication of CN113297970A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 Parts, details or accessories of helmets
    • A42B3/0406 Accessories for helmets
    • A42B3/0433 Detecting, signalling or lighting devices
    • A42B3/044 Lighting devices, e.g. helmets with lamps
    • A42B3/30 Mounting radio sets or communication systems
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an intelligent control method for unattended substation field operation based on video analysis technology. The method uses a protective helmet and comprises the following steps: S1, a substation technician enters the substation; S2, after entering the substation, a camera on the protective helmet transmits video images of the field operation process to the cloud management platform. The invention ensures the safety of personnel entering the substation and records the whole operation process.

Description

Intelligent control method for substation unattended field operation based on video analysis technology
Technical Field
The invention relates to the technical field of transformer substations, and in particular to an intelligent control method for unattended substation field operation based on video analysis technology.
Background
A transformer substation is a facility in an electric power system for converting voltage and current and for receiving and distributing electric energy. The substations in a power plant are step-up substations, which boost the electric energy generated by the generators and feed it into the high-voltage grid. Patent application No. 2018110972027, entitled "An intelligent safety control system for substation maintenance operation", discloses a system comprising a safety control host, a roll-up display screen and safety prompters. The safety control host comprises a first processor, and a first memory, a read-write module and a first sound module respectively connected with the first processor; the read-write module updates the maintenance operation information of the substation in the first memory. The roll-up display screen comprises a display part made of a flexible material that can be rolled into a reel. There is more than one safety prompter, each comprising a second processor, and a second memory, a proximity sensor, a second projection module and a second sound module respectively connected with the second processor. That system overcomes the shortcomings of existing substation maintenance safety control means: it is portable, convenient and quick to use, displays information intuitively, is not limited by the operation site, reduces visual dead angles, and lightens the burden of safety guardians.
Disclosure of Invention
The invention aims to solve at least the above technical problems in the prior art, and in particular provides an intelligent control method for unattended substation field operation based on video analysis technology.
To achieve the above purpose, the invention provides an intelligent control method for unattended substation field operation based on video analysis technology. The method uses a protective helmet and comprises the following steps:
S1, a substation technician enters the substation;
S2, after the technician enters the substation, a camera on the protective helmet transmits video images of the field operation process to the cloud management platform.
In a preferred embodiment of the present invention, the protective helmet comprises a protective helmet body. A lighting lamp fixing mount for fixedly mounting a lighting lamp is arranged on the front surface of the protective helmet body, and the lighting lamp is fixedly mounted on the lighting lamp fixing mount; a lighting lamp PCB fixing mount for fixedly mounting a lighting lamp PCB is arranged in the lighting lamp fixing mount, the lighting lamp PCB is fixedly mounted on the lighting lamp PCB fixing mount, and a lighting lamp driving module for driving the lighting lamp is arranged on the lighting lamp PCB. A brim is also arranged on the front side of the protective helmet body, and an arc-shaped supporting block is arranged at the bottom of the brim; an image audio acquisition module fixing mount for fixedly mounting an image audio acquisition module is arranged on the front side of the arc-shaped supporting block, the image audio acquisition module is fixedly mounted on that mount, and the image audio acquisition module comprises a camera, an audio input unit and an audio output unit. An infrared detection module fixing mount for fixedly mounting an infrared detection module and a temperature detection module fixing mount for fixedly mounting a temperature detection module are arranged on the inner side of the protective helmet body; the infrared detection module and the temperature detection module are fixedly mounted on their respective mounts.
a PCB circuit board fixing installation seat for fixedly installing a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly installed on the PCB circuit board fixing installation seat, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board; the wireless data transmission link of controller links to each other with wireless data transmission link module's data transmission end, the light control end of controller links to each other with light drive module's drive control end, the image data output of camera links to each other with the image data input of controller, the audio data output of audio input unit links to each other with the audio data input of controller, the audio data input of audio output unit links to each other with the audio data output of controller, the temperature data output of temperature detection module links to each other with the temperature data input of controller, the infrared detection data output of infrared detection module links to each other with the infrared data input of controller.
In a preferred embodiment of the present invention, step S1 includes the following steps:
S11, when a technician is about to enter the substation, the access control system communicates, via a wireless connection signal, with the wireless data transmission connection module in the protective helmet, obtains the attribute information of the wireless data transmission connection module, and queries the corresponding face information according to the attribute information query value; the face information found by this query is the comparison face;
S12, a face image acquisition module on the access control system acquires the face information of the technician about to enter the substation; face data processing is performed on the acquired face information, and the result of the face data processing is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
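The S11-S13 gate logic above can be sketched as follows. The matching metric, threshold, and database names are illustrative assumptions, since the patent does not specify how face consistency is computed.

```python
import numpy as np

ALERT_DB = []  # stands in for the warning-face storage database

def faces_match(comparison_face: np.ndarray, collected_face: np.ndarray,
                threshold: float = 10.0) -> bool:
    """Toy consistency check: mean absolute gray-value difference below threshold."""
    diff = np.abs(comparison_face.astype(float) - collected_face.astype(float))
    return float(diff.mean()) < threshold

def access_control(comparison_face: np.ndarray, collected_face: np.ndarray) -> str:
    # S13: open the entrance guard on a match; otherwise upload the
    # collected face to the warning-face storage database.
    if faces_match(comparison_face, collected_face):
        return "open"
    ALERT_DB.append(collected_face)
    return "denied"
```

In a real deployment the comparison would use a trained face-recognition model rather than a pixel difference; the control flow, however, mirrors steps S11-S13.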
In a preferred embodiment of the present invention, step S11 includes the following steps:
S111, the access control system communicates with the protective helmet and requests the attribute information of the wireless data transmission connection module; after the controller receives the request from the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value = <Attribute information, Algorithm type>,
wherein Property information query value represents the attribute information query value;
Attribute information represents the attribute information of the wireless data transmission connection module, which comprises the physical address of one of the wireless data transmission connection WiFi module, 3G module, 4G module, 5G module and Bluetooth module;
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, screening out the face feature value corresponding to the attribute information query value and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information being that the data of the protective helmet has not been recorded in the substation system;
and S114, obtaining the face information associated with the face characteristic value according to the face characteristic value obtained in the step S113.
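A minimal sketch of the S111-S114 lookup, assuming the attribute information is the module's physical (MAC) address and the query value is produced by a named hash algorithm; the database layout and function names are hypothetical.

```python
import hashlib

# query value -> enrolled face feature value (populated when a helmet is
# registered in the substation system; the layout is a hypothetical example)
face_feature_db: dict[str, str] = {}

def property_information_query_value(attribute_info: str,
                                     algorithm_type: str = "sha256") -> str:
    # <Attribute information, Algorithm type>: apply the named algorithm
    # operation to the module's physical address.
    h = hashlib.new(algorithm_type)
    h.update(attribute_info.encode("utf-8"))
    return h.hexdigest()

def lookup_face(attribute_info: str):
    q = property_information_query_value(attribute_info)
    if q not in face_feature_db:
        # S113: helmet data not recorded in the substation system
        return None, "helmet data not recorded in the substation system"
    # S114: return the face information associated with the feature value
    return face_feature_db[q], None
```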
In a preferred embodiment of the present invention, in step S113, the method for calculating the face feature value includes:
Face feature value = <Face information, Algorithm type>,
wherein Face information represents the face information, namely a face image;
Algorithm type represents the algorithm operation type;
<Face information, Algorithm type> represents applying the algorithm operation of type Algorithm type to the face information;
Face feature value represents the face feature value.
In a preferred embodiment of the present invention, step S12 includes the following steps:
S121, the access control system judges whether the acquired face image is a gray image:
if the face image collected by the access control system is a gray image, step S122 is executed;
if the face image collected by the access control system is not a gray image, the following steps are executed:
S1211, counting the total number of face images collected by the access control system, recorded as a; the face images are respectively A_1, A_2, A_3, ..., A_a, where A_1 is the 1st face image of technician A collected by the access control system, A_2 is the 2nd face image of technician A collected by the access control system, A_3 is the 3rd face image of technician A collected by the access control system, and A_a is the a-th face image of technician A collected by the access control system;
S1212, converting each RGB face image into a gray image by the following calculation formula:

A_i^gray = | I_11  I_12  I_13  ...  I_1N |
           | I_21  I_22  I_23  ...  I_2N |
           | I_31  I_32  I_33  ...  I_3N |
           | ...   ...   ...   ...  ... |
           | I_M1  I_M2  I_M3  ...  I_MN |,    i = 1, 2, 3, ..., a,

wherein A_i^gray represents the i-th gray face image; I_mn represents the gray value of the pixel point at the m-th row and n-th column position in the gray face image A_i^gray, with m = 1, 2, 3, ..., M and n = 1, 2, 3, ..., N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image;

I_mn = λ_R · R_mn + λ_G · G_mn + λ_B · B_mn,

wherein R_mn represents the red channel value of the pixel point at the m-th row and n-th column position in the RGB image; G_mn represents the green channel value of the pixel point at the m-th row and n-th column position in the RGB image; B_mn represents the blue channel value of the pixel point at the m-th row and n-th column position in the RGB image; λ_R represents the fusion parameter of the red channel value R_mn; λ_G represents the fusion parameter of the green channel value G_mn; λ_B represents the fusion parameter of the blue channel value B_mn;
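A sketch of the S1212 conversion. The fusion parameters λ_R, λ_G, λ_B are given numerically only in the patent's figures, so the common luminance weights 0.299/0.587/0.114 are used here purely as plausible stand-ins.

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray,
                lam_r: float = 0.299, lam_g: float = 0.587,
                lam_b: float = 0.114) -> np.ndarray:
    """Fuse an (M, N, 3) RGB image into an (M, N) gray image:
    I_mn = lam_r*R_mn + lam_g*G_mn + lam_b*B_mn."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (lam_r * r + lam_g * g + lam_b * b).round().astype(np.uint8)
```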
and S122, screening the gray-level face image.
In a preferred embodiment of the present invention, step S122 comprises the following steps:
S1221, dividing the gray face image A_i^gray into M′ gray face unit images, where M′ is a positive integer greater than or equal to 1:

A_i^gray = A_(i,1)^gray & A_(i,2)^gray & A_(i,3)^gray & ... & A_(i,M′)^gray,

wherein A_(i,1)^gray is the 1st gray face unit image, A_(i,2)^gray is the 2nd gray face unit image, A_(i,3)^gray is the 3rd gray face unit image, and A_(i,M′)^gray is the M′-th gray face unit image; & represents the image mosaic symbol;
S1222, calculating, for the m-th gray face unit image A_(i,m)^gray, a first screening value, a second screening value and a third screening value, where m is a positive integer less than or equal to M′;
S1223, if the screening value of the m-th gray face unit image A_(i,m)^gray is greater than or equal to a preset screening threshold, executing step S1224;
if the screening value of the m-th gray face unit image A_(i,m)^gray is smaller than the preset screening threshold, executing step S1225;
S1224, judging the relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray and the first operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray is greater than or equal to the first operation threshold of the image, setting pixel_ζ = 0;
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray is smaller than the first operation threshold of the image, setting pixel_ζ = 255;
S1225, judging the relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray and the second operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray is greater than or equal to the second operation threshold of the image, setting pixel_ζ = 255;
if the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image A_(i,m)^gray is smaller than the second operation threshold of the image, setting pixel_ζ = 0.
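The S1221-S1225 screening and per-unit binarization can be sketched as follows. The screening values and all thresholds are not numerically specified in the text, so a mean gray value and arbitrary threshold constants stand in for them.

```python
import numpy as np

def screen_gray_face(gray: np.ndarray, units: int = 4,
                     screen_thresh: float = 128.0,
                     first_op_thresh: int = 200,
                     second_op_thresh: int = 60) -> np.ndarray:
    # S1221: split into unit images (vertical strips); np.concatenate
    # below plays the role of the & mosaic symbol when re-stitching.
    strips = np.array_split(gray.astype(int), units, axis=1)
    out = []
    for strip in strips:
        # S1222/S1223: a mean-gray "screening value" stands in for the
        # patent's unspecified screening values.
        if strip.mean() >= screen_thresh:
            # S1224: binarize against the first operation threshold.
            strip = np.where(strip >= first_op_thresh, 0, 255)
        else:
            # S1225: binarize against the second operation threshold.
            strip = np.where(strip >= second_op_thresh, 255, 0)
        out.append(strip)
    return np.concatenate(out, axis=1).astype(np.uint8)
```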
In conclusion, by adopting the above technical scheme, the invention can ensure the safety of personnel entering the substation and record the whole operation process.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of the process of the present invention.
Fig. 2 is a schematic view of the structure of the protective helmet of the present invention.
Fig. 3 is a schematic view of another perspective structure of the protective helmet of the present invention.
Fig. 4 is a schematic circuit diagram of the audio input unit according to the present invention.
Fig. 5 is a schematic circuit diagram of the audio output unit according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides an intelligent control method for unattended substation field operation based on video analysis technology; the method uses a protective helmet made of an insulating, electricity-proof material. As shown in fig. 1, the method comprises the following steps:
S1, a substation technician enters the substation;
S2, after the technician enters the substation, the camera on the protective helmet transmits video images of the field operation process to the cloud management platform.
In a preferred embodiment of the present invention, as shown in fig. 2 and 3, the protective helmet includes a protective helmet body 8. A lighting lamp fixing mount 1 for fixedly mounting a lighting lamp 2 is disposed on the front surface of the protective helmet body 8, and the lighting lamp 2 is fixedly mounted on the lighting lamp fixing mount 1; a lighting lamp PCB fixing mount for fixedly mounting a lighting lamp PCB is disposed in the lighting lamp fixing mount 1, the lighting lamp PCB is fixedly mounted on the lighting lamp PCB fixing mount, and a lighting lamp driving module for driving the lighting lamp 2 is disposed on the lighting lamp PCB. A brim 3 is also provided on the front side of the protective helmet body 8, and an arc-shaped supporting block 7 is provided at the bottom of the brim 3; an image audio acquisition module fixing mount for fixedly mounting an image audio acquisition module 4 is provided on the front side of the arc-shaped supporting block 7, the image audio acquisition module 4 is fixedly mounted on that mount, and the image audio acquisition module comprises a camera 5, an audio input unit 6 and an audio output unit. An infrared detection module fixing mount for fixedly mounting an infrared detection module 10 and a temperature detection module fixing mount for fixedly mounting a temperature detection module 9 are arranged on the inner side of the protective helmet body 8; the infrared detection module 10 and the temperature detection module 9 are fixedly mounted on their respective mounts. The brim 3 shields hard light, prevents sunlight from affecting image acquisition, and keeps rainwater from wetting and blurring the lens on rainy days. The infrared detection module 10 detects whether a technician is wearing the protective helmet; the helmet works only when worn. The temperature detection module 9 detects the wearer's body temperature; when the collected temperature value is greater than or equal to a preset temperature threshold, an alarm prompt is sent to avoid high-temperature operation.
A PCB circuit board fixing mount for fixedly mounting a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly mounted on the PCB circuit board fixing mount, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board. The wireless data transmission end of the controller is connected to the data transmission end of the wireless data transmission connection module; the lighting control end of the controller is connected to the drive control end of the lighting lamp driving module; the image data output of the camera 5 is connected to the image data input of the controller; the audio data output of the audio input unit 6 is connected to the audio data input of the controller; the audio data input of the audio output unit is connected to the audio data output of the controller; the temperature data output of the temperature detection module 9 is connected to the temperature data input of the controller; and the infrared detection data output of the infrared detection module 10 is connected to the infrared data input of the controller. The wireless data transmission connection module comprises one or any combination of a wireless data transmission connection WiFi module, 3G module, 4G module, 5G module and Bluetooth module;
the wireless data transmission of controller connects the data transmission end of wiFi end and wireless data transmission connection wiFi module and links to each other, the wireless data transmission of controller connects the data transmission end of 3G end and wireless data transmission connection 3G module and links to each other, the wireless data transmission of controller connects the data transmission end of 4G end and wireless data transmission connection 4G module and links to each other, the wireless data transmission of controller connects the data transmission end of 5G end and wireless data transmission connection 5G module and links to each other, the wireless data transmission of controller connects the data transmission end of bluetooth end and wireless data transmission connection bluetooth module and links to each other.
In a preferred embodiment of the present invention, the audio input unit 6 includes the following, as shown in fig. 4: the selection control terminal of the audio collector MIC5 is connected to the first terminal of the resistor R64 and to the audio selection control terminal of the controller; the second terminal of the resistor R64 is connected to the power supply voltage VDD_1.8V; the clock terminal CLK of the audio collector MIC5 is connected to the audio input clock terminal of the controller; the DATA terminal of the audio collector MIC5 is connected to the audio data input terminal of the controller; the power ground terminal of the audio collector MIC5 is connected to power ground; the power supply voltage terminal VDD of the audio collector MIC5 is connected to the first terminal of the capacitor C50 and to the power supply voltage VDD_1.8V; and the second terminal of the capacitor C50 is connected to power ground. In this embodiment, the resistance of the resistor R64 is 10 kΩ, the capacitance of the capacitor C50 is 0.1 µF, and the model of the audio collector MIC5 is ZTS6032.
The audio output unit includes the following, as shown in fig. 5: the left channel terminal INL− of the audio driver chip U2 is connected to the first terminal of the capacitor C4; the second terminal of the capacitor C4 is connected to the first terminal of the capacitor C2 and the first terminal of the resistor R24; the second terminal of the resistor R24 is connected to the left channel negative terminal of the driver interface J4; the left channel terminal INL+ of the audio driver chip U2 is connected to the first terminal of the capacitor C5; the second terminal of the capacitor C5 is connected to the second terminal of the capacitor C2 and the first terminal of the resistor R25; and the second terminal of the resistor R25 is connected to the left channel positive terminal of the driver interface J4. The right channel terminal INR+ of the audio driver chip U2 is connected to the first terminal of the capacitor C6; the second terminal of the capacitor C6 is connected to the first terminal of the capacitor C3 and the first terminal of the resistor R26; the second terminal of the resistor R26 is connected to the right channel positive terminal of the driver interface J4; the right channel terminal INR− of the audio driver chip U2 is connected to the first terminal of the capacitor C7; the second terminal of the capacitor C7 is connected to the second terminal of the capacitor C3 and the first terminal of the resistor R27; and the second terminal of the resistor R27 is connected to the right channel negative terminal of the driver interface J4. The left channel ground first terminal of the driver interface J4 is connected to the first terminal of the transient suppression diode TVS26, whose second terminal is connected to power ground; the left channel ground second terminal of the driver interface J4 is connected to the first terminal of the transient suppression diode TVS27, whose second terminal is connected to power ground; the right channel ground first terminal of the driver interface J4 is connected to the first terminal of the transient suppression diode TVS28, whose second terminal is connected to power ground; and the right channel ground second terminal of the driver interface J4 is connected to the first terminal of the transient suppression diode TVS29, whose second terminal is connected to power ground. The digital ground terminal of the driver interface J4 is connected to the first terminal of the resistor R89, and the power ground terminal of the driver interface J4 is connected to the second terminal of the resistor R89; the audio data output terminal of the controller is connected to the driver interface J4;
a selection terminal G0 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R28 and a first terminal of a resistor R31, a second terminal of the resistor R28 is connected with a power supply voltage AVDD_3V3, a second terminal of the resistor R31 is connected with a digital ground, a selection terminal G1 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R29 and a first terminal of a resistor R30, a second terminal of the resistor R30 is connected with a power supply voltage AVDD_3V3, a second terminal of the resistor R29 is connected with a digital ground, a power supply ground terminal HPVSS of the audio driver chip U2 is connected with a first terminal of a capacitor C9, and a second terminal of the capacitor C9 is connected with the digital ground;
a charge pump terminal CPN of the audio driver chip U2 is connected to a first terminal of the capacitor C11, a charge pump terminal CPP of the audio driver chip U2 is connected to a second terminal of the capacitor C11, a power ground terminal PGND of the audio driver chip U2 is connected to digital ground, a power supply terminal HPVDD of the audio driver chip U2 is connected to a first terminal of the capacitor C10, and a second terminal of the capacitor C10 is connected to power ground;
a power supply terminal VDD of the audio driving chip U2 is respectively connected with a power supply voltage AVDD_3V3, a first terminal of a capacitor C8 and a first terminal of a capacitor C54, and a power supply ground terminal SGND of the audio driving chip U2 is respectively connected with a digital ground, a second terminal of the capacitor C8 and a second terminal of the capacitor C54;
an enable terminal EN of the audio driver chip U2 is connected to a first terminal of the resistor R32 and a first terminal of the resistor R33, a second terminal of the resistor R33 is connected to a power ground, a second terminal of the resistor R32 is connected to an audio driver chip enable terminal of the controller, a left channel audio output terminal OUTL of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS15 and a left channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS15 is connected to a digital ground, a right channel audio output terminal OUTR of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS19 and a right channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS19 is connected to a digital ground, and a ground terminal of the speaker interface J5 is connected to a digital ground; the loudspeaker interface J5 is connected with the left loudspeaker and the right loudspeaker; real-time audio input and output are thus provided for technicians, realizing remote conversation so that problems can be solved quickly; in this embodiment, the resistances of the resistor R24, the resistor R25, the resistor R26, and the resistor R27 are 560 Ω, the capacitances of the capacitor C2, the capacitor C3, and the capacitor C8 are 4.7 uF, the capacitances of the capacitor C4, the capacitor C5, the capacitor C6, and the capacitor C7 are 220 nF, the capacitance of the capacitor C54 is 10 uF, the capacitance of the capacitor C10 is 10 uF, the capacitances of the capacitor C9 and the capacitor C11 are 1 uF, the resistances of the resistor R32, the resistor R31, and the resistor R29 are 1 kΩ, and the resistances of the resistor R33, the resistor R28, and the resistor R30 are 130 Ω.
The illuminating lamp driving module includes: the base of the first triode is connected with a first end of the first resistor, and a second end of the first resistor is connected with the illuminating lamp control end of the controller; the collector of the first triode is respectively connected with a first end of the second resistor and the cathode of the first diode, and a second end of the second resistor is connected with the power supply voltage AVDD_3V3; the emitter of the first triode is connected with a first end of the input loop of the first normally open relay, a second end of the input loop of the first normally open relay is respectively connected with a first end of the third resistor and a first end of the fourth resistor, a second end of the third resistor is connected with the anode of the first diode, and a second end of the fourth resistor is connected with the power ground; the output circuit of the first normally open relay is connected in series in the illuminating lamp power supply circuit. When illumination is needed, the illuminating lamp control end of the controller outputs a turn-on level, the first triode conducts, the output circuit of the first normally open relay changes from the normally open state to the closed state, the illuminating lamp power supply circuit is closed, and the illuminating lamp lights up.
In a preferred embodiment of the present invention, step S1 includes the following steps:
S11, when a technician waits to enter the transformer substation, the access control system establishes wireless communication with the wireless data transmission connection module in the protective helmet, obtains the attribute information of the wireless data transmission connection module, computes an attribute information query value from the attribute information, and queries the corresponding face information according to the attribute information query value; the face information queried according to the attribute information query value is the comparison face;
S12, a face image acquisition module on the access control system acquires the face information of the technician waiting to enter the transformer substation; face data processing is performed on the face information acquired by the face image acquisition module, and the face information after face data processing is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
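As a rough sketch of the S11-S13 gate decision (assuming simple equality of processed face records; the function and storage names are illustrative, not taken from the patent):

```python
def gate_decision(comparison_face, collected_face, warning_db):
    """Decide whether to open the access gate (S13).

    comparison_face: face record looked up from the attribute
    information query value (S11).
    collected_face: processed face captured at the gate (S12).
    warning_db: list standing in for the warning face storage database.
    Returns True when the gate opens.
    """
    if comparison_face == collected_face:
        return True                      # faces consistent: open the access control
    warning_db.append(collected_face)    # inconsistent: archive the collected face
    return False

warnings = []
assert gate_decision("face-A", "face-A", warnings) is True
assert gate_decision("face-A", "face-B", warnings) is False
assert warnings == ["face-B"]
```

The warning-database append stands in for the upload described above; any real system would also log the time and gate identity.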
In a preferred embodiment of the present invention, step S11 includes the following steps:
S111, the access control system communicates with the protective helmet and requests the attribute information of the wireless data transmission connection module from the protective helmet; after the controller receives the acquisition request from the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value=<Attribute information,Algorithm type>,
wherein, Property information query value represents a Property information query value;
the Attribute information represents Attribute information of a wireless data transmission connection module, and the Attribute information of the wireless data transmission connection module comprises a physical address of one of a wireless data transmission connection WiFi module, a wireless data transmission connection 3G module, a wireless data transmission connection 4G module, a wireless data transmission connection 5G module and a wireless data transmission connection Bluetooth module;
algorithm type represents an Algorithm operation type;
< Attribute information, Algorithm type > indicates performing the algorithm operation of type Algorithm type on the attribute information of the wireless data transmission connection module; the algorithm operation type employs the MD5 hash algorithm.
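Since the algorithm operation type is MD5, the attribute information query value can be sketched as the MD5 digest of the module's physical (MAC) address; the hex-string form and the sample address below are assumptions for illustration only:

```python
import hashlib

def attribute_info_query_value(attribute_info: str) -> str:
    """Compute <Attribute information, Algorithm type> with MD5 (S112)."""
    return hashlib.md5(attribute_info.encode("utf-8")).hexdigest()

# A hypothetical WiFi-module MAC address; the digest is deterministic,
# so the same helmet always maps to the same query value.
q1 = attribute_info_query_value("00:1A:2B:3C:4D:5E")
q2 = attribute_info_query_value("00:1A:2B:3C:4D:5E")
assert q1 == q2
assert len(q1) == 32 and all(c in "0123456789abcdef" for c in q1)
```

Determinism is what makes the query value usable as a database key in step S113.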
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, retrieving the face feature value corresponding to the attribute information query value, and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information indicating that the data information of the protective helmet has not been recorded in the transformer substation system;
S114, obtaining the face information associated with the face feature value according to the face feature value obtained in step S113.
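The S113-S114 lookup can be sketched with two in-memory mappings standing in for the face feature value database and the associated face records (all names and data below are illustrative assumptions):

```python
def lookup_comparison_face(query_value, feature_db, face_db):
    """S113: find the face feature value for the query value;
    S114: return the face information associated with that feature value.
    Returns None when the helmet's data was never recorded (prompt case).
    """
    feature_value = feature_db.get(query_value)
    if feature_value is None:
        return None  # prompt: helmet data not recorded in the substation system
    return face_db[feature_value]

feature_db = {"qv-123": "fv-abc"}
face_db = {"fv-abc": "face-image-of-technician-A"}
assert lookup_comparison_face("qv-123", feature_db, face_db) == "face-image-of-technician-A"
assert lookup_comparison_face("qv-999", feature_db, face_db) is None
```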
In a preferred embodiment of the present invention, in step S113, the method for calculating the face feature value includes:
Face feature value=<Face information,Algorithm type>,
wherein, the Face information represents Face information, namely a Face image;
algorithm type represents an Algorithm operation type;
< Face information, Algorithm type > represents performing the algorithm operation of type Algorithm type on the Face information;
the Face feature value represents a Face feature value.
In a preferred embodiment of the present invention, step S12 includes the following steps:
s121, judging whether the acquired face image is a gray image by the access control system:
if the face image collected by the access control system is a gray image, executing step S122;
if the face image collected by the access control system is not a gray image, executing the following steps:
S1211, counting the total number of the face images collected by the access control system, recorded as a, where a is the total number of the face images collected by the access control system; the face images are respectively A1, A2, A3, ..., Aa, where A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
and S1212, converting the RGB face image into a gray image through the following calculation formula:
Imn = α × Rmn + β × Gmn + γ × Bmn,

wherein, the i-th gray face image is denoted A_i^gray, i = 1, 2, 3, ..., a, and is the M × N matrix of gray values

A_i^gray =
[ I11  I12  I13  ...  I1N
  I21  I22  I23  ...  I2N
  I31  I32  I33  ...  I3N
  ...
  IM1  IM2  IM3  ...  IMN ];

Imn represents the gray value of the pixel point at the m-th row, n-th column position in the gray face image A_i^gray; m = 1, 2, 3, ..., M, n = 1, 2, 3, ..., N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image; thus I11 represents the gray value of the pixel point at the 1st-row, 1st-column position, I12 the gray value at the 1st-row, 2nd-column position, and so on up to IMN, the gray value at the M-th-row, N-th-column position;

Rmn represents the red channel value of the pixel point at the m-th row, n-th column position in the RGB image;

Gmn represents the green channel value of the pixel point at the m-th row, n-th column position in the RGB image;

Bmn represents the blue channel value of the pixel point at the m-th row, n-th column position in the RGB image;

α represents the fusion parameter of the red channel value Rmn;

β represents the fusion parameter of the green channel value Gmn;

γ represents the fusion parameter of the blue channel value Bmn;
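The per-pixel RGB-to-gray conversion just described (a weighted fusion of the red, green and blue channel values) can be sketched as follows; the patent gives the fusion parameters only as a formula image, so the standard luminance weights 0.299 / 0.587 / 0.114 are used here purely as an assumption:

```python
def rgb_to_gray(rgb_image, alpha=0.299, beta=0.587, gamma=0.114):
    """Convert an RGB image (nested lists of (R, G, B) tuples, one inner
    list per image row) into a gray image with the same M x N layout,
    using Imn = alpha*Rmn + beta*Gmn + gamma*Bmn per pixel."""
    return [
        [round(alpha * r + beta * g + gamma * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# One-row image: pure red, pure green, pure blue pixels.
gray = rgb_to_gray([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]])
assert gray == [[76, 150, 29]]
```

With these weights a pure green pixel maps to the brightest gray value, matching the eye's higher sensitivity to green; other fusion parameters would only change the relative weighting.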
and S122, screening the gray-level face image.
In a preferred embodiment of the present invention, step S122 includes the following steps:

S1221, let the gray face image be divided into M gray-face unit images, where M is a positive integer greater than or equal to 1:

A^gray = A_1^unit & A_2^unit & A_3^unit & ... & A_M^unit,

wherein A_1^unit is the 1st gray-face unit image, A_2^unit is the 2nd gray-face unit image, A_3^unit is the 3rd gray-face unit image, A_M^unit is the M-th gray-face unit image, and & represents the image mosaic symbol;

S1222, for the m-th gray-face unit image A_m^unit, calculating a first screening value S1m, a second screening value S2m and a third screening value S3m, where m is a positive integer less than or equal to M;

S1223, if the screening value of the m-th gray-face unit image A_m^unit is greater than or equal to the preset screening threshold, executing step S1224; if the screening value of the m-th gray-face unit image A_m^unit is smaller than the preset screening threshold, executing step S1225;

S1224, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th gray-face unit image A_m^unit and the first operation threshold of the image:

if the gray value pixelζ is greater than or equal to the first operation threshold of the image, let pixelζ = 0;

if the gray value pixelζ is smaller than the first operation threshold of the image, let pixelζ = 255;

S1225, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th gray-face unit image A_m^unit and the second operation threshold of the image:

if the gray value pixelζ is greater than or equal to the second operation threshold of the image, let pixelζ = 255;

if the gray value pixelζ is smaller than the second operation threshold of the image, let pixelζ = 0.
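A minimal sketch of the S1221-S1225 flow for a single unit image follows; the patent's screening value and operation thresholds are given only as formula images, so here all three are assumed, for illustration only, to be the unit's mean gray value, and the unit is treated as a flat list of pixels:

```python
def binarize_unit(pixels, screening_threshold):
    """Binarize one gray-face unit image (steps S1223-S1225).

    Assumption for illustration: the unit's screening value and its
    first/second operation thresholds are all taken as the mean gray
    value of the unit's pixels.
    """
    mean = sum(pixels) / len(pixels)
    if mean >= screening_threshold:
        # S1224: inverted mapping around the (assumed) first operation threshold
        return [0 if p >= mean else 255 for p in pixels]
    # S1225: normal mapping around the (assumed) second operation threshold
    return [255 if p >= mean else 0 for p in pixels]

unit = [10, 20, 200, 250]                              # mean = 120
assert binarize_unit(unit, 100) == [255, 255, 0, 0]    # bright unit: inverted
assert binarize_unit(unit, 200) == [0, 0, 255, 255]    # dim unit: normal
```

The essential structure, regardless of the exact formulas, is a per-block choice between an inverting and a non-inverting binarization.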
In a preferred embodiment of the present invention, in step S1222, the first screening value S1m of the m-th gray-face unit image A_m^unit is calculated by:

S1m = [calculation formula given as an image in the original document],

wherein Km represents the number of pixel points in the m-th gray-face unit image A_m^unit, and pixelζ represents the gray value of the ζ-th pixel point in the m-th gray-face unit image A_m^unit;

or/and the second screening value S2m of the m-th gray-face unit image A_m^unit is calculated by:

S2m = [calculation formula given as an image in the original document],

wherein Km represents the number of pixel points in the m-th gray-face unit image A_m^unit; pixelζ represents the gray value of the ζ-th pixel point, and pixelξ represents the gray value of the ξ-th pixel point, in the m-th gray-face unit image A_m^unit; λ1 represents a first selection number and λ2 represents a second selection number, each given by a formula image in the original document;

or/and the third screening value S3m of the m-th gray-face unit image A_m^unit is calculated by:

S3m = [calculation formula given as an image in the original document],

wherein Km represents the number of pixel points in the m-th gray-face unit image A_m^unit, and pixelζ represents the gray value of the ζ-th pixel point in the m-th gray-face unit image A_m^unit.
In a preferred embodiment of the present invention, in step S1223, the screening value Sm of the m-th gray-face unit image A_m^unit is calculated by:

Sm = [calculation formula given as an image in the original document],

wherein Km represents the number of pixel points in the m-th gray-face unit image A_m^unit; pixelζ represents the gray value of the ζ-th pixel point in the m-th gray-face unit image A_m^unit; Sm represents the screening value of the m-th gray-face unit image A_m^unit.
In a preferred embodiment of the present invention, in step S1224, the first operation threshold T1 of the image is calculated by:

T1 = [calculation formula given as an image in the original document],

wherein T1 represents the first operation threshold of the image; S1m, S2m and S3m represent the first, second and third screening values of the m-th gray-face unit image A_m^unit; a represents a first screening adjustment coefficient, b represents a second screening adjustment coefficient, c represents a third screening adjustment coefficient, and a + b + c = 1; Km represents the number of pixel points in the m-th gray-face unit image A_m^unit; pixelζ represents the gray value of the ζ-th pixel point, and pixelξ represents the gray value of the ξ-th pixel point, in the m-th gray-face unit image A_m^unit.
In a preferred embodiment of the present invention, in step S1225, the second operation threshold T2 of the image is calculated by:

T2 = [calculation formula given as an image in the original document],

wherein T2 represents the second operation threshold of the image; S1m, S2m and S3m represent the first, second and third screening values of the m-th gray-face unit image A_m^unit; a represents a first screening adjustment coefficient, b represents a second screening adjustment coefficient, c represents a third screening adjustment coefficient, and a + b + c = 1; Km represents the number of pixel points in the m-th gray-face unit image A_m^unit; pixelζ represents the gray value of the ζ-th pixel point, and pixelξ represents the gray value of the ξ-th pixel point, in the m-th gray-face unit image A_m^unit.
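Because the three screening adjustment coefficients satisfy a + b + c = 1, each operation threshold behaves like a convex combination of the three screening values. The sketch below assumes that functional form, and assumes mean, mid-range and median gray value as stand-ins for the three screening values, since the patent gives their formulas only as images:

```python
import statistics

def operation_threshold(pixels, a=0.5, b=0.3, c=0.2):
    """Illustrative operation threshold T = a*S1 + b*S2 + c*S3, assuming
    S1 = mean, S2 = (max+min)/2 and S3 = median gray value of the unit."""
    assert abs(a + b + c - 1.0) < 1e-9  # screening adjustment coefficients sum to 1
    s1 = sum(pixels) / len(pixels)
    s2 = (max(pixels) + min(pixels)) / 2
    s3 = statistics.median(pixels)
    return a * s1 + b * s2 + c * s3

t = operation_threshold([0, 50, 100, 150, 200])
assert abs(t - 100) < 1e-9      # all three stand-ins equal 100 here
assert 0 <= t <= 255            # a convex combination stays in the gray range
```

Whatever the true screening-value formulas, the constraint a + b + c = 1 guarantees the threshold stays within the range spanned by the three values.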
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A transformer substation unattended field operation intelligent control method based on a video analysis technology, the method using a protective helmet, characterized by comprising the following steps:
s1, the technical personnel of the transformer substation enter the transformer substation;
and S2, after entry into the transformer substation, the camera on the protective helmet transmits the video images of the field operation process to the cloud platform, which manages the video images.
2. The intelligent substation operation management and control method based on the video analysis technology is characterized in that a protective helmet comprises a protective helmet body (8), an illuminating lamp fixing installation seat (1) for fixedly installing an illuminating lamp (2) is arranged on the front face of the protective helmet body (8), the illuminating lamp (2) is fixedly installed on the illuminating lamp fixing installation seat (1), an illuminating lamp PCB fixing installation seat for fixedly installing an illuminating lamp PCB is arranged in the illuminating lamp fixing installation seat (1), the illuminating lamp PCB is fixedly installed on the illuminating lamp PCB fixing installation seat, and an illuminating lamp driving module for driving the illuminating lamp (2) to work is arranged on the illuminating lamp PCB; the front side of the protective helmet body (8) is also provided with a brim (3), the bottom of the brim (3) is provided with an arc-shaped supporting block (7), the front side of the arc-shaped supporting block (7) is provided with an image audio acquisition module fixing mounting seat for fixedly mounting an image audio acquisition module (4), the image audio acquisition module (4) is fixedly mounted on the image audio acquisition module fixing mounting seat, and the image audio acquisition module comprises a camera (5), an audio input unit (6) and an audio output unit; an infrared detection module fixing mounting seat for fixedly mounting an infrared detection module (10) and a temperature detection module fixing mounting seat for fixedly mounting a temperature detection module (9) are arranged on the inner side of the protective helmet body (8), the infrared detection module (10) is fixedly mounted on the infrared detection module fixing mounting seat, and the temperature detection module (9) is fixedly mounted on the temperature detection module fixing mounting seat;
a PCB circuit board fixing installation seat for fixedly installing a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly installed on the PCB circuit board fixing installation seat, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board; the wireless data transmission link of controller links to each other with wireless data transmission link module's data transmission end, the light control end of controller links to each other with light drive module's drive control end, the image data output of camera (5) links to each other with the image data input of controller, the audio data output of audio input unit (6) links to each other with the audio data input of controller, the audio data input of audio output unit links to each other with the audio data output of controller, the temperature data output of temperature detection module (9) links to each other with the temperature data input of controller, the infrared detection data output of infrared detection module (10) links to each other with the infrared data input of controller.
3. The intelligent management and control method for the unmanned permission field work of the substation based on the video analysis technology of claim 1, wherein the step S1 comprises the following steps:
S11, when a technician waits to enter the transformer substation, the access control system establishes wireless communication with the wireless data transmission connection module in the protective helmet, obtains the attribute information of the wireless data transmission connection module, computes an attribute information query value from the attribute information, and queries the corresponding face information according to the attribute information query value; the face information queried according to the attribute information query value is the comparison face;
S12, a face image acquisition module on the access control system acquires the face information of the technician waiting to enter the transformer substation; face data processing is performed on the face information acquired by the face image acquisition module, and the face information after face data processing is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
4. The intelligent management and control method for the unmanned permission field work of the substation based on the video analysis technology as claimed in claim 3, wherein the step S11 comprises the following steps:
S111, the access control system communicates with the protective helmet and requests the attribute information of the wireless data transmission connection module from the protective helmet; after the controller receives the acquisition request from the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value=<Attribute information,Algorithm type>,
wherein, Property information query value represents a Property information query value;
the Attribute information represents Attribute information of a wireless data transmission connection module, and the Attribute information of the wireless data transmission connection module comprises a physical address of one of a wireless data transmission connection WiFi module, a wireless data transmission connection 3G module, a wireless data transmission connection 4G module, a wireless data transmission connection 5G module and a wireless data transmission connection Bluetooth module;
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, retrieving the face feature value corresponding to the attribute information query value, and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information indicating that the data information of the protective helmet has not been recorded in the transformer substation system;
S114, obtaining the face information associated with the face feature value according to the face feature value obtained in step S113.
5. The intelligent substation operation management and control method based on the video analysis technology of claim 4 is characterized in that in step S113, the face feature value is calculated by the following method:
Face feature value=<Face information,Algorithm type>,
wherein, the Face information represents Face information, namely a Face image;
algorithm type represents an Algorithm operation type;
< Face information, Algorithm type > represents performing the algorithm operation of type Algorithm type on the Face information;
the Face feature value represents a Face feature value.
6. The intelligent management and control method for the unmanned permission field work of the substation based on the video analysis technology as claimed in claim 3, wherein the step S12 comprises the following steps:
s121, judging whether the acquired face image is a gray image by the access control system:
if the face image collected by the access control system is a gray image, executing step S122;
if the face image collected by the access control system is not a gray image, executing the following steps:
s1211, counting the total number of the face images collected by the access control system, and recording as a, a is collected by the access control systemThe total number of the face images is A1、A2、A3、……、Aa,A1The 1 st face image of technician A, collected for the access control system22 nd face image of technician A collected for access control system, A3The 3 rd face image of technician A, collected for the access control systemaThe method comprises the steps of collecting the a-th face image of a technician A for an access control system;
S1212, converting the RGB face image into a gray image through the following calculation formula:
I_mn = λ_R × R_mn + λ_G × G_mn + λ_B × B_mn,
wherein I_i^gray represents the i-th gray face image, i = 1, 2, 3, …, a; I_mn represents the gray value of the pixel point at the m-th row and n-th column of the gray face image I_i^gray; m = 1, 2, 3, …, M and n = 1, 2, 3, …, N, where M = width × Resolution, M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, N represents the total number of vertical pixel points, and high represents the height value of the RGB face image; accordingly, I_11 is the gray value of the pixel point at row 1, column 1 of the gray face image, I_12 at row 1, column 2, I_13 at row 1, column 3, …, I_1N at row 1, column N; I_21 at row 2, column 1, I_22 at row 2, column 2, I_23 at row 2, column 3, …, I_2N at row 2, column N; I_31 at row 3, column 1, I_32 at row 3, column 2, I_33 at row 3, column 3, …, I_3N at row 3, column N; …; and I_M1 at row M, column 1, I_M2 at row M, column 2, I_M3 at row M, column 3, …, I_MN at row M, column N;
R_mn represents the red channel value of the pixel point at the m-th row and n-th column of the RGB image;
G_mn represents the green channel value of the pixel point at the m-th row and n-th column of the RGB image;
B_mn represents the blue channel value of the pixel point at the m-th row and n-th column of the RGB image;
λ_R represents the fusion parameter of the red channel value R_mn;
λ_G represents the fusion parameter of the green channel value G_mn;
λ_B represents the fusion parameter of the blue channel value B_mn;
and S122, screening the gray-level face image.
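The channel-fusion formula of step S1212 can be sketched as follows. The concrete fusion parameters are unreadable in the source document; the ITU-R BT.601 luma weights used below are a common choice and an assumption, not the patent's values.

```python
# Assumed fusion parameters (ITU-R BT.601 luma weights), one per channel.
LAMBDA_R, LAMBDA_G, LAMBDA_B = 0.299, 0.587, 0.114

def rgb_to_gray(rgb_image):
    """Convert an M x N RGB image, given as nested lists of (R, G, B) tuples,
    into an M x N grid of gray values via I_mn = λ_R*R + λ_G*G + λ_B*B."""
    return [
        [round(LAMBDA_R * r + LAMBDA_G * g + LAMBDA_B * b) for (r, g, b) in row]
        for row in rgb_image
    ]
```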
7. The intelligent management and control method for the unattended field operation of the substation based on the video analysis technology as claimed in claim 6, wherein step S122 comprises the following steps:
S1221, dividing the gray face image I^gray into M gray face unit images, wherein M is a positive integer greater than or equal to 1: the 1st gray face unit image I_1^gray, the 2nd gray face unit image I_2^gray, the 3rd gray face unit image I_3^gray, …, and the M-th gray face unit image I_M^gray, wherein
I^gray = I_1^gray & I_2^gray & I_3^gray & … & I_M^gray,
and & represents the image mosaic (stitching) symbol;
S1222, calculating the screening value of the m-th gray face unit image I_m^gray, wherein m is a positive integer less than or equal to M;
S1223, if the screening value of the m-th gray face unit image I_m^gray is greater than or equal to a preset screening threshold, executing step S1224;
if the screening value of the m-th gray face unit image I_m^gray is smaller than the preset screening threshold, executing step S1225;
S1224, judging the size relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image I_m^gray and the first operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in I_m^gray is greater than or equal to the first operation threshold of the image, letting pixel_ζ = 0;
if the gray value pixel_ζ of the ζ-th pixel point in I_m^gray is smaller than the first operation threshold of the image, letting pixel_ζ = 255;
S1225, judging the size relation between the gray value pixel_ζ of the ζ-th pixel point in the m-th gray face unit image I_m^gray and the second operation threshold of the image:
if the gray value pixel_ζ of the ζ-th pixel point in I_m^gray is greater than or equal to the second operation threshold of the image, letting pixel_ζ = 255;
if the gray value pixel_ζ of the ζ-th pixel point in I_m^gray is smaller than the second operation threshold of the image, letting pixel_ζ = 0.
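Steps S1221–S1225 amount to a per-block dual-threshold binarization. A minimal sketch, assuming each unit image is a flat list of gray values and that the screening value is the block's mean gray value (the patent's screening formula is unreadable in the source):

```python
def binarize_units(units, screening_threshold, first_threshold, second_threshold):
    """Binarize each gray face unit image per steps S1223-S1225.
    units: list of unit images, each a flat list of gray values (0-255).
    The mean-gray screening value is an assumption for illustration."""
    out = []
    for unit in units:
        screening_value = sum(unit) / len(unit)  # assumed screening value
        if screening_value >= screening_threshold:
            # S1224: inverse binarization against the first operation threshold.
            out.append([0 if p >= first_threshold else 255 for p in unit])
        else:
            # S1225: normal binarization against the second operation threshold.
            out.append([255 if p >= second_threshold else 0 for p in unit])
    return out
```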
CN202110569307.3A 2021-05-25 2021-05-25 Intelligent control method for substation unattended field operation based on video analysis technology Pending CN113297970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110569307.3A CN113297970A (en) 2021-05-25 2021-05-25 Intelligent control method for substation unattended field operation based on video analysis technology


Publications (1)

Publication Number Publication Date
CN113297970A true CN113297970A (en) 2021-08-24

Family

ID=77324653


Country Status (1)

Country Link
CN (1) CN113297970A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481347A (en) * 2017-09-30 2017-12-15 四川民工加网络科技有限公司 Attendance checking system and equipment for construction site
CN109393624A (en) * 2018-11-28 2019-03-01 安徽清新互联信息科技有限公司 Multifunctional protection safety cap and its control method
CN110633623A (en) * 2019-07-23 2019-12-31 国网浙江省电力有限公司杭州供电公司 Management and control method for operation process of transformer substation worker
CN112465742A (en) * 2020-10-16 2021-03-09 重庆恢恢信息技术有限公司 Method for identifying and judging construction site reinforcement bar installation abnormity by fusing big data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zheng Shuquan et al., "Industrial Intelligence Technology and Applications" (工业智能技术与应用), Shanghai Scientific and Technical Publishers, pages 225-226 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination