CN113297970A - Intelligent control method for substation unattended field operation based on video analysis technology - Google Patents
- Publication number
- CN113297970A (application CN202110569307.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- value
- gray
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/172 — Human faces: classification, e.g. identification
- A42B3/0433 — Helmet accessories: detecting, signalling or lighting devices
- A42B3/044 — Helmet accessories: lighting devices, e.g. helmets with lamps
- A42B3/30 — Helmets: mounting radio sets or communication systems
- G07C9/37 — Individual registration on entry or exit, not involving the use of a pass, in combination with an identity check using biometric data
Abstract
The invention provides an intelligent control method for substation unattended field operation based on a video analysis technology. The method uses a protective helmet and comprises the following steps: S1, substation technicians enter the transformer substation; and S2, after work in the substation begins, the camera on the protective helmet transmits video images of the field operation process to the cloud platform for management. The invention can ensure the safety of personnel entering the transformer substation and record the entire operation process.
Description
Technical Field
The invention relates to the technical field of transformer substations, and in particular to an intelligent control method for substation unattended field operation based on a video analysis technology.
Background
A transformer substation is a facility in an electric power system for converting voltage and current, and for receiving and distributing electric energy. Substations in power plants are step-up substations, used to boost the electric energy generated by the generators and feed it into the high-voltage network. Patent application No. 2018110972027, entitled "An intelligent safety control system for substation maintenance operation", discloses a system comprising a safety control host, a roll-up display screen and one or more safety prompters. The safety control host comprises a first processor, a first memory connected to the first processor, a read-write module and a first sound module, the read-write module and the first sound module being used to update the substation maintenance operation information in the first memory. The roll-up display screen comprises a display part made of a flexible material that can be rolled into a reel. Each safety prompter comprises a second processor and, respectively connected to it, a second memory, a proximity sensor, a second projection module and a second sound module. That system addresses the shortcomings of existing substation maintenance safety-control means: it is easy to carry, convenient and quick to use, displays information intuitively, is not limited by the work site, reduces visual dead angles and lightens the burden on safety guardians.
Disclosure of Invention
The invention aims to solve at least the above technical problems in the prior art, and in particular creatively provides an intelligent control method for substation unattended field operation based on a video analysis technology.
In order to achieve the above purpose, the invention provides an intelligent control method for substation unattended field operation based on a video analysis technology. The method uses a protective helmet and comprises the following steps:
S1, substation technicians enter the transformer substation;
and S2, after work in the substation begins, the camera on the protective helmet transmits video images of the field operation process to the cloud platform for management.
In a preferred embodiment of the present invention, the protective helmet comprises a protective helmet body. A lighting lamp fixing mount for fixedly mounting a lighting lamp is arranged on the front surface of the helmet body, and the lighting lamp is fixedly mounted on it; inside the lighting lamp fixing mount is a fixing mount for a lighting lamp PCB, on which the lighting lamp PCB is fixedly mounted, and a lighting lamp driving module for driving the lighting lamp is arranged on this PCB. The front side of the helmet body is also provided with a brim, the bottom of the brim carries an arc-shaped supporting block, and the front side of the arc-shaped supporting block carries a fixing mount for an image and audio acquisition module, on which that module is fixedly mounted; the image and audio acquisition module comprises a camera, an audio input unit and an audio output unit. The inner side of the helmet body carries a fixing mount for an infrared detection module and a fixing mount for a temperature detection module, on which the infrared detection module and the temperature detection module are respectively fixedly mounted.
a PCB circuit board fixing installation seat for fixedly installing a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly installed on the PCB circuit board fixing installation seat, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board; the wireless data transmission link of controller links to each other with wireless data transmission link module's data transmission end, the light control end of controller links to each other with light drive module's drive control end, the image data output of camera links to each other with the image data input of controller, the audio data output of audio input unit links to each other with the audio data input of controller, the audio data input of audio output unit links to each other with the audio data output of controller, the temperature data output of temperature detection module links to each other with the temperature data input of controller, the infrared detection data output of infrared detection module links to each other with the infrared data input of controller.
In a preferred embodiment of the present invention, step S1 includes the following steps:
S11, when a technician is waiting to enter the transformer substation, the access control system establishes a wireless connection with the wireless data transmission connection module in the protective helmet, obtains the attribute information of that module, and queries the corresponding face information according to the attribute information query value; the face information found by this query is the comparison face;
S12, a face image acquisition module on the access control system acquires the face information of the technician about to enter the substation; face data processing is performed on this face information, and the result is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the door;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to the warning face storage database.
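The S11 to S13 gate logic above can be sketched in a few lines. This is a hypothetical illustration only: the function names (`attribute_query_value`, `check_entry`, `faces_match`), the SHA-256 algorithm type, and the placeholder face comparison are all assumptions not taken from the patent.

```python
import hashlib

REGISTERED = {}   # attribute-information query value -> enrolled comparison face
ALERT_DB = []     # warning-face storage database

def attribute_query_value(attribute_info, algorithm="sha256"):
    # <Attribute information, Algorithm type>: apply the algorithm operation
    # type to the module's attribute information (e.g. a physical address)
    return hashlib.new(algorithm, attribute_info.encode()).hexdigest()

def faces_match(a, b, tol=1e-6):
    # placeholder comparison; a real system would compare face feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) < tol

def check_entry(helmet_address, collected_face):
    """S11-S13: look up the comparison face by the helmet's attribute
    information, compare it with the collected face, and either open the
    door or store the face in the warning database."""
    key = attribute_query_value(helmet_address)
    comparison_face = REGISTERED.get(key)
    if comparison_face is None:
        return "helmet not recorded in substation system"
    if faces_match(comparison_face, collected_face):
        return "open door"
    ALERT_DB.append(collected_face)   # S13: mismatch -> warning database
    return "denied"
```

A match opens the door; a mismatch leaves the door closed and appends the collected face to `ALERT_DB`, mirroring the warning-face upload in S13.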
In a preferred embodiment of the present invention, step S11 includes the following steps:
S111, the access control system communicates with the protective helmet and requests the attribute information of the wireless data transmission connection module; after the controller receives this request from the access control system, it sends the attribute information of the wireless data transmission connection module to the access control system;
S112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, it performs the following operation on the received attribute information:
Property information query value = <Attribute information, Algorithm type>,
wherein Property information query value represents the attribute information query value;
Attribute information represents the attribute information of the wireless data transmission connection module, which comprises the physical address of one of a WiFi module, a 3G module, a 4G module, a 5G module and a Bluetooth module for wireless data transmission connection; and Algorithm type represents the algorithm operation type applied to the attribute information;
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, screening out the face feature value corresponding to the query value, and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information being that the data of this protective helmet has not been recorded in the substation system;
and S114, obtaining the face information associated with the face feature value obtained in step S113.
In a preferred embodiment of the present invention, in step S113, the face feature value is calculated as:
Face feature value = <Face information, Algorithm type>,
wherein Face information represents the face information, namely a face image;
Algorithm type represents the algorithm operation type;
<Face information, Algorithm type> represents applying the algorithm operation type Algorithm type to the face information Face information;
and Face feature value represents the resulting face feature value.
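The <data, Algorithm type> operation above can be illustrated with a short sketch. The patent does not name a concrete algorithm operation type; SHA-256 here, the function name `feature_value`, and the byte-string stand-in for real image data are assumptions for illustration only.

```python
import hashlib

def feature_value(data, algorithm_type="sha256"):
    # <data, Algorithm type>: apply the chosen algorithm operation type to
    # the input bytes and return the resulting value as a hex string
    return hashlib.new(algorithm_type, data).hexdigest()

face_information = b"example face image bytes"   # stand-in for real image data
face_feature_value = feature_value(face_information)
print(face_feature_value[:16])   # prefix of the derived face feature value
```

The same helper covers the attribute information query value of step S112, since both are the same <data, Algorithm type> construction applied to different inputs.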
In a preferred embodiment of the present invention, step S12 includes the following steps:
S121, the access control system judges whether the acquired face image is a grayscale image:
if the face image collected by the access control system is a grayscale image, executing step S122;
if the face image collected by the access control system is not a grayscale image, executing the following steps:
S1211, counting the total number of face images collected by the access control system, recorded as a; the images are denoted A1, A2, A3, ..., Aa, where A1 is the 1st face image of technician A collected by the access control system, A2 the 2nd, A3 the 3rd, ..., and Aa the a-th face image of technician A collected by the access control system;
and S1212, converting the RGB face image into a grayscale image through the following calculation formula (standard luminance weighting):
Imn = 0.299 × Rmn + 0.587 × Gmn + 0.114 × Bmn,
wherein Imn represents the gray value of the pixel point at the m-th row and n-th column of the grayscale face image; m = 1, 2, 3, ..., M and n = 1, 2, 3, ..., N; M = width × Resolution is the total number of horizontal pixel points, where width represents the width value of the RGB face image and Resolution represents its resolution; N = high × Resolution is the total number of vertical pixel points, where high represents the height value of the RGB face image; thus I11 is the gray value of the pixel point at row 1, column 1, I12 at row 1, column 2, I13 at row 1, column 3, ..., and IMN at row M, column N;
Rmn represents the red channel value of the pixel point at the m-th row and n-th column of the RGB image;
Gmn represents the green channel value of the pixel point at the m-th row and n-th column of the RGB image;
and Bmn represents the blue channel value of the pixel point at the m-th row and n-th column of the RGB image.
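The S1212 conversion can be sketched directly from the formula. Note that the 0.299/0.587/0.114 coefficients are the standard luminance weights assumed above, not values stated verbatim in the source.

```python
def rgb_to_gray(rgb):
    """Convert an M x N image given as rows of (R, G, B) tuples to gray
    values Imn = 0.299*Rmn + 0.587*Gmn + 0.114*Bmn, rounded to integers."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

# pure red, green, blue and white pixels as a 2 x 2 sample image
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(rgb_to_gray(img))   # prints [[76, 150], [29, 255]]
```

Green dominates the weighting because the human eye is most sensitive to it, which is why a pure green pixel maps to a brighter gray than pure red or blue.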
and S122, screening the gray-level face image.
In a preferred embodiment of the present invention, step S122 includes the following steps:
S1221, dividing the grayscale face image into M grayscale face unit images, where M is a positive integer greater than or equal to 1, namely the 1st grayscale face unit image, the 2nd grayscale face unit image, the 3rd grayscale face unit image, ..., and the M-th grayscale face unit image, with & representing the image stitching symbol that recombines the unit images into the full grayscale face image;
S1222, calculating a first screening value and a second screening value for the m-th grayscale face unit image and obtaining its screening value, m being a positive integer less than or equal to M;
S1223, if the screening value of the m-th grayscale face unit image is greater than or equal to the preset screening threshold, executing step S1224;
if the screening value of the m-th grayscale face unit image is smaller than the preset screening threshold, executing step S1225;
S1224, judging the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image against the first operation threshold of the image:
if pixelζ is greater than or equal to the first operation threshold of the image, let pixelζ = 0;
if pixelζ is smaller than the first operation threshold of the image, let pixelζ = 255;
S1225, judging the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image against the second operation threshold of the image:
if pixelζ is greater than or equal to the second operation threshold of the image, let pixelζ = 255;
and if pixelζ is smaller than the second operation threshold of the image, let pixelζ = 0.
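The per-block binarization of S1223 to S1225 can be sketched as follows. The patent does not state how the screening value is computed, so the mean gray level used here, along with the function and parameter names, is an assumption for illustration.

```python
def screen_block(block, screening_threshold, first_threshold, second_threshold):
    """S1223-S1225 sketch: choose a binarization rule for one grayscale unit
    image based on its screening value (assumed here to be the mean gray
    level), then map every pixel to 0 or 255."""
    screening_value = sum(block) / len(block)
    out = []
    for pixel in block:
        if screening_value >= screening_threshold:        # S1224 branch
            out.append(0 if pixel >= first_threshold else 255)
        else:                                             # S1225 branch
            out.append(255 if pixel >= second_threshold else 0)
    return out

# a bright block takes the S1224 rule: pixels at or above the first
# operation threshold become 0, the rest become 255
print(screen_block([10, 200, 90, 160], 100, 128, 128))   # [255, 0, 255, 0]
```

Processing each unit image with its own rule lets bright and dark regions of the face be binarized differently before the stitched result is used for comparison.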
In conclusion, by adopting the above technical scheme, the invention can ensure the safety of personnel entering the transformer substation and record the entire operation process.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of the process of the present invention.
Fig. 2 is a schematic view of the structure of the protective helmet of the present invention.
Fig. 3 is a schematic view of another perspective structure of the protective helmet of the present invention.
Fig. 4 is a schematic circuit diagram of the audio input unit according to the present invention.
Fig. 5 is a schematic circuit diagram of the audio output unit according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides an intelligent control method for substation unattended field operation based on a video analysis technology; the method uses a protective helmet made of an insulating, electricity-proof material. As shown in fig. 1, the method comprises the following steps:
S1, substation technicians enter the transformer substation;
and S2, after work in the substation begins, the camera on the protective helmet transmits video images of the field operation process to the cloud platform for management.
In a preferred embodiment of the present invention, as shown in fig. 2 and 3, the protective helmet includes a protective helmet body 8. A lighting lamp fixing mount 1 for fixedly mounting a lighting lamp 2 is disposed on the front surface of the helmet body 8, and the lighting lamp 2 is fixedly mounted on it; a fixing mount for a lighting lamp PCB is disposed in the lighting lamp fixing mount 1, the lighting lamp PCB is fixedly mounted on it, and a lighting lamp driving module for driving the lighting lamp 2 is disposed on the lighting lamp PCB. The front side of the helmet body 8 is also provided with a brim 3, the bottom of the brim 3 carries an arc-shaped supporting block 7, and the front side of the arc-shaped supporting block 7 carries a fixing mount for an image and audio acquisition module 4, which is fixedly mounted on it; the image and audio acquisition module comprises a camera 5, an audio input unit 6 and an audio output unit. The inner side of the helmet body 8 carries a fixing mount for an infrared detection module 10 and a fixing mount for a temperature detection module 9, on which the infrared detection module 10 and the temperature detection module 9 are respectively fixedly mounted. The brim 3 helps to block harsh light, preventing sunlight from interfering with image acquisition, and keeps rainwater off the lens on rainy days so that the lens does not blur. The infrared detection module 10 detects whether a technician is wearing the protective helmet, and the helmet operates only when worn. The temperature detection module 9 detects the wearer's body temperature; when the collected temperature value is greater than or equal to a preset temperature threshold, an alarm prompt is sent to avoid working at excessive temperature.
A fixing mount for a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly mounted on it, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board. The wireless data transmission end of the controller is connected to the data transmission end of the wireless data transmission connection module; the lamp control end of the controller is connected to the drive control end of the lamp driving module; the image data output of the camera 5 is connected to the image data input of the controller; the audio data output of the audio input unit 6 is connected to the audio data input of the controller; the audio data input of the audio output unit is connected to the audio data output of the controller; the temperature data output of the temperature detection module 9 is connected to the temperature data input of the controller; and the infrared detection data output of the infrared detection module 10 is connected to the infrared data input of the controller. The wireless data transmission connection module comprises one or any combination of a WiFi module, a 3G module, a 4G module, a 5G module and a Bluetooth module for wireless data transmission connection.
the wireless data transmission of controller connects the data transmission end of wiFi end and wireless data transmission connection wiFi module and links to each other, the wireless data transmission of controller connects the data transmission end of 3G end and wireless data transmission connection 3G module and links to each other, the wireless data transmission of controller connects the data transmission end of 4G end and wireless data transmission connection 4G module and links to each other, the wireless data transmission of controller connects the data transmission end of 5G end and wireless data transmission connection 5G module and links to each other, the wireless data transmission of controller connects the data transmission end of bluetooth end and wireless data transmission connection bluetooth module and links to each other.
In a preferred embodiment of the present invention, the audio input unit 6 includes: as shown in fig. 4, the selection control terminal sled of the audio collector MIC5 is respectively connected to the first terminal of the resistor R64 and the audio selection control terminal of the controller, the second terminal of the resistor R64 is connected to the power supply voltage VDD _1.8V, the clock terminal CLK of the audio collector MIC5 is connected to the audio input clock terminal of the controller, the DATA terminal DATA of the audio collector MIC5 is connected to the audio DATA input terminal of the controller, the power ground terminal of the audio collector MIC5 is connected to the power ground, the power supply voltage terminal VDD of the audio collector MIC5 is respectively connected to the first terminal of the capacitor C50 and the power supply voltage VDD _1.8V, and the second terminal of the capacitor C50 is connected to the power ground; in this embodiment, the resistance of the resistor R64 is 10K, the capacitance of the capacitor C50 is 0.1uF, and the model of the audio collector MIC5 is ZTS 6032.
The audio output unit includes, as shown in fig. 5: the left channel terminal INL- of the audio driver chip U2 is connected to the first terminal of the capacitor C4; the second terminal of the capacitor C4 is connected to the first terminal of the capacitor C2 and the first terminal of the resistor R24; the second terminal of the resistor R24 is connected to the left channel negative terminal of the driver interface J4; the left channel terminal INL+ of the audio driver chip U2 is connected to the first terminal of the capacitor C5; the second terminal of the capacitor C5 is connected to the second terminal of the capacitor C2 and the first terminal of the resistor R25; and the second terminal of the resistor R25 is connected to the left channel positive terminal of the driver interface J4. The right channel terminal INR+ of the audio driver chip U2 is connected to the first terminal of the capacitor C6; the second terminal of the capacitor C6 is connected to the first terminal of the capacitor C3 and the first terminal of the resistor R26; the second terminal of the resistor R26 is connected to the right channel positive terminal of the driver interface J4; the right channel terminal INR- of the audio driver chip U2 is connected to the first terminal of the capacitor C7; the second terminal of the capacitor C7 is connected to the second terminal of the capacitor C3 and the first terminal of the resistor R27; and the second terminal of the resistor R27 is connected to the right channel negative terminal of the driver interface J4. The left channel ground first terminal of the driver interface J4 is connected to the first terminal of the transient suppression diode TVS26, whose second terminal is connected to power ground; the left channel ground second terminal of J4 is connected to the first terminal of TVS27, whose second terminal is connected to power ground; the right channel ground first terminal of J4 is connected to the first terminal of TVS28, whose second terminal is connected to power ground; and the right channel ground second terminal of J4 is connected to the first terminal of TVS29, whose second terminal is connected to power ground. The digital ground terminal of the driver interface J4 is connected to the first terminal of the resistor R89, and the power ground terminal of J4 is connected to the second terminal of the resistor R89; the audio data output terminal of the controller is connected to the driver interface J4;
a selection terminal G0 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R28 and a first terminal of a resistor R31, a second terminal of the resistor R28 is connected with a power supply voltage AVDD _3V3, a second terminal of a resistor R31 is connected with a digital ground, a selection terminal G1 of the audio driver chip U2 is respectively connected with a first terminal of a resistor R29 and a first terminal of a resistor R30, a second terminal of a resistor R30 is connected with a power supply voltage AVDD _3V3, a second terminal of a resistor R29 is connected with a digital ground, a power supply ground terminal HPVSS of the audio driver chip U2 is connected with a first terminal of a capacitor C9, and a second terminal of a capacitor C9 is connected with the digital ground;
a charge pump terminal CPN of the audio driver chip U2 is connected to a first terminal of the capacitor C11, a charge pump terminal CPP of the audio driver chip U2 is connected to a second terminal of the capacitor C11, a power ground terminal PGND of the audio driver chip U2 is connected to digital ground, a power ground terminal HPVDD of the audio driver chip U2 is connected to a first terminal of the capacitor C10, and a second terminal of the capacitor C10 is connected to power ground;
a power supply terminal VDD of the audio driving chip U2 is respectively connected with a power supply voltage AVDD _3V3, a first terminal of a capacitor C8 and a first terminal of a capacitor C54, and a power supply ground terminal SGND of the audio driving chip U2 is respectively connected with a digital ground, a second terminal of a capacitor C8 and a second terminal of a capacitor C54;
an enable terminal EN of the audio driver chip U2 is connected to a first terminal of the resistor R32 and a first terminal of the resistor R33, a second terminal of the resistor R33 is connected to a power ground, a first terminal of the resistor R32 is connected to an audio driver chip enable terminal of the controller, a left channel audio output terminal OUTL of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS15 and a left channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS15 is connected to a digital ground, a right channel audio output terminal OUTR of the audio driver chip U2 is connected to a first terminal of the transient suppression diode TVS19 and a right channel terminal of the speaker interface J5, a second terminal of the transient suppression diode TVS19 is connected to a digital ground, and a ground terminal of the speaker interface J5 is connected to a digital ground; the loudspeaker interface J5 is connected with the left loudspeaker and the right loudspeaker; real-time audio input and output are thus provided for technicians, enabling remote conversation so that problems can be resolved quickly; in this embodiment, the resistances of the resistor R24, the resistor R25, the resistor R26, and the resistor R27 are 560 Ω, the capacitances of the capacitor C2, the capacitor C3, and the capacitor C8 are 4.7uF, the capacitances of the capacitor C4, the capacitor C5, the capacitor C6, and the capacitor C7 are 220nF, the capacitance of the capacitor C54 is 10uF, the capacitance of the capacitor C10 is 10uF, the capacitances of the capacitor C9 and the capacitor C11 are 1uF, the resistances of the resistor R32, the resistor R31, and the resistor R29 are 1K, and the resistances of the resistor R33, the resistor R28, and the resistor R30 are 130 Ω.
The illumination lamp driving module includes: the base electrode of the first triode is connected with the first end of the first resistor, the second end of the first resistor is connected with the illuminating lamp control end of the controller, the collector electrode of the first triode is respectively connected with the first end of the second resistor and the negative electrode of the first diode, the second end of the second resistor is connected with the power supply voltage AVDD_3V3, the emitter electrode of the first triode is connected with the first end of the first normally open relay input loop, the second end of the first normally open relay input loop is respectively connected with the first end of the third resistor and the first end of the fourth resistor, the second end of the third resistor is connected with the positive electrode of the first diode, and the second end of the fourth resistor is connected with the power ground; the first normally open relay output circuit is connected in series in the illuminating lamp power supply circuit. When illumination is needed, the illuminating lamp control end of the controller outputs a turn-on level, the first triode conducts, the first normally open relay output circuit switches from the normally open state to the closed state, the illuminating lamp power supply circuit is closed, and the illuminating lamp lights.
In a preferred embodiment of the present invention, step S1 includes the following steps:
S11, when a technician is waiting to enter the transformer substation, the access control system communicates, through the wireless connection signal it sends, with the wireless data transmission connection module in the protective helmet, acquires the attribute information of the wireless data transmission connection module, and queries the corresponding face information according to the attribute information query value; the face information queried according to the attribute information query value is the comparison face;
S12, the face image acquisition module on the access control system acquires the face information of the technician waiting to enter the transformer substation; face data processing is performed on the face information acquired by the face image acquisition module, and the face information after face data processing is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
In a preferred embodiment of the present invention, step S11 includes the following steps:
S111, the access control system communicates with the protective helmet and requests, from the protective helmet, the attribute information of the wireless data transmission connection module; after the controller receives the acquisition request sent by the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value=<Attribute information,Algorithm type>,
wherein, Property information query value represents a Property information query value;
the Attribute information represents Attribute information of a wireless data transmission connection module, and the Attribute information of the wireless data transmission connection module comprises a physical address of one of a wireless data transmission connection WiFi module, a wireless data transmission connection 3G module, a wireless data transmission connection 4G module, a wireless data transmission connection 5G module and a wireless data transmission connection Bluetooth module;
algorithm type represents an Algorithm operation type;
<Attribute information, Algorithm type> represents performing the algorithm operation of type Algorithm type on the attribute information of the wireless data transmission connection module; the algorithm operation type employs the MD5 hash algorithm.
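The <Attribute information, Algorithm type> operation with the MD5 hash can be sketched as follows; here the attribute information is assumed to be the module's physical (MAC) address serialized as text, a serialization the source does not specify:

```python
import hashlib

def attribute_info_query_value(physical_address: str) -> str:
    # MD5-hash the wireless module's physical address to form the
    # attribute information query value used as the database lookup key.
    return hashlib.md5(physical_address.encode("utf-8")).hexdigest()

# Hypothetical physical address of a wireless data transmission module.
key = attribute_info_query_value("00:1A:2B:3C:4D:5E")
```

MD5 is adequate as a deterministic lookup key, though it is not collision-resistant for security purposes.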
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, screening out the face feature value corresponding to the attribute information query value, and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information being that the data information of the protective helmet has not been recorded into the substation system;
and S114, obtaining the face information associated with the face characteristic value according to the face characteristic value obtained in the step S113.
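Steps S113–S114 amount to a two-stage keyed lookup: attribute information query value → face feature value → associated face information. A minimal in-memory sketch, with hypothetical keys standing in for real database records:

```python
# Hypothetical stand-ins for the face feature value database (S113)
# and the face information associated with each feature value (S114).
feature_value_db = {"qv-abc123": "feat-001"}      # query value -> face feature value
face_info_db = {"feat-001": "face_image_A.png"}   # feature value -> face information

def query_comparison_face(query_value: str):
    # S113: check whether the query value exists in the face feature value database.
    feature_value = feature_value_db.get(query_value)
    if feature_value is None:
        # Helmet data not recorded in the substation system: caller pushes a prompt.
        return None
    # S114: obtain the face information associated with the feature value.
    return face_info_db[feature_value]
```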
In a preferred embodiment of the present invention, in step S113, the method for calculating the face feature value includes:
Face feature value=<Face information,Algorithm type>,
wherein, the Face information represents Face information, namely a Face image;
algorithm type represents an Algorithm operation type;
<Face information, Algorithm type> represents performing the algorithm operation of type Algorithm type on the face information Face information;
the Face feature value represents a Face feature value.
In a preferred embodiment of the present invention, step S12 includes the following steps:
s121, judging whether the acquired face image is a gray image by the access control system:
if the face image collected by the access control system is a gray image, executing step S122;
if the face image collected by the access control system is not a gray image, executing the following steps:
S1211, counting the total number of the face images collected by the access control system, recorded as a; the face images are respectively denoted A1, A2, A3, ..., Aa, wherein A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
and S1212, converting the RGB face image into the grayscale image Grayscale_face through the following calculation formula:
Imn = 0.299 × Rmn + 0.587 × Gmn + 0.114 × Bmn,
wherein Imn represents the gray value of the pixel point at the m-th row and n-th column position in the grayscale image Grayscale_face; m = 1, 2, 3, ..., M and n = 1, 2, 3, ..., N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image; the gray values I11, I12, I13, ..., IMN thus form the M × N gray matrix of the grayscale image Grayscale_face;
Rmn represents the red channel value of the pixel point at the m-th row and n-th column position in the RGB image;
Gmn represents the green channel value of the pixel point at the m-th row and n-th column position in the RGB image;
Bmn represents the blue channel value of the pixel point at the m-th row and n-th column position in the RGB image;
and S122, screening the gray-level face image.
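The conversion of step S1212 can be sketched as follows; the 0.299/0.587/0.114 luminance weights are the common BT.601 choice and are an assumption here, since the patent's own formula is carried in a figure rather than in the text:

```python
import numpy as np

def rgb_to_grayscale(rgb: np.ndarray) -> np.ndarray:
    # Weighted-luminance conversion of an M x N x 3 RGB image to an
    # M x N grayscale image. The BT.601 weights are an assumption.
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return np.rint(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```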
In a preferred embodiment of the present invention, step S122 includes the following steps:
S1221, dividing the grayscale face image Grayscale_face into M grayscale face unit images, M being a positive integer greater than or equal to 1: the 1st grayscale face unit image Grayscale_face_1, the 2nd grayscale face unit image Grayscale_face_2, the 3rd grayscale face unit image Grayscale_face_3, ..., and the M-th grayscale face unit image Grayscale_face_M, wherein Grayscale_face = Grayscale_face_1 & Grayscale_face_2 & ... & Grayscale_face_M and & represents the image splicing symbol;
S1222, calculating, for the m-th grayscale face unit image Grayscale_face_m, a first screening value, a second screening value and a third screening value, m being a positive integer less than or equal to M;
S1223, if the screening value of the m-th grayscale face unit image Grayscale_face_m is greater than or equal to the preset screening threshold, executing step S1224;
if the screening value of the m-th grayscale face unit image Grayscale_face_m is smaller than the preset screening threshold, executing step S1225;
S1224, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m and the image first operation threshold:
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m is greater than or equal to the image first operation threshold, letting pixelζ = 0;
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m is smaller than the image first operation threshold, letting pixelζ = 255;
S1225, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m and the image second operation threshold:
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m is greater than or equal to the image second operation threshold, letting pixelζ = 255;
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m is smaller than the image second operation threshold, letting pixelζ = 0.
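Steps S1223–S1225 describe a screened, piecewise binarization: unit images whose screening value clears the preset threshold are inverse-binarized against the first operation threshold, the rest are directly binarized against the second. A minimal sketch, assuming the screening value is the unit image's mean gray level and taking the two operation thresholds as given inputs (the patent computes all three by formulas not reproduced in this text):

```python
import numpy as np

def screen_unit_image(unit: np.ndarray,
                      screening_threshold: float,
                      first_threshold: float,
                      second_threshold: float) -> np.ndarray:
    # Assumption: the screening value is the mean gray level of the unit image.
    out = unit.copy()
    if unit.mean() >= screening_threshold:
        # S1224: inverse binarization against the image first operation threshold.
        out[unit >= first_threshold] = 0
        out[unit < first_threshold] = 255
    else:
        # S1225: direct binarization against the image second operation threshold.
        out[unit >= second_threshold] = 255
        out[unit < second_threshold] = 0
    return out
```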
In a preferred embodiment of the present invention, in step S1222, the first screening value of the m-th grayscale face unit image Grayscale_face_m is calculated from the number of pixel points in the m-th grayscale face unit image Grayscale_face_m and the gray values pixelζ of its ζ-th pixel points;
or/and the second screening value of the m-th grayscale face unit image Grayscale_face_m is calculated from the number of pixel points in the m-th grayscale face unit image Grayscale_face_m, the gray values pixelζ of its ζ-th pixel points, and the second selection number;
or/and the third screening value of the m-th grayscale face unit image Grayscale_face_m is calculated from the number of pixel points in the m-th grayscale face unit image Grayscale_face_m and the gray values pixelζ of its ζ-th pixel points.
In a preferred embodiment of the present invention, in step S1223, the screening value of the m-th grayscale face unit image Grayscale_face_m is calculated from the number of pixel points in the m-th grayscale face unit image Grayscale_face_m and the gray values pixelζ of its ζ-th pixel points.
In a preferred embodiment of the present invention, in step S1224, the image first operation threshold is calculated by a formula in which:
a represents the screening adjustment first coefficient;
b represents the screening adjustment second coefficient;
c represents the screening adjustment third coefficient, with a + b + c = 1;
pixelζ represents the gray value of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m;
pixelξ represents the gray value of the ξ-th pixel point in the m-th grayscale face unit image Grayscale_face_m.
In a preferred embodiment of the present invention, in step S1225, the image second operation threshold is calculated by a formula in which:
a represents the screening adjustment first coefficient;
b represents the screening adjustment second coefficient;
c represents the screening adjustment third coefficient, with a + b + c = 1;
pixelζ represents the gray value of the ζ-th pixel point in the m-th grayscale face unit image Grayscale_face_m;
pixelξ represents the gray value of the ξ-th pixel point in the m-th grayscale face unit image Grayscale_face_m.
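The first and second operation thresholds above combine screening adjustment coefficients a, b, c (with a + b + c = 1) and pixel gray values; the exact formulas are carried in figures and not reproduced in this text. Purely as an illustrative assumption, a threshold built as a convex combination of the unit image's gray-level statistics has this shape:

```python
import numpy as np

def operation_threshold(unit: np.ndarray, a: float, b: float, c: float) -> float:
    # Illustrative assumption only: weight the minimum, mean and maximum
    # gray values by the screening adjustment coefficients a, b, c.
    assert abs(a + b + c - 1.0) < 1e-9, "coefficients must satisfy a + b + c = 1"
    return a * float(unit.min()) + b * float(unit.mean()) + c * float(unit.max())
```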
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. An intelligent control method for substation unattended field operation based on a video analysis technology, the method employing a protective helmet, characterized by comprising the following steps:
s1, the technical personnel of the transformer substation enter the transformer substation;
and S2, after the technician enters the transformer substation, the camera on the protective helmet transmits the video images captured during the field operation process to the cloud platform for management.
2. The intelligent control method for substation unattended field operation based on the video analysis technology according to claim 1, characterized in that the protective helmet comprises a protective helmet body (8), an illuminating lamp fixing installation seat (1) for fixedly installing an illuminating lamp (2) is arranged on the front face of the protective helmet body (8), the illuminating lamp (2) is fixedly installed on the illuminating lamp fixing installation seat (1), an illuminating lamp PCB fixing installation seat for fixedly installing an illuminating lamp PCB is arranged in the illuminating lamp fixing installation seat (1), the illuminating lamp PCB is fixedly installed on the illuminating lamp PCB fixing installation seat, and an illuminating lamp driving module for driving the illuminating lamp (2) to work is arranged on the illuminating lamp PCB; the front side of the protective helmet body (8) is also provided with a brim (3), the bottom of the brim (3) is provided with an arc-shaped supporting block (7), the front side of the arc-shaped supporting block (7) is provided with an image audio acquisition module fixing mounting seat for fixedly mounting an image audio acquisition module (4), the image audio acquisition module (4) is fixedly mounted on the image audio acquisition module fixing mounting seat, and the image audio acquisition module comprises a camera (5), an audio input unit (6) and an audio output unit; an infrared detection module fixing mounting seat for fixedly mounting an infrared detection module (10) and a temperature detection module fixing mounting seat for fixedly mounting a temperature detection module (9) are arranged on the inner side of the protective helmet body (8), the infrared detection module (10) is fixedly mounted on the infrared detection module fixing mounting seat, and the temperature detection module (9) is fixedly mounted on the temperature detection module fixing mounting seat;
a PCB circuit board fixing installation seat for fixedly installing a PCB circuit board is arranged in the protective helmet, the PCB circuit board is fixedly installed on the PCB circuit board fixing installation seat, and a controller and a wireless data transmission connection module are arranged on the PCB circuit board; the wireless data transmission end of the controller is connected with the data transmission end of the wireless data transmission connection module, the illuminating lamp control end of the controller is connected with the drive control end of the illuminating lamp driving module, the image data output end of the camera (5) is connected with the image data input end of the controller, the audio data output end of the audio input unit (6) is connected with the audio data input end of the controller, the audio data input end of the audio output unit is connected with the audio data output end of the controller, the temperature data output end of the temperature detection module (9) is connected with the temperature data input end of the controller, and the infrared detection data output end of the infrared detection module (10) is connected with the infrared data input end of the controller.
3. The intelligent control method for substation unattended field operation based on the video analysis technology according to claim 1, wherein step S1 comprises the following steps:
S11, when a technician is waiting to enter the transformer substation, the access control system communicates, through the wireless connection signal it sends, with the wireless data transmission connection module in the protective helmet, acquires the attribute information of the wireless data transmission connection module, and queries the corresponding face information according to the attribute information query value; the face information queried according to the attribute information query value is the comparison face;
S12, the face image acquisition module on the access control system acquires the face information of the technician waiting to enter the transformer substation; face data processing is performed on the face information acquired by the face image acquisition module, and the face information after face data processing is the collected face;
S13, the access control system compares whether the comparison face is consistent with the collected face:
if the comparison face is consistent with the collected face, the access control system opens the access control;
and if the comparison face is inconsistent with the collected face, the access control system uploads the collected face to a warning face storage database.
4. The intelligent control method for substation unattended field operation based on the video analysis technology according to claim 3, wherein step S11 comprises the following steps:
S111, the access control system communicates with the protective helmet and requests, from the protective helmet, the attribute information of the wireless data transmission connection module; after the controller receives the acquisition request sent by the access control system, the controller sends the attribute information of the wireless data transmission connection module to the access control system;
s112, after the access control system receives the attribute information of the wireless data transmission connection module sent by the protective helmet, the access control system performs the following operations on the received attribute information of the wireless data transmission connection module:
Property information query value=<Attribute information,Algorithm type>,
wherein, Property information query value represents a Property information query value;
the Attribute information represents Attribute information of a wireless data transmission connection module, and the Attribute information of the wireless data transmission connection module comprises a physical address of one of a wireless data transmission connection WiFi module, a wireless data transmission connection 3G module, a wireless data transmission connection 4G module, a wireless data transmission connection 5G module and a wireless data transmission connection Bluetooth module;
S113, judging whether the attribute information query value exists in the face feature value database:
if the attribute information query value exists in the face feature value database, screening out the face feature value corresponding to the attribute information query value, and executing step S114;
if the attribute information query value does not exist in the face feature value database, sending prompt information to the user's protective helmet, the prompt information being that the data information of the protective helmet has not been recorded into the substation system;
and S114, obtaining the face information associated with the face characteristic value according to the face characteristic value obtained in the step S113.
5. The intelligent control method for substation unattended field operation based on the video analysis technology according to claim 4, wherein in step S113, the face feature value is calculated by:
Face feature value=<Face information,Algorithm type>,
wherein, the Face information represents Face information, namely a Face image;
algorithm type represents an Algorithm operation type;
<Face information, Algorithm type> represents performing the algorithm operation of type Algorithm type on the face information Face information;
the Face feature value represents a Face feature value.
6. The intelligent control method for substation unattended field operation based on the video analysis technology according to claim 3, wherein step S12 comprises the following steps:
s121, judging whether the acquired face image is a gray image by the access control system:
if the face image collected by the access control system is a gray image, executing step S122;
if the face image collected by the access control system is not a gray image, executing the following steps:
S1211, counting the total number of the face images collected by the access control system, recorded as a; the face images are respectively denoted A1, A2, A3, ..., Aa, wherein A1 is the 1st face image of technician A collected by the access control system, A2 is the 2nd face image of technician A collected by the access control system, A3 is the 3rd face image of technician A collected by the access control system, and Aa is the a-th face image of technician A collected by the access control system;
and S1212, converting the RGB face image into the grayscale image Grayscale_face through the following calculation formula:
Imn = 0.299 × Rmn + 0.587 × Gmn + 0.114 × Bmn,
wherein Imn represents the gray value of the pixel point at the m-th row and n-th column position in the grayscale image Grayscale_face; m = 1, 2, 3, ..., M and n = 1, 2, 3, ..., N; M = width × Resolution, where M represents the total number of horizontal pixel points, width represents the width value of the RGB face image, and Resolution represents the resolution of the RGB face image; N = high × Resolution, where N represents the total number of vertical pixel points and high represents the height value of the RGB face image; the gray values I11, I12, I13, ..., IMN thus form the M × N gray matrix of the grayscale image Grayscale_face;
Rmn represents the red channel value of the pixel point at the m-th row and n-th column position in the RGB image;
Gmn represents the green channel value of the pixel point at the m-th row and n-th column position in the RGB image;
Bmn represents the blue channel value of the pixel point at the m-th row and n-th column position in the RGB image;
and S122, screening the gray-level face image.
7. The intelligent management and control method for the unmanned permission field work of the substation based on the video analysis technology as claimed in claim 6, wherein the step S122 comprises the following steps: order to
S1221, dividing the grayscale face image into M grayscale face unit images, where M is a positive integer greater than or equal to 1: the 1st grayscale face unit image, the 2nd grayscale face unit image, the 3rd grayscale face unit image, ..., the M-th grayscale face unit image, where & represents the image stitching (mosaic) symbol;
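The split-and-mosaic relation of step S1221 (the & symbol) can be sketched as follows; the claim does not fix the split geometry, so equal horizontal strips are assumed here for illustration:

```python
import numpy as np

def split_into_units(gray, m_units):
    # Split the grayscale face image into m_units unit images.
    # np.array_split tolerates heights not divisible by m_units.
    return np.array_split(gray, m_units, axis=0)

def mosaic(units):
    # The & symbol of the claim: stitching the unit images
    # back together reproduces the full grayscale face image.
    return np.vstack(units)
```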
S1222, calculating the screening value of the m-th grayscale face unit image from its first screening value and second screening value, where m is a positive integer less than or equal to M;
S1223, if the screening value of the m-th grayscale face unit image is greater than or equal to the preset screening threshold, executing step S1224;
if the screening value of the m-th grayscale face unit image is smaller than the preset screening threshold, executing step S1225;
S1224, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image and the first operation threshold of the image:
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image is greater than or equal to the first operation threshold of the image, setting pixelζ = 0;
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image is smaller than the first operation threshold of the image, setting pixelζ = 255;
S1225, judging the relation between the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image and the second operation threshold of the image:
if the gray value pixelζ of the ζ-th pixel point in the m-th grayscale face unit image is greater than or equal to the second operation threshold of the image, setting pixelζ = 255;
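Taken together, steps S1222 to S1225 amount to a per-unit binarization whose polarity depends on the screening value. A minimal sketch, where the names `binarize_unit`, `t1`, and `t2` are illustrative, and the handling of pixels below the second operation threshold in the S1225 branch is an assumption (the excerpt is truncated before stating it):

```python
import numpy as np

def binarize_unit(unit, screen_value, screen_thresh, t1, t2):
    """Binarize one grayscale face unit image per steps S1223-S1225.
    t1/t2 stand for the first/second operation thresholds; the
    below-t2 case in the else branch is assumed, not quoted."""
    out = unit.copy()
    if screen_value >= screen_thresh:
        # S1224: pixels at or above the first operation threshold -> 0,
        # pixels below it -> 255 (inverted binarization).
        out[unit >= t1] = 0
        out[unit < t1] = 255
    else:
        # S1225: pixels at or above the second operation threshold -> 255;
        # the complementary assignment below is an assumption.
        out[unit >= t2] = 255
        out[unit < t2] = 0
    return out
```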
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110569307.3A CN113297970A (en) | 2021-05-25 | 2021-05-25 | Intelligent control method for substation unattended field operation based on video analysis technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113297970A true CN113297970A (en) | 2021-08-24 |
Family
ID=77324653
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110569307.3A Pending CN113297970A (en) | 2021-05-25 | 2021-05-25 | Intelligent control method for substation unattended field operation based on video analysis technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113297970A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481347A (en) * | 2017-09-30 | 2017-12-15 | 四川民工加网络科技有限公司 | Attendance checking system and equipment for construction site |
CN109393624A (en) * | 2018-11-28 | 2019-03-01 | 安徽清新互联信息科技有限公司 | Multifunctional protection safety cap and its control method |
CN110633623A (en) * | 2019-07-23 | 2019-12-31 | 国网浙江省电力有限公司杭州供电公司 | Management and control method for operation process of transformer substation worker |
CN112465742A (en) * | 2020-10-16 | 2021-03-09 | 重庆恢恢信息技术有限公司 | Method for identifying and judging construction site reinforcement bar installation abnormity by fusing big data |
Non-Patent Citations (1)
Title |
---|
ZHENG, SHUQUAN et al.: "Industrial Intelligence Technology and Applications", Shanghai Scientific & Technical Publishers, pages: 225 - 226 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160042621A1 (en) | Video Motion Detection Method and Alert Management | |
CN106170072B (en) | Video acquisition system and acquisition method thereof | |
CN111523397B (en) | Intelligent lamp post visual identification device, method and system and electronic equipment thereof | |
KR102076113B1 (en) | Garbage unauthorized dumping monitoring device | |
Cohen et al. | CCTV operational requirements manual 2009 | |
CN101635837B (en) | Display system | |
CN116486585B (en) | Production safety management system based on AI machine vision analysis early warning | |
CN113225550A (en) | Offset detection method and device, camera module, terminal equipment and storage medium | |
JP3506934B2 (en) | Monitoring device and monitoring system | |
CN115546738A (en) | Rail foreign matter detection method | |
CN1808516A (en) | Vehicle monitoring method, specific character pattern recognition device, and vehicle monitoring system | |
CN208739296U (en) | Intelligent municipal administration's information visualization total management system | |
CN113297970A (en) | Intelligent control method for substation unattended field operation based on video analysis technology | |
KR101676444B1 (en) | System and method for road-side automatic number plate recognition of multi-lane | |
CN102340628A (en) | Camera and control method thereof | |
CN117213621A (en) | Contact net vibration fixed-point monitoring system and monitoring method | |
CN108520615B (en) | Fire identification system and method based on image | |
CN113297971A (en) | Intelligent management method for unattended field operation of transformer substation integrating video analysis technology | |
CN111867205A (en) | All-round stage equipment center monitored control system | |
CN110866462A (en) | Behavior recognition system and method integrated in intelligent police car | |
KR100925382B1 (en) | Violation Car Enforcement System | |
CN113538967B (en) | Vehicle-road cooperation device and method under crossroad scene | |
CN107666603B (en) | Highway emergency telephone video acquisition control system | |
JP2002140711A (en) | Method for setting size of invasion object detecting device, invasion object detecting method and invasion object detector | |
CN216772462U (en) | Construction site shooting warning monitoring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||