CN112487891B - Visual intelligent dynamic identification model construction method applied to electric power operation site


Info

Publication number
CN112487891B
CN112487891B (application CN202011288051.0A)
Authority
CN
China
Prior art keywords
construction
samples
face
electric power
video
Prior art date
Legal status
Active
Application number
CN202011288051.0A
Other languages
Chinese (zh)
Other versions
CN112487891A (en)
Inventor
施蔚青
李辉
刘洪兵
何四平
朱晟
孙西
张永明
杨柳
郑可伦
李磊
梁钧
杨任
罗林
黄鹤
刘伟华
刘传文
张啸
Current Assignee
Yunnan Power Grid Co Ltd
Original Assignee
Yunnan Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Yunnan Power Grid Co Ltd
Priority to CN202011288051.0A
Publication of CN112487891A
Application granted
Publication of CN112487891B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a visual intelligent dynamic identification model construction method applied to an electric power operation site, belonging to the field of electric power informatization devices. By building a visual intelligent dynamic identification model of the electric power operation site, the invention realizes large-scale, full-scene, real-time monitoring of the site over 360° horizontally and 80° vertically, effectively solving the problem of monitoring blind spots that existed in the past. Complete image monitoring data are kept for every electric power operation site, site construction is standardized, and a comprehensive electronic fence for safety supervision and protection of the site, free of blind spots, is formed. Functions such as human behavior recognition and face recognition are realized; the video stitching process is required to leave no blind spots in the operation area, so that faces and non-standard construction behavior can be accurately captured, safeguarding the life and property of constructors and citizens at the electric power operation site.

Description

Visual intelligent dynamic identification model construction method applied to electric power operation site
Technical Field
The invention belongs to the technical field of power informatization, and particularly relates to a visual intelligent dynamic identification model construction method applied to a power operation site.
Background
In existing monitoring schemes for power equipment, electric-shock prevention at the work site is generally handled either by hardware isolation or by conspicuous signage. Hardware isolation uses various appliances made of insulating material to separate live bodies or the construction area from operators, preventing electric-shock accidents and keeping non-operators from straying into the power construction site. However, this approach requires carrying a large number of isolation baffles to every job and is constrained by the size of the construction site, which makes each construction considerably inconvenient. The signage approach uses markers such as color codes and warning signs to flag live equipment or the power construction site.
Besides hanging conventional signboards (such as 'work in progress' and 'stop, high-voltage danger'), warning boards such as 'live zone above' and markers such as color codes and cloth curtains are added according to the actual needs of the site, and voice alarm prompts are set up where necessary, so that workers transferring between tasks can see, hear and receive warnings and reminders in time and are not led by complacency into entering the construction area or touching live equipment by mistake. These measures, however, only warn non-operating personnel; they cannot effectively prevent them from entering the operation site. Present-day electric power operation sites are complex, construction requirements are demanding, and the mixed deployment of technical equipment creates a complicated operating environment with many work points. To effectively avoid electric power operation safety accidents, a set of early-warning equipment for electric power operation safety supervision needs to be designed.
At present there is no effective countermeasure against non-constructors mistakenly entering the construction site; on-site supervisors, warning signs and notice boards alone cannot keep them out. An improved technique is therefore urgently needed to replace the outdated practice, used at existing construction sites, of relying on on-site supervisors.
Disclosure of Invention
The invention aims to remedy the defects of the prior art by providing a visual intelligent dynamic identification model construction method applied to an electric power operation site. It discards the traditional scheme of physical isolation baffles, automatically identifies non-staff, infers personnel intent through motion capture, issues an early warning when someone is about to enter the operation area, and reminds on-site supervisors in real time.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a visual intelligent dynamic identification model construction method applied to an electric power operation site comprises the following steps:
S1, warning against and reporting intrusions by non-constructors into the electric power construction area, using a face recognition device based on a neural network algorithm, a deep-learning human behavior recognition device, an electronic fence device and an intelligent monitoring target tracking device;
S2, accurately capturing face information through the neural-network face recognition device, recognizing and capturing non-standard construction behavior, and ensuring standard construction by electric power constructors;
S3, when a person crosses the construction site boundary, first identifying through the face recognition device whether the intruder is a site constructor; if so, checking through a dressing algorithm whether the constructor's dress is up to standard, prompting the constructor and the on-site supervisor if it is not, and supervising the constructor's construction method throughout once it is (a control-flow sketch of S3 to S5 follows this list);
S4, monitoring whether the construction steps meet the construction standard; if not, reminding constructors and on-site supervisors in real time, helping constructors correct non-standard work promptly, and ensuring standard construction throughout the process;
S5, after construction is completed, uploading the construction video to the cloud or saving the video data locally in time, and carrying out 360° horizontal and 80° vertical panoramic whole-process monitoring of the construction site through a video panorama stitching device and an electronic fence device, so that the construction links can be reviewed and analyzed afterwards.
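The decision flow of steps S3 to S5 can be summarized in a short sketch. The following Python fragment is purely illustrative: the hooks recognize_face and check_dress and the step records are hypothetical stand-ins for the devices and algorithms named above, not the patented implementation.

```python
def recognize_face(person):
    """Stub for the face recognition device: is this a registered site constructor?"""
    return person.get("registered", False)

def check_dress(person):
    """Stub for the dressing algorithm: is the constructor's dress up to standard?"""
    return person.get("dress_ok", False)

def supervise_site(person, steps):
    """Illustrative S3-S5 control flow for one person crossing the site boundary."""
    if not recognize_face(person):
        return "alarm: non-constructor intrusion, remind on-site supervisor"   # S1/S3
    if not check_dress(person):
        return "prompt: dress not up to standard"                              # S3
    bad = [s for s in steps if not s["compliant"]]                             # S4
    if bad:
        return f"remind in real time: {len(bad)} non-standard construction steps"
    return "construction standard; upload video to cloud or save locally"      # S5

print(supervise_site({"registered": True, "dress_ok": True},
                     [{"compliant": True}, {"compliant": False}]))
```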
Further, preferably, the face recognition device based on the neural network algorithm includes the following steps:
S11, constructing the eigenface space: after preprocessing, face detection, facial expression, face identification, expression/gesture analysis and physiological classification, the images are loaded into a face library; the first five faces of each subject in the library are loaded as the training set and the last five as the test set. Given N face images in the library, each n×m face image is rearranged row by row into a row vector of length n·m, forming the matrix of the training sample set; the mean of the training samples, i.e. the average face, is then computed as a row vector, and, to emphasize the differences, the average face is subtracted from each sample to obtain N difference images;
S12, feature extraction: the features with the largest differences are extracted from the face images so that recognition can proceed; the training-set face images are projected into the feature subspace to obtain their coordinate coefficients, the test-set images are likewise projected into the feature subspace, and this set of coefficients serves as the basis for face recognition;
S13, face recognition: features are extracted with the PCA algorithm and the extracted features are classified; the nearest neighbour method is adopted, judging the distance between the sample to be classified and the known samples: the new sample is compared with the known samples one by one, and the class of the closest known sample is taken as the class of the new sample (a minimal sketch of this pipeline follows).
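As a concrete illustration of steps S11 to S13, the following minimal NumPy sketch builds the eigenface subspace from flattened face images, projects samples into it, and classifies by nearest neighbour. The number of retained components k and the small Gram-matrix shortcut are implementation choices, not prescribed by the description.

```python
import numpy as np

def train_eigenfaces(faces, k=20):
    """faces: (N, n*m) array, one n x m face image flattened row by row per row."""
    mean_face = faces.mean(axis=0)                  # the "average face"
    diffs = faces - mean_face                       # N difference images
    gram = diffs @ diffs.T                          # small N x N matrix trick
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:k]              # largest-variance directions
    basis = diffs.T @ vecs[:, order]                # (n*m, k) feature subspace
    basis /= np.linalg.norm(basis, axis=0)
    return mean_face, basis

def project(faces, mean_face, basis):
    """Coordinate coefficients of faces in the feature subspace."""
    return (faces - mean_face) @ basis

def nearest_neighbor(coeff, train_coeffs, train_labels):
    """Class of the closest known sample becomes the class of the new sample."""
    d = np.linalg.norm(train_coeffs - coeff, axis=1)
    return train_labels[int(np.argmin(d))]
```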
Further, preferably, when performing face recognition in step S13, image preprocessing is carried out for the face recognition experiment: before the experiment, the images in the face library are preprocessed, and geometric normalization transforms each expression subimage to a uniform size so that the images are unaffected by changes of scale and angle. First, the three feature points, namely the two eyes and the nose, are calibrated with the [x, y] = ginput function in MATLAB, using manual calibration to obtain the coordinate values of the three feature points. The image is then rotated according to the coordinates of the left and right eyes to keep face orientation consistent, a rectangular feature region is determined from the facial feature points and the geometric model, and finally scale transformation gives the expression subimages a uniform size (an OpenCV equivalent is sketched below).
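An equivalent of this geometric normalization can be sketched in Python with OpenCV instead of MATLAB. The eye coordinates are assumed to have been picked manually, as with ginput above, and the crop box derived from the eye distance is a hypothetical stand-in for the geometric model mentioned in the text.

```python
import cv2
import numpy as np

def normalize_face(img, left_eye, right_eye, size=(64, 64)):
    """Rotate so the eye line is horizontal, crop a feature region, scale to a uniform size."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))                     # eye-line angle
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rot = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    d = int(np.hypot(dx, dy))                                  # inter-eye distance
    x0, y0 = max(int(cx - d), 0), max(int(cy - 0.6 * d), 0)    # illustrative crop box
    crop = rot[y0:y0 + 2 * d, x0:x0 + 2 * d]
    return cv2.resize(crop, size)                              # uniform expression subimage
```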
Further, it is preferable that the human behavior recognition device based on deep learning includes the steps of:
S21, constructing a neural network model containing an LSTM network, the model comprising an embedding layer, an LSTM, a fully-connected layer and a softmax layer: the embedding layer converts the discrete input signals into continuous real vectors; the vectors produced by the embedding layer are fed into the LSTM in time order; the LSTM splices the time-series vectors describing the operation behavior into one high-dimensional vector, which is fed into the fully-connected layer; and the vector reduced in dimension by the fully-connected layer is fed into the softmax layer (a model sketch follows this list);
S22, acquiring massive human behavior samples and machine behavior samples, the human behavior samples serving as positive samples and the machine behavior samples as negative samples; the positive sample set contains no fewer than 5000 samples, of which 80% are selected as training samples and 20% as test samples, and the negative sample set likewise contains no fewer than 5000 samples, of which 80% are selected as training samples and 20% as test samples;
S23, training the built neural network model with the positive and negative samples, the training adopting the forward-backward algorithm; training of the neural network model is complete once the accuracy on the test sample set reaches a set threshold;
S24, judging through the trained neural network model whether the operating subject of the current page is a person or a machine.
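A minimal PyTorch sketch of the S21 architecture follows. It assumes the first-order trajectory differences have been quantized into integer codes so that an embedding layer applies, which is one plausible reading of 'converts the discrete input signals into continuous real vectors'; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class BehaviorClassifier(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)    # discrete codes -> real vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)                  # dimension reduction

    def forward(self, codes):                               # codes: (batch, seq_len) ints
        e = self.embed(codes)                               # fed to the LSTM in time order
        _, (h, _) = self.lstm(e)                            # h[-1] summarizes the trajectory
        return torch.softmax(self.fc(h[-1]), dim=-1)        # P(person), P(machine)

model = BehaviorClassifier()
codes = torch.randint(0, 256, (4, 50))                      # 4 quantized trajectories, length 50
probs = model(codes)                                        # (4, 2) class probabilities
```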
Further, it is preferable that in step S22 the operation behavior is described using the first-order differences of the mouse motion trajectory, where dxi = xi - x(i-1), dyi = yi - y(i-1) and dti = ti - t(i-1); xi is the abscissa of the mouse on the screen, yi its ordinate and ti the time stamp. The training positive samples come from the mouse motion trajectories recorded while people browse web pages; the trajectory is acquired through a front-end web function that returns the position and time of the mouse cursor during dragging in the form (x1, y1, t1), (x2, y2, t2), (x3, y3, t3) … (xn, yn, tn), so as to reflect the lateral and longitudinal moving speed and the lateral and longitudinal displacement of the mouse within each small corresponding period of the movement;
The negative samples are machine-generated in four ways: (a) randomly generating a track length within a set maximum range and randomly generating the triples (dxi, dyi, dti);
(b) extracting N tracks from the positive samples, randomly dividing each extracted track into n sub-segments, and randomly recombining the thousands of resulting sub-segments into new tracks; (c) extracting M tracks from the positive samples, computing each track's total lateral movement distance sum(dxi), total longitudinal movement distance sum(dyi) and total movement time sum(dti), randomly generating new totals sum(dxi)', sum(dyi)' and sum(dti)', and generating the description parameters of a new motion track by scaling each first-order difference by the ratio of the new total to the original total,
where dxi', dyi' and dti' are respectively the first-order differences of the new track's lateral coordinate, longitudinal coordinate and time; and (d) extracting K tracks from the positive samples and applying a random perturbation of [-0.5, 0.5] times to each of dxi, dyi and dti to obtain the description parameters of the new motion track (a sketch of the four modes follows).
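The four generation modes (a) to (d) can be sketched as follows. The value ranges, the segment counts and, in mode (c), the column-wise scaling dx' = dx × sum(dx)'/sum(dx) are assumptions filling in details the text leaves open.

```python
import random
import numpy as np

rng = np.random.default_rng()

def mode_a(max_len=200):
    """(a) random track length within a set maximum, random (dx, dy, dt) triples."""
    n = int(rng.integers(2, max_len))
    return rng.uniform(-20.0, 20.0, (n, 3))           # value range is illustrative

def mode_b(tracks, n_sub=5):
    """(b) split positive tracks into sub-segments, recombine them into a new track."""
    segs = [s for t in tracks for s in np.array_split(t, n_sub)]
    random.shuffle(segs)
    return np.concatenate(segs[:n_sub])

def mode_c(track):
    """(c) rescale a positive track to randomly drawn totals; the scaling
    dx' = dx * sum(dx)'/sum(dx) is an assumed reading of the elided formula."""
    totals = track.sum(axis=0)                         # sum(dxi), sum(dyi), sum(dti)
    new_totals = totals * rng.uniform(0.5, 2.0, 3)     # randomly generated totals
    return track * (new_totals / totals)

def mode_d(track):
    """(d) perturb every first-order difference by a random factor in [-0.5, 0.5]."""
    return track * (1.0 + rng.uniform(-0.5, 0.5, track.shape))
```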
Further, it is preferable that the video panorama stitching device adopted in step S5 builds a panoramic space from stitched live-action images, combining multiple images into one large-scale image or a 360° panorama; the procedure comprises camera calibration, correction of sensor image distortion, projective transformation of the images, selection of matching points, panorama stitching, and balancing of brightness and color.
Further, preferably, when the video panorama stitching device is applied, an RJ45 interface connects the video outputs of 8 network cameras, and the RTSP video streams of the 8 cameras are obtained through the ONVIF protocol. The multithreading module of the Qt platform opens 9 sub-threads: 8 of them collect the video of one network camera each, while the remaining sub-thread fuses the video images of the 8 sub-threads and displays the fused result. The compressed video streams are decoded, the decoded streams converted, and the converted video corrected frame by frame to obtain corrected images; the corrected images are converted to greyscale, and the SIFT algorithm extracts feature points from the greyscale images together with their principal directions and descriptors. The RANSAC algorithm screens the extracted feature points, and the APAP algorithm determines the homography matrix H for image registration; finally the RGB video frames are stitched, every pair of adjacent images is fused with gradual-in gradual-out blending, and the stitching result is output (a simplified pairwise sketch follows).
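A simplified pairwise version of this pipeline can be written with OpenCV. Note two deliberate simplifications: a single global RANSAC homography stands in for the APAP per-cell warps, and the gradual-in gradual-out blend is a plain linear feather over the left image's width.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch img2 onto img1: SIFT features, ratio test, RANSAC homography, feather blend."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)           # keypoints with principal direction
    k2, d2 = sift.detectAndCompute(g2, None)           # and 128-dim descriptors
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC-screened registration
    pano = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    w = img1.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :, None]        # gradual-in gradual-out weights
    left = pano[:, :w].astype(np.float32)
    pano[:, :w] = (alpha * img1 + (1.0 - alpha) * left).astype(np.uint8)
    return pano
```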
Further, preferably, when decoding the compressed video streams and converting the decoded streams, the network cameras output H.264-compressed video, which is decoded with CUDA to obtain video in YUV422 format; the YUV422 video is converted to a 24-bit RGB888 video stream, the conversion process being expressed by the following formulas,
R = 1.164 × (Y - 16) + 1.596 × (V - 128)
G = 1.164 × (Y - 16) - 0.813 × (V - 128) - 0.392 × (U - 128)
B = 1.164 × (Y - 16) + 2.017 × (U - 128)
where R, G and B are the color values of the red, green and blue channels respectively and Y, U and V are the luma and the two chrominance components of the pixel values. The 8 channels of 24-bit RGB888 video are corrected frame by frame, the network cameras are calibrated to obtain corrected video streams, and the included angle between adjacent cameras is adjusted to 45° (a conversion sketch follows).
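For a packed YUYV (YUV422) frame, the conversion can be sketched in NumPy with the formulas above. The YUYV memory layout assumed here, one Y per pixel with U and V shared by each pixel pair, is a common camera output format, not something the description fixes.

```python
import numpy as np

def yuv422_to_rgb888(yuyv):
    """yuyv: (h, w, 2) uint8 packed YUYV frame, w even -> (h, w, 3) uint8 RGB888."""
    y = yuyv[:, :, 0].astype(np.float32)
    u = yuyv[:, 0::2, 1].astype(np.float32).repeat(2, axis=1)   # U shared per pixel pair
    v = yuyv[:, 1::2, 1].astype(np.float32).repeat(2, axis=1)   # V shared per pixel pair
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.392 * (u - 128)
    b = 1.164 * (y - 16) + 2.017 * (u - 128)
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```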
Further, preferably, when the 8 channels of 24-bit RGB888 video are corrected frame by frame, a checkerboard calibration method yields the camera matrix, each frame image is mapped point to point, and the frame-by-frame correction of the 8 channels produces the corrected video streams (a calibration sketch follows).
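The checkerboard step maps onto OpenCV's standard calibration routine; the pattern size below is illustrative.

```python
import cv2
import numpy as np

def calibrate_camera(images, pattern=(9, 6)):
    """Estimate the camera matrix and distortion coefficients from checkerboard views."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    obj_pts, img_pts, shape = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        shape = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, shape, None, None)
    return K, dist

# frame-by-frame point-to-point correction of each decoded RGB888 frame:
# corrected = cv2.undistort(frame, K, dist)
```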
Further, preferably, when the pulse generator of the intelligent electronic fence is powered on, the transmitting port sends a pulse voltage onto the front-end fence at intervals of 1.5 seconds, each pulse dwelling on the fence for 0.1 second; after completing a loop along the front-end fence, the pulse returns to the receiving port of the host, which receives the fed-back pulse signal. Meanwhile the host monitors the resistance between the two transmitting ends: if damage to the front-end fence causes an open circuit or a short circuit, the receiving port of the pulse host receives no pulse signal or the resistance between the two ends becomes too small, and the host raises an alarm (a supervision-loop sketch follows);
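The fence supervision logic reduces to a small polling loop. In the sketch below, send_pulse, read_return and read_loop_resistance are hypothetical hardware callbacks and the resistance threshold is illustrative; only the 1.5 s period and 0.1 s dwell come from the description.

```python
import time

PULSE_PERIOD_S = 1.5   # interval between pulses (from the description)
PULSE_DWELL_S = 0.1    # time each pulse stays on the fence wire

def monitor_fence(send_pulse, read_return, read_loop_resistance,
                  r_min_ohm=500.0, alarm=print):
    """Poll the fence loop: no return pulse means open circuit, low resistance means short."""
    while True:
        send_pulse(PULSE_DWELL_S)
        if not read_return(timeout_s=PULSE_PERIOD_S):
            alarm("fence alarm: no return pulse (open circuit)")
        elif read_loop_resistance() < r_min_ohm:
            alarm("fence alarm: loop resistance too small (short circuit)")
        time.sleep(PULSE_PERIOD_S)
```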
The intelligent monitoring target tracking device comprises: an extraction unit for extracting the foreground of the acquired monitoring video; a first determination unit for determining a first tracking frame from the extracted foreground; a processing unit for processing the first tracking frame with a preset tracking algorithm to obtain a more stable second tracking frame; a second determination unit for determining the target tracking frame from the first and second tracking frames; and a tracking unit for tracking the monitored object according to the target tracking frame (an OpenCV sketch follows).
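The unit structure maps naturally onto OpenCV primitives: background subtraction as the extraction unit, the largest foreground contour as the first tracking frame, and a correlation tracker as the source of the more stable second frame. KCF is an illustrative choice here; the description only says 'a preset tracking algorithm'.

```python
import cv2

def track_intruder(video_path):
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2()              # extraction unit: foreground
    tracker = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = cv2.threshold(bg.apply(frame), 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if tracker is None and contours:
            box = cv2.boundingRect(max(contours, key=cv2.contourArea))  # first tracking frame
            tracker = cv2.TrackerKCF_create()              # cv2.legacy on some builds
            tracker.init(frame, box)
        elif tracker is not None:
            ok, box = tracker.update(frame)                # second, more stable frame
            if not ok:
                tracker = None                             # fall back to re-detection
    cap.release()
```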
The invention provides a visual intelligent dynamic identification model construction method applied to an electric power operation site with the aims of standardizing site construction, forming an electronic fence without blind spots, and realizing functions such as human behavior recognition and face recognition. The video stitching process must leave the operation area without blind spots or dead zones, so that faces and non-standard construction behavior can be accurately captured and the life and property of operators and citizens at the electric power operation site safeguarded. The multi-video synthesized image must be distinguished from simple lateral or longitudinal image stitching: multi-angle image overlap has to be taken into account so that a 360°, seamless, ghost-free view of the operation area is ultimately formed.
In step S22, massive training samples are needed for neural network training, and collecting human behavior samples manually would cost a great deal of labor. In the prior art, human behavior is collected by placing a picture verification at the web front end: when logging in or browsing a page, users verify themselves by dragging the verification code, so sample resources accumulate rapidly and operating efficiency improves.
According to the invention, the operation-site electronic fence is constructed with a multi-angle video capture and instant synthesis device, so that safety issues such as non-compliant dress within the operation area, non-operators straying onto the site and isolation of live zones are tracked, alarmed and recorded in real time, avoiding safety accidents. In this method, if a non-constructor breaks into the electric power construction site, the acousto-optic linkage device promptly reminds the on-site supervisors while also warning the intruder to leave the site as soon as possible, ensuring a safe construction environment.
Compared with the prior art, the invention has the beneficial effects that:
Compared with prior devices, the visual intelligent dynamic identification model construction method applied to an electric power operation site solves the problem of non-operators entering the site, securing electric power construction. Through the visual intelligent dynamic identification model of the operation site, large-scale, full-scene, real-time monitoring over 360° horizontally and 80° vertically is realized, the blind spots of past monitoring are effectively eliminated, complete image monitoring data are kept for every construction job, a device-level reference is provided for warning against non-operators straying onto the site, and the outdated practice of manual supervision by an on-site supervisor is replaced.
Drawings
FIG. 1 is a flow chart of a method for constructing a visual intelligent dynamic recognition model according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples.
It will be appreciated by those skilled in the art that the following examples illustrate the invention and should not be construed as limiting its scope. Where specific techniques or conditions are not given in the examples, they follow the techniques or conditions described in the literature of the field or the product specification. Materials or equipment whose manufacturer is not identified are conventional products available from commercial sources.
The following detailed description is provided to assist the reader in obtaining a comprehensive understanding of the methods, apparatus, and/or systems described herein. However, various alterations, modifications, and equivalents of the methods, apparatus, and/or systems described herein will be apparent to one of ordinary skill in the art. The progression of the described processing operations is an example; the order of operations and/or the operations themselves are not limited to those set forth herein and may be varied as known in the art, except for operations that must occur in a particular order. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
Furthermore, exemplary embodiments will be described more fully hereinafter with reference to the accompanying drawings. The exemplary embodiments may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the exemplary embodiments to those of ordinary skill in the art.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Reference will now be made in detail to the exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout.
This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the disclosure to those skilled in the art.
Referring to FIG. 1, a visual intelligent dynamic identification model construction method applied to an electric power operation site is provided. The traditional scheme of physically isolating the site with isolation baffles is abandoned; the system automatically identifies non-staff, infers personnel intent through motion capture, issues an early warning when someone is about to enter the operation area, and reminds on-site supervisors in real time.
The method applies the deep-learning human behavior recognition device, the neural-network face recognition device, the video panorama stitching device, the intelligent electronic fence device and the intelligent monitoring target tracking device. The face recognition, human behavior recognition, electronic fence and target tracking devices warn against and report intrusions by non-constructors into the electric power construction area; the neural-network face recognition device accurately captures face information and recognizes and captures non-standard construction behavior, ensuring standard construction by electric power constructors.
First, after a person crosses the boundary of the electric power operation site, the monitoring device starts the face recognition algorithm and uses the face recognition device to determine whether the intruder is a site constructor. If so, the dressing algorithm checks whether the constructor's dress is up to standard; if not, the constructor and the on-site supervisor are prompted. Once the dress is compliant, the constructor's construction method is monitored throughout and each construction step is checked against the construction standard; wherever work is non-standard, the constructor and the on-site supervisor are reminded in real time, helping the constructor correct it promptly and ensuring standard construction across the whole process. After construction is completed, the construction video is uploaded to the cloud or the video data saved locally in time, so that the construction links can be reviewed and analyzed afterwards.
If the face recognition of the monitoring equipment judges that a non-constructor has broken into the electric power construction site, the on-site supervisors are promptly reminded through the acousto-optic linkage device according to the electronic fence algorithm, the human behavior recognition algorithm and the target tracking algorithm, while the intruder is warned to leave the site as soon as possible, ensuring a safe construction environment.
A visual intelligent dynamic identification model construction method applied to an electric power operation site comprises the following steps:
S1, warning against and reporting intrusions by non-constructors into the electric power construction area, using a face recognition device based on a neural network algorithm, a deep-learning human behavior recognition device, an electronic fence device and an intelligent monitoring target tracking device;
S2, accurately capturing face information through the neural-network face recognition device, recognizing and capturing non-standard construction behavior, and ensuring standard construction by electric power constructors;
S3, when a person crosses the construction site boundary, first identifying through the face recognition device whether the intruder is a site constructor; if so, checking through a dressing algorithm whether the constructor's dress is up to standard, prompting the constructor and the on-site supervisor if it is not, and supervising the constructor's construction method throughout once it is;
S4, monitoring whether the construction steps meet the construction standard; if not, reminding constructors and on-site supervisors in real time, helping constructors correct non-standard work promptly, and ensuring standard construction throughout the process;
S5, after construction is completed, uploading the construction video to the cloud or saving the video data locally in time, and carrying out 360° horizontal and 80° vertical panoramic whole-process monitoring of the construction site through a video panorama stitching device and an electronic fence device, so that the construction links can be reviewed and analyzed afterwards.
The face recognition device based on the neural network algorithm comprises the following steps:
Constructing the eigenface space: face detection, facial expression, face identification, expression/gesture analysis and physiological classification follow basic preprocessing, after which the images are loaded into the face library; the first five faces of each subject in the library are loaded as the training set and the last five as the test set. Given N face images in the library, each n×m face image can be rearranged row by row into a row vector of length n·m, giving the matrix of the training sample set; the mean of the training samples, i.e. the average face, is then computed as a row vector, and, to emphasize the differences, the average face is subtracted from each sample so that N difference images are obtained;
Feature extraction: the features with the largest differences are extracted from the face images so that recognition can proceed in an orderly way; to obtain the training-set coordinate coefficients, the training-set face images are projected into the feature subspace, the test-set images are likewise projected into the feature subspace, and this set of coefficients can serve as the basis for face recognition. Face recognition is then performed: features are extracted with the PCA algorithm, the extracted features are classified and recognized, and the nearest neighbour method decides by judging the distance between the sample to be classified and the known samples, i.e. the new sample is compared with the known samples one by one and the class of the closest known sample is taken as the class of the new sample.
When face recognition is carried out, image preprocessing is performed for the face recognition experiment: before the experiment, the images in the face library are preprocessed, and geometric normalization transforms the expression subimages to a uniform size so that the images are unaffected by changes of scale and angle. First, the three feature points, the two eyes and the nose, can be calibrated with the [x, y] = ginput function in MATLAB; the three feature points are selected with the mouse to obtain their coordinate values, finishing with a carriage return. The image is then rotated according to the coordinates of the left and right eyes to keep face orientation consistent, a rectangular feature region can be determined from the facial feature points and the geometric model, and finally scale transformation gives the expression subimages a uniform size, so that the expression features are conveniently extracted.
The human behavior recognition device based on deep learning comprises the following steps:
Building a neural network model containing an LSTM network, the model comprising an embedding layer, an LSTM, a fully-connected layer and a softmax layer: the embedding layer converts the discrete input signals into continuous real vectors; the vectors produced by the embedding layer are fed into the LSTM in time order; the LSTM splices the time-series vectors describing the operation behavior into one high-dimensional vector, which is fed into the fully-connected layer; and the vector reduced in dimension by the fully-connected layer is fed into the softmax layer;
Obtaining massive human behavior samples and machine behavior samples, the human behavior samples serving as positive samples and the machine behavior samples as negative samples; the positive sample set contains no fewer than 5000 samples, of which 80% are selected as training samples and 20% as test samples, and the negative sample set likewise contains no fewer than 5000 samples, of which 80% are training samples and 20% test samples;
Training the built neural network model with the positive and negative samples, the training adopting the forward-backward algorithm; once the accuracy on the test sample set reaches a set threshold, training of the neural network model is considered complete;
The trained neural network model then judges whether the operating subject of the current page is a person or a machine.
The operation behavior is described using the first-order differences (dxi, dyi, dti) of the mouse motion trajectory, where dxi = xi - x(i-1), dyi = yi - y(i-1) and dti = ti - t(i-1); xi is the abscissa of the mouse on the screen, yi its ordinate and ti the time stamp. The training positive samples are the mouse motion trajectories recorded while people browse web pages; the trajectory is conveniently acquired through a front-end web function that returns the position and time of the mouse cursor during dragging in the form (x1, y1, t1), (x2, y2, t2), (x3, y3, t3) … (xn, yn, tn), thereby reflecting, for each small corresponding period of the movement, the lateral and longitudinal moving speed and the lateral and longitudinal displacement of the mouse, i.e. the fine characteristics of the operator's mouse use.
When training the neural network, massive training samples are needed; collecting human behavior samples manually would cost a great deal of labor. In the prior art, human behavior is collected by placing a picture verification at the web front end: when logging in or browsing a page, users verify themselves by dragging the verification code, so sample resources accumulate rapidly and operating efficiency improves. The negative samples are machine-generated in four ways: (a) randomly generating a track length within a set maximum range and randomly generating (dxi, dyi, dti); (b) extracting N tracks, e.g. 2000, from the positive samples, randomly dividing each extracted track into n (e.g. 3-10) sub-segments, and splicing the thousands of resulting sub-segments into new tracks; (c) extracting M tracks from the positive samples, computing each track's total lateral movement distance sum(dxi), total longitudinal movement distance sum(dyi) and total movement time sum(dti), randomly generating new totals sum(dxi)', sum(dyi)' and sum(dti)', and generating the description parameters of a new motion track by scaling each first-order difference by the ratio of the new total to the original total,
where dxi', dyi' and dti' are respectively the first-order differences of the new track's lateral coordinate, longitudinal coordinate and time; and (d) extracting K tracks from the positive samples and applying a random perturbation of [-0.5, 0.5] times to each of dxi, dyi and dti to obtain the description parameters of the new motion track.
Compared with directly and randomly generated samples, negative samples generated in modes (a) to (d) fully incorporate the characteristics of the positive samples and imitate human behavior more closely, so a neural network trained with them has stronger discrimination. 2500 samples generated in modes (a) to (d) are selected to compose the negative sample set; compared with auxiliary samples generated by a single mode, a negative sample set covering samples from all 4 generation modes has a wider coverage.
In addition, the video panorama stitching device used in the application builds a panoramic space from stitched live-action images, combining multiple images into one large-scale image or a 360° panorama; the procedure comprises camera calibration, correction of sensor image distortion, projective transformation of the images, selection of matching points, panorama stitching, and balancing of brightness and color. When the video panorama stitching device is applied, an RJ45 interface connects the video outputs of 8 network cameras, the RTSP video streams of the 8 cameras are obtained through the ONVIF protocol, and the multithreading module of the Qt platform opens 9 sub-threads: 8 of them collect the video of one network camera each, while the remaining sub-thread fuses the video images of the 8 sub-threads and displays the fused result. The compressed video streams are decoded, the decoded streams converted, and the converted video corrected frame by frame to obtain corrected images; the corrected images are converted to greyscale images, from which the SIFT algorithm extracts feature points together with their principal directions and descriptors; the RANSAC algorithm screens the extracted feature points, and the APAP algorithm determines the homography matrix H for image registration; the RGB video frames are then stitched, every pair of adjacent images is fused with gradual-in gradual-out blending, and the stitching result is output. When the compressed video streams are decoded and converted, the network cameras output H.264-compressed video, which is decoded with CUDA to obtain video in YUV422 format; the YUV422 video is converted to a 24-bit RGB888 video stream, the 8 channels of 24-bit RGB888 video are corrected frame by frame, the network cameras are calibrated to obtain corrected video streams, and the included angle between adjacent cameras is adjusted to 45°. For the frame-by-frame correction, a checkerboard calibration method yields the camera matrix and each frame image is mapped point to point. The conversion process is expressed by the following formulas:
R = 1.164 × (Y - 16) + 1.596 × (V - 128)
G = 1.164 × (Y - 16) - 0.813 × (V - 128) - 0.392 × (U - 128)
B = 1.164 × (Y - 16) + 2.017 × (U - 128)
where R, G and B are the color values of the red, green and blue channels respectively and Y, U and V are the luma and the two chrominance components of the pixel values.
The pulse generator of the intelligent electronic fence is powered on and the transmitting port sends a pulse voltage onto the front-end fence at intervals of 1.5 seconds, each pulse dwelling on the fence for 0.1 second; after completing a loop along the front-end fence, the pulse returns to the receiving port of the host, which receives the fed-back pulse signal. Meanwhile the host also monitors the resistance between the two transmitting ends: if damage to the front-end fence causes an open circuit or a short circuit, the receiving port of the pulse host receives no pulse signal or the resistance between the two ends becomes too small, and the host raises an alarm.
The intelligent monitoring target tracking device is provided with an extraction unit for extracting the foreground of the acquired monitoring video; a first determination unit for determining a first tracking frame from the extracted foreground; a processing unit for processing the first tracking frame with a preset tracking algorithm to obtain a more stable second tracking frame; a second determination unit for determining the target tracking frame from the first and second tracking frames; and a tracking unit for tracking the monitored object according to the target tracking frame. In step S5, if a non-constructor breaks into the electric power construction site, the on-site supervisors are promptly reminded through the acousto-optic linkage device while the intruder is warned to leave the site as soon as possible, ensuring a safe construction environment.
When a person crosses the construction site boundary, face recognition first identifies whether the intruder is a site constructor. If the intruder is a site constructor for electric power operation, the dressing algorithm checks whether the constructor's dress is up to standard; if not, the constructor and the on-site supervisor are prompted. Once the dress is compliant, the constructor's construction method is monitored throughout and each construction step is checked against the construction standard; wherever work is non-standard, the constructor and the on-site supervisor are reminded in real time, helping the constructor correct it promptly and guaranteeing the whole construction process. After construction is completed, the construction video is uploaded to the cloud or the video data saved locally in time, so that the construction links can be reviewed and analyzed afterwards. If a non-constructor breaks into the electric power construction site, the on-site supervisors are promptly reminded through the acousto-optic linkage device while the intruder is prompted to leave the site as soon as possible, ensuring a safe construction environment.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions above merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (5)

1. The visual intelligent dynamic identification model construction method applied to the electric power operation site is characterized by comprising the following steps of:
S1, warning against and reporting intrusions by non-constructors into the electric power construction area, using a face recognition device based on a neural network algorithm, a deep-learning human behavior recognition device, an electronic fence device and an intelligent monitoring target tracking device;
S2, accurately capturing face information through the neural-network face recognition device, recognizing and capturing non-standard construction behavior, and ensuring standard construction by electric power constructors;
S3, when a person crosses the construction site boundary, first identifying through the face recognition device whether the intruder is a site constructor; if so, checking through a dressing algorithm whether the constructor's dress is up to standard, prompting the constructor and the on-site supervisor if it is not, and supervising the constructor's construction method throughout once it is;
S4, monitoring whether the construction steps meet the construction standard; if not, reminding constructors and on-site supervisors in real time, helping constructors correct non-standard work promptly, and ensuring standard construction throughout the process;
S5, after construction is completed, uploading the construction video to the cloud or saving the video data locally in time, and carrying out 360° horizontal and 80° vertical panoramic whole-process monitoring of the construction site through a video panorama stitching device and an electronic fence device, so that the construction links can be reviewed and analyzed afterwards;
the human behavior recognition device based on deep learning comprises the following steps:
S21, constructing a neural network model containing an LSTM network, the model comprising an embedding layer, an LSTM, a fully-connected layer and a softmax layer, wherein the embedding layer converts the discrete input signals into continuous real vectors, the vectors produced by the embedding layer are fed into the LSTM in time order, the LSTM splices the time-series vectors describing the operation behavior into one high-dimensional vector which is fed into the fully-connected layer, and the vector reduced in dimension by the fully-connected layer is fed into the softmax layer;
S22, acquiring massive human behavior samples and machine behavior samples, the human behavior samples serving as positive samples and the machine behavior samples as negative samples, wherein the positive sample set contains no fewer than 5000 samples, of which 80% are selected as training samples and 20% as test samples, and the negative sample set contains no fewer than 5000 samples, of which 80% are selected as training samples and 20% as test samples;
S23, training the built neural network model with the positive and negative samples, the training adopting the forward-backward algorithm, wherein training of the neural network model is complete once the accuracy on the test sample set reaches a set threshold;
S24, judging through the trained neural network model whether the operation subject of the current page is a person or a machine;
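A minimal PyTorch sketch of the S21 architecture follows. The vocabulary size, embedding width, hidden size, and the use of the final LSTM state as the spliced high-dimensional vector are all assumptions; the claim fixes only the layer order embedding, LSTM, fully connected, softmax.

```python
import torch
import torch.nn as nn

class BehaviorNet(nn.Module):
    """Embedding -> LSTM -> fully connected -> softmax, as in step S21.
    All sizes (vocab 256, dims 32/64, 2 classes) are illustrative."""
    def __init__(self, vocab=256, emb_dim=32, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)  # discrete signal -> real vector
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)       # dimension reduction
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):                          # x: (batch, seq) integer codes
        e = self.embed(x)                          # (batch, seq, emb_dim)
        out, _ = self.lstm(e)                      # processed in time order
        h = out[:, -1, :]                          # final state summarizes the track
        return self.softmax(self.fc(h))            # P(person), P(machine)

net = BehaviorNet()
codes = torch.randint(0, 256, (4, 50))             # 4 tracks of 50 discretized steps
print(net(codes).shape)                            # torch.Size([4, 2])
```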
in step S22, first-order difference values of the mouse motion trajectory information are used to describe the operation behavior: dx_i = x_i - x_(i-1), dy_i = y_i - y_(i-1), dt_i = t_i - t_(i-1), where x_i is the abscissa of the mouse on the screen, y_i is the ordinate, and t_i is the time information; the training positive samples come from mouse motion trajectories recorded while people browse web pages; the trajectory is acquired through a web front-end function that, while the mouse is being dragged, returns the cursor's screen position and time in the form (x1, y1, t1), (x2, y2, t2), (x3, y3, t3), ..., (xn, yn, tn), so as to reflect the lateral and longitudinal movement speed and the lateral and longitudinal displacement of the mouse within each corresponding small time interval of the motion;
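The first-order differencing itself reduces to a one-liner; the sample track values below are illustrative.

```python
def differences(track):
    """track: list of (x, y, t) samples from the front-end handler;
    returns the (dx_i, dy_i, dt_i) description of the operation behavior."""
    return [(x1 - x0, y1 - y0, t1 - t0)
            for (x0, y0, t0), (x1, y1, t1) in zip(track, track[1:])]

raw = [(100, 200, 0.00), (103, 201, 0.02), (109, 203, 0.05)]
print(differences(raw))   # [(3, 1, 0.02), (6, 2, 0.03)] up to float rounding
```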
the negative samples are generated by a machine in a manner that includes randomly generating a track length within a set maximum value range and randomly generating (dx_i, dy_i, dt_i);
extracting N tracks from the positive samples, randomly dividing the extracted tracks into N subsections, and randomly combining the thousands of subsections thus formed into new tracks; extracting M tracks from the positive samples and calculating each track's total lateral movement distance sum(dx_i), total longitudinal movement distance sum(dy_i) and total movement time sum(dt_i); randomly generating a new total lateral movement distance sum(dx_i)', total longitudinal movement distance sum(dy_i)' and total movement time sum(dt_i)', and generating from them the description parameters of a new motion trajectory, where dx_i', dy_i' and dt_i' are respectively the first-order difference values of the lateral coordinate, the longitudinal coordinate and the time of the new track; and extracting K tracks from the positive samples and applying a random perturbation of [-0.5, 0.5] times to dx_i, dy_i and dt_i respectively to obtain the description parameters of new motion trajectories;
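The rescaling formula itself is not reproduced in the text; the sketch below assumes the proportional form dx_i' = dx_i * sum(dx_i)'/sum(dx_i) (likewise for dy and dt), which matches the stated totals but is an assumption, plus the [-0.5, 0.5]-times perturbation variant described above.

```python
import random

def rescale(track, new_sx, new_sy, new_st):
    """Assumed proportional rescaling: dx_i' = dx_i * sum(dx)' / sum(dx),
    and likewise for dy_i and dt_i (the patent's own formula is not shown)."""
    sx = sum(dx for dx, _, _ in track)
    sy = sum(dy for _, dy, _ in track)
    st = sum(dt for _, _, dt in track)
    return [(dx * new_sx / sx, dy * new_sy / sy, dt * new_st / st)
            for dx, dy, dt in track]

def perturb(track):
    """Random disturbance of [-0.5, 0.5] times applied to each increment."""
    return [(dx * (1 + random.uniform(-0.5, 0.5)),
             dy * (1 + random.uniform(-0.5, 0.5)),
             dt * (1 + random.uniform(-0.5, 0.5)))
            for dx, dy, dt in track]

track = [(3, 1, 0.02), (6, 2, 0.03)]
print(rescale(track, 18.0, 6.0, 0.10))  # totals become (18, 6, 0.10)
print(perturb(track))
```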
the face recognition device based on the neural network algorithm performs the following steps:
S11, constructing the eigenface space: after preprocessing, face images for face detection, facial expression, face identification, expression/pose analysis and physiological classification are loaded into a face library; the first five faces of each subject in the library are loaded as the training set and the last five as the test set; given N face images in the face library, each n×m face image is rearranged row by row into a row vector of length nm, and these vectors form the training sample set; the mean of the training samples, namely the average face, is then computed as a row vector, and subtracting the average face from each sample highlights the individual differences and yields N difference images;
S12, feature extraction: the most discriminative features are extracted from the face images for the recognition work; the training set face images are projected into the feature subspace, the test set images are likewise projected into the feature subspace, and the resulting sets of coefficients serve as the basis for face recognition;
S13, face recognition: feature extraction is performed with the PCA algorithm, and the extracted features are classified and recognized; the nearest-neighbor method judges the distance between the sample to be classified and the known samples, the new sample is compared with the known samples one by one, and the class of the nearest known sample is taken as the class of the new sample, as sketched below;
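The S11-S13 pipeline is the classical eigenface method; the sketch below uses illustrative dimensions (32×32 images, 8 subjects with 5 training faces each, 20 retained components) and random data in place of a real face library.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((40, 32 * 32))        # 40 flattened n*m faces (stand-in data)
labels = np.repeat(np.arange(8), 5)      # 8 subjects, 5 training faces each

mean_face = train.mean(axis=0)           # the average face (S11)
diff = train - mean_face                 # N difference images
_, _, vt = np.linalg.svd(diff, full_matrices=False)
eigenfaces = vt[:20]                     # feature subspace (20 components assumed)

def project(img):
    """Project into the feature subspace; the coefficient sets are the
    basis of recognition (S12)."""
    return (img - mean_face) @ eigenfaces.T

train_coeffs = project(train)

def recognize(img):
    """Nearest-neighbor classification in the subspace (S13)."""
    d = np.linalg.norm(train_coeffs - project(img), axis=1)
    return labels[np.argmin(d)]

print(recognize(train[7]))               # -> 1: the 8th face belongs to subject 1
```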
when face recognition is carried out in step S13, the images in the face library are preprocessed before the face recognition experiment; a geometric normalization method ensures that the images are unaffected by changes of scale and angle and transforms the expression subimages to a uniform size; first, the feature points are calibrated: the three feature points of the two eyes and the nose are calibrated in MATLAB with the [x, y] = ginput function, manual calibration being chosen to obtain the coordinate values of the three feature points; the image is then rotated according to the coordinate values of the left and right eyes to keep the faces consistent; a rectangular feature region is determined from the facial feature points and the geometric model; and finally the expression subimages are unified in size by scale transformation;
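A sketch of the eye-based rotation inside the geometric normalization, using OpenCV in place of the MATLAB ginput workflow; the eye coordinates, which ginput would supply manually, are assumed given here.

```python
import cv2
import numpy as np

def align_by_eyes(img, left_eye, right_eye):
    """Rotate the image so the line through the two eye points is horizontal,
    keeping the faces geometrically consistent."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))  # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)       # rotate about eye midpoint
    m = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(img, m, (img.shape[1], img.shape[0]))

face = np.zeros((128, 128, 3), dtype=np.uint8)        # stand-in image
aligned = align_by_eyes(face, left_eye=(40, 66), right_eye=(88, 62))
print(aligned.shape)                                  # (128, 128, 3)
```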
the pulse generator of the intelligent electronic fence is powered on and its transmitting port sends pulse voltages to the front-end fence at an interval of 1.5 seconds, each pulse dwelling on the fence for 0.1 seconds; once a loop is formed on the front-end fence, the pulse returns to the receiving port of the host, which receives the fed-back pulse signal; meanwhile, the host measures the resistance between the two transmitting ends, and if damage to the front-end fence causes an open circuit or a short circuit, the receiving port of the pulse host receives no pulse signal or the resistance between the two transmitting ends becomes too small, and the host raises an alarm;
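An illustrative supervision loop for the pulse host; the injected callables stand in for the transmitter, receiver and resistance measurement, and the 100-ohm short-circuit threshold is an assumption, since the claim only says the resistance becomes too small.

```python
import time

PULSE_INTERVAL_S = 1.5     # interval between pulses, per the claim
PULSE_DWELL_S = 0.1        # dwell time of the pulse on the fence
MIN_LOOP_OHMS = 100.0      # assumed short-circuit threshold

def supervise(send_pulse, pulse_received, loop_resistance, alarm, cycles=3):
    for _ in range(cycles):
        send_pulse(PULSE_DWELL_S)
        if not pulse_received():                 # open circuit: fence cut
            alarm("no pulse feedback: open circuit on the front-end fence")
        elif loop_resistance() < MIN_LOOP_OHMS:  # short circuit
            alarm("loop resistance too low: short circuit on the front-end fence")
        time.sleep(PULSE_INTERVAL_S)

# Stub hardware: pulses always return, loop resistance healthy.
supervise(lambda dwell: None, lambda: True, lambda: 500.0, print)
```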
the intelligent monitoring target tracking device comprises: an extraction unit for extracting the foreground of the acquired monitoring video; a first determination unit for determining a first tracking frame from the extracted foreground; a processing unit for processing the first tracking frame with a preset tracking algorithm to obtain a more stable second tracking frame; a second determination unit for determining a target tracking frame from the first tracking frame and the second tracking frame; and a tracking unit for tracking the monitored object according to the target tracking frame.
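A sketch of the unit chain with assumed concrete choices: MOG2 background subtraction for the extraction unit and a simple box average for the second determination unit; the patent names no specific foreground, tracking or fusion algorithms.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()     # extraction unit

def detection_box(frame):
    """First determination unit: bounding box of the largest foreground blob."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def fuse(det, trk):
    """Second determination unit: combine the detection box with the box from
    the preset tracking algorithm (here an assumed coordinate average)."""
    if det is None or trk is None:
        return det or trk
    return tuple(int((a + b) / 2) for a, b in zip(det, trk))

frame = np.zeros((240, 320, 3), dtype=np.uint8)       # stand-in video frame
print(fuse(detection_box(frame), (10, 10, 40, 80)))   # target tracking frame
```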
2. The visual intelligent dynamic identification model construction method applied to an electric power operation site according to claim 1, wherein the video panorama stitching device adopted in step S5 uses image stitching of live-action images to form a panorama space, stitching a plurality of images into a large-scale image or a 360-degree panorama, and the process comprises camera calibration, sensor image distortion correction, image projection transformation, matching point selection, panoramic image stitching, and balancing of brightness and color.
3. The visual intelligent dynamic identification model construction method applied to an electric power operation site according to claim 2, wherein, when the video panorama stitching device is applied, an RJ45 interface connects the video output ends of 8 network cameras, the RTSP video streams of the 8 network cameras are obtained through the ONVIF protocol, and 9 sub-threads are opened with the multithreading module of the Qt platform, 8 of which respectively collect the video of the 8 network cameras while the remaining one fuses the video images of the 8 collection threads and displays the fused result; the compressed video streams are decoded, the decoded video streams are converted, and the converted video is corrected frame by frame to obtain corrected images; the corrected images are converted into grayscale images, and feature points are extracted from the grayscale images with the SIFT algorithm to obtain the principal direction and descriptor of each feature point; the feature points are screened with the RANSAC algorithm, and the homography transformation matrix H of image registration is determined with the APAP algorithm; and the RGB video frames are stitched, every two adjacent images being fused in a gradual-in gradual-out manner, and the stitching result is output (a registration sketch follows).
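A sketch of the registration core with OpenCV: grayscale conversion, SIFT feature points with descriptors, and RANSAC screening. The Lowe ratio test is an added convention, and a single global homography from findHomography stands in for the claimed APAP local warping model.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b):
    """Return the homography H mapping img_a onto img_b."""
    g_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    g_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(g_a, None)    # keypoints + descriptors
    kp_b, des_b = sift.detectAndCompute(g_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test (assumed)
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    h, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC screening
    return h
```

The gradual-in gradual-out fusion of adjacent image pairs then amounts to ramping the blend weight linearly from 1 to 0 across the overlap region.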
4. The visual intelligent dynamic identification model construction method applied to an electric power operation site according to claim 3, wherein, when decoding the compressed video stream and converting the decoded video stream, the network camera outputs an H.264-compressed format and the compressed video stream is decoded by CUDA, yielding decoded YUV422-format video; the YUV422 video is converted into a 24-bit RGB888 video stream, the conversion being expressed by the following formulas,
R = 1.164 × (Y - 16) + 1.596 × (U - 128)
G = 1.164 × (Y - 16) - 0.813 × (U - 128) - 0.392 × (V - 128)
B = 1.164 × (Y - 16) + 2.017 × (V - 128)
where R, G and B respectively represent the color values of the red, green and blue channels, Y represents the luma of the pixel and U and V its two chrominance components; the 8 channels of 24-bit RGB888 video streams are then corrected frame by frame, the network cameras are calibrated to obtain corrected video streams, and the included angle between adjacent network cameras is adjusted to 45 degrees.
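A NumPy sketch of the conversion, using the patent's own coefficients and letter pairing (it pairs 1.596 with U and 2.017 with V, the reverse of the usual BT.601 naming); the chroma planes are assumed already upsampled to full resolution, which YUV422 requires before per-pixel conversion.

```python
import numpy as np

def yuv_to_rgb888(y, u, v):
    """Apply the claim-4 formulas per pixel and clip to 8-bit range."""
    y = y.astype(np.float32) - 16.0
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = 1.164 * y + 1.596 * u
    g = 1.164 * y - 0.813 * u - 0.392 * v
    b = 1.164 * y + 2.017 * v
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

y = np.full((2, 2), 180, dtype=np.uint8)   # illustrative uniform planes
u = np.full((2, 2), 140, dtype=np.uint8)
v = np.full((2, 2), 120, dtype=np.uint8)
print(yuv_to_rgb888(y, u, v))              # 24-bit RGB888 pixels
```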
5. The visual intelligent dynamic identification model construction method applied to an electric power operation site according to claim 4, wherein, when the 8 channels of 24-bit RGB888 video streams undergo frame-by-frame image correction, a checkerboard calibration method is used to obtain the camera matrix, a point-to-point mapping is applied to each frame of image, and the corrected video streams are obtained.
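A sketch of the checkerboard calibration with OpenCV; the 9×6 inner-corner pattern and the capture file names are illustrative assumptions.

```python
import cv2
import numpy as np

PATTERN = (9, 6)                                  # inner corners per row/column
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in ["board_01.png", "board_02.png"]:     # hypothetical board captures
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue                                  # skip missing files
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if img_points:
    _, cam_matrix, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    # each frame is then mapped point-to-point to its corrected position:
    # corrected = cv2.undistort(frame, cam_matrix, dist)
```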
CN202011288051.0A 2020-11-17 2020-11-17 Visual intelligent dynamic identification model construction method applied to electric power operation site Active CN112487891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011288051.0A CN112487891B (en) 2020-11-17 2020-11-17 Visual intelligent dynamic identification model construction method applied to electric power operation site

Publications (2)

Publication Number Publication Date
CN112487891A (en) 2021-03-12
CN112487891B (en) 2023-07-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant