CN112364722A - Nuclear power operator monitoring processing method and device and computer equipment - Google Patents

Nuclear power operator monitoring processing method and device and computer equipment

Info

Publication number
CN112364722A
CN112364722A
Authority
CN
China
Prior art keywords
image
operator
target face
information
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011148301.0A
Other languages
Chinese (zh)
Inventor
Liu Bin (刘斌)
Lin Wei (林伟)
Liu Lijun (刘利君)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Original Assignee
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China General Nuclear Power Corp, CGN Power Co Ltd, Daya Bay Nuclear Power Operations and Management Co Ltd, Lingdong Nuclear Power Co Ltd, Guangdong Nuclear Power Joint Venture Co Ltd, Lingao Nuclear Power Co Ltd filed Critical China General Nuclear Power Corp
Priority to CN202011148301.0A priority Critical patent/CN112364722A/en
Publication of CN112364722A publication Critical patent/CN112364722A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Data Mining & Analysis (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Monitoring And Testing Of Nuclear Reactors (AREA)

Abstract

The application relates to a nuclear power operator monitoring and processing method, apparatus, and computer device in the technical field of nuclear power informatization. The method comprises the following steps: acquiring image data of a plurality of operation areas, and extracting a face image from the image data; extracting face feature information from the face image through a recognition model; comparing the face feature information with preset face features to determine a target face feature corresponding to the face feature information; acquiring an operator identifier corresponding to the target face feature, and extracting from the image data a target face image corresponding to the operator identifier together with the position information and shooting time corresponding to the target face image; acquiring a regional map of the operation area corresponding to the operator identifier, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map; and generating a behavior track image corresponding to the operator identifier according to the plurality of position points and the shooting times. By adopting the method, the monitoring and processing efficiency for nuclear power operators can be improved.

Description

Nuclear power operator monitoring processing method and device and computer equipment
Technical Field
The present application relates to the field of nuclear power information technology, and in particular, to a method and an apparatus for monitoring and processing nuclear power operators, a computer device, and a storage medium.
Background
Nuclear power plants generate electricity from the thermal energy released by nuclear fuel in nuclear reactors. Owing to the particular nature of nuclear power plants, basic plant safety must be ensured throughout nuclear power operation. Nuclear power operation refers to the work, business, and production tasks undertaken by employees in a nuclear power plant. During nuclear power operation, the movement tracks of nuclear power operators need to be known so that, when an abnormal condition occurs, the responsible personnel can be determined in time. However, the traditional approach determines responsible personnel by manually reviewing image data of the operation areas; because manual review is limited in time and effort, the monitoring efficiency for nuclear power operators is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a nuclear power operator monitoring processing method, a nuclear power operator monitoring processing apparatus, a computer device, and a storage medium, which can improve monitoring processing efficiency of a nuclear power operator.
A nuclear power operator monitoring and processing method comprises the following steps:
in the nuclear power operation process, acquiring image data of a plurality of operation areas, and extracting face images from the image data;
calling an identification model, and extracting the features of the face image through the identification model to obtain face feature information;
comparing the face feature information with preset face features, and determining target face features corresponding to the face feature information;
acquiring an operator identification corresponding to the target face feature, and extracting a target face image corresponding to the operator identification, position information corresponding to the target face image and shooting time from the image data;
acquiring a regional map of an operation region corresponding to the operator identification, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map;
and generating a behavior track image corresponding to the operator identification according to the plurality of position points and the shooting time.
In one embodiment, the generating of the behavior trace image corresponding to the operator identifier according to the plurality of position points and the shooting time includes:
determining the behavior time of each position point according to the shooting time;
and connecting a plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identification.
In one embodiment, the marking on the area map according to the position information to obtain a plurality of position points of the target face image in the area map comprises:
converting the position information to obtain a converted position point;
and determining corresponding mark points in the regional map according to the converted position points, and marking the mark points in the regional map to obtain a plurality of position points of the target face image in the regional map.
In one embodiment, the method further comprises:
when a plurality of operator identifications exist, determining a position point marking mode corresponding to each operator identification;
and marking on the regional map according to the position information of the target face image corresponding to each operator identification and the position point marking mode corresponding to the operator identification to obtain a plurality of position points of the target face image corresponding to each operator identification in the regional map.
In one embodiment, after the generating of the behavior trace image corresponding to the operator identifier, the method further includes:
acquiring a behavior trace image acquisition request, wherein the behavior trace image acquisition request carries an area identifier and a time parameter of a target operation area;
extracting a behavior track image corresponding to the behavior track image acquisition request according to the area identifier and the time parameter;
and determining the identification of the target operator according to the extracted behavior track image.
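The retrieval embodiment above can be sketched as a simple query over stored trace records. The record schema (`TraceRecord` with an operator identifier, area identifier, covered time span, and image path) is an assumption for illustration, not a structure defined by the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TraceRecord:
    # Hypothetical stored record linking a behavior track image to its metadata.
    operator_id: str
    area_id: str
    start: float        # epoch seconds covered by the trace
    end: float
    image_path: str

def query_traces(records: List[TraceRecord], area_id: str,
                 t_from: float, t_to: float) -> List[TraceRecord]:
    """Extract the traces of the target operation area that overlap the requested window."""
    return [r for r in records
            if r.area_id == area_id and r.start < t_to and r.end > t_from]

def target_operators(records: List[TraceRecord], area_id: str,
                     t_from: float, t_to: float) -> List[str]:
    """Determine the target operator identifiers from the extracted behavior track images."""
    return sorted({r.operator_id for r in query_traces(records, area_id, t_from, t_to)})
```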
A nuclear power worker monitoring and processing device, the device comprising:
the communication module is used for acquiring image data of a plurality of operation areas in the nuclear power operation process;
the image extraction module is used for extracting a face image from the image data;
the feature extraction module is used for calling an identification model and extracting features of the face image through the identification model to obtain face feature information;
the characteristic comparison module is used for comparing the face characteristic information with preset face characteristics and determining target face characteristics corresponding to the face characteristic information;
the information extraction module is used for acquiring an operator identification corresponding to the target face feature, and extracting a target face image corresponding to the operator identification, position information corresponding to the target face image and shooting time from the image data;
the map marking module is used for acquiring a regional map of an operation region corresponding to the operator identification, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map;
and the track generation module is used for generating a behavior track image corresponding to the operator identification according to the plurality of position points and the shooting time.
In one embodiment, the trajectory generation module is further configured to determine behavior time of each location point according to the shooting time; and connecting a plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identification.
In one embodiment, the map marking module is further configured to convert the location information to obtain a converted location point; and determining corresponding mark points in the regional map according to the converted position points, and marking the mark points in the regional map to obtain a plurality of position points of the target face image in the regional map.
A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, the processor implementing the steps in the various method embodiments described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the respective method embodiment described above.
According to the nuclear power operator monitoring processing method, apparatus, computer device, and storage medium, image data of a plurality of operation areas are obtained during nuclear power operation, and face images are extracted from the image data. The image data of the operation areas can be collected without the monitored personnel being aware of it, avoiding any disturbance to nuclear power operators. The recognition model is called, and feature extraction is performed on the face image through the recognition model to obtain face feature information. The face feature information is compared with preset face features to determine the target face feature corresponding to the face feature information, so that the identity of a nuclear power operator appearing in an operation area can be identified quickly. A target face image corresponding to the target face feature, together with the position information and shooting time corresponding to the target face image, is extracted from the image data. A regional map of the operation area corresponding to the target face image is then acquired, and the regional map is marked according to the position information to obtain a plurality of position points of the target face image in the regional map. A behavior track image of the operator corresponding to the target face image is generated according to the plurality of position points and the shooting times. Position points can thus be marked automatically in the corresponding regional map according to the position information and shooting time of the target face image, realizing automatic recognition of image data, generating the corresponding behavior track image, and effectively improving the monitoring efficiency of nuclear power operators.
Drawings
FIG. 1 is an environmental diagram illustrating an exemplary embodiment of a nuclear power worker monitoring process;
FIG. 2 is a schematic flow chart illustrating a nuclear power worker monitoring method according to an embodiment;
FIG. 3 is a flowchart illustrating steps of generating an image of a behavior trace corresponding to an operator identifier according to a plurality of location points and a shooting time in an embodiment;
FIG. 4 is a block diagram of a nuclear power worker monitoring and processing device according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The nuclear power operator monitoring and processing method can be applied to the application environment shown in fig. 1, in which the monitoring device 102 and the server 104 communicate over a network. During nuclear power operation, the monitoring device 102 collects image data of a plurality of operation areas and sends it to the server 104, and the server 104 extracts face images from the image data. The server 104 calls the recognition model and performs feature extraction on each face image through the recognition model to obtain face feature information. The server 104 compares the face feature information with preset face features to determine the target face feature corresponding to the face feature information. The server 104 obtains the operator identifier corresponding to the target face feature, extracts the target face image corresponding to the operator identifier from the image data, and obtains the position information and shooting time corresponding to the target face image. The server 104 acquires the regional map of the operation area corresponding to the operator identifier and marks the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map. The server 104 then generates a behavior track image corresponding to the operator identifier according to the plurality of position points and the shooting times. The monitoring device 102 may be, but is not limited to, any of various video capture devices and image capture devices. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in fig. 2, a nuclear power worker monitoring processing method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 202, in the nuclear power operation process, image data of a plurality of operation areas are obtained, and a face image is extracted from the image data.
Nuclear power operation refers to work, business, production tasks, and the like undertaken by employees in a nuclear power plant. The operation area refers to a plurality of operation sites which are divided in advance in the nuclear power plant and are used for nuclear power operation. Various types of monitoring equipment may be pre-installed in various operating areas of a nuclear power plant. For example, fixed location monitoring devices as well as movable monitoring devices may be included.
During nuclear power operation, the monitoring devices collect image data of their corresponding operation areas and transmit it to the server. The image data may include person images, captured by the monitoring devices of people appearing in the operation areas; these people may include nuclear power operators. Specifically, when a person appears in the corresponding operation area, the monitoring device records the corresponding person image and takes a snapshot of the person. When the monitoring device captures a person image, the image data corresponding to that person image is uploaded to the server, and the server directly obtains the image data of the snapshot uploaded by the monitoring device. By installing monitoring devices in the operation areas in advance, person images can be collected without affecting the nuclear power operators in those areas; image collection and operator monitoring are imperceptible to the monitored users, improving the user experience.
When the server obtains the image data, person images can be acquired from it; each frame of image data may correspond to one frame of person image. The server monitors the corresponding operators according to the person images. Specifically, the server can obtain multiple frames of person images from the image data and invoke multiple threads to process them in parallel, improving the monitoring efficiency for nuclear power operators. The server then extracts a face image, i.e., the image corresponding to the face portion of the person image, from the person image. When a person faces away from the monitoring device, the device can still capture a person image, but that image contains no face. The server therefore needs to inspect each person image: when a person image contains no face image, the server may mark the corresponding image data as invalid and discard it; when the person image does contain a face image, the server extracts the corresponding face image from it. Specifically, the server may apply a preset face detection algorithm to the person image to determine whether a face image is present. The preset face detection algorithm may be any of a variety of detection algorithms, for example a neural network detection algorithm or a template matching algorithm.
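The validity filtering described above can be sketched generically. The detector itself is left as a pluggable callable, since the patent allows any face detection algorithm (neural network, template matching, etc.); the frame structure and function names here are illustrative assumptions:

```python
from typing import Callable, Dict, Iterable, List, Tuple

# Hypothetical frame record: e.g. {"image": ..., "position": ..., "time": ...}
Frame = Dict[str, object]

def filter_face_frames(frames: Iterable[Frame],
                       has_face: Callable[[Frame], bool]
                       ) -> Tuple[List[Frame], List[Frame]]:
    """Split person-image frames into valid frames (a face is present) and
    invalid frames (no face, e.g. the person faces away from the camera).

    `has_face` stands in for whatever preset face detection algorithm is used;
    it is an assumption, not a fixed API.
    """
    valid: List[Frame] = []
    invalid: List[Frame] = []
    for frame in frames:
        (valid if has_face(frame) else invalid).append(frame)
    return valid, invalid
```

Invalid frames would then be removed, and only the valid ones passed on to feature extraction.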
And 204, calling an identification model, and extracting the features of the face image through the identification model to obtain face feature information.
The server is pre-configured with an identification model, and the identification model is obtained by training a large amount of sample data, wherein the sample data refers to a face image subjected to face key point labeling. For example, the recognition model may be a convolutional neural network model. The server inputs the face image into the recognition model, positions the face image through the recognition model, and determines an image area corresponding to the face target in the face image. The image area may be the corresponding location and range of the human face target. And extracting the features of the face image through the recognition model to obtain face feature information. The face feature information may include feature information of a plurality of face key points, such as feature location information of a left eye, a right eye, a nose, a left mouth corner, and a right mouth corner.
And step 206, comparing the face feature information with preset face features, and determining a target face feature corresponding to the face feature information.
The preset face features may include pre-stored face images of nuclear power operators, the face features of each such face image, and identity information such as the operator identifier of each nuclear power operator. The server compares the extracted face feature information with the preset face features to obtain a plurality of feature similarities. Each feature similarity measures how closely the face image matches a pre-stored face image of a nuclear power operator.
The server compares each feature similarity with a threshold to judge whether it exceeds the threshold; for example, the threshold may be set to 90%. The server selects the preset face feature whose feature similarity exceeds the threshold as the target face feature, thereby determining the identity of the nuclear power operator appearing in the operation area. When a feature similarity is smaller than or equal to the threshold, the corresponding face feature is not that of a nuclear power operator appearing in the operation area.
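The comparison step can be sketched as follows. The patent does not name a similarity measure, so this sketch assumes cosine similarity over feature vectors with the 90% threshold mentioned above; `match_face` and the vector representation are illustrative assumptions:

```python
import math
from typing import Dict, List, Optional

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_face(features: List[float],
               preset: Dict[str, List[float]],
               threshold: float = 0.9) -> Optional[str]:
    """Return the operator identifier whose preset face feature best matches
    the extracted features, provided the similarity exceeds the threshold;
    otherwise return None (no known operator matched)."""
    best_id: Optional[str] = None
    best_sim = threshold
    for op_id, vec in preset.items():
        sim = cosine_similarity(features, vec)
        if sim > best_sim:
            best_id, best_sim = op_id, sim
    return best_id
```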
And 208, acquiring an operator identification corresponding to the target face feature, and extracting a target face image corresponding to the operator identification, position information corresponding to the target face image and shooting time from the image data.
After the target face feature is determined, the server obtains the operator identifier corresponding to it, thereby determining the operator appearing in the image data. The operator identifier uniquely marks an operator, for example the operator's job number. For a monitoring device installed at a fixed position, the position information may be determined from the device's installation position and may include a position identifier, which may be specific position coordinates; each monitoring device may correspond to a unique position identifier. For a movable monitoring device, the device may obtain its current position when the person image is shot and use it as the position information corresponding to that person image. When the monitoring device shoots a person image, it can also record the shooting time. The image data obtained by the server may therefore include the person image captured by the monitoring device together with the position information and shooting time of the capture. Accordingly, after the target face feature is determined, the server can obtain the corresponding operator identifier, extract the target face image corresponding to that identifier, and obtain from the image data the position information and shooting time corresponding to the target face image.
In one embodiment, the position information corresponding to the target face image may be obtained in a variety of ways. For example, positioning beacons may be arranged in advance at each entrance and exit and throughout the nuclear power plant; when an operator's handheld positioning tag approaches a beacon, the terminal corresponding to the operator can transmit the position information to the server over a 4G network.
Step 210, obtaining a regional map of the operation region corresponding to the operator identifier, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map.
And 212, generating a behavior track image corresponding to the operator identification according to the plurality of position points and the shooting time.
The server obtains the monitoring device identifier corresponding to the target face image and thereby acquires the regional map of the operation area corresponding to that device identifier. The regional map may be two-dimensional or three-dimensional. The server matches the position information corresponding to the target face image with the corresponding map positions in the regional map and marks the matched positions, thereby marking a plurality of position points of the target face image within the operation area. The server determines the behavior time of each position point from the shooting time corresponding to the target face image, and then connects the position points in order of their behavior times to obtain the behavior track image of the operator corresponding to the target face image.
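The track generation in steps 210 and 212 reduces to ordering the marked points chronologically and connecting consecutive points. A minimal sketch, with hypothetical function names and the map points represented as pixel tuples:

```python
from typing import List, Tuple

Point = Tuple[int, int]

def build_trajectory(points: List[Point], times: List[float]) -> List[Point]:
    """Order the marked map points by shooting time to form the behavior track."""
    return [p for _, p in sorted(zip(times, points))]

def trajectory_segments(points: List[Point],
                        times: List[float]) -> List[Tuple[Point, Point]]:
    """The polyline segments to draw on the regional map, in chronological order;
    rendering these segments yields the behavior track image."""
    ordered = build_trajectory(points, times)
    return list(zip(ordered, ordered[1:]))
```

A real implementation would also route segments around walls and restricted zones using the area information in the map, which this sketch omits.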
In this embodiment, the server obtains image data of a plurality of operation areas during nuclear power operation and extracts face images from the image data. The image data can be collected without the monitored personnel being aware of it, avoiding any disturbance to nuclear power operators. The server calls the recognition model and performs feature extraction on the face image through the recognition model to obtain face feature information, then compares it with preset face features to determine the corresponding target face feature, so that the identity of a nuclear power operator appearing in an operation area can be identified quickly. The target face image corresponding to the target face feature, together with its position information and shooting time, is extracted from the image data. The regional map of the operation area corresponding to the target face image is then acquired and marked according to the position information to obtain a plurality of position points of the target face image in the regional map, and the behavior track image of the corresponding operator is generated according to the plurality of position points and the shooting times. Position points can thus be marked automatically in the corresponding regional map according to the position information and shooting time of the target face image, realizing automatic recognition of image data and generation of the corresponding behavior track image, and effectively improving the monitoring efficiency of nuclear power operators.
In one embodiment, the step of acquiring image data of a plurality of operation areas includes: acquiring video data of the plurality of operation areas, the video data comprising multiple frames of original images; detecting person images in the original images; and extracting the image data that includes person images from the video data.
The server acquires video data of corresponding operation areas acquired by the plurality of monitoring devices. The video data includes a plurality of frames of original images, a frame being a minimum visual unit in the video data, and each frame in the video data may correspond to an image. In one embodiment, the server may further obtain, according to a preset frequency, video data acquired by the monitoring device in a corresponding time period. For example, the server may obtain video data every 5 hours, and the duration of the obtained video data is 5 hours.
The server parses the video data collected by each monitoring device to obtain the multiple frames of original images it contains, and then detects the original images. Specifically, the server may process the video data from each monitoring device one by one, or invoke multiple threads to process the video data from several monitoring devices in parallel. The server traverses the original images in the video data and detects whether each contains a person image. When an original image contains a person image, the server extracts that original image from the video data and obtains the corresponding position information and shooting time, thereby obtaining multiple frames of image data that include person images.
In this embodiment, monitoring devices are deployed in the operation areas in advance, so person images can be collected without affecting the nuclear power operators in the operation areas; the operators do not perceive the image collection and monitoring process, which improves user experience. Meanwhile, the server extracts the image data including person images from video data the monitoring devices already collect, so nuclear power operators are monitored on the basis of the existing surveillance without additional data collection cost, effectively reducing the cost of monitoring nuclear power operators.
In one embodiment, after the position information corresponding to the target face image is extracted from the image data, the extracted position information may be compared with a preset position range to determine whether the operator is within that range. The preset position range may be the position range corresponding to a dangerous area; for example, the dangerous area may be a high-radiation-dose area or a hazard source area, which nuclear power operators are not allowed to approach. When the extracted position information falls within the preset position range, alarm information is generated according to the prompt content corresponding to that range. For example, when the preset position range is a hazard source area, the alarm information may be "please keep away from the hazard source". The alarm information may take multiple forms, for example at least one of a text prompt and a voice prompt. The server can send the alarm information to an alarm device, which displays or plays the prompt to warn nuclear power operators away from the dangerous area.
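A minimal sketch of this range check follows, assuming the preset position range is an axis-aligned rectangle and that prompt contents are keyed by range. All names, coordinates, and the rectangle representation are illustrative assumptions, not details from the disclosure.

```python
def in_preset_range(position, range_rect):
    """Return True when (x, y) lies inside the axis-aligned preset range (x1, y1, x2, y2)."""
    (x, y), (x1, y1, x2, y2) = position, range_rect
    return x1 <= x <= x2 and y1 <= y <= y2

def build_alarm(position, ranges):
    """ranges maps a rectangle to its prompt content; return the first matching prompt, or None."""
    for rect, prompt in ranges.items():
        if in_preset_range(position, rect):
            return prompt
    return None

ranges = {(10, 10, 20, 20): "please keep away from the hazard source"}
print(build_alarm((15, 12), ranges))  # inside the hazard rectangle, so the prompt is returned
```

Real plant geometry would need polygonal or 3D zones, but the comparison logic stays the same.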
In one embodiment, as shown in fig. 3, the step of generating the behavior trace image corresponding to the operator identifier according to the plurality of position points and the shooting time includes:
Step 302, determining the behavior time of each position point according to the shooting time.
Step 304, connecting the plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identifier.
The shooting time refers to the time at which the monitoring device collected the target face image; the server can treat the shooting time corresponding to a target face image as the time at which the corresponding operator appeared at the position point of that image. The area information in the area map may include the placement areas of nuclear power equipment and the walkable areas of the operation area, a walkable area being an area in which operators can move.
The server determines the behavior time corresponding to each position point according to the shooting time corresponding to the target face image, and determines the chronological order of the behavior times. The server then connects the position points in that order, determining the behavior direction according to the walkable areas in the area map, and generates the behavior track image corresponding to the operator identifier. The behavior track route obtained by this connection lies within the walkable areas.
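The ordering-and-connection step above essentially sorts the position points by behavior time before drawing the polyline. The sketch below shows only that core (ignoring the walkable-area constraint, which would require the area map's geometry); the function name and the integer timestamps are illustrative.

```python
def build_track(points):
    """points: list of (behavior_time, (x, y)) pairs; return the polyline vertices sorted by time."""
    return [p for _, p in sorted(points, key=lambda tp: tp[0])]

pts = [(3, (5, 5)), (1, (0, 0)), (2, (2, 1))]
print(build_track(pts))  # [(0, 0), (2, 1), (5, 5)]
```

Enforcing the walkable-area constraint would replace each straight segment with a path routed through the walkable region, e.g. via a grid or navigation-mesh search.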
In this embodiment, the server determines the behavior time of each position point from the shooting time, connects the position points according to the behavior times and the area information in the area map, and generates the behavior track image corresponding to the operator identifier. Because the monitoring devices record the shooting time and position information of each image during collection, the corresponding position points can be quickly matched in the area map and the behavior time of each point determined. Connecting the position points in shooting-time order, subject to the area information of the area map, ensures that the generated behavior track route lies within walkable areas, improving the accuracy of the behavior track image.
In one embodiment, marking the area map according to the position information to obtain a plurality of position points of the target face image in the area map includes: converting the position information to obtain converted position points; and determining the corresponding mark points in the area map according to the converted position points, and marking those mark points in the area map.
The position information corresponding to the target face image that the server acquires is the actual position of the corresponding operator, so it needs to be converted into map position points in the coordinate system of the area map. The corresponding mark points are then matched in the area map according to the converted map position points and marked, yielding the plurality of position points of the target face image in the area map.
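Assuming the conversion is a simple linear calibration between actual coordinates and map pixels (the disclosure does not specify the transform), it can be sketched as below; the origin and scale values are hypothetical calibration constants.

```python
def world_to_map(position, origin, scale):
    """Convert an actual (x, y) position to map pixel coordinates via an assumed linear mapping."""
    x, y = position
    ox, oy = origin
    # shift by the map origin, then scale metres to map pixels
    return (round((x - ox) * scale), round((y - oy) * scale))

# map origin at world (100, 200), 0.5 map pixels per metre (assumed calibration)
print(world_to_map((110, 220), (100, 200), 0.5))  # (5, 10)
```

A camera-based system would typically compose this with a per-camera homography from image coordinates to world coordinates before applying the map calibration.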
In this embodiment, converting the position information makes it possible to match the corresponding position points in the area map for marking, so position points can be marked quickly and accurately.
In one embodiment, the method further includes: when a plurality of operator identifiers exist, determining the position point marking mode corresponding to each operator identifier; and marking the area map according to the position information of the target face image corresponding to each operator identifier and the marking mode corresponding to that identifier, to obtain, for each operator identifier, a plurality of position points of the corresponding target face image in the area map.
When multiple frames of face images exist in the image data of any one operation area, the server may obtain target face features for the face feature information of each frame, so the server obtains a plurality of operator identifiers, which means it must generate behavior tracks for multiple operator identifiers on the same area map. The server acquires the position point marking mode corresponding to each operator identifier. For example, a marking mode may mark an operator's position points with marks of a preset color, or with a graphic of a preset shape. Different operators may use different position point marking modes.
The server matches the corresponding map positions in the area map according to the position information of the target face image corresponding to each operator identifier and the marking mode for that identifier, and marks the matched map positions, thereby marking the plurality of position points of each operator identifier's target face images in the operation area. The server can further connect the behavior tracks of different operators with lines of different colors: the line color of a single operator's track stays the same throughout, while different operators' tracks use different colors.
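One way to realize distinct marking modes is to cycle through a fixed palette of colors and shapes, assigning one style per operator identifier. The style list and function name below are purely illustrative; the disclosure only requires that different operators be visually distinguishable.

```python
import itertools

# illustrative palette; a real system could use any distinguishable styles
STYLES = itertools.cycle([("red", "circle"), ("blue", "square"), ("green", "triangle")])

def assign_marking_modes(operator_ids):
    """Give each operator identifier its own (color, shape) marking mode."""
    return {op: style for op, style in zip(operator_ids, STYLES)}

modes = assign_marking_modes(["op-001", "op-002"])
print(modes["op-001"])  # ('red', 'circle')
```

With more operators than palette entries, `itertools.cycle` reuses styles; a production system might instead hash the identifier to a color to keep assignments stable across sessions.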
In this embodiment, when there are a plurality of operators, the behavior tracks of different operators can be marked in the same area map by adopting different position point marking modes, so that the behavior tracks of different operators can be accurately distinguished.
In one embodiment, after generating the behavior track image corresponding to the operator identifier, the method further includes: acquiring a behavior track image acquisition request, the request carrying an area identifier and a time parameter of a target operation area; extracting, according to the area identifier and the time parameter, the behavior track image corresponding to the behavior track image acquisition request; and determining the target operator identifier according to the extracted behavior track image.
The server can acquire the image data collected by the monitoring devices during nuclear power operation and generate the behavior track images of the corresponding operators in real time. Since a behavior track image is generated from the image data of an operation area, there is an association between an operator's behavior track image and the area identifier. The server stores the generated behavior track images so that managers can later review the operators' behavior tracks.
After receiving a behavior track image acquisition request sent by a terminal, the server parses the request to obtain the area identifier and time parameter of the target operation area. For example, the target operation area may be an operation area where an abnormal condition has occurred; to quickly determine the personnel responsible, the operators who appeared in the target operation area and their behavior tracks need to be checked. The time parameter may be the time point at which the abnormal condition occurred, such as 14:00 on August 18, 2020, or the time period over which it occurred, such as 14:00 to 16:00 on August 18, 2020.
The server queries the stored behavior track images according to the area identifier and the time parameter. When a stored behavior track image overlaps the time parameter, the server extracts it, displays the portion of the behavior track route that falls within the time parameter as a solid line, and hides the portion that does not. The server can then determine the target operator identifier from the extracted behavior track image.
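The query step above can be sketched as an interval-overlap filter over stored track records. The record fields and the integer timestamps below are hypothetical; a real store would use datetimes and a database index on area and time.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True when two closed time intervals intersect."""
    return a_start <= b_end and b_start <= a_end

def query_tracks(stored, area_id, t_start, t_end):
    """stored: list of dicts with 'area', 'start', 'end'; return tracks overlapping the query window."""
    return [t for t in stored
            if t["area"] == area_id and overlaps(t["start"], t["end"], t_start, t_end)]

stored = [{"area": "A", "start": 13, "end": 15},
          {"area": "A", "start": 17, "end": 18}]
print(len(query_tracks(stored, "A", 14, 16)))  # 1
```

Displaying only the overlapping portion of a matched track would further clip each track's points to the query window, as the text describes with solid versus hidden line segments.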
In this embodiment, when a behavior track image acquisition request is received, the server can promptly extract the corresponding behavior track image, so that the personnel responsible for an abnormal condition can be quickly determined, improving the monitoring efficiency of nuclear power operators.
It should be understood that although the steps in the flowcharts of fig. 2 to 3 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 to 3 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a nuclear power operator monitoring and processing device, including: a communication module 402, an image extraction module 404, a feature extraction module 406, a feature comparison module 408, an information extraction module 410, a map marking module 412, and a trajectory generation module 414, wherein:
the communication module 402 is configured to acquire image data of a plurality of working areas during a nuclear power working process.
An image extraction module 404, configured to extract a face image from the image data.
And the feature extraction module 406 is configured to invoke an identification model, and perform feature extraction on the face image through the identification model to obtain face feature information.
The feature comparison module 408 is configured to compare the face feature information with a preset face feature, and determine a target face feature corresponding to the face feature information.
The information extraction module 410 is configured to obtain an operator identifier corresponding to the target face feature, and extract a target face image corresponding to the operator identifier, position information corresponding to the target face image, and shooting time from the image data.
And the map marking module 412 is configured to obtain an area map of the operation area corresponding to the operator identifier, and mark the area map according to the position information to obtain a plurality of position points of the target face image in the area map.
And the track generation module 414 is configured to generate a behavior track image corresponding to the operator identifier according to the plurality of position points and the shooting time.
In one embodiment, the trajectory generating module 414 is further configured to determine behavior time of each location point according to the shooting time; and connecting the plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identification.
In one embodiment, the map marking module 412 is further configured to convert the position information to obtain converted position points; determine the corresponding mark points in the area map according to the converted position points; and mark those mark points in the area map to obtain a plurality of position points of the target face image in the area map.
In one embodiment, the above apparatus further comprises:
the marking mode determining module is used for determining a position point marking mode corresponding to each operator identification when a plurality of operator identifications exist;
the map marking module 412 is further configured to mark on the area map according to the position information of the target face image corresponding to each operator identifier and a position point marking manner corresponding to the operator identifier, so as to obtain a plurality of position points of the target face image corresponding to each operator identifier in the area map.
In one embodiment, the above apparatus further comprises:
the request acquisition module is used for acquiring a behavior trace image acquisition request, and the behavior trace image acquisition request carries an area identifier and a time parameter of a target operation area;
the track image acquisition module is used for extracting a behavior track image corresponding to the behavior track image acquisition request according to the area identifier and the time parameter;
and the personnel determining module is used for determining the identification of the target operator according to the extracted behavior track image.
For specific limitations of the nuclear power operator monitoring and processing device, reference may be made to the limitations of the nuclear power operator monitoring and processing method above, which are not repeated here. Each module in the nuclear power operator monitoring and processing device can be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing image data and behavior track images. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a nuclear power operator monitoring and processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the various embodiments described above when the processor executes the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the above-described method embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A nuclear power operator monitoring and processing method is characterized by comprising the following steps:
in the nuclear power operation process, acquiring image data of a plurality of operation areas, and extracting face images from the image data;
calling an identification model, and extracting the features of the face image through the identification model to obtain face feature information;
comparing the face feature information with preset face features, and determining target face features corresponding to the face feature information;
acquiring an operator identification corresponding to the target face feature, and extracting a target face image corresponding to the operator identification, position information corresponding to the target face image and shooting time from the image data;
acquiring a regional map of an operation region corresponding to the operator identification, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map;
and generating a behavior track image corresponding to the operator identification according to the plurality of position points and the shooting time.
2. The method according to claim 1, wherein the generating of the behavior trace image corresponding to the operator identifier according to the plurality of position points and the shooting time comprises:
determining the behavior time of each position point according to the shooting time;
and connecting a plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identification.
3. The method according to claim 1, wherein the marking on the area map according to the position information, and obtaining a plurality of position points of the target face image in the area map comprises:
converting the position information to obtain a converted position point;
and determining corresponding mark points in the regional map according to the converted position points, and marking the mark points in the regional map to obtain a plurality of position points of the target face image in the regional map.
4. The method of claim 1, further comprising:
when a plurality of operator identifications exist, determining a position point marking mode corresponding to each operator identification;
and marking on the regional map according to the position information of the target face image corresponding to each operator identification and the position point marking mode corresponding to the operator identification to obtain a plurality of position points of the target face image corresponding to each operator identification in the regional map.
5. The method of claim 1, wherein after the generating the behavior trace image corresponding to the operator identification, the method further comprises:
acquiring a behavior trace image acquisition request, wherein the behavior trace image acquisition request carries an area identifier and a time parameter of a target operation area;
extracting a behavior track image corresponding to the behavior track image acquisition request according to the area identifier and the time parameter;
and determining the identification of the target operator according to the extracted behavior track image.
6. A nuclear power operating personnel monitoring and processing device is characterized by comprising:
the communication module is used for acquiring image data of a plurality of operation areas in the nuclear power operation process;
the image extraction module is used for extracting a face image from the image data;
the feature extraction module is used for calling an identification model and extracting features of the face image through the identification model to obtain face feature information;
the characteristic comparison module is used for comparing the face characteristic information with preset face characteristics and determining target face characteristics corresponding to the face characteristic information;
the information extraction module is used for acquiring an operator identification corresponding to the target face feature, and extracting a target face image corresponding to the operator identification, position information corresponding to the target face image and shooting time from the image data;
the map marking module is used for acquiring a regional map of an operation region corresponding to the operator identification, and marking the regional map according to the position information to obtain a plurality of position points of the target face image in the regional map;
and the track generation module is used for generating a behavior track image corresponding to the operator identification according to the plurality of position points and the shooting time.
7. The device of claim 6, wherein the trajectory generating module is further configured to determine a behavior time of each location point according to the shooting time; and connecting a plurality of position points according to the behavior time and the area information in the area map to generate a behavior track image corresponding to the operator identification.
8. The apparatus of claim 6, wherein the map marking module is further configured to convert the location information to obtain a converted location point; and determining corresponding mark points in the regional map according to the converted position points, and marking the mark points in the regional map to obtain a plurality of position points of the target face image in the regional map.
9. A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202011148301.0A 2020-10-23 2020-10-23 Nuclear power operator monitoring processing method and device and computer equipment Pending CN112364722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011148301.0A CN112364722A (en) 2020-10-23 2020-10-23 Nuclear power operator monitoring processing method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011148301.0A CN112364722A (en) 2020-10-23 2020-10-23 Nuclear power operator monitoring processing method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN112364722A true CN112364722A (en) 2021-02-12

Family

ID=74511929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011148301.0A Pending CN112364722A (en) 2020-10-23 2020-10-23 Nuclear power operator monitoring processing method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN112364722A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449828A (en) * 2021-06-07 2021-09-28 江苏恒创软件有限公司 Personnel site safety positioning management system and management method thereof
CN113536932A (en) * 2021-06-16 2021-10-22 中科曙光国际信息产业有限公司 Crowd gathering prediction method and device, computer equipment and storage medium
CN113609643A (en) * 2021-07-05 2021-11-05 中广核工程有限公司 Time acquisition method and system for nuclear power station digital operation basic action unit
CN113747115A (en) * 2021-06-25 2021-12-03 深圳市威尔电器有限公司 Method, system, device and storage medium for monitoring video of eye-to-eye network
CN114373241A (en) * 2022-02-22 2022-04-19 上海上实龙创智能科技股份有限公司 Point location card punching system and method
CN115330877A (en) * 2022-10-13 2022-11-11 常州铭赛机器人科技股份有限公司 Mutual copying method for operation programs of same machine
CN115633248A (en) * 2022-12-22 2023-01-20 浙江宇视科技有限公司 Multi-scene cooperative detection method and system
CN115861364A (en) * 2022-09-07 2023-03-28 国网江苏省电力有限公司太仓市供电分公司 AI identification-based field personnel management and control method and system

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012010121A (en) * 2010-06-25 2012-01-12 Tokyo Metropolitan Univ Behavior monitoring system, behavior monitoring program and behavior monitoring method
CN105913037A (en) * 2016-04-26 2016-08-31 广东技术师范学院 Face identification and radio frequency identification based monitoring and tracking system
CN107437075A (en) * 2017-07-29 2017-12-05 安徽博威康信息技术有限公司 A kind of risk alarm system based on daily behavior track
CN107452027A (en) * 2017-07-29 2017-12-08 安徽博威康信息技术有限公司 A kind of target person method for security protection based on multi-cam monitoring
EP3312785A1 (en) * 2016-10-21 2018-04-25 Karam Osama Karam Imseeh A tagless baggage tracking system and method
CN109145789A (en) * 2018-08-09 2019-01-04 炜呈智能电力科技(杭州)有限公司 Power supply system safety work support method and system
CN109214276A (en) * 2018-07-23 2019-01-15 武汉虹信技术服务有限责任公司 A kind of system and method for the target person track following based on face recognition technology
CN109325456A (en) * 2018-09-29 2019-02-12 佳都新太科技股份有限公司 Target identification method, device, target identification equipment and storage medium
CN109858358A (en) * 2018-12-28 2019-06-07 深圳供电局有限公司 Method and system for tracking person track between buildings and computer readable storage medium
CN110267011A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
WO2020011069A1 (en) * 2018-07-11 2020-01-16 中国银联股份有限公司 Feature processing method and device for motion trajectory, and computer storage medium
CN110728252A (en) * 2019-10-22 2020-01-24 山西省信息产业技术研究院有限公司 Face detection method applied to regional personnel motion trail monitoring
CN110781750A (en) * 2019-09-25 2020-02-11 万翼科技有限公司 Dangerous person monitoring processing method and device, computer equipment and storage medium
WO2020103293A1 (en) * 2018-11-22 2020-05-28 深圳云天励飞技术有限公司 Method, device, and electronic device for presenting individual search information
CN111400550A (en) * 2019-12-30 2020-07-10 深圳市商汤科技有限公司 Target motion trajectory construction method and device and computer storage medium
CN111639509A (en) * 2019-03-01 2020-09-08 芜湖潜思智能科技有限公司 Action track recording and inquiring method based on face recognition


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449828A (en) * 2021-06-07 2021-09-28 江苏恒创软件有限公司 Personnel site safety positioning management system and management method thereof
CN113449828B (en) * 2021-06-07 2023-06-09 江苏恒创软件有限公司 Personnel site safety positioning management system and management method thereof
CN113536932A (en) * 2021-06-16 2021-10-22 中科曙光国际信息产业有限公司 Crowd gathering prediction method and device, computer equipment and storage medium
CN113747115A (en) * 2021-06-25 2021-12-03 深圳市威尔电器有限公司 Method, system, device and storage medium for monitoring video of eye-to-eye network
CN113609643A (en) * 2021-07-05 2021-11-05 中广核工程有限公司 Time acquisition method and system for nuclear power station digital operation basic action unit
CN113609643B (en) * 2021-07-05 2024-05-10 中广核工程有限公司 Time acquisition method and system for digital operation basic action unit of nuclear power station
CN114373241A (en) * 2022-02-22 2022-04-19 上海上实龙创智能科技股份有限公司 Point location card punching system and method
CN115861364A (en) * 2022-09-07 2023-03-28 国网江苏省电力有限公司太仓市供电分公司 AI identification-based field personnel management and control method and system
CN115861364B (en) * 2022-09-07 2023-11-07 国网江苏省电力有限公司太仓市供电分公司 Site personnel management and control method and system based on AI identification
CN115330877A (en) * 2022-10-13 2022-11-11 常州铭赛机器人科技股份有限公司 Mutual copying method for operation programs of same machine
CN115633248A (en) * 2022-12-22 2023-01-20 浙江宇视科技有限公司 Multi-scene cooperative detection method and system

Similar Documents

Publication Publication Date Title
CN112364722A (en) Nuclear power operator monitoring processing method and device and computer equipment
CN112272288B (en) Nuclear power operation monitoring method and device, computer equipment and storage medium
CN111191567B (en) Identity data processing method, device, computer equipment and storage medium
CN112396658B (en) Indoor personnel positioning method and system based on video
CN111222423B (en) Target identification method and device based on operation area and computer equipment
CN110781750A (en) Dangerous person monitoring processing method and device, computer equipment and storage medium
CN111199200A (en) Wearing detection method and device based on electric protection equipment and computer equipment
CN112364715A (en) Nuclear power operation abnormity monitoring method and device, computer equipment and storage medium
CN112257610B (en) Nuclear power equipment monitoring method and device, computer equipment and storage medium
CN110598548A (en) Standing person identification method and device, computer equipment and storage medium
CN112489280A (en) Transformer substation personal safety monitoring method, system, terminal and storage medium
CN113807240A (en) Intelligent transformer substation personnel dressing monitoring method based on uncooperative face recognition
CN113837030B (en) Personnel intelligent management and control method and system for epidemic situation prevention and control and computer equipment
CN111191581A (en) Safety helmet detection method and device based on electric power construction and computer equipment
CN110991979B (en) Attendance management method, device and equipment for intelligent building site and computer storage medium
CN110619689A (en) Automatic sign-in and card-punching method for smart building, computer equipment and storage medium
CN112634561A (en) Safety alarm method and system based on image recognition
CN112595730A (en) Cable breakage identification method and device and computer equipment
CN112818758A (en) Monitoring method, system and device for electric power construction site and storage medium
CN111178212A (en) Image recognition method and device, computer equipment and storage medium
CN113743725B (en) Ascending qualification detection method and system for ascending operators based on edge calculation
CN113485277A (en) Intelligent power plant video identification monitoring management system and method
CN112348462A (en) Process processing method, apparatus, computer device, and storage medium
CN110751125A (en) Wearing detection method and device
CN110807401A (en) User identity identification and multi-user card punching method, device, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination