CN116959028A - Method for supervising safety of high-altitude operation, inspection equipment and computing equipment - Google Patents


Info

Publication number
CN116959028A
Authority
CN
China
Prior art keywords
area
personnel
image
safety
personnel area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310897247.7A
Other languages
Chinese (zh)
Other versions
CN116959028B (en
Inventor
赵景程
熊超
蔡权雄
牛昕宇
Current Assignee
Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Original Assignee
Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd filed Critical Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Priority to CN202310897247.7A priority Critical patent/CN116959028B/en
Publication of CN116959028A publication Critical patent/CN116959028A/en
Application granted granted Critical
Publication of CN116959028B publication Critical patent/CN116959028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources


Abstract

The invention provides a method for supervising aerial work safety, an inspection device, and a computing device. The method comprises the following steps: acquiring a first image from a first camera and a second image from a second camera; identifying a first personnel area in the first image and a second personnel area in the second image; detecting a first safety device in the first personnel area; detecting a second safety device in the first personnel area and the second personnel area; establishing a matching relationship between the first personnel area and the second personnel area; and determining whether to generate a safety pre-warning according to the matching relationship between the first personnel area and the second personnel area, the detection result of the first safety device, and the detection result of the second safety device. This technical scheme can effectively avoid safety problems that may arise during aerial work.

Description

Method for supervising safety of high-altitude operation, inspection equipment and computing equipment
Technical Field
The invention relates to work safety supervision technology, and in particular to a method for supervising aerial work safety, an inspection device, and a computing device.
Background
Safety supervision of aerial work is vital to the personal safety of construction workers. Identifying unprotected and unmonitored aerial work is an important supervisory measure that can effectively avoid safety risks that may arise during aerial work. However, current aerial work supervision methods suffer from either high cost or low accuracy.
Therefore, a method for supervising aerial work safety is needed that solves the problems of high cost or low accuracy.
Disclosure of Invention
The invention aims to provide a method for supervising aerial work safety, an inspection device, and a computing device, which can identify and raise alarms for unprotected and unmonitored aerial work.
According to an aspect of the application, there is provided a method of supervising aerial work safety, the method comprising:
acquiring a first image from a first camera and a second image from a second camera, wherein the first image contains an aerial work area and the second image contains a ground monitoring area below the aerial work area;
identifying a first person region in the first image and a second person region in the second image;
detecting a first safety device in the first personnel area;
detecting a second safety device in the first personnel area and the second personnel area;
establishing a matching relationship between the first personnel area and the second personnel area;
and determining whether to generate a safety pre-warning according to the matching relationship between the first personnel area and the second personnel area, the detection result of the first safety device, and the detection result of the second safety device.
According to some embodiments, detecting a first safety device in the first personnel area comprises:
detecting human body keypoints in the first personnel area;
determining a human upper-body region based on the keypoint detection result of the first personnel area;
and identifying the upper-body region with a first classification model to judge whether the first safety device is present in the first personnel area.
According to some embodiments, detecting a second safety device in the first personnel area and the second personnel area comprises:
detecting human body keypoints in the second personnel area;
determining human head regions based on the keypoint detection results of the first personnel area and the second personnel area, together with the first personnel area and the second personnel area themselves;
classifying the quality of the human head regions;
and identifying the head regions with a second classification model to judge whether the second safety device is present in each of the first personnel area and the second personnel area.
According to some embodiments, establishing a matching relationship between the first personnel area and the second personnel area comprises:
calculating the central point pixel coordinates of the first personnel area and the second personnel area;
calculating, for each first personnel area, the center point pixel distances to all second personnel areas;
and if the center point pixel distance is smaller than a preset threshold value, determining that a matching relationship exists between the first personnel area and the corresponding second personnel area.
According to some embodiments, determining whether to generate a safety pre-warning according to the matching relationship between the first personnel area and the second personnel area, the detection result of the first safety device, and the detection result of the second safety device comprises:
generating a safety pre-warning if the first safety device is not detected in the first personnel area; or
generating a safety pre-warning if no matching relationship exists between the first personnel area and the second personnel area; or
generating a safety pre-warning if the second safety device is not detected in any of the second personnel areas matched with the first personnel area.
According to some embodiments, the first camera and the second camera have fixed focal lengths and coplanar shooting directions;
the first camera's elevation angle and the second camera's depression angle relative to the horizontal plane are equal, and neither is less than 75 degrees.
According to another aspect of the present invention, there is provided an inspection device for supervising aerial work safety, comprising:
a mobile device;
a camera pole mounted on the mobile device;
a first camera and a second camera mounted on the camera pole, the first camera arranged to capture a first image of the aerial work area, and the second camera arranged to capture a second image of the ground monitoring area below the aerial work area;
communication means for transmitting the first image and the second image to a back-end computing device.
According to some embodiments, the first camera and the second camera have fixed focal lengths and coplanar shooting directions; the first camera's elevation angle and the second camera's depression angle relative to the horizontal plane are equal, and neither is less than 75 degrees.
According to some embodiments, the first camera is arranged less than 1 meter from the ground, and the second camera more than 2 meters from the ground.
According to another aspect of the present invention, there is provided a computing device comprising:
a processor; and
a memory storing a computer program which, when executed by the processor, causes the processor to perform any of the methods described above.
According to another aspect of the invention there is provided a non-transitory computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to perform the method of any of the above.
According to embodiments of the invention, whether to generate a safety pre-warning is determined from the matching relationship between the first personnel area and the second personnel area together with the detection results of the first and second safety devices, so that safety problems that may arise during aerial work can be effectively avoided. The scheme can accurately identify the protection and monitoring status of aerial work at relatively low cost and effectively eliminate potential safety hazards to construction workers.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments will be briefly described below.
FIG. 1 illustrates a flow chart of a method of supervising aerial work safety according to an embodiment of the present invention.
Fig. 2 shows a flow chart of a method of establishing a matching relationship between the first person area and the second person area according to an embodiment of the invention.
Fig. 3 shows a flow chart of a method of identifying a first security device in accordance with an embodiment of the invention.
Fig. 4 shows a flow chart of a method of identifying a second security device according to an embodiment of the invention.
Fig. 5 shows a schematic diagram of an inspection device according to an embodiment of the invention.
Fig. 6 shows a schematic diagram of a scheme for monitoring unprotected and unmonitored aerial work according to an embodiment of the invention.
FIG. 7 illustrates a block diagram of a computing device according to an embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any one of the associated listed items and all combinations of one or more.
The user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data, and displayed data) involved in the present invention are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for users to choose to authorize or refuse.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the invention and therefore should not be taken to limit the scope of the invention.
In current aerial work environments, identifying unprotected and unmonitored aerial work is an important supervisory measure that can effectively avoid safety problems that may arise during aerial work. Accurately identifying the protection and monitoring status of aerial work can effectively eliminate potential safety hazards to construction workers.
The invention therefore provides a dual-camera method for identifying unprotected and unmonitored aerial work, which uses two cameras to capture and match images of the aerial work area and the ground monitoring area, and identifies unprotected and unmonitored aerial work through human region detection, keypoint detection, and head classification. In this scheme the cameras are inexpensive, and only two cameras are needed to cover the entire aerial and ground areas, which provides important technical support for aerial work monitoring and effectively avoids safety risks during aerial work.
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 illustrates a flow chart of a method of supervising aerial work safety according to an embodiment of the present invention.
According to an exemplary embodiment, the technical scheme employs two cameras, a first camera and a second camera, to capture on-site images. The first camera and the second camera may have fixed focal lengths and coplanar shooting directions, and may form equal elevation and depression angles with the horizontal plane, each not less than 75 degrees. The images captured by the two cameras may be transmitted over a Wi-Fi, 4G, or 5G network to a back-end server for processing, or processed on site.
According to some embodiments, for each of the two cameras, Zhang Zhengyou's camera calibration method may be used to obtain an intrinsic matrix and distortion coefficients, which are stored in respective configuration files.
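As an illustrative sketch of this step (the JSON file name and schema are assumptions, not specified by the patent), the calibration results for one camera might be saved to and reloaded from a configuration file like this:

```python
import json

def save_calibration(path, intrinsic_matrix, distortion_coeffs):
    # Persist the 3x3 intrinsic matrix and the distortion coefficients
    # (k1, k2, p1, p2, k3) produced by Zhang's calibration method.
    with open(path, "w") as f:
        json.dump({"intrinsic_matrix": intrinsic_matrix,
                   "distortion_coeffs": distortion_coeffs}, f)

def load_calibration(path):
    with open(path) as f:
        cfg = json.load(f)
    return cfg["intrinsic_matrix"], cfg["distortion_coeffs"]

# Example values for one camera (fx = fy = 1200, principal point at 960, 540)
K = [[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]]
dist = [-0.12, 0.03, 0.0, 0.0, 0.0]
save_calibration("camera1.json", K, dist)
K2, dist2 = load_calibration("camera1.json")
```

In practice the calibration itself would be run once per camera, for example with a library routine such as OpenCV's calibrateCamera, and the stored values reused at inference time.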
Referring to fig. 1, at S101, a first image from the first camera and a second image from the second camera are acquired.
The first image contains the aerial work area, i.e., the area where the aerial workers are located. The second image contains the ground monitoring area below the aerial work area, i.e., a certain range below each aerial worker in which ground monitors are required to watch over the aerial workers.
According to some embodiments, the back-end server may receive the first image from the first camera and the second image from the second camera over a Wi-Fi, 4G, or 5G network.
According to some embodiments, the intrinsic matrix and distortion coefficients of the first camera and the second camera may be read from the respective configuration files, and distortion correction may then be applied to the acquired first and second images.
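To illustrate what distortion correction undoes, here is a minimal sketch of the radial part of the Brown-Conrady lens model on normalized image coordinates (a simplification for illustration; a real pipeline would invert this mapping per pixel with a library routine such as OpenCV's undistort):

```python
def distort_normalized(x, y, k1, k2):
    """Apply radial distortion (coefficients k1, k2) to normalized image
    coordinates; distortion correction numerically inverts this mapping."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Barrel distortion (k1 < 0) pulls points toward the image center
xd, yd = distort_normalized(0.3, -0.2, -0.1, 0.0)
```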
At S103, a first person region in the first image and a second person region in the second image are identified.
According to some embodiments, detection of the first and second personnel areas may be performed using a pre-trained YOLOv7 neural network model. YOLO is an object detection algorithm that addresses many real-life computer vision problems; it has been used for traffic signal detection, exam proctoring, sports scenes, and various industrial automation tools. It will be readily appreciated that those skilled in the art may employ other methods of detecting the first and second personnel areas, such as RCNN, Fast RCNN, or Mask RCNN models.
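A sketch of extracting personnel areas from detector output follows; the (class_name, confidence, box) tuple format is a hypothetical interface for illustration, not the actual YOLOv7 API:

```python
def extract_person_regions(detections, conf_threshold=0.5):
    """Keep only sufficiently confident 'person' detections.
    Each detection is (class_name, confidence, (x, y, w, h))."""
    return [box for cls, conf, box in detections
            if cls == "person" and conf >= conf_threshold]

detections = [
    ("person", 0.92, (100, 200, 60, 160)),
    ("person", 0.31, (400, 180, 55, 150)),  # below threshold, discarded
    ("helmet", 0.88, (110, 190, 30, 30)),   # not a person region
]
regions = extract_person_regions(detections)
```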
At S105, a first security device of the first personnel area is detected.
According to some embodiments, the first safety device comprises a safety belt and a high-altitude safety rope, which protect the personal safety of aerial workers and reduce the risk of falls from height.
According to some embodiments, human body keypoint detection may be performed on the first personnel area, as described in detail below with reference to the drawings. A human upper-body region is then determined based on the keypoint detection result of the first personnel area, and the upper-body region is identified with a first classification model to judge whether the first safety device is present in the first personnel area.
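The upper-body step can be sketched as follows, assuming COCO-style keypoint names; the padding factor is an illustrative choice, not specified by the patent:

```python
def upper_body_box(keypoints, pad=0.15):
    """Bound the torso using shoulder and hip keypoints (COCO naming),
    expanded by a small margin; the resulting box is cropped and passed
    to the safety-belt classifier."""
    names = ("left_shoulder", "right_shoulder", "left_hip", "right_hip")
    xs = [keypoints[k][0] for k in names]
    ys = [keypoints[k][1] for k in names]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - pad * w, min(ys) - pad * h,
            w * (1 + 2 * pad), h * (1 + 2 * pad))

kps = {"left_shoulder": (110, 60), "right_shoulder": (150, 62),
       "left_hip": (115, 140), "right_hip": (148, 138)}
box = upper_body_box(kps, pad=0.0)
```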
At S107, a second security device of the first personnel area and the second personnel area is detected.
According to some embodiments, the second safety device comprises a safety helmet, which reduces the risk of a worker being struck by falling objects and also mitigates head injury in a fall.
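Because the helmet classifier operates on a head crop, the head region must first be estimated from keypoints. One illustrative way, assuming COCO-style ear keypoints (the scale factor is an assumption for the sketch):

```python
def head_box(keypoints, scale=2.0):
    """Estimate a square head crop centered between the ears; the side
    length is the ear distance times an illustrative scale factor."""
    (lx, ly), (rx, ry) = keypoints["left_ear"], keypoints["right_ear"]
    cx, cy = (lx + rx) / 2, (ly + ry) / 2
    side = scale * abs(rx - lx)
    return (cx - side / 2, cy - side / 2, side, side)

box = head_box({"left_ear": (120, 50), "right_ear": (140, 50)})
```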
According to some embodiments, human body keypoint detection may be performed on the second personnel area, as described in detail later with reference to the drawings. Human head regions are then determined based on the keypoint detection results of the first and second personnel areas. The head regions are quality-classified, then identified with a second classification model to judge whether the second safety device is present in each of the first personnel area and the second personnel area.

At S109, a matching relationship between the first personnel area and the second personnel area is established.
According to some embodiments, the center point pixel coordinates of the first and second personnel areas may be calculated, and the center point distances between each first personnel area and all second personnel areas then computed. If a center point distance is less than a predetermined threshold, it may be determined that a matching relationship exists between the first personnel area and the corresponding second personnel area.
Establishing a matching relationship between the first and second personnel areas provides one of the criteria for judging whether the aerial work is safe. For example, if no matching relationship exists between a first personnel area and any second personnel area, a safety pre-warning is generated. That is, if the ground monitoring area contains no monitor corresponding to a worker in the aerial work area, a safety pre-warning is needed.
At S111, whether to generate a safety pre-warning is determined according to the matching relationship between the first personnel area and the second personnel area, the detection result of the first safety device, and the detection result of the second safety device.
For example, if the first safety device is not detected in a first personnel area, a safety pre-warning is generated. When no first safety device is detected, the aerial worker can be considered to have taken no safety measures, and the pre-warning can be generated directly without further judgment.
For another example, if no matching relationship exists between the first personnel area and the second personnel area, that is, if no monitor corresponding to the aerial worker exists in the ground monitoring area, a safety pre-warning is generated.
For another example, if the second safety device is detected in none of the second personnel areas matched with a first personnel area, a safety pre-warning is generated. Otherwise, no alarm is generated.
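The three pre-warning rules above can be sketched as a single decision function per aerial worker (the data layout is an illustrative assumption):

```python
def needs_alarm(worker_has_belt, matched_monitor_ids, monitor_has_helmet):
    """Apply the three pre-warning rules for one aerial worker:
    1. no safety belt (first safety device) detected on the worker;
    2. no matched ground monitor at all;
    3. none of the matched monitors wears a helmet (second safety device)."""
    if not worker_has_belt:
        return True
    if not matched_monitor_ids:
        return True
    if not any(monitor_has_helmet[j] for j in matched_monitor_ids):
        return True
    return False

# Worker with a belt, matched to monitors 0 and 1, of whom monitor 1 wears
# a helmet: no alarm is needed.
ok = needs_alarm(True, [0, 1], {0: False, 1: True})
```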
According to the technical scheme of this embodiment, whether to generate a safety pre-warning is determined from the matching relationship between the first and second personnel areas together with the detection results of the first and second safety devices, so that safety problems that may arise during aerial work can be effectively avoided and recognition accuracy is ensured at relatively low cost.
Fig. 2 shows a flow chart of a method of establishing a matching relationship between the first person area and the second person area according to an embodiment of the invention.
Referring to fig. 2, at S201 the first camera acquires a first image and the second camera acquires a second image. Distortion correction is then performed on both images.
At S203, first personnel areas are acquired from the first image and second personnel areas from the second image. According to an example embodiment, assuming there are m people in the aerial work area, m first personnel areas are acquired in the first image. For example, a target detection algorithm (e.g., YOLOv7) may generate a number of candidate boxes in the first image, each candidate box representing an area that may contain a worker. The m first personnel areas correspond to m aerial workers.
According to some embodiments, the candidate boxes may also be screened. For example, overlapping candidate boxes may be removed with a non-maximum suppression algorithm, retaining the highest-scoring boxes. Features are then extracted from the retained candidate boxes, each box is classified with a classifier to determine whether it contains a worker, and the worker areas are thereby detected and localized.
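The non-maximum suppression screening can be sketched as follows; boxes are (x, y, w, h) tuples paired with a confidence score, and the IoU threshold is an illustrative choice:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(boxes_scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes that overlap it too much, and repeat on the rest."""
    remaining = sorted(boxes_scores, key=lambda bs: bs[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [bs for bs in remaining
                     if iou(best[0], bs[0]) < iou_threshold]
    return kept

# Two heavily overlapping boxes collapse to the higher-scoring one
kept = nms([((0, 0, 10, 10), 0.9), ((1, 1, 10, 10), 0.8),
            ((50, 50, 10, 10), 0.7)])
```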
Likewise, n second personnel areas may be acquired in the second image, where the n second personnel areas correspond to n ground monitors.
At S205, center point pixel coordinates of the first person region and the second person region are calculated.
After the personnel areas are acquired, the center point pixel coordinates of the first and second personnel areas are calculated in the image coordinate system. Because of the arrangement of the two cameras, the coordinate origins of the first and second images correspond, so the first image can be treated as projected onto the second image when calculating the center point pixel coordinates.
At S207, center point pixel distances between each first person region and all second person regions are calculated, respectively.
According to some embodiments, the distance between two personnel areas may be represented by the difference of the horizontal (length, or x-direction) pixel coordinates of the two areas' center points.
For example, let top denote the set of m first personnel areas:

top = {c_01, c_02, ..., c_0m},

where each c_0i (i = 1, ..., m) is expressed as [x_0i, y_0i, w_0i, h_0i]: x_0i and y_0i are the lower-left vertex coordinates of the first personnel area, and w_0i and h_0i are its lengths along the x-axis and y-axis. The center point pixel coordinates of c_0i are therefore (x_0i + w_0i/2, y_0i + h_0i/2).

Let bottom denote the set of n second personnel areas:

bottom = {c_11, c_12, ..., c_1n},

where each c_1j (j = 1, ..., n) is expressed as [x_1j, y_1j, w_1j, h_1j]: x_1j and y_1j are the lower-left vertex coordinates of the second personnel area, and w_1j and h_1j are its lengths along the x-axis and y-axis. The center point pixel coordinates of c_1j are (x_1j + w_1j/2, y_1j + h_1j/2).

The difference of the center point pixel coordinates of the first personnel area c_0i and the second personnel area c_1j in the horizontal direction is then

d_ij = |(x_0i + w_0i/2) - (x_1j + w_1j/2)|.
According to other embodiments, the difference in vertical (width or y) pixel coordinates of the center points of the two regions may also be used to characterize the distance between the two human regions.
According to other embodiments, the distance between two personnel areas may also be represented by the Euclidean distance between the center points of the two areas. As described above, the pixel coordinates of the center point have been obtained, and the pixel distance therebetween can be calculated using the euclidean distance formula.
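Both distance measures can be sketched as follows. This is illustrative only; the [x, y, w, h] lower-left-vertex box layout follows the notation above:

```python
def center(box):
    """Center-point pixel coordinates of a [x, y, w, h] person area,
    where (x, y) is the lower-left vertex."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def horizontal_distance(box_a, box_b):
    """Absolute difference of center-point x (horizontal) pixel coordinates."""
    return abs(center(box_a)[0] - center(box_b)[0])

def euclidean_distance(box_a, box_b):
    """Euclidean pixel distance between the two center points."""
    (xa, ya), (xb, yb) = center(box_a), center(box_b)
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
```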
At S209, if the center point distance is less than a predetermined threshold, it is determined that a matching relationship exists between the first person region and the corresponding second person region.
According to an example embodiment, if the difference of the center-point pixel coordinates of a first personnel area and every second personnel area in the horizontal direction is not smaller than the threshold value, the person identified as working aloft has no corresponding guardian: it is determined that the worker in the first personnel area has no matching relationship with any guardian in a second personnel area, and an alarm can be generated. That is, for any person working aloft, if the distance to every person in the ground supervision area is not less than the predetermined threshold, the aerial worker has no guardian corresponding to him or her.
According to an exemplary embodiment, the m first personnel areas and the n second personnel areas are paired by mathematical combination, giving m × n pairs in total. For each of the m × n pairs, the difference of the center-point pixel coordinates in the horizontal direction is calculated; that is, for each aerial worker, the horizontal pixel distance to every person in the ground monitoring area is obtained, and it is judged whether each aerial worker has a corresponding person monitoring on the ground. In this way it can be effectively guaranteed that every person working aloft is monitored from the ground, safeguarding the aerial worker's safety: a ground monitor confirms the safety measures of the aerial work, executes the emergency plan, instructs the aerial worker to stop working when a dangerous situation occurs, and completes the supervision work according to regulations so that violations are corrected in time.
According to some embodiments, the value of the threshold can be set by the operator according to the actual work site.
According to some embodiments, if for each first personnel area the difference of the center-point pixel coordinates in the horizontal direction with respect to at least one of the n second personnel areas is smaller than the threshold, every person working aloft has monitoring personnel on the ground, and no alarm is generated.
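The m × n matching check described above might be sketched like this (a hypothetical helper; `top`/`bottom` hold [x, y, w, h] person areas, and the threshold is site-specific):

```python
def unmatched_workers(top, bottom, threshold):
    """Return indices of aerial workers (first person areas) that have no
    ground monitor (second person area) whose horizontal center-point pixel
    distance is smaller than the threshold. A non-empty result means an
    alarm should be generated for those workers."""
    alarms = []
    for i, worker in enumerate(top):
        wx = worker[0] + worker[2] / 2.0                      # worker center x
        matched = any(abs(wx - (m[0] + m[2] / 2.0)) < threshold
                      for m in bottom)
        if not matched:
            alarms.append(i)
    return alarms
```

With an empty `bottom` set (n = 0), every worker is reported, matching the rule that aerial work without any ground monitor triggers an alarm.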
Fig. 3 shows a flow chart of a method of identifying a first security device according to an embodiment of the invention.
Referring to fig. 3, according to an example embodiment, a first person region is detected, and a human body detection frame is acquired; performing human body key point detection on the human body detection frame by using a key point detection algorithm to obtain human body key points; determining an upper body region of the human body through the intersection point of the human body waist key point connecting line of the human body key point and the human body detection frame; and identifying the upper body area of the human body by using a first classification model, and judging whether the first safety device exists in the first personnel area.
As shown in fig. 3, in S301, the first person region is detected, and a human body detection frame is acquired.
According to an example embodiment, a pre-trained YOLOv7 model may be employed to detect the first person region.
For example, the loss function for detection of personnel areas may be
LOSS = loss_loc + loss_obj + loss_cls,

where loss_loc is the position regression loss function, loss_obj is the object confidence loss function, and loss_cls is the classification loss function.
Position regression loss function (CIoU-style):

loss_loc = λ_iou Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ 1 − IOU + d²/c² + α·v ],

where v = (4/π²)·(arctan(w_gt/h_gt) − arctan(w/h))² and α = v / ((1 − IOU) + v).

Object confidence loss function (binary cross-entropy, with the no-object terms weighted by λ_c):

loss_obj = − Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ C_i^j·log Ĉ_i^j + (1 − C_i^j)·log(1 − Ĉ_i^j) ] − λ_c Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} [ C_i^j·log Ĉ_i^j + (1 − C_i^j)·log(1 − Ĉ_i^j) ].

Classification loss function:

loss_cls = − λ_cls Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} Σ_{C∈classes} [ P_i^j(C)·log P̂_i^j(C) + (1 − P_i^j(C))·log(1 − P̂_i^j(C)) ].

Formula symbol interpretation:

S² represents the grid cells, and B represents the boxes of the current grid cell, each grid cell having 3 boxes;

1_{ij}^{obj} indicates whether the j-th box of the i-th grid cell contains an object: with an object it is 1, without an object it is 0; 1_{ij}^{noobj} is the opposite: with an object it is 0, without an object it is 1;

Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} accumulates over the boxes containing objects in all grid cells;

λ_c, λ_cls and λ_iou represent weighting factors;

d and c respectively represent the distance between the center points of the predicted box and the ground-truth box, and the diagonal length of their smallest enclosing rectangle;

IOU represents the intersection-over-union of the predicted box and the ground-truth box;

w and h represent the width and height of the predicted box;

w_gt and h_gt represent the width and height of the ground-truth box;

C_i^j represents the confidence label of the j-th box of the i-th grid cell, and Ĉ_i^j represents the predicted confidence of the j-th box of the i-th grid cell;

P_i^j(C) represents the probability label that the j-th box of the i-th grid cell is of class C, and P̂_i^j(C) represents the predicted probability that the j-th box of the i-th grid cell is of class C.
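As a numeric sketch of the CIoU-style regression term implied by the symbols d, c, IOU and w/h above (an illustration under the assumption of [x1, y1, x2, y2] corner boxes, not the patented implementation):

```python
import math

def ciou_loss(pred, gt):
    """CIoU-style box regression loss: 1 - IOU + d^2/c^2 + alpha*v, where d is
    the center-point distance, c the diagonal of the smallest enclosing box,
    and v an aspect-ratio consistency term. Boxes are [x1, y1, x2, y2]."""
    # Intersection-over-union
    iw = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    ih = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = iw * ih
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    iou = inter / union if union > 0 else 0.0
    # Squared center distance d^2 and squared enclosing-box diagonal c^2
    d2 = ((pred[0] + pred[2]) / 2 - (gt[0] + gt[2]) / 2) ** 2 + \
         ((pred[1] + pred[3]) / 2 - (gt[1] + gt[3]) / 2) ** 2
    cx = max(pred[2], gt[2]) - min(pred[0], gt[0])
    cy = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cx ** 2 + cy ** 2
    # Aspect-ratio term v and its weight alpha
    w, h = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(w / h)) ** 2
    alpha = v / ((1 - iou) + v) if (1 - iou) + v > 0 else 0.0
    return 1 - iou + d2 / c2 + alpha * v
```

A perfect prediction yields a loss of 0, and the loss grows with center offset and aspect-ratio mismatch even when the boxes do not overlap.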
In S303, human body key point detection is performed based on the first person region.
According to some embodiments, human keypoint detection may be performed using a pre-trained HRNet model.
For example, the loss function for keypoint detection may be based on the object keypoint similarity:

OKS = Σ_n [ exp(−d_n² / (2·s²·k_n²)) · δ(v_n > 0) ] / Σ_n δ(v_n > 0),

where OKS represents the keypoint similarity, d_n represents the Euclidean distance between the predicted position and the true position of the n-th human keypoint, k_n represents the weight of the n-th human keypoint, s represents the target scale, and δ(v_n > 0) flags whether each keypoint is labeled: v_n = 1 means the keypoint is labeled but not visible in the image, and v_n = 2 means it is labeled and visible in the image.
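A minimal OKS computation consistent with the definition above might look like this (illustrative sketch; the per-keypoint weights and the scale are caller-supplied assumptions):

```python
import math

def oks(pred, gt, k, s, vis):
    """Object Keypoint Similarity between predicted and ground-truth keypoints.
    pred/gt: lists of (x, y); k: per-keypoint weights; s: target scale;
    vis: visibility flags (0 = unlabeled, 1 = labeled but not visible,
    2 = labeled and visible). Only labeled keypoints contribute."""
    num, den = 0.0, 0
    for (px, py), (gx, gy), kn, vn in zip(pred, gt, k, vis):
        if vn > 0:
            d2 = (px - gx) ** 2 + (py - gy) ** 2   # squared Euclidean distance
            num += math.exp(-d2 / (2 * s ** 2 * kn ** 2))
            den += 1
    return num / den if den else 0.0
```

A perfect prediction gives OKS = 1; the similarity decays toward 0 as predicted keypoints drift from their true positions.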
At S305, an upper body region of the human body is determined based on the human body keypoints and the human body detection frame.
According to example embodiments, the upper body region of the human body may be determined by an intersection of a human body waist key point line of human body key points and the human body detection frame. The key points are utilized to determine the upper body region of the human body, so that the subsequent identification of the safety device in the upper body region of the human body is facilitated, the identification precision is higher, and the false detection of the safety device is reduced.
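One way to sketch that geometric step (a hypothetical helper; here the detection box is taken as [x, y, w, h] with a top-left origin, and the upper body extends from the top of the box down to the horizontal line through the two waist keypoints):

```python
def upper_body_region(det_box, left_waist, right_waist):
    """Crop the upper-body region from a human detection box using the waist
    keypoints: keep everything above the waist line. det_box = [x, y, w, h]
    with (x, y) the top-left corner; waist points are (x, y) pixel coords."""
    x, y, w, h = det_box
    waist_y = (left_waist[1] + right_waist[1]) / 2.0   # waist line height
    waist_y = min(max(waist_y, y), y + h)              # clamp into the box
    return [x, y, w, waist_y - y]
```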
In S307, the human upper body region is detected using a first classification model, and it is determined whether the first safety device is present in the first person region.
According to some embodiments, a classification test may be performed using a pre-trained resnet model to determine if a first security device (e.g., a seat belt) is present.
For example, the loss function for image classification may be the mean cross-entropy:

L = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{C} y_{i,j}·log(p_{i,j}),

where y_{i,j} is the label indicating whether the i-th sample belongs to the j-th class, p_{i,j} is the predicted probability that the i-th sample is of the j-th class, n is the number of pictures per batch, and C is the number of classes.
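The mean cross-entropy above can be sketched directly (illustrative; one-hot labels assumed, with a small epsilon guarding log(0)):

```python
import math

def cross_entropy(labels, probs):
    """Mean categorical cross-entropy: -(1/n) * sum_i sum_j y_ij * log(p_ij).
    labels: one-hot rows; probs: predicted probability rows."""
    n = len(labels)
    total = 0.0
    for y_row, p_row in zip(labels, probs):
        for y, p in zip(y_row, p_row):
            if y:                                  # only the true class contributes
                total -= y * math.log(max(p, 1e-12))
    return total / n
```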
According to an example embodiment, the first classification model classifies and identifies the upper body region of the human body in the first person region: if the classification result of the upper body region is that the safety device is worn, the person is identified as wearing the safety device; if the classification result is that no safety device is present, the person is identified as not wearing the safety device, and an alarm is generated.
According to an exemplary embodiment, by pre-processing the first person region to narrow it down to the upper body region, and then performing safety-belt recognition on that region with a classification model, the area relevant to the safety device is reduced to a certain extent. The scheme provided by the embodiment of the invention therefore greatly lowers the difficulty of recognition, reduces false detections of the safety device, and improves the accuracy of safety-belt recognition. This matters especially in dangerous aerial work scenes, where whether personnel wear the safety device is critical; the scheme helps ensure that aerial workers actually wear it.
Fig. 4 shows a flow chart for identifying a second security device according to an embodiment of the invention.
Referring to fig. 4, at S401, a first person region and a second person region are acquired. As already discussed above, the detection of the second person region and the first person region may be based on the same detection model, which is not described in detail herein.
In S403, human body key point detection is performed. The foregoing has been discussed and will not be repeated.
At S405, a human head region is determined based on the human body keypoint detection results of the first person region and the second person region, together with the first person region and the second person region themselves. For example, the head region can be obtained through the intersection points of the line connecting the shoulder keypoints with the human body detection frame.
At S407, the human head regions are classified by quality. According to some embodiments, a pre-trained classification model (e.g., a resnet model) is employed to quality-classify and filter the head regions. For example, samples of severely insufficient quality may be excluded, and different second-safety-device classification detection models may subsequently be selected for different quality levels.
At S409, the human head region is identified using a second classification model, and it is determined whether the second safety device exists in each of the first person region and the second person region. Similar to the detection of the first security device, a classification detection may be performed using a pre-trained resnet model to determine whether a second security device (e.g., a helmet) is present.
According to an example embodiment, the second classification model classifies and identifies the human head regions of the first and second person regions: if the classification result of a head region is that the safety device is present, the person is identified as wearing the safety device; if the classification result is that no safety device is present, the person is identified as not wearing it, and an alarm is generated.
According to the example embodiment, the head regions are classified by quality and samples of severely insufficient quality are filtered out, and different second-safety-device classification detection models can be selected for different quality levels, which improves the accuracy of helmet recognition.
Fig. 5 shows a schematic diagram of a patrol apparatus supervising aloft work safety according to an example embodiment.
Referring to fig. 5, according to an example embodiment, a patrol apparatus for supervising aloft work safety may include a moving device 501, a camera pole 503, first and second camera devices 505 and 507, and a communication device 509. The inspection device may be in the form of, for example, an inspection robot.
As shown in fig. 5, according to an embodiment, a camera pole 503 may be provided on the mobile device 501. The first image capturing device 505 and the second image capturing device 507 may be disposed on the camera pole 503, forming the inspection equipment for supervising aloft work safety. The communication device 509 may be disposed on the mobile device 501.
According to some embodiments, the first image capturing device 505 and the second image capturing device 507 have a fixed focal length and a coplanar capturing direction. The first image pickup device 505 and the second image pickup device 507 have the same elevation angle and depression angle with respect to the horizontal plane, and the elevation angle and the depression angle are not less than 75 degrees. The first camera 505 is arranged to take a first image of an aerial work area, while the second camera 507 is arranged to take a second image of a floor surveillance area.
According to some embodiments, the first image capturing device 505 and the second image capturing device 507 may be disposed at intervals. For example, the first camera 505 may be disposed less than 1 meter from the ground, and the second camera 507 may be disposed more than 2 meters from the ground.
According to some embodiments, the communication means 509 is for transmitting the first image and the second image to a back-end computing device. The communication means 509 may comprise a WIFI device or a 5G device, etc.
The inspection apparatus according to the example embodiment may be moved at a job site, and images are acquired through the first camera 505 and the second camera 507 and transferred to the computing apparatus, so that the computing apparatus may perform overhead job safety supervision by the method described above.
Fig. 6 shows a schematic diagram of a monitoring scheme for monitoring an overhead operation without protection and supervision according to an embodiment of the invention.
Referring to fig. 6, two image pickup apparatuses are prepared; for each of them, an intrinsic matrix and distortion coefficients are obtained using Zhang's calibration method (Zhengyou Zhang's method), and the obtained intrinsic matrices and distortion coefficients of the two image pickup devices are stored separately in a configuration file.
The two camera devices acquire images, yielding first image data and second image data. Distortion correction is applied to the acquired first and second image data using the intrinsic matrices and distortion coefficients of the two image pickup devices stored in the configuration file. Person detection is then performed on the corrected images, giving m first personnel areas in the first corrected image and n second personnel areas in the second corrected image, where m > 0, n ≥ 0, and m and n are integers. It is judged whether the horizontal pixel distance between some first personnel area and every one of the n second personnel areas is not smaller than the threshold, or whether n is 0. If so, a person identified as working aloft has no corresponding ground guardian, and an alarm is generated. If every one of the m first personnel areas is within the threshold horizontal pixel distance of at least one of the n second personnel areas and n is not 0, no alarm is generated, and safety device recognition proceeds.
And respectively carrying out safety device identification on the personnel in the first personnel area and the second personnel area, wherein the first personnel area identifies whether the high-altitude operation personnel wear safety belts and safety helmets, and the second personnel area identifies whether the ground guardian wears the safety helmets. If no first security device is detected for any of the first personnel areas, a security pre-warning is generated. If all second personnel areas matched with the first personnel area do not detect the second safety device, a safety early warning is generated.
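The alarm rules of this paragraph might be aggregated as follows (a hypothetical sketch; the boolean flags 'belt'/'helmet' and the `matches` mapping are illustrative names, not part of the disclosure):

```python
def safety_warnings(first_areas, second_areas, matches):
    """Aggregate the pre-warning rules: every aerial worker must wear the
    first safety device (belt) and a helmet; for each worker, at least one
    matched ground monitor must wear a helmet. Each area is a dict with
    boolean flags 'belt'/'helmet'; matches[i] lists indices of second areas
    matched to first area i."""
    warnings = []
    for i, worker in enumerate(first_areas):
        if not worker.get('belt', False):
            warnings.append(('no_belt', i))
        if not worker.get('helmet', False):
            warnings.append(('no_helmet_aerial', i))
        monitors = matches.get(i, [])
        if monitors and not any(second_areas[j].get('helmet', False)
                                for j in monitors):
            warnings.append(('no_helmet_ground', i))
    return warnings
```

The unmatched-worker alarm is assumed to have been raised in the earlier matching step, so this sketch only covers the safety device checks.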
Those skilled in the art will readily appreciate from the disclosure of the exemplary embodiments that the present disclosure may be readily utilized as a basis for modifying or adapting other embodiments of the present disclosure.
According to the embodiment, whether safety early warning is generated or not is determined by utilizing the matching relation between the first personnel area and the second personnel area, the detection result of the first safety device and the detection result of the second safety device, so that safety problems possibly generated in the high-altitude operation process can be effectively avoided. According to the scheme provided by the embodiment of the invention, the protection and monitoring conditions of the high-altitude operation can be accurately identified by using lower cost, and the potential safety hazards of constructors can be effectively eliminated.
According to the technical scheme of the embodiment, images of the aerial work area and the ground monitoring area are captured and matched using dual cameras, and unprotected, unsupervised aerial work is identified using human body region detection, keypoint detection and head classification. The cameras are inexpensive, and only two of them are needed to cover the whole aerial and ground area, which provides important technical support for monitoring aerial work and effectively avoids possible safety risks in the aerial work process.
According to the embodiment, the upper body region of the human body can be determined through the intersection point of the human body waist key point connecting line of the human body key point and the human body detection frame. The key points are utilized to determine the upper body region of the human body, so that the subsequent identification of the safety device in the upper body region of the human body is facilitated, the identification precision is higher, and the false detection of the safety device is reduced.
According to the embodiment, the head regions are classified by quality, samples of severely insufficient quality are filtered out, and different second-safety-device classification detection models can subsequently be selected for different quality levels, so the accuracy of helmet recognition is improved.
FIG. 7 illustrates a block diagram of a computing device according to an example embodiment of the invention.
As shown in fig. 7, computing device 30 includes processor 12 and memory 14. Computing device 30 may also include a bus 22, a network interface 16, and an I/O interface 18. The processor 12, memory 14, network interface 16, and I/O interface 18 may communicate with each other via a bus 22.
The processor 12 may include one or more general purpose CPUs (Central Processing Units), microprocessors, or application-specific integrated circuits for executing relevant program instructions. According to some embodiments, computing device 30 may also include a graphics processing unit (GPU) 20 to accelerate the processor 12.

Memory 14 may include machine-readable media in the form of volatile memory, such as random access memory (RAM), read-only memory (ROM), and/or cache memory. Memory 14 is used to store one or more programs including instructions, as well as data. The processor 12 may read instructions stored in the memory 14 to perform the methods according to the embodiments of the invention described above.
Computing device 30 may also communicate with one or more networks through network interface 16. The network interface 16 may be a wireless network interface.
Bus 22 may be a bus including an address bus, a data bus, a control bus, etc. Bus 22 provides a path for exchanging information between the components.
It should be noted that, in the implementation, the computing device 30 may further include other components necessary to achieve normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), network storage devices, cloud storage devices, or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above.
It will be clear to a person skilled in the art that the solution according to the invention can be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, where the hardware may be, for example, a field programmable gate array, an integrated circuit, or the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some service interfaces, devices or units, whether electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present invention.
The exemplary embodiments of the present invention have been particularly shown and described above. It is to be understood that this invention is not limited to the precise arrangements and instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method of supervising overhead safety, comprising:
acquiring a first image from a first camera device and a second image from a second camera device, wherein the first image comprises an aerial work area, and the second image comprises a ground monitoring area below the aerial work area;
identifying a first person region in the first image and a second person region in the second image;
a first security device that detects the first personnel area;
a second security device that detects the first personnel area and the second personnel area;
establishing a matching relationship between the first personnel area and the second personnel area;
and determining whether a safety pre-warning is generated according to the matching relationship between the first personnel area and the second personnel area, the detection result of the first safety device and the detection result of the second safety device.
2. The method of claim 1, wherein detecting the first security device of the first personnel area comprises:
detecting key points of a human body based on the first person region;
determining a human upper body region based on the human key point detection result of the first human region;
and identifying the upper body area of the human body by using a first classification model, and judging whether the first safety device exists in the first personnel area.
3. The method of claim 2, wherein detecting the second security device of the first personnel area and the second personnel area comprises:
detecting key points of the human body based on the second personnel area;
determining a human head area based on the human body key point detection result of the first person area and the human body key point detection result of the second person area and the first person area and the second person area;
classifying the quality of the human head region;
and identifying the head area of the human body by using a second classification model, and judging whether the second safety device exists in each of the first personnel area and the second personnel area.
4. The method of claim 1, wherein establishing a matching relationship between the first personnel area and the second personnel area comprises:
calculating the central point pixel coordinates of the first personnel area and the second personnel area;
respectively calculating the center point pixel distance between each first personnel area and all second personnel areas;
and if the center point pixel distance is smaller than a preset threshold value, determining that a matching relationship exists between the first personnel area and the corresponding second personnel area.
5. The method of claim 1, wherein determining whether to generate a safety pre-warning based on the matching relationship of the first personnel area and the second personnel area and the detection result of the first safety device and the detection result of the second safety device comprises:

generating a safety pre-warning if the first safety device of the first personnel area is not detected; or

generating a safety pre-warning if no matching relationship exists between the first personnel area and the second personnel area; or

generating a safety pre-warning if none of the second personnel areas matched with the first personnel area detects the second safety device.
6. The method of claim 1, wherein,
the focal length of the first camera device and the focal length of the second camera device are fixed, and the shooting directions are coplanar;
the first image pickup device and the second image pickup device have the same elevation angle and depression angle with respect to a horizontal plane, and the elevation angle and the depression angle are not less than 75 degrees.
7. An inspection apparatus for supervising the safety of overhead operations, comprising:
a mobile device;
the camera shooting rod is arranged on the mobile device;
the first imaging device and the second imaging device are arranged on the imaging rod, the first imaging device is arranged to shoot a first image of the aerial work area, and the second imaging device is arranged to shoot a second image of the ground monitoring area below the aerial work area;
communication means for transmitting the first image and the second image to a back-end computing device.
8. The inspection apparatus according to claim 7, wherein,
the focal length of the first camera device and the focal length of the second camera device are fixed, and the shooting directions are coplanar;
the first image pickup device and the second image pickup device have the same elevation angle and depression angle with respect to a horizontal plane, and the elevation angle and the depression angle are not less than 75 degrees.
9. The inspection apparatus according to claim 7, wherein,
the first camera device is arranged to be less than 1 meter away from the ground;
the second camera device is arranged to be more than 2 meters away from the ground.
10. A computing device, comprising:
a processor; and
memory storing a computer program which, when executed by the processor, implements the method according to any of claims 1-6.
CN202310897247.7A 2023-07-20 2023-07-20 Method for supervising safety of high-altitude operation, inspection equipment and computing equipment Active CN116959028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310897247.7A CN116959028B (en) 2023-07-20 2023-07-20 Method for supervising safety of high-altitude operation, inspection equipment and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310897247.7A CN116959028B (en) 2023-07-20 2023-07-20 Method for supervising safety of high-altitude operation, inspection equipment and computing equipment

Publications (2)

Publication Number Publication Date
CN116959028A true CN116959028A (en) 2023-10-27
CN116959028B CN116959028B (en) 2024-03-01

Family

ID=88452375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310897247.7A Active CN116959028B (en) 2023-07-20 2023-07-20 Method for supervising safety of high-altitude operation, inspection equipment and computing equipment

Country Status (1)

Country Link
CN (1) CN116959028B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118411500A (en) * 2024-06-27 2024-07-30 杭州海康威视数字技术股份有限公司 Portable imaging device-based operation scene detection method and portable imaging device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108449574A (en) * 2018-03-15 2018-08-24 南京慧尔视防务科技有限公司 Microwave-based security detection method and system
CN111191586A (en) * 2019-12-30 2020-05-22 安徽小眯当家信息技术有限公司 Method and system for inspecting wearing condition of safety helmet of personnel in construction site
CN112184773A (en) * 2020-09-30 2021-01-05 华中科技大学 Helmet wearing detection method and system based on deep learning
CN113382202A (en) * 2021-04-28 2021-09-10 陈兆莉 Community monitoring system for property management
CN113935645A (en) * 2021-10-22 2022-01-14 中国海洋石油集团有限公司 High-risk operation management and control system and method based on identification analysis technology
CN114155492A (en) * 2021-12-09 2022-03-08 华电宁夏灵武发电有限公司 High-altitude operation safety belt hanging rope high-hanging low-hanging use identification method and device and electronic equipment
CN114209118A (en) * 2021-12-29 2022-03-22 国网瑞嘉(天津)智能机器人有限公司 High-altitude operation intelligent early warning method and device and intelligent safety helmet
US20220148322A1 (en) * 2019-03-01 2022-05-12 Hitachi, Ltd. Left Object Detection Device and Left Object Detection Method
CN115497054A (en) * 2022-11-17 2022-12-20 安徽深核信息技术有限公司 Method and device for detecting hanging state of safety rope hook for aerial work
US20230041612A1 (en) * 2020-03-05 2023-02-09 Nec Corporation Monitoring device, monitoring method, and program recording medium
CN116153015A (en) * 2023-02-23 2023-05-23 上海七弦智能科技有限公司 Monitoring system and monitoring method for monitoring forest fire prevention based on 5G video


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LILI WANG et al.: "Safety Helmet Wearing Detection Model Based on Improved YOLO-M", IEEE Access, pages 26247-26257 *
WU Fan et al.: "Research and application of laser ranging radar for safety monitoring of scissor-type aerial work platforms", Informatization of China Construction, no. 5, pages 74-77 *
ZHAO Wei: "Research on a machine-vision-based safety monitoring system for train inspection depot workers", China Master's Theses Full-text Database (Information Science and Technology), no. 03, pages 138-2278 *


Also Published As

Publication number Publication date
CN116959028B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN110110613B (en) Track traffic abnormal personnel detection method based on motion recognition
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
CN101465033B (en) Automatic tracking recognition system and method
CN108319926A Safety helmet wearing detection system and detection method for construction sites
CN111062429A (en) Chef cap and mask wearing detection method based on deep learning
CN108062542B (en) Method for detecting shielded human face
CN108256459A Security-gate face recognition and automatic face library construction algorithm based on multi-camera fusion
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
CN116959028B (en) Method for supervising safety of high-altitude operation, inspection equipment and computing equipment
CN112235537A (en) Transformer substation field operation safety early warning method
CN111062303A (en) Image processing method, system and computer storage medium
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN112257660B (en) Method, system, equipment and computer readable storage medium for removing invalid passenger flow
CN116152863B (en) Personnel information identification method and device, electronic equipment and storage medium
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN112800918A (en) Identity recognition method and device for illegal moving target
Hakim et al. Mask Detection System with Computer Vision-Based on CNN and YOLO Method Using Nvidia Jetson Nano
CN116206255A (en) Dangerous area personnel monitoring method and device based on machine vision
CN113314230A (en) Intelligent epidemic prevention method, device, equipment and storage medium based on big data
CN112802100A (en) Intrusion detection method, device, equipment and computer readable storage medium
CN113052125A (en) Construction site violation image recognition and alarm method
CN115578455A (en) Method for positioning reserved hole in concrete structure room
CN117953577A (en) Method and computing device for identifying aloft work illegal behaviors
CN115171006A (en) Detection method for automatically identifying personnel entering electric power dangerous area based on deep learning
CN113780224A (en) Transformer substation unmanned inspection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant