CN113569730B - Protection state detection method and device and electronic equipment - Google Patents

Protection state detection method and device and electronic equipment

Info

Publication number
CN113569730B
CN113569730B (application CN202110852736.1A)
Authority
CN
China
Prior art keywords
image
detected
interest
region
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110852736.1A
Other languages
Chinese (zh)
Other versions
CN113569730A (en)
Inventor
罗靖宇
刘明
武晓敏
段志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Glodon Co Ltd
Original Assignee
Glodon Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glodon Co Ltd filed Critical Glodon Co Ltd
Priority to CN202110852736.1A priority Critical patent/CN113569730B/en
Publication of CN113569730A publication Critical patent/CN113569730A/en
Application granted granted Critical
Publication of CN113569730B publication Critical patent/CN113569730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a protection state detection method, a protection state detection device and electronic equipment. The method comprises: acquiring an image to be detected; performing scene recognition on the image to be detected to determine whether it is an image of a target protection scene; and, when it is, detecting the protection state of the region of interest in the image to be detected to determine whether that protection state is normal. A detection method that does not rely on sensors is thus provided based on computer vision technology: performing scene recognition on the image before detecting the protection state of the region of interest filters out false detections that a positional shift of the image acquisition equipment might cause, improving detection reliability and the robustness of the detection algorithm. Further, since the detection method is independent of the acquisition angle of the image, it is portable across different devices.

Description

Protection state detection method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a protection state detection method and device and electronic equipment.
Background
On construction sites, areas with potential safety hazards must be protected to ensure personnel safety, such as the edges and openings defined in construction engineering. Here, "edges" refers to the five edges in construction engineering, respectively: the perimeters of trenches, pits, grooves and deep foundations; floor edges; the side edges of stairs; the edges of platforms or balconies; and roof edges. "Openings" refers to the four openings in construction engineering, respectively: stairway openings, elevator shaft openings, reserved openings and passage openings.
Existing protection detection is generally based on Internet-of-Things technology: sensors are deployed on the construction site to monitor the protection state and raise alarms, for example by detecting the distance of a person from the protective facility and issuing an alarm when that distance is less than a safe distance.
However, the above solution requires installing sensors, and the construction-site environment is complex; sensors are easily damaged in such an environment, so the reliability of subsequent detection is low.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, an apparatus, and an electronic device for detecting a protection state, so as to solve the problem of low reliability of detection of the protection state.
According to a first aspect, an embodiment of the present invention provides a protection state detection method, including:
acquiring an image to be detected;
performing scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene;
and when the image to be detected is an image of a target protection scene, detecting the protection state of the region of interest in the image to be detected so as to determine whether the protection state of the region of interest is normal or not.
According to the protection state detection method provided by the embodiment of the invention, a detection method that does not rely on sensors is provided based on computer vision technology. Before the protection state of the region of interest is detected, scene recognition is performed on the image to be detected to determine whether it is an image of the target protection scene; this filters out false detections that a positional shift of the image acquisition equipment might cause, improving detection reliability and the robustness of the detection algorithm. Further, since the detection method is independent of the acquisition angle of the image, it is portable across different devices.
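The steps above can be sketched as a simple gating pipeline. This is a minimal illustration only; the function names and the stub classifiers are hypothetical placeholders, not part of the patent:

```python
def detect_protection_state(image, is_target_scene, check_guard):
    """Gate guard-state detection behind scene recognition.

    `is_target_scene` and `check_guard` stand in for the trained
    scene classification and detection/classification models.
    """
    if not is_target_scene(image):
        # Not a target protection scene: skip guard detection to avoid
        # false alarms caused e.g. by a shifted camera.
        return "non-target scene"
    return "normal" if check_guard(image) else "guard missing"

# Usage with trivial stand-in predicates:
result = detect_protection_state(
    image="frame-001",
    is_target_scene=lambda img: True,
    check_guard=lambda img: False,
)
```

The key design point is that the (cheap) scene gate runs before the (more error-prone) guard check, so images from a shifted camera never reach the guard classifier.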
With reference to the first aspect, in a first implementation manner of the first aspect, the performing scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene includes:
acquiring a scene classification model;
and inputting the image to be detected into the scene classification model, so as to carry out global feature extraction on the image to be detected by utilizing the scene classification model, and determining whether the image to be detected is an image of a target protection scene.
According to the protection state detection method provided by the embodiment of the invention, the subsequent protection state detection of the region of interest is based on local edge features, and a construction site contains many areas that have edge features but need no protection; on devices whose captured scene changes frequently, this would cause many false detections in the subsequent protection state detection. Adding the global feature extraction of the scene classification model therefore filters such scenes out in advance and improves the reliability of the algorithm.
With reference to the first embodiment of the first aspect, in a second implementation of the first aspect, the inputting the image to be detected into the scene classification model to perform global feature extraction on the image to be detected by using the scene classification model, and determining whether the image to be detected is an image of a target protection scene includes:
scaling the image to be detected to a preset size, and extracting global features of the scaled image by utilizing a convolution unit in the scene classification model to obtain global features, wherein the convolution unit comprises a plurality of convolution layers and an attention module connected with the last convolution layer;
and determining whether the image to be detected is an image of a target protection scene or not based on the global features.
According to the protection state detection method provided by the embodiment of the invention, the attention module added to the convolution unit helps the scene classification model better locate the regions of target features during feature extraction, improving model performance without increasing the amount of computation.
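The patent does not specify which attention mechanism is used; a squeeze-and-excitation-style channel attention is one lightweight choice consistent with the description (re-weighting feature channels with negligible extra computation). A minimal numpy sketch, with all shapes and weights purely illustrative:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style gating over a (C, H, W) feature map."""
    squeeze = feature_map.mean(axis=(1, 2))        # (C,): global average pool
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate in (0, 1), (C,)
    return feature_map * gate[:, None, None]       # re-weight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 13, 13))            # e.g. last conv layer output
w1 = rng.standard_normal((8, 2))                   # illustrative bottleneck weights
w2 = rng.standard_normal((2, 8))
out = channel_attention(feat, w1, w2)
```

Because the gate is computed from a single global average pool and two tiny matrix products, the added cost is negligible next to the convolution layers themselves, which matches the "without increasing the amount of computation" claim.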
With reference to any one of the first aspect to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the method further includes:
when the image to be detected is not the image of the target protection scene, determining that an image acquisition device for acquiring the image to be detected is offset, and sending information of the offset of the image acquisition device to a preset object.
According to the protection state detection method provided by the embodiment of the invention, when the image is detected not to be an image of the target protection scene, the preset object is notified that the image acquisition device has shifted and needs adjustment, so that the acquisition angle of the image acquisition device can be corrected in time.
With reference to the first aspect, in a fourth implementation manner of the first aspect, the detecting a protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal includes:
acquiring a detection model and a protection classification model;
inputting the image to be detected into the detection model, and determining the position of the region of interest;
based on the position of the region of interest, capturing an image of interest from the image to be detected;
and inputting the image of interest into the protection classification model, and determining whether the protection state of the region of interest is normal.
The protection state detection method provided by the embodiment of the invention uses a detect-then-classify flow to identify whether the protection of the region of interest is normal, which ensures the accuracy of the protection state judgment to the greatest extent and reduces both false alarms and missed alarms for unprotected regions.
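The detect-then-classify flow of the four steps above can be sketched as follows; the model interfaces are hypothetical stand-ins for the trained detection and protection classification models, not the patent's actual networks:

```python
def guard_state(image, detect_roi, crop, classify_guard, threshold=0.5):
    """Two-stage check: locate the region of interest, then classify
    whether protection is in place."""
    box = detect_roi(image)                  # stage 1: ROI position
    roi_image = crop(image, box)             # cut the image of interest
    p_guarded = classify_guard(roi_image)    # stage 2: protection probability
    return "normal" if p_guarded >= threshold else "abnormal"

# Stand-in callables for illustration:
state = guard_state(
    image=[[0] * 4] * 4,
    detect_roi=lambda img: (0, 0, 2, 2),
    crop=lambda img, b: [row[b[0]:b[2]] for row in img[b[1]:b[3]]],
    classify_guard=lambda roi: 0.9,
)
```

The decision threshold of 0.5 is an assumed default; in practice it would be tuned against the false-alarm/missed-alarm trade-off the text mentions.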
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the inputting the image to be detected into the detection model, determining a category and a location of the region of interest includes:
extracting the edge features of the image to be detected by using a feature extraction unit in the detection model;
and determining the category and the position of the region of interest based on the edge features, wherein the region of interest comprises an edge or an opening.
According to the protection state detection method provided by the embodiment of the invention, the detection model extracts local features of the image to be detected, so the detection focuses more on details, improving detection reliability.
With reference to the fourth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the capturing an image of interest from the image to be detected based on the position of the region of interest includes:
determining a range of the region of interest using the location of the region of interest;
the range of the determined region of interest is expanded outwards in equal proportion, and the range of the image of interest is determined;
and cutting out the image of interest from the image to be detected by utilizing the range of the image of interest.
According to the protection state detection method provided by the embodiment of the invention, expanding the range of the region of interest outwards in equal proportion ensures the cropped image of interest contains the region as completely as possible, improving the accuracy of the detection method.
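An equal-proportion outward expansion clipped to the image bounds might look like the following; the 20% expansion ratio is an assumed parameter, as the patent gives no concrete value:

```python
def expand_box(box, img_w, img_h, ratio=0.2):
    """Grow an (x1, y1, x2, y2) box outwards by `ratio` of its width and
    height, clipped to the image, so the cropped image keeps context."""
    x1, y1, x2, y2 = box
    dx = (x2 - x1) * ratio
    dy = (y2 - y1) * ratio
    return (max(0.0, x1 - dx), max(0.0, y1 - dy),
            min(float(img_w), x2 + dx), min(float(img_h), y2 + dy))

# A 100x70 box in a 200x120 image, expanded by 20% per side:
expanded = expand_box((50, 50, 150, 120), img_w=200, img_h=120)
```

Note the clipping: a region of interest near the image border expands only as far as the image allows, so the subsequent crop always stays inside the image to be detected.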
According to a second aspect, an embodiment of the present invention further provides a protection state detection apparatus, including:
the acquisition module is used for acquiring the image to be detected;
the scene recognition module is used for carrying out scene recognition on the image to be detected and determining whether the image to be detected is an image of a target protection scene or not;
and the protection detection module is used for detecting the protection state of the region of interest in the image to be detected when the image to be detected is the image of the target protection scene so as to determine whether the protection state of the region of interest is normal or not.
The protection state detection device provided by the embodiment of the invention realizes a detection method that does not rely on sensors, based on computer vision technology. Before the protection state of the region of interest is detected, scene recognition is performed on the image to be detected to determine whether it is an image of the target protection scene; this filters out false detections that a positional shift of the image acquisition equipment might cause, improving detection reliability and the robustness of the detection algorithm. Further, since the detection method is independent of the acquisition angle of the image, it is portable across different devices.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: the protection state detection device comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, so that the protection state detection method in the first aspect or any implementation manner of the first aspect is executed.
According to a fourth aspect, an embodiment of the present invention provides a computer readable storage medium storing computer instructions for causing the computer to perform the protection state detection method according to the first aspect or any implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a guard state detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a guard state detection method according to an embodiment of the present invention;
FIG. 3 is a schematic architecture diagram of a scene classification model according to an embodiment of the invention;
FIG. 4 is a flow chart of a guard state detection method according to an embodiment of the present invention;
FIG. 5 is a flow chart of detection classification according to an embodiment of the invention;
FIG. 6 is a schematic illustration of a cut-out of a region of interest according to an embodiment of the invention;
FIG. 7 is a block diagram of a guard state detecting apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
To solve the problem of low detection reliability caused by hardware installation, protection can instead be detected with a software algorithm, for example by detecting the protection state in an acquired image based on computer vision technology: first detect edges, then classify whether the edges are protected. However, in a building scenario not every area with edge features needs protection, such as non-pedestrian passages or edges with a drop of less than 2 m. In such scenarios, an edge detection and classification method alone leads to false detections.
Based on this, the protection state detection method provided by the embodiments of the invention performs protection scene recognition on the acquired image before edge detection, to determine whether protection detection is needed at all. Only when it is needed is the protection state then detected.
According to an embodiment of the present invention, there is provided a protection state detection method embodiment, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different from that herein.
In this embodiment, a protection state detection method is provided, which may be used in the above electronic device, such as a camera, a mobile phone, a tablet computer, and smart glasses, and fig. 1 is a flowchart of the protection state detection method according to an embodiment of the present invention, and as shown in fig. 1, the flowchart includes the following steps:
s11, acquiring an image to be detected.
The image to be detected may be acquired directly by the electronic device or obtained by the electronic device from a third-party device. For example, if the electronic device has an image acquisition function, it can capture the image itself and thereby obtain the image to be detected; alternatively, the electronic device is connected to a third-party device that captures images and sends them to the electronic device, which thereby obtains the image to be detected. This embodiment places no limit on the source of the image, which can be set according to the actual situation.
As a specific application scenario of this embodiment, patrol personnel may collect images of each area with a patrol device; the patrol device may itself execute the protection state detection method to perform the detection, or it may send the collected images to the electronic device, which performs the detection, and so on.
S12, carrying out scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene.
The target protection scene represents a scene that needs protection detection, such as an edge or an opening. Specifically, the features of edges and openings are stored in the electronic device; features are extracted from the image to be detected and compared against the features of each target protection scene, so as to determine whether the image to be detected is an image of a target protection scene.
Alternatively, a scene classification model is trained on a large number of scene images; the image to be detected is fed into the scene classification model for scene recognition to determine whether it is an image of a target protection scene. Specific details of the scene classification model are described below.
The specific processing mode of scene recognition is not limited, and the specific processing mode can be set correspondingly according to actual requirements, and the electronic equipment can be ensured to determine whether the image to be detected is the image of the target protection scene or not.
When the image to be detected is the image of the target protection scene, S13 is executed; otherwise, other operations are performed.
S13, detecting the protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal.
After the electronic device determines that the image to be detected is an image of the target protection scene, it needs to detect the protection state to determine whether that state is normal. The region of interest is a region that needs protection, such as an edge or an opening.
The determination of the region of interest can be determined by using a detection network model, and after the region of interest is determined, the protection classification model is used for identifying the image of the region of interest to determine whether the protection state is normal.
Alternatively, features are extracted from the image to be detected once, and both the detection of the region of interest and the protection classification are performed on the extracted features, so as to improve detection efficiency.
According to the protection state detection method provided by this embodiment, a detection method that does not rely on sensors is provided based on computer vision technology. Before the protection state of the region of interest is detected, scene recognition is performed on the image to be detected to determine whether it is an image of the target protection scene; this filters out false detections that a positional shift of the image acquisition equipment might cause, improving detection reliability and the robustness of the detection algorithm. Further, since the detection method is independent of the acquisition angle of the image, it is portable across different devices.
In this embodiment, a protection state detection method is provided, which may be used in the above electronic device, such as a camera, a mobile phone, a tablet computer, and smart glasses, and fig. 2 is a flowchart of the protection state detection method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
s21, acquiring an image to be detected.
Please refer to S11 in the embodiment shown in fig. 1 in detail, which is not described herein.
S22, carrying out scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene.
When the image to be detected is the image of the target protection scene, executing S23; otherwise, S24 is performed.
The target protection scene can be set correspondingly according to actual requirements, and is not limited in any way. For example, according to the actual scene of the construction site, a corresponding target protection scene is determined.
Specifically, the step S22 includes:
s221, acquiring a scene classification model.
The input of the scene classification model is the image to be detected, and the output is the probability that the image belongs to a target protection scene. The subsequent detection model detects using the edge features of the image to be detected (an edge feature meaning that one side is flat while the other side has a drop), yet not every scene with edge features needs protection, for example where the height drop is less than 2 meters or where there is no pedestrian traffic. This factor would lead the detection model to produce many false detections.
Therefore, before the detection model runs, the scene classification model identifies the target protection scene so as to filter out images that do not belong to it. For example, positive samples include scene pictures of common indoor openings, outdoor openings, balconies, stairs and the like, while negative samples typically add pictures of other site scenes, such as close-up shots of scaffolding or other tools. These can be set according to actual requirements.
After the scene classification model is trained, it is verified with sample data in a verification data set; if verification fails, sample data corresponding to the failed images is added to strengthen learning of that part of the images. That is, during training the training sample data set is dynamically adjusted, to obtain a scene classification model with accurate predictions.
S222, inputting the image to be detected into a scene classification model, so as to perform global feature extraction on the image to be detected by using the scene classification model, and determining whether the image to be detected is an image of a target protection scene.
Since scene detection must start from the image to be detected as a whole, recognition is performed globally. Extracting global features with the scene classification model therefore allows accurate identification of whether the image belongs to the target protection scene.
In an alternative implementation manner of this embodiment, in conjunction with fig. 3, S222 includes:
(1) Scaling the image to be detected to a preset size, and performing global feature extraction on the scaled image by utilizing a convolution unit in the scene classification model to obtain global features.
Wherein the convolution unit comprises a plurality of convolution layers and an attention module connected to the last convolution layer.
The scene classification model comprises an input unit, a convolution unit and an output processing unit. Specifically, the input unit is used for scaling the input image to a preset size to ensure that the image size input to the convolution unit remains consistent.
As shown in fig. 3, the preset size is 416×416. However, the scope of the present invention is not limited thereto, and may be set correspondingly according to actual situations.
The convolution unit comprises a plurality of convolution layers and an attention module, wherein the attention module is connected with the last convolution layer. The attention module is added into the convolution unit, so that the scene classification model can be better positioned to the region of the target feature when the feature is extracted, and the model performance is improved on the premise of not increasing the calculation amount. The specific number of the convolution layers can be set correspondingly according to actual requirements, and the specific number of the convolution layers is not limited in any way.
The electronic equipment inputs the image to be detected into the scene classification model, and the global features are extracted by utilizing the convolution unit, namely the global features are output from the attention module.
(2) And determining whether the image to be detected is an image of the target protection scene or not based on the global features.
As shown in fig. 3, the global features are flattened and input to the fully connected layer and the Softmax layer to obtain the final output; the maximum value in the output vector indicates the scene that the scene classification model predicts for the image to be detected. Accordingly, it can be determined whether the image to be detected is an image of the target protection scene.
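At inference time, the flatten → fully-connected → Softmax head described above reduces to taking the arg-max of the class probabilities. A minimal numeric illustration (the class names are hypothetical, not from the patent):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)                        # subtract the max to avoid overflow
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output classes of the scene classifier:
classes = ["target_protection_scene", "other_site_scene"]
probs = softmax([2.0, 0.5])
predicted = classes[probs.index(max(probs))]   # the maximum entry wins
```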
The image to be detected is subjected to scene classification model to obtain the prediction of the model on the current scene, and if the image to be detected is a target protection scene, the subsequent protection state detection is carried out; if not, a result of the non-target scene may be returned and S24 is performed.
S23, detecting the protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal.
When the electronic equipment determines that the scene in the image to be detected is the target protection scene, the protection state detection is carried out on the region of interest, so as to determine whether the protection state is normal or not.
This step will be described in detail later in detail.
S24, determining the offset of an image acquisition device for acquiring the image to be detected, and sending the offset information of the image acquisition device to a preset object.
When the scene in the image to be detected is determined not to be the target protection scene, it is determined that the image acquisition device that acquired the image has shifted. For example, when the image acquisition device is a monitoring device, its lens may shift for various reasons; the shift can then be detected and the shift information sent to the preset object, so that it can be adjusted in time.
When the image acquisition device is a mobile acquisition device, the acquisition angle may be the problem; the operator then needs to be reminded that the lens may not be aimed at the target protection scene, that is, the lens position may have caused the target protection scene to be missed.
According to the protection state detection method provided by this embodiment, the subsequent protection state detection of the region of interest is based on local edge features, and a construction site contains many areas that have edge features but need no protection; on devices whose captured scene changes frequently, this would cause many false detections in the subsequent protection state detection. Adding the global feature extraction of the scene classification model filters such scenes out in advance and improves the reliability of the algorithm. In addition, when the image is detected not to be an image of the target protection scene, the preset object is notified that the image acquisition device has shifted and needs adjustment, so that its acquisition angle can be corrected in time.
In this embodiment, a protection state detection method is provided, which may be used in the above electronic device, such as a camera, a mobile phone, a tablet computer, and smart glasses, and fig. 4 is a flowchart of the protection state detection method according to an embodiment of the present invention, and as shown in fig. 4, the flowchart includes the following steps:
S31, acquiring an image to be detected.
For details, please refer to S21 of the embodiment shown in fig. 2, which is not repeated here.
S32, carrying out scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene.
When the image to be detected is the image of the target protection scene, S33 is executed; otherwise, S34 is performed.
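The branch between S33 and S34 can be sketched as a simple dispatch. The function and parameter names below are illustrative assumptions, not part of the patent text:

```python
def process_image(image, is_target_scene, detect_protection, report_offset):
    """Route an image to be detected: run protection detection (S33) only
    when the scene classifier accepts the image; otherwise report that the
    image acquisition device is offset (S34)."""
    if is_target_scene(image):
        return detect_protection(image)   # S33: detect protection state
    report_offset(image)                  # S34: notify the preset object
    return None
```

One design point worth noting: the scene check runs before any detection, so frames from a shifted camera never reach the detector at all.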
For details, please refer to S22 of the embodiment shown in fig. 2, which is not repeated here.
S33, detecting the protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal.
Specifically, the step S33 includes:
S331, acquiring a detection model and a protection classification model.
The specific network structures of the detection model and the protection classification model are not limited here; it is only required that the detection model can detect the region of interest in the image to be detected, and that the protection classification model can classify whether the region of interest is protected.
The input of the detection model is the image to be detected, and the output includes the position and category of the region of interest in the image to be detected; the input of the protection classification model is an image of the region of interest, and the output is the probability that protection is set.
Further, optionally, the protection classification model may classify by category; for example, one model classifies the protection of edges and another classifies the protection of openings. Alternatively, objects with large differences in visual characteristics, such as stairs, balcony edges, reserved holes, and elevator openings, may use different classifiers. This may be set according to the actual situation and is not limited here.
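A per-category setup like the one described can be sketched as a lookup from the detected category to its classifier; the function shape and names are assumptions:

```python
def classify_protection(category, patch, classifiers, default=None):
    """Pick the protection classifier registered for the detected category;
    objects with similar visual characteristics may share one classifier."""
    clf = classifiers.get(category, default)
    if clf is None:
        raise KeyError(f"no protection classifier for category {category!r}")
    return clf(patch)
```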
S332, inputting the image to be detected into a detection model, and determining the position of the region of interest.
The detection model determines the region of interest based on local features of the image to be detected. In some optional implementations of this embodiment, the step S332 may include:
(1) Extracting the edge features of the image to be detected by using a feature extraction unit in the detection model.
The feature extraction unit may include a plurality of convolution layers, or may add an attention module on top of the convolution layers; the specific structure of the feature extraction unit is not limited here. The electronic device extracts the edge features of the image to be detected by using the feature extraction unit.
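The patent leaves the attention module unspecified. As one hedged illustration, a squeeze-and-excitation-style channel gate over a convolutional feature map could look like this (NumPy sketch; the (C, H, W) layout and the omission of a learned bottleneck are assumptions):

```python
import numpy as np

def channel_attention(feat):
    """Weight each channel of a (C, H, W) feature map by a sigmoid gate
    computed from its global average (squeeze-and-excitation style sketch;
    the learned bottleneck MLP is omitted for brevity)."""
    squeeze = feat.mean(axis=(1, 2))           # (C,) global average pool
    gate = 1.0 / (1.0 + np.exp(-squeeze))      # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]          # re-scale each channel
```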
(2) Determining the category and position of the region of interest based on the edge features, where the region of interest includes an edge or an opening.
According to the differences in visual characteristics among the common categories of edges and openings on a construction site, the detection targets of the detection model are divided into four categories: stair edges, adjacent edges of balconies and storeys, elevator door openings, and circular or square openings. In the training stage of the detection model, a training set is formed from annotated pictures containing these four types of targets together with target-free negative examples, where the negative examples are obtained by uniformly sampling various pictures collected from construction site scenes. In addition, a test set with fewer samples but a similar distribution, independent of the training set, is constructed in the same manner. During training, after a version of the trained detection model is obtained, it is evaluated on the test set; the training data are then adjusted according to the missed detections and false detections on the test set, and positive and negative samples for the corresponding scenes are added as needed.
The object detector outputs the category of the region of interest (i.e., which type of edge or opening), the confidence, the upper-left corner coordinates of the region of interest in the image, and the length and width of the region; the position of the region in the image can be determined from the latter two output parameters.
Because the detection model extracts local features of the image to be detected, the detection focuses more on details, which improves detection reliability.
S333, intercepting the image of interest from the image to be detected based on the position of the region of interest.
The electronic device crops the image to be detected using the position of the region of interest; the purpose of cropping is to cut out the edge/opening area identified in the previous step. For example, the region of interest may be cropped from the image to be detected using an OpenCV-related function.
The output of the above detection model contains the upper-left corner coordinates (x1, y1) of the target region in the image together with the region's length and width; based on these, the lower-right corner coordinates (x2, y2) of the region of interest can be obtained, giving the region (x1, y1, x2, y2) in the image.
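The conversion from the detector's (top-left corner, width, height) output to corner coordinates is a one-liner; the tuple layout used here is an assumption:

```python
def bbox_corners(x1, y1, w, h):
    """Turn the detector output (upper-left corner plus width and height)
    into upper-left and lower-right corners (x1, y1, x2, y2)."""
    return (x1, y1, x1 + w, y1 + h)
```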
In other optional implementation manners of this embodiment, S333 may include:
(1) The location of the region of interest is used to determine the extent of the region of interest.
During cropping, the region of interest may be enlarged to ensure that the entire edge/opening area lies within the cropped area. As shown in fig. 6, (x1, y1) and (x2, y2) are respectively the upper-left and lower-right corner coordinates of the region of interest output by the detection model.
(2) Expanding the range of the determined region of interest outward in equal proportion to determine the range of the image of interest.
When cropping, the upper-left and lower-right corner coordinates of the cropped area are each moved outward by 20 percent of the length and width of the original region. The specific calculation is as follows:

x1' = x1 - 0.2w,  y1' = y1 - 0.2h
x2' = x2 + 0.2w,  y2' = y2 + 0.2h

where w and h are respectively the width and height of the region of interest output by the detection model.
It should be noted that fig. 6 shows only one possible embodiment; the scope of the present invention is not limited thereto, and the expansion may be set according to the actual situation.
(3) Intercepting the image of interest from the image to be detected by using the range of the image of interest.
After determining the range of the image of interest, the electronic device may intercept the image of interest from the image to be detected.
By expanding the range of the region of interest outward in equal proportion, the complete edge/opening area can be contained to the maximum extent, which improves the accuracy of the detection method.
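Steps (1) to (3) above can be sketched as follows. The 20% ratio matches the embodiment, while clamping the enlarged box to the image bounds is an added assumption so the crop always stays valid:

```python
import numpy as np

def expand_box(x1, y1, x2, y2, img_w, img_h, ratio=0.2):
    """Move each corner outward by `ratio` of the box width/height,
    clamped to the image bounds."""
    w, h = x2 - x1, y2 - y1
    nx1 = max(0, int(x1 - ratio * w))
    ny1 = max(0, int(y1 - ratio * h))
    nx2 = min(img_w, int(x2 + ratio * w))
    ny2 = min(img_h, int(y2 + ratio * h))
    return nx1, ny1, nx2, ny2

def crop(image, box):
    """Cut the image of interest out of the image to be detected
    (NumPy array slicing, as OpenCV images are NumPy arrays)."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]
```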
S334, inputting the image of interest into the protection classification model, and determining whether the protection state of the region of interest is normal.
After cropping, the electronic device sends the cropped image of interest to the protection classification model. The protection classification model in this step judges whether the edge/opening area is provided with protective measures as required by regulations. For example, for edges and stairs, steel pipes or other steel materials are required to be welded into guard rails; for an opening, a cover plate is used to cover it, and guard rails are arranged around it. If the protection classification model identifies the area as the unprotected type, the algorithm returns the potential safety hazard "edge/opening without protection".
Optionally, the feature extraction structure of the classifier is similar to that of the scene classification model, but because a binary classifier is employed here, the last layer uses a sigmoid layer instead of a softmax layer. The output of the protection classification model is the probability that the region is protected; a probability greater than 0.5 may be defined to indicate that the region is protected.
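The thresholding described above amounts to the following. The 0.5 cut-off comes from the text; applying a sigmoid to a raw logit is an assumption about the classifier's final layer:

```python
import math

def is_protected(logit, threshold=0.5):
    """Map the classifier's raw output to a protection decision:
    sigmoid probability strictly greater than the threshold."""
    prob = 1.0 / (1.0 + math.exp(-logit))   # sigmoid
    return prob > threshold
```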
As an alternative implementation of this embodiment, as shown in fig. 5, S33 includes: inputting the image to be detected into the detection model and outputting the category and coordinates of the target area; judging whether the target area is a region of interest, and when it is, finding the coordinates of the region of interest on the original image; when the target area is not a region of interest, outputting that no hidden danger exists.
After the coordinates of the region of interest are determined, the region of interest is enlarged according to the coordinates and then cropped; the cropped image is output and input into the protection classification model to judge whether protective measures are set. When protective measures are set, "edge/opening without hidden danger" is returned; when no protective measures are set, "edge/opening with hidden danger" is returned.
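The flow of fig. 5 — detect, filter to regions of interest, crop, classify — can be sketched end to end. Every callable here is a stand-in assumption for the models described above:

```python
def check_guard_state(image, detect, roi_categories, crop_fn, is_protected):
    """Return a list of (category, box) hazards: regions of interest whose
    protection classifier reports no protective measures."""
    hazards = []
    for category, box in detect(image):
        if category not in roi_categories:
            continue                          # not a region of interest
        patch = crop_fn(image, box)           # enlarged crop of the region
        if not is_protected(patch):           # no protective measures set
            hazards.append((category, box))
    return hazards
```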
S34, determining that the image acquisition device used to acquire the image to be detected is offset, and sending the offset information of the image acquisition device to a preset object.
For details, please refer to S24 of the embodiment shown in fig. 2, which is not repeated here.
According to the protection state detection method provided by this embodiment, whether the protection of the region of interest is normal is identified using a detection-plus-classification flow, so that the accuracy of the identified protection state can be ensured to the greatest extent, and false alarms and missed alarms of unprotected areas are reduced.
The protection state detection method provided by the embodiment of the invention is based on computer vision technology and realizes automatic identification of unprotected edges and openings (a potential safety hazard) on a construction site. Because it is not sensor-based, it is convenient to deploy and cheap to maintain compared with existing schemes, which is of great significance for the safety of workers during building construction. Image acquisition equipment deployed at a job site, such as cameras, may change orientation due to factors such as manual movement, so that the acquired pictures no longer show the targeted edge/opening areas. In order to identify this situation automatically and avoid false hidden-danger alarms, the scheme of the embodiment adds scene classification in addition to the detection and classification flow, identifying whether the image input by the current image acquisition device belongs to the target protection scene and improving the robustness of the algorithm. In addition, as described above, scene recognition can effectively filter the false detections that may occur in the detector when the input image scene changes frequently, so the scheme can be conveniently migrated to devices with frequent scene changes, such as mobile phones or smart glasses. In particular, a fixedly mounted image acquisition device may be moved by a person, shifting the entire imaged scene; for mobile phones and smart glasses, images may be shot anywhere, and images that merely happen to contain edge-like features can be screened out by the scene classification model. That is, adding a scene classification model discards useless scenes.
This embodiment also provides a protection state detection device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a protection state detection device, as shown in fig. 7, including:
an acquisition module 41, configured to acquire an image to be detected;
the scene recognition module 42 is configured to perform scene recognition on the image to be detected, and determine whether the image to be detected is an image of a target protection scene;
and the protection detection module 43 is configured to detect a protection state of a region of interest in the image to be detected when the image to be detected is an image of a target protection scene, so as to determine whether the protection state of the region of interest is normal.
According to the above protection state detection device, a detection method that is not based on sensors is provided on the basis of computer vision technology. Before the protection state of the region of interest is detected, scene recognition is performed on the image to be detected to determine whether it is an image of the target protection scene; this can filter out false detections possibly caused by position offset of the image acquisition equipment, and improves the reliability of detection and the robustness of the detection algorithm. Further, since the detection method does not depend on the acquisition angle of the image, it can be migrated across different devices.
The protection state detection device in this embodiment is presented in the form of functional units, where a unit refers to an ASIC circuit, a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functionality.
Further functional descriptions of the above respective modules are the same as those of the above corresponding embodiments, and are not repeated here.
The embodiment of the invention also provides an electronic device provided with the protection state detection device shown in fig. 7.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 8, the electronic device may include: at least one processor 51, such as a CPU (Central Processing Unit), at least one communication interface 53, a memory 54, and at least one communication bus 52, where the communication bus 52 is used to enable connected communication between these components. The communication interface 53 may include a display (Display) and a keyboard (Keyboard); optionally, the communication interface 53 may further include a standard wired interface and a wireless interface. The memory 54 may be a high-speed volatile random access memory (RAM) or a non-volatile memory, such as at least one disk memory. The memory 54 may optionally also be at least one storage device located remotely from the aforementioned processor 51. The processor 51 may be used in conjunction with the apparatus described in fig. 8; the memory 54 stores an application program, and the processor 51 invokes the program code stored in the memory 54 to perform any of the method steps described above.
The communication bus 52 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The communication bus 52 may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in fig. 8, but not only one bus or one type of bus.
The memory 54 may include volatile memory, such as random-access memory (RAM); the memory may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 54 may also include a combination of the above types of memory.
The processor 51 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
The processor 51 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
Optionally, the memory 54 is also used for storing program instructions. The processor 51 may invoke program instructions to implement the guard state detection methods as shown in the embodiments of fig. 1, 2 or 4 of the present application.
The embodiment of the invention also provides a non-transitory computer storage medium storing computer-executable instructions, which can execute the protection state detection method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of the above types of memory.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (8)

1. A method for detecting a protection state, comprising:
acquiring an image to be detected;
performing scene recognition on the image to be detected, and determining whether the image to be detected is an image of a target protection scene;
when the image to be detected is an image of a target protection scene, detecting the protection state of an interested area in the image to be detected so as to determine whether the protection state of the interested area is normal or not;
the detecting the protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal, including:
acquiring a detection model and a protection classification model;
inputting the image to be detected into the detection model, and determining the category and the position of the region of interest;
based on the position of the region of interest, capturing an image of interest from the image to be detected;
inputting the image of interest into the protection classification model, and determining whether the protection state of the region of interest is normal or not;
the step of inputting the image to be detected into the detection model, and determining the category and the position of the region of interest comprises the following steps:
extracting edge features of the image to be detected by using a feature extraction unit in the detection model;
and determining the category and the position of the region of interest based on the edge feature, wherein the region of interest comprises an edge or a hole.
2. The method of claim 1, wherein the scene recognition of the image to be detected, determining whether the image to be detected is an image of a target protection scene, comprises:
acquiring a scene classification model;
and inputting the image to be detected into the scene classification model, so as to carry out global feature extraction on the image to be detected by utilizing the scene classification model, and determining whether the image to be detected is an image of a target protection scene.
3. The method according to claim 2, wherein the inputting the image to be detected into the scene classification model to perform global feature extraction on the image to be detected by using the scene classification model, and determining whether the image to be detected is an image of a target protection scene includes:
scaling the image to be detected to a preset size, and extracting global features of the scaled image by utilizing a convolution unit in the scene classification model to obtain global features, wherein the convolution unit comprises a plurality of convolution layers and an attention module connected with the last convolution layer;
and determining whether the image to be detected is an image of a target protection scene or not based on the global features.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
when the image to be detected is not the image of the target protection scene, determining that an image acquisition device for acquiring the image to be detected is offset, and sending information of the offset of the image acquisition device to a preset object.
5. The method according to claim 1, wherein the capturing an image of interest from the image to be detected based on the location of the region of interest comprises:
determining a range of the region of interest using the location of the region of interest;
the range of the determined region of interest is expanded outwards in equal proportion, and the range of the image of interest is determined;
and cutting out the image of interest from the image to be detected by utilizing the range of the image of interest.
6. A protection state detection device, characterized by comprising:
the acquisition module is used for acquiring the image to be detected;
the scene recognition module is used for carrying out scene recognition on the image to be detected and determining whether the image to be detected is an image of a target protection scene or not;
the protection detection module is used for detecting the protection state of the region of interest in the image to be detected when the image to be detected is an image of a target protection scene so as to determine whether the protection state of the region of interest is normal or not;
the detecting the protection state of the region of interest in the image to be detected to determine whether the protection state of the region of interest is normal, including:
acquiring a detection model and a protection classification model;
inputting the image to be detected into the detection model, and determining the category and the position of the region of interest;
based on the position of the region of interest, capturing an image of interest from the image to be detected;
inputting the image of interest into the protection classification model, and determining whether the protection state of the region of interest is normal or not;
the step of inputting the image to be detected into the detection model, and determining the category and the position of the region of interest comprises the following steps:
extracting edge features of the image to be detected by using a feature extraction unit in the detection model;
and determining the category and the position of the region of interest based on the edge feature, wherein the region of interest comprises an edge or a hole.
7. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the guard state detection method of any one of claims 1-5.
8. A computer-readable storage medium storing computer instructions for causing a computer to execute the guard state detection method according to any one of claims 1 to 5.
CN202110852736.1A 2021-07-27 2021-07-27 Protection state detection method and device and electronic equipment Active CN113569730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852736.1A CN113569730B (en) 2021-07-27 2021-07-27 Protection state detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113569730A CN113569730A (en) 2021-10-29
CN113569730B true CN113569730B (en) 2024-02-27

Family

ID=78168101


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758260B (en) * 2022-06-15 2022-10-18 成都鹏业软件股份有限公司 Construction site safety protection net detection method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2544744A1 (en) * 2006-04-28 2007-10-28 Jocelyn Janson System and method for surveilling a scene
JP2016197093A (en) * 2015-01-04 2016-11-24 高橋 正人 Direction information acquisition device, direction information acquisition program and direction information acquisition method
CN107220786A (en) * 2017-07-26 2017-09-29 西交利物浦大学 A kind of construction site security risk is identificated and evaluated and prevention method
CN208152602U (en) * 2018-03-26 2018-11-27 中建八局第三建设有限公司 A kind of edge protection facility intellectual monitoring prior-warning device
CN110674702A (en) * 2019-09-04 2020-01-10 精英数智科技股份有限公司 Mine image scene classification method, device, equipment and system
CN111832760A (en) * 2020-07-14 2020-10-27 深圳市法本信息技术股份有限公司 Automatic inspection method for well lid based on visual algorithm
CN112382044A (en) * 2020-10-30 2021-02-19 北京中铁建建筑科技有限公司 Intelligent safety monitoring system for construction site
CN112464914A (en) * 2020-12-30 2021-03-09 南京积图网络科技有限公司 Guardrail segmentation method based on convolutional neural network
CN112861623A (en) * 2020-12-31 2021-05-28 深圳市特辰科技股份有限公司 Monitoring system and method for high-rise building engineering safety


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ensuring Secure Health Data Exchange across Europe: SHIELD Project; López-Moreno, B. et al.; HEALTHINF: Proceedings of the 12th International Joint Conference on Biomedical Engineering Systems and Technologies, Vol. 5; 2019-01-01; pp. 422-430 *
Research on key technologies of construction safety based on BIM; Zhang Limao et al.; Construction Economy; 2018-08-05; Vol. 2018, No. 8; pp. 44-49 *
Research on safety early warning for building construction based on convolutional neural networks; Zhao Jing; China Masters' Theses Full-text Database (Engineering Science and Technology I); 2020-08-15; Vol. 2020, No. 8; B026-20 *


Similar Documents

Publication Publication Date Title
KR101926561B1 (en) Road crack detection apparatus of patch unit and method thereof, and computer program for executing the same
CN101751744B (en) Detection and early warning method of smoke
CN111368615B (en) Illegal building early warning method and device and electronic equipment
Yang et al. Deep learning‐based bolt loosening detection for wind turbine towers
CN110796819B (en) Detection method and system for platform yellow line invasion border crossing personnel
US20230005176A1 (en) Throwing position acquisition method and apparatus, computer device and storage medium
CN113283344A (en) Mining conveying belt deviation detection method based on semantic segmentation network
CN107220969A (en) The method of testing and detecting system of product lamp position
CN110956104A (en) Method, device and system for detecting overflow of garbage can
CN113569730B (en) Protection state detection method and device and electronic equipment
CN112270253A (en) High-altitude parabolic detection method and device
CN105976398A (en) Daylight fire disaster video detection method
CN114022810A (en) Method, system, medium and terminal for detecting working state of climbing frame protective net in construction site
CN111275984B (en) Vehicle detection method and device and server
CN115620192A (en) Method and device for detecting wearing of safety rope in aerial work
US20210150692A1 (en) System and method for early identification and monitoring of defects in transportation infrastructure
KR101542134B1 (en) The apparatus and method of surveillance a rock fall based on smart video analytic
CN112001336A (en) Pedestrian boundary crossing alarm method, device, equipment and system
CN114509540B (en) Industrial park air quality detection method and device based on data processing
CN116524539A (en) Collaborative sensing method for interaction risk of signalers and drivers
CN113554682B (en) Target tracking-based safety helmet detection method
CN115841730A (en) Video monitoring system and abnormal event detection method
CN115171220A (en) Abnormal traffic early warning method and device and electronic equipment
CN113920535A (en) Electronic region detection method based on YOLOv5
CN113516120A (en) Raise dust detection method, image processing method, device, equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant