WO2024022059A1 - Environment detection and alarm method, device, computer equipment and storage medium - Google Patents
Environment detection and alarm method, device, computer equipment and storage medium
- Publication number
- WO2024022059A1 (PCT/CN2023/105840, CN2023105840W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection
- dust
- alarm
- preset
- frame
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 642
- 238000000034 method Methods 0.000 title claims abstract description 53
- 239000000428 dust Substances 0.000 claims abstract description 430
- 230000007246 mechanism Effects 0.000 claims description 16
- 238000004590 computer program Methods 0.000 claims description 11
- 238000012549 training Methods 0.000 claims description 11
- 238000004458 analytical method Methods 0.000 claims description 10
- 238000013500 data storage Methods 0.000 claims description 10
- 230000007613 environmental effect Effects 0.000 claims description 8
- 230000004044 response Effects 0.000 claims description 8
- 238000004891 communication Methods 0.000 claims description 2
- 238000010276 construction Methods 0.000 description 13
- 230000008569 process Effects 0.000 description 11
- 238000010586 diagram Methods 0.000 description 10
- 238000012545 processing Methods 0.000 description 7
- 239000002689 soil Substances 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 4
- 238000010410 dusting Methods 0.000 description 3
- 238000007726 management method Methods 0.000 description 3
- 230000000644 propagated effect Effects 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 230000001186 cumulative effect Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 239000002893 slag Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/182—Level alarms, e.g. alarms responsive to variables exceeding a threshold
Definitions
- the present disclosure relates to the technical field of artificial intelligence and target detection, and specifically relates to an environmental detection and alarm method, device, computer equipment and storage medium.
- the present disclosure aims to solve at least one of the technical problems existing in the prior art and provide an environment detection and alarm method, device, computer equipment and storage medium.
- embodiments of the present disclosure provide an environment detection and alarm method, including:
- obtaining the video stream of the preset collection area and sequentially collecting frame images from the video stream as detection images, inputting them into the image recognition model, and obtaining the target detection results, wherein the target detection results include dust level information;
- the target detection result includes a dust detection frame
- determine the dust status information based on the positioning information of the dust detection frame
- a dust alarm is performed; when it is determined that the dust alarm conditions are met based on the detection data and the historical detection data set, the historical detection data set is updated based on the detection data, and the method returns to the step of sequentially collecting frame images from the video stream as detection images.
- the dust status information includes a first status value that represents the presence of dust and a second status value that represents the absence of dust;
- the historical detection data set includes historical detection data corresponding to at least one frame of historical detection images collected in the past;
- Determining the dust status information based on the positioning information of the dust detection frame includes:
- the dust status information is determined to be the first status value
- the historical detection data set can accommodate no more than a preset amount of historical detection data
- the updating of the historical detection data set according to the detection data includes:
- the historical detection data with the earliest storage time is removed from the current historical detection data set, and the detection data is added to the historical detection data set as new historical detection data.
- the environment detection method further includes:
- the time when the detection image is collected is used as the dust start time, and dust alarm information is generated.
- the detection data includes the dust level indicated by the dust level information
- a dust alarm is performed, including:
- the preset alarm mechanism is a real-time alarm
- a dust alarm is performed based on the dust level in the detection data and the dust level in the historical detection data set;
- the preset alarm mechanism is an interval alarm
- the time difference between the current system time and the last alarm time after the dust alarm is greater than the interval alarm time
- a dust alarm is performed based on the dust level in the detection data and the dust level in the historical detection data set.
- after the alarm is performed, the method also includes:
- the first preset condition includes that the area of the dust detection frame is greater than or equal to a third preset threshold; and/or the intersection-over-union ratio between the dust detection frame and the preset dust reference frame is greater than or equal to the fourth preset threshold.
- after a dust alarm is performed, the method also includes:
- the target detection results also include exposed muck detection frames
- the positioning information of the exposed muck detection frame is used to determine whether the exposed muck detection frame meets the second preset condition
- if the exposed muck detection frames all meet the second preset condition, an exposed muck alarm is performed and exposed muck alarm information is generated; the exposed muck alarm information includes the location of the exposed muck in the preset collection area.
- the second preset condition includes that the number of exposed muck detection frames is greater than or equal to the fifth preset threshold; and/or the area of the exposed muck detection frame is greater than or equal to the sixth preset threshold; and/or the intersection-over-union ratio between the exposed muck detection frame and the preset exposed muck reference frame is greater than or equal to the seventh preset threshold.
- after a dust alarm is performed, the method also includes:
- the target detection result also includes a person detection frame
- the number of people in the preset evacuation reference frame is determined based on the positioning information of the person detection frame and the positioning information of the preset evacuation reference frame;
- the evacuation duration is the difference between the current time of the system and the evacuation start time.
- the evacuation start time is the time in response to receiving the instruction to track the person.
- after a dust alarm is performed, the method also includes:
- if the target detection results also include sign detection frames, the matching result of the sign detection frame and the preset sign reference frame is determined according to the positioning information of the sign detection frame and the positioning information of the preset sign reference frame;
- if the sign detection frame does not match the preset sign reference frame, a sign alarm is performed and sign alarm information is generated; the sign alarm information includes the location of the sign in the preset collection area.
- training the image recognition model includes:
- the sample label includes the position information of at least one reference frame corresponding to the preset collection area, and the category information of each reference frame;
- the category information includes one of weather category, person category, sign category and exposed muck category;
- a weighted loss value is constructed, and the image recognition model is continuously trained by performing weighted backpropagation on the weighted loss value until the weighted loss value converges, and a trained image recognition model is obtained.
- embodiments of the present disclosure also provide an environment detection and alarm device, which includes a collection module, a target detection module, an alarm analysis module and a data storage module;
- the collection module is used to obtain the video stream of the preset collection area, and sequentially collect frame images from the video stream as detection images;
- the target detection module is used to input the detection image into the image recognition model to obtain a target detection result, wherein the target detection result includes dust level information; when the target detection result includes a dust detection frame, determine the dust status information based on the positioning information of the dust detection frame; and use the dust status information and the dust level information as detection data;
- the alarm analysis module is used to issue a dust alarm when it is determined that dust alarm conditions are met based on the detection data and historical detection data sets;
- the data storage module is configured to update the historical detection data set based on the detection data when it is determined that dust alarm conditions are met based on the detection data and the historical detection data set.
- embodiments of the present disclosure also provide a computer device, which includes: a processor, a memory, and a bus, and the memory stores machine-readable instructions executable by the processor.
- the processor and the memory communicate through a bus, and when the machine-readable instructions are executed by the processor, the steps of the environment detection and alarm method in the first aspect or any example of the first aspect are performed.
- embodiments of the present disclosure also provide a computer non-transitory readable storage medium, wherein a computer program is stored on the computer non-transitory readable storage medium, and when the computer program is run by a processor, the steps of the environment detection and alarm method in the first aspect or any example of the first aspect are performed.
- Figure 1 is a flow chart of an environment detection and alarm method provided by an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of specific dust detection provided by an embodiment of the present disclosure
- Figure 3a is a schematic flowchart of the specific process of exposed soil detection provided by an embodiment of the present disclosure
- Figure 3b is a schematic flowchart showing the specific flow of personnel safety evacuation detection provided by an embodiment of the present disclosure
- Figure 3c is a schematic flowchart of specific sign detection provided by an embodiment of the present disclosure.
- Figure 4 is a schematic network structure diagram of an image recognition model provided by an embodiment of the present disclosure.
- Figure 5 is a schematic diagram of an environment detection and alarm device provided by an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
- “a plurality” or “several” mentioned in this disclosure means two or more.
- “And/or” describes the relationship between related objects, indicating that there can be three relationships. For example, A and/or B can mean: A exists alone, A and B exist simultaneously, and B exists alone. The character “/” generally indicates that the related objects are in an “or” relationship.
- embodiments of the present disclosure provide an environmental detection and alarm method, which uses a trained and relatively mature image recognition model to perform target detection on images of the preset collection area, so that target detection can be achieved more accurately;
- the target detection results can directly give the dust level information.
- the dust alarm can provide rational safety management for the on-site environment corresponding to the video collection area, ensuring safe construction and improving project efficiency.
- the execution subject of the environment detection and alarm method provided by the embodiment of the disclosure is generally a computer device with certain computing capabilities.
- the computer device includes, for example, a terminal device or a server or other processing device.
- the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a vehicle-mounted device, a wearable device, etc.
- the environment detection and alarm method can be implemented by the processor calling computer readable instructions stored in the memory.
- the following describes the environment detection and alarm method provided by the embodiment of the present disclosure, taking the execution subject as a server as an example.
- FIG. 1 shows a flow chart of an environment detection and alarm method provided by an embodiment of the present disclosure.
- the method includes steps S101 to S104, wherein:
- the preset collection area may be a fixed area that is preset and associated with dust detection, such as a Region of Interest (ROI). Normally, the preset collection area is set according to the detection task.
- the preset collection area may include construction sites and other areas with a high probability of dust occurrence.
- Video streams in embodiments of the present disclosure include, but are not limited to, video resources transmitted from real-time streaming protocols.
- Collecting frame images from the video stream in sequence means that frames can be collected from the video stream frame by frame according to the playback order of the video stream. This continuous collection method can avoid the situation where the dust in a certain frame image is missed.
- the specific implementation mode can be selected according to the actual situation, and is not limited by the embodiments of the present disclosure.
- Each collected frame of the detection image is detected separately; specifically, each collected frame image is used as a detection image and input into the image recognition model to obtain the target detection result.
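- As a rough, non-limiting illustration of this frame-by-frame collection, the sketch below reads a video stream in playback order and yields every frame as a detection image; it assumes OpenCV (cv2) is available, and the RTSP URL and the image_recognition_model callable are hypothetical placeholders rather than anything specified by this disclosure.

```python
import cv2

def frames_from_stream(rtsp_url: str):
    """Yield frames from the video stream of the preset collection area, in playback order."""
    cap = cv2.VideoCapture(rtsp_url)   # RTSP and other sources supported by OpenCV
    if not cap.isOpened():
        raise RuntimeError("cannot open video stream")
    try:
        while True:
            ok, frame = cap.read()     # sequential, frame-by-frame collection (no frames skipped)
            if not ok:
                break
            yield frame                # each collected frame becomes one detection image
    finally:
        cap.release()

# usage sketch: every collected frame is passed to the image recognition model
# for frame in frames_from_stream("rtsp://example/stream"):
#     target_detection_result = image_recognition_model(frame)   # hypothetical callable
```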
- the image recognition model is a pre-trained image recognition model.
- the target detection results output include a variety of detection information, such as the positioning information of the dust detection frame and dust level information. It should be noted that when the target detection result does not include a dust detection frame, the positioning information of the dust detection frame will still be output, but the positioning information is empty or invalid and cannot indicate a position in the preset collection area; the dust level information will also be output, and a dust level of 0 indicates that there is no dust.
- the target detection result includes a dust detection frame
- determine the dust status information based on the positioning information of the dust detection frame.
- the target detection result includes the dust detection frame, that is, the target detection result includes the effective positioning information of the dust detection frame.
- the location indicated by the dust detection frame is the detected location where dust may exist.
- the positioning information can indicate a certain location area located in the preset detection area.
- the dust status information can be determined.
- the dust status information can be determined to indicate the presence of dust based on the effective positioning information of the dust detection frame.
- the target detection result includes a dust detection frame
- it can also be further determined whether the dust detection frame meets a preset condition (that is, the first preset condition below); when the preset condition is met, the dust status information is determined to indicate the presence of dust.
- the dust status information may be a status value that represents the dust status (a number that can be logically operated on, that is, the first status value or the second status value below), and the dust level information may be a number that represents the dust level; the corresponding relationship between the two is determined, and they jointly serve as the detection data.
- the historical detection data set includes at least one historical detection data.
- Historical detection data is detection data obtained after processing S101 to S103 of historically collected detection images.
- the dust status information indicated by the detection data and by at least one historical detection data in the historical detection data set is used to determine whether the dust alarm conditions are met; when the dust alarm conditions are met, a dust alarm is performed.
- when the dust alarm conditions are met, a dust alarm can be initiated based on the dust level information. For example, the higher the dust level indicated by the dust level information, the more serious the dust pollution in the current environment, and a higher sound alarm frequency can be set; or different color indicator lights can be set to represent different dust levels, and the indicator light corresponding to the detected dust level is lit; or an alarm is sent to relevant personnel by message, and the alarm message indicates the dust level.
- the historical detection data set is updated according to the detection data. Specifically, the detection data can be directly added to the historical detection data set as historical detection data to determine whether the next detection image meets the dust alarm condition. After that, return to execution S101 to sequentially collect frame images from the video stream as detection images to continuously detect dust in the preset collection area.
- the disclosed embodiment uses a trained image recognition model to perform target detection on images in a preset collection area.
- the dust level information can be obtained directly from dust detection, the historical detection data in the historical detection data set can be analyzed together with the detection data of the currently collected detection image, and when the dust alarm conditions are met, a dust alarm can be carried out in a targeted manner based on the dust level information, which provides rational safety management for the on-site environment corresponding to the video collection area, ensures safe construction, and improves project efficiency.
- the dust status information includes a first status value indicating the presence of dust and a second status value indicating the absence of dust. According to steps S102-1 to S102-2, dust status information is determined, where:
- the positioning information of the dust detection frame can specifically include the positioning coordinates of dust appearing in the preset collection area.
- the positioning coordinates of the preset collection area are known.
- the dust detection frame can be a rectangular frame; according to a certain vertex coordinate or the center coordinate of the rectangular frame, as well as the width and height of the rectangular frame, the area of the dust detection frame and the specific region it occupies in the preset collection area can be determined.
- the first preset condition includes that the area of the dust detection frame is greater than or equal to the third preset threshold; and/or the intersection-over-union ratio between the dust detection frame and the preset dust reference frame is greater than or equal to the fourth preset threshold.
- the preset dust reference frame is a preset fixed detection area within the preset collection area, which may be the entire preset collection area or a part of the preset collection area.
- the intersection-over-union ratio IOU1 between the dust detection frame and the preset dust reference frame is the ratio of the overlap area between the dust detection frame and the preset dust reference frame in the preset collection area to the total area jointly covered by the dust detection frame and the preset dust reference frame in the preset collection area.
- the third preset threshold and the fourth preset threshold can be set based on experience, and are not specifically limited in the embodiments of the present disclosure.
- if the area of the dust detection frame is greater than or equal to the third preset threshold, it can be determined that the dust detection frame meets the first preset condition; and/or, according to the specific region of the dust detection frame located in the preset collection area and the preset dust reference frame, if the intersection-over-union ratio IOU1 between the dust detection frame and the preset dust reference frame is greater than or equal to the fourth preset threshold, it can be determined that the dust detection frame meets the first preset condition.
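- A minimal sketch of this first preset condition check is given below; the (x, y, w, h) box format and the values of the third and fourth preset thresholds are illustrative assumptions, and the "and/or" combination is implemented as a simple logical OR.

```python
def box_area(box):
    x, y, w, h = box
    return max(w, 0) * max(h, 0)

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes, as used for IOU1."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(min(ax + aw, bx + bw) - max(ax, bx), 0)
    inter_h = max(min(ay + ah, by + bh) - max(ay, by), 0)
    inter = inter_w * inter_h
    union = box_area(box_a) + box_area(box_b) - inter
    return inter / union if union > 0 else 0.0

def meets_first_preset_condition(dust_box, dust_reference_box,
                                 third_threshold=5000.0,  # assumed area threshold
                                 fourth_threshold=0.3):   # assumed IOU1 threshold
    return (box_area(dust_box) >= third_threshold
            or iou(dust_box, dust_reference_box) >= fourth_threshold)
```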
- S102-1 is executed for each dust detection frame; as long as one of the dust detection frames meets the first preset condition, it can be determined that the dust status information of the detection image indicates the presence of dust.
- the first status value can be set to 1; the second status value can be set to 0.
- the status value in the detection data is the status value indicated by the dust status information (that is, the first status value or the second status value).
- the status value in the historical detection data is the status value indicated by the dust status information corresponding to the historical detection image (that is, the first status value or the second status value).
- the amount of historical detection data in the historical detection data set can be set to a fixed value in advance, that is, the historical detection data set can only store a certain amount of historical detection data.
- the first stored historical detection data in the current historical detection data set can be eliminated to ensure that the amount of data in the historical detection data set remains unchanged.
- the status values are accumulated.
- N is a positive integer greater than 0.
- alternatively, the historical detection data set may have no limit on the amount of historical detection data stored. Since the presence of dust can be determined from a certain number of frames of images, in order to improve computing efficiency, the accumulation can be limited to the status value corresponding to the current frame detection image and the status values corresponding to the N frames of historical detection images before the current frame in the historical detection data set, and the cumulative sum can likewise be obtained.
- the first preset threshold may be set based on experience, and is not specifically limited in this embodiment of the disclosure.
- when updating the historical detection data set based on the detection data, the historical detection data set can accommodate no more than the preset amount of historical detection data. Specifically, taking a historical detection data set with limited storage capacity as an example, it is determined whether the number of historical detection data in the current historical detection data set has reached the storage upper limit of the historical detection data set, that is, whether the amount of historical detection data in the historical detection data set is equal to the preset number; if it has not been reached, the detection data can be directly added to the historical detection data set as new historical detection data.
- otherwise, the historical detection data with the earliest storage time is removed from the current historical detection data set, and the detection data is added to the historical detection data set as new historical detection data.
- the historical detection data with the earliest storage time is the historical detection data that has been stored the longest compared with the other historical detection data in the current historical detection data set.
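- One possible implementation of the capacity-limited historical detection data set is sketched below using a fixed-length queue, so that adding new detection data automatically removes the earliest-stored entry; the preset amount, the first preset threshold, and the exact ordering of the check and the update are illustrative assumptions.

```python
from collections import deque

PRESET_AMOUNT = 25           # assumed capacity of the historical detection data set
FIRST_PRESET_THRESHOLD = 20  # assumed threshold on the accumulated status values

history = deque(maxlen=PRESET_AMOUNT)   # each entry: (status_value, dust_level)

def update_and_check(detection_data):
    """Accumulate status values of current + historical detection data, then store the data."""
    status_value, dust_level = detection_data
    status_sum = status_value + sum(s for s, _ in history)
    history.append(detection_data)      # the earliest-stored entry drops out once full
    return status_sum >= FIRST_PRESET_THRESHOLD   # True when the dust alarm condition holds
```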
- the time when the detection image is collected is used as the dust start time, and dust alarm information is generated.
- for example, when the dust alarm information is a text message sent to the user, the dust alarm information includes the dust start time.
- the detection data includes a dust level indicated by the dust level information.
- different alarm mechanisms can be set, such as real-time alarm mechanism and interval alarm mechanism.
- the dust alarm can be carried out based on the dust level in the detection data and the dust level in the historical detection data set.
- the so-called real-time alarm mechanism means that as long as the dust alarm conditions are met, the alarm will be issued; if the detection images of consecutive frames all meet the dust alarm conditions, the alarm continues without interruption.
- the preset alarm mechanism is an interval alarm
- the time difference between the current system time and the last alarm time after the dust alarm is greater than the interval alarm time
- a dust alarm is performed based on the dust level in the detection data and the dust level in the historical detection data set.
- the so-called interval alarm mechanism means that after a dust alarm stops, no alarm will be issued within the preset interval alarm duration regardless of whether the dust alarm conditions are met; only after the time since the alarm stopped exceeds the interval alarm duration is it judged whether the currently collected detection images meet the dust alarm conditions, and the interval alarm mechanism is executed cyclically.
- the dust alarm can be directly based on the dust level in the detection data; or it can be based on the dust level in the detection data and the average of the dust levels in the historical detection data set; or it can be based on the dust level in the detection data and any dust level in the historical detection data set; or it can be based on the dust level in the detection data and the average of some of the dust levels in the historical detection data set, and so on.
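- The two preset alarm mechanisms described above can be sketched as follows; the mechanism names, the interval value, and the notify() helper are assumptions made only for illustration.

```python
import time

class DustAlarm:
    def __init__(self, mechanism="interval", interval_seconds=300.0):
        self.mechanism = mechanism              # "realtime" or "interval"
        self.interval_seconds = interval_seconds
        self.last_alarm_time = None

    def maybe_alarm(self, condition_met, dust_level):
        """Fire an alarm according to the preset alarm mechanism; return True if it fired."""
        if not condition_met:
            return False
        now = time.time()
        if (self.mechanism == "interval" and self.last_alarm_time is not None
                and now - self.last_alarm_time <= self.interval_seconds):
            return False                        # still inside the interval alarm duration
        self.notify(dust_level)
        self.last_alarm_time = now
        return True

    def notify(self, dust_level):
        # placeholder: e.g. send a message whose content reflects the dust level
        print(f"dust alarm, level {dust_level}")
```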
- the dust end time can be recorded.
- the second preset threshold can be set to 0; or, within the error tolerance, it can be set to 1 or 2 (that is, the target detection results of one or two frames among the N frames of detection images are allowed to be erroneous).
- the step of detecting whether the dust has ended is performed.
- the target detection result corresponding to the detection image no longer includes the dust detection frame, that is, the corresponding dust status information at this time is the second status value indicating no dust.
- the dust end time can be the collection time of the historical detection image corresponding to any historical detection data in the current historical detection data set, or the collection time of the historical detection image corresponding to the first stored historical detection data in the current historical detection data set, or the collection time of the historical detection image corresponding to the last stored historical detection data in the current historical detection data set.
- FIG. 2 is a schematic flowchart of specific dust detection provided by an embodiment of the present disclosure.
- the specific execution process of the dust alarm is described below through S201 to S217, as shown in Figure 2.
- S201 Collect frame images as detection images.
- S202 Perform environment detection and determine the target detection result.
- environmental detection includes but is not limited to dust detection.
- S203 Determine whether the target detection result includes a dust detection frame. If so, execute S204 for one of the dust detection frames; if not, execute S201.
- S206 Determine whether the traversal of the dust detection frames in the detection image is completed. If the traversal is completed, execute S207; otherwise, execute S203. It should be noted that when the traversal is not completed and the process returns to S203, S204 to S206 are executed in a loop for each of the other dust detection frames in the target detection results.
- S209 Determine whether the cumulative sum of the status values in the accumulated detection data and the status values in each historical detection data in the historical detection data set is greater than or equal to the first preset threshold. If so, perform S210; otherwise, perform S201.
- S216 Determine whether the sum of the status values in the accumulated preset quantity of historical detection data is less than or equal to the second preset threshold, and whether the system is currently in the dust alarm state. If so, execute S217; otherwise, execute S209. It should be noted that whether the system is currently in the dust alarm state is judged here: if it is already in the alarm state and the judgment is yes, it can be determined that the dust has ended; if it is not in the alarm state, it means that dust has not occurred, there is no need to record the end of the dust, and it is only necessary to return to S209.
- the image recognition model provided by the embodiments of the present disclosure can also handle other environmental detection tasks related to the environment where the dust is located, such as exposed muck accompanying the dust, overturned signs, and the safe evacuation of personnel within the preset collection area; expanding environmental detection in this way can improve the safety of construction personnel. The detections mentioned here, such as exposed muck, overturned signs, and safe evacuation of personnel, are all conducted after the presence of dust is detected and a dust alarm is issued. The following describes the environmental detection of exposed muck, overturned signs, and safe evacuation of personnel.
- dust alarms are followed by exposed muck detection; specifically, return to execution S101.
- the target detection result also includes the exposed muck detection frame
- the re-collected detection image is input into the image recognition model for target detection, and it is judged whether the target detection result includes the exposed muck detection frame.
- the location indicated by the exposed muck detection box is the location where exposed muck exists.
- the second preset condition includes that the number of exposed muck detection frames is greater than or equal to the fifth preset threshold; and/or the area of the exposed muck detection frame is greater than or equal to the sixth preset threshold; and/or the intersection-over-union ratio between the exposed muck detection frame and the preset exposed muck reference frame is greater than or equal to the seventh preset threshold.
- the position indicated by the preset exposed muck reference frame is the location of the exposed muck to be detected in the real scene.
- the area of the exposed muck detection frame can be calculated based on the positioning information of the exposed muck detection frame.
- the setting principle of the preset exposed muck reference frame is similar to that of the preset dust reference frame, that is, it is a fixed detection area within the preset collection area, which can be the entire preset collection area or a part of it.
- the intersection-over-union ratio IOU2 between the exposed muck detection frame and the preset exposed muck reference frame is the ratio of the overlap area between the exposed muck detection frame and the preset exposed muck reference frame in the preset collection area to the total area jointly covered by the two in the preset collection area.
- the fifth preset threshold, the sixth preset threshold and the seventh preset threshold can be set based on experience, and are not specifically limited in the embodiment of the present disclosure.
- the first preset frame number range refers to a certain number of continuously collected detection images.
- the embodiment of the present disclosure does not limit the form of the exposed muck alarm.
- the generated exposed muck alarm information may include the location of the exposed muck in the preset collection area.
- Figure 3a is a schematic flowchart of the specific process of bare soil detection provided by an embodiment of the present disclosure.
- the specific execution process of the exposed muck alarm is described below through S301 to S309, as shown in Figure 3a.
- the windy weather may be determined from the current weather conditions of the preset collection area, obtained from online weather forecast information.
- S302 Collect frame images as detection images, perform environment detection, and determine target detection results.
- S307. Determine whether the traversal of the exposed muck detection frames in the detection image is completed. If the traversal is completed, execute S308; otherwise, execute S305. It should be noted that when the traversal is not completed and the process returns to S305, S305 to S307 are executed in a loop for each of the other exposed muck detection frames in the target detection results.
- dust alarms are followed by evacuation detection. Specifically, when the dust level indicated by the dust alarm reaches the preset dust level, an instruction for personnel to evacuate the preset collection area is sent; in response to receiving the instruction to track the personnel, return to execution S101, and the target detection result also includes the personnel detection frame.
- the number of people in the preset evacuation reference frame is determined based on the positioning information of the person detection frame and the preset evacuation reference frame; when the number of people is greater than or equal to the eighth preset threshold and the evacuation duration is longer than the preset evacuation duration, an evacuation alarm is issued.
- the evacuation duration is the difference between the current time of the system and the evacuation start time; the evacuation start time is the time in response to receiving instructions from the tracking personnel.
- the preset evacuation reference frame may be a fixed detection area within the preset collection area, the entire preset collection area, or a part of the preset collection area.
- the location indicated by the person detection box is the location of the person.
- the position indicated by the preset evacuation reference frame is the position from which people are to be evacuated; generally, it is near the sign or a position with a high probability of exposed muck.
- the center point of the person detection frame is located in the preset evacuation reference frame.
- the eighth preset threshold is the preset upper limit on the number of people.
- if the number of people is zero within the second preset frame number range, evacuation end information is generated; that is to say, if no person detection frame is detected in the detection images collected within a period of time, it can be determined that the personnel have been evacuated.
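- A rough sketch of the personnel evacuation check is given below: a person is counted when the center point of the person detection frame lies inside the preset evacuation reference frame, and an alarm is indicated when both the count and the evacuation duration exceed their thresholds; the (x, y, w, h) box format and the threshold values are illustrative assumptions.

```python
import time

def center_inside(box, ref):
    """True if the center point of a (x, y, w, h) detection frame lies inside ref."""
    x, y, w, h = box
    rx, ry, rw, rh = ref
    cx, cy = x + w / 2.0, y + h / 2.0
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def count_people(person_boxes, evacuation_ref):
    return sum(1 for box in person_boxes if center_inside(box, evacuation_ref))

def evacuation_alarm_needed(person_boxes, evacuation_ref, evacuation_start_time,
                            eighth_threshold=1,                # assumed upper limit on people
                            preset_evacuation_seconds=120.0):  # assumed evacuation duration
    people = count_people(person_boxes, evacuation_ref)
    evacuation_duration = time.time() - evacuation_start_time  # current time minus start time
    return people >= eighth_threshold and evacuation_duration > preset_evacuation_seconds
```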
- FIG 3b is a schematic flowchart of a specific personnel safety evacuation detection provided by an embodiment of the present disclosure.
- the specific execution process of the personnel evacuation alarm is described below through S401 to S413, as shown in Figure 3b.
- S403. Collect frame images as detection images, perform environment detection, and determine target detection results.
- S407. Determine whether the traversal of the person detection frames in the detection image is completed. If the traversal is completed, execute S408; otherwise, execute S405. It should be noted that when the traversal is not completed and the process returns to S405, S405 to S407 are executed in a loop for each of the other person detection frames in the target detection results.
- dust alarms are followed by sign detection. Specifically, return to execution S101. If the target detection result also includes a sign detection frame, the matching result of the sign detection frame and the preset sign reference frame is determined based on the positioning information of the sign detection frame and the positioning information of the preset sign reference frame; if, within the third preset frame number range, the sign detection frame does not match the preset sign reference frame, a sign alarm is performed and sign alarm information is generated; the sign alarm information includes the location of the sign in the preset collection area.
- the re-collected detection image is input into the image recognition model for target detection, and it is judged whether the obtained target detection result includes the sign detection frame.
- the position indicated by the sign detection frame is the position of the sign in the detected image.
- the position indicated by the preset sign reference frame is the location of the sign specified in the real scene.
- the preset threshold here can be set based on experience. For example, in one case, the preset threshold can be set to 1, that is, when the intersection-over-union ratio is 1, it is determined that the sign detection frame completely matches the preset sign reference frame; or, in the case where detection errors are allowed, the preset threshold can be set to 0.95, and if the intersection-over-union ratio is greater than 0.95, it is determined that the sign detection frame matches the preset sign reference frame; otherwise, there is no match.
- a sign alarm is performed. It should be noted that if the matching result between the sign detection frame and the preset sign reference frame indicates that the sign detection frame does not match the preset sign reference frame, it can be considered that dust (or other factors) caused the sign to flip over (or shift from its preset position); because the sign has flipped over, there is no directional information for the construction process in the area, which increases the risk to construction personnel. Therefore, when it is detected that the sign detection frame does not match the preset sign reference frame, safety maintenance personnel are organized in a timely manner to deal with the overturned sign, so as to ensure the safety of people in the area corresponding to the overturned sign.
- the embodiment of the present disclosure does not limit the form of the sign alarm; for example, when reminding the user in the form of a text message, the generated sign alarm information may include the location of the unmatched sign within the preset collection area.
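- The sign-matching rule described above can be sketched as follows; the 0.95 tolerance follows the example given above, while the (x, y, w, h) box format is an assumption.

```python
def sign_matches(sign_box, sign_reference_box, match_threshold=0.95):
    """Match when the intersection-over-union of the two boxes reaches the preset threshold."""
    ax, ay, aw, ah = sign_box
    bx, by, bw, bh = sign_reference_box
    inter_w = max(min(ax + aw, bx + bw) - max(ax, bx), 0)
    inter_h = max(min(ay + ah, by + bh) - max(ay, by), 0)
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    iou = inter / union if union > 0 else 0.0
    return iou >= match_threshold   # "no match" is a candidate for a sign alarm
```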
- Figure 3c is a schematic flowchart of specific sign detection provided by an embodiment of the present disclosure.
- the specific execution process of the sign alarm is described below through S501 to S507, as shown in Figure 3c.
- S502 Collect frame images as detection images, perform environment detection, and determine target detection results.
- S504. Based on the positioning information of the sign detection frame and the positioning information of the preset sign reference frame, determine whether the sign detection frame matches the preset sign reference frame. If they do not match, perform S505; otherwise, perform S502.
- S506. Determine whether the sign detection frames of the detection images within the third preset frame number range all fail to match the preset sign reference frame. If so, perform S507; otherwise, perform S502.
- S507 Perform sign alarm and generate sign alarm information.
- Embodiments of the present disclosure also provide a method for training an image recognition model.
- the execution subject may be the server used to execute an environment detection and alarm method in the above embodiment, or it may be a separate server.
- the server that executes the environment detection and alarm method according to the embodiment of the present disclosure is taken as an example to illustrate.
- the specific training steps are as follows: S601 to S603, where:
- Sample images can be image information at different time nodes, mainly including video images under different weather and lighting conditions. It should be noted that the sample image may be an image within a preset collection area collected online, or it may also be an image within a preset collection area stored in advance.
- the sample label includes position information of at least one reference frame corresponding to the preset collection area, and category information of each reference frame; the category information includes weather category, person category, sign category and exposed muck category.
- weather categories can include rain, snow, fog, dust, sunny, etc.
- the embodiment of the present disclosure trains the image recognition model by setting sample labels of different weather categories, which can avoid misdetection of dust caused by rain, snow, and foggy weather; that is, the embodiment of the present disclosure can obtain more accurate dust detection results through the trained image recognition model, thereby improving the accuracy of environmental detection.
- the image recognition model to be trained can be a target detection deep neural network based on the image recognition technology yolov5.
- Figure 4 is a schematic network structure diagram of an image recognition model provided by an embodiment of the present disclosure.
- the base detector is the main network of yolov5, cls represents the category branch, reg represents the prediction frame coordinate regression branch, obj represents the foreground confidence branch, and level represents the dust level prediction branch.
- the base detector is a feature extraction process, which is specifically implemented by multi-layer convolution.
- the preprocessed sample image is input, and the output is a feature map list of length 5, that is, [f1, f2, f3, f4, f5].
- for reg, the prediction frame coordinate regression branch, and obj, the foreground confidence branch, the number of convolution input channels is the number of channels of the output feature map.
- 5 corresponds to the five components of the prediction frame coordinate regression, that is, [x, y, w, h, p], where x represents the abscissa of the center point of the prediction frame;
- y represents the ordinate of the center point of the prediction frame;
- w represents the width of the prediction frame;
- h represents the height of the prediction frame;
- p represents the probability of the category to which the prediction frame belongs (that is, the foreground confidence).
- the dust level branch level is composed of one layer of 1×1 convolution, and the number of output channels is 4, that is, [c1, c2, c3, c4], where c1, c2, c3 and c4 respectively represent the probabilities of the dust level being no dust, weak dust, moderate dust and severe dust.
- the dust level branch is introduced from the feature map f5 with the smallest scale, so that the dust level corresponding to the dust in the prediction frame can be obtained.
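- For orientation only, the head described above might be sketched in PyTorch as below; the channel counts (7 for cls, 5 for reg, 1 for obj, 4 for level) follow the text, whereas the input channel count and the assumption that all five feature maps share it are simplifications, not the patented architecture.

```python
import torch.nn as nn

class DetectionHead(nn.Module):
    def __init__(self, in_channels: int = 256):   # assumed shared channel count for f1..f5
        super().__init__()
        self.cls = nn.Conv2d(in_channels, 7, kernel_size=1)    # rain/snow/fog/dust/person/sign/muck
        self.reg = nn.Conv2d(in_channels, 5, kernel_size=1)    # [x, y, w, h, p]
        self.obj = nn.Conv2d(in_channels, 1, kernel_size=1)    # foreground confidence
        self.level = nn.Conv2d(in_channels, 4, kernel_size=1)  # [c1, c2, c3, c4] dust levels

    def forward(self, feature_maps):
        # feature_maps: list [f1, f2, f3, f4, f5] from the base detector
        cls_out = [self.cls(f) for f in feature_maps]
        reg_out = [self.reg(f) for f in feature_maps]
        obj_out = [self.obj(f) for f in feature_maps]
        level_out = self.level(feature_maps[-1])   # dust level branch taken from f5 only
        return cls_out, reg_out, obj_out, level_out
```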
- each i-th prediction frame includes seven components of the target category [b1, b2, b3, b4, b5, b6, b7], five components of the prediction frame coordinate regression [xi, yi, wi, hi, pi], and four components of the dust level [c1, c2, c3, c4].
- b1, b2, b3, b4, b5, b6 and b7 are the category probabilities of rain, snow, fog, dust, people, signs and exposed muck respectively;
- xi, yi, wi and hi respectively represent the abscissa and ordinate of the center point of the i-th prediction box, and the width and height of the i-th prediction box;
- pi represents the probability that the prediction category is the category to which the i-th prediction box belongs (that is, the foreground confidence of the i-th detection frame);
- ci represents the dust level.
- the category to which the i-th prediction box belongs can be determined based on the probabilities b1, b2, b3, b4, b5, b6 and b7; for example, if b4 is the largest among them, the i-th prediction box is determined to be the dust prediction box; similarly, if b5 is the largest, the i-th prediction box is determined to be the personnel prediction box; if b6 is the largest, the i-th prediction box is determined to be the sign prediction box; if b7 is the largest, the i-th prediction box is determined to be the exposed muck prediction box. Here 0 < i ≤ n, and n is an integer greater than or equal to 1.
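- Decoding a single prediction into a category, a box and (for dust) a dust level can be sketched as follows; the list and dictionary layout and the names are illustrative assumptions.

```python
CATEGORIES = ["rain", "snow", "fog", "dust", "person", "sign", "exposed_muck"]  # b1..b7
DUST_LEVELS = ["no_dust", "weak_dust", "moderate_dust", "severe_dust"]          # c1..c4

def decode_prediction(b, reg, c):
    """b: 7 category probabilities, reg: [x, y, w, h, p], c: 4 dust-level probabilities."""
    category = CATEGORIES[max(range(7), key=lambda i: b[i])]   # arg-max over b1..b7
    x, y, w, h, p = reg
    result = {"category": category, "box": (x, y, w, h), "foreground_confidence": p}
    if category == "dust":
        result["dust_level"] = DUST_LEVELS[max(range(4), key=lambda i: c[i])]
    return result
```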
- the image recognition model can be used to directly obtain the detection results of dust levels, realizing the prediction of dust severity levels.
- For S603, training the image recognition model, please refer to S603-1 to S603-3 for details, where:
- S603-2 Iteratively calculate the intersection-over-union ratio between each prediction box and the corresponding reference box to obtain the first loss value corresponding to each prediction box; iteratively calculate the second loss value between the category information of each prediction box and the preset category label; iteratively calculate the third loss value between the foreground confidence of each prediction frame and the baseline foreground confidence; and iteratively calculate the fourth loss value between the predicted dust level and the baseline dust level of the dust prediction frame corresponding to the dust category.
- the reference frame is a preset reference frame corresponding to each category, that is, the reference frames corresponding to the seven categories of rain, snow, fog, dust, personnel, sign and exposed muck.
- t can represent the preset category label, that is, the real category information of the prediction box; t' can represent the category information predicted by the prediction frame, that is, the model output value/predicted value.
- t can represent the baseline foreground confidence, that is, the true foreground confidence of the prediction frame
- t′ can represent the foreground confidence predicted by the prediction frame, that is, the model output value/predicted value.
- t can represent the baseline dust level, that is, the real dust level of the prediction frame
- t′ can represent the dust level predicted by the prediction frame, that is, the model output value/predicted value.
- the label smoothing method is used for the dust level labels during the training process to prevent the image recognition model from being overconfident in the correct label and to reduce the gap between positive and negative sample prediction values.
- For label smoothing (label smooth), see formula 2:
- t_onehot represents the label code of the dust level (that is, the label code of no dust, weak dust, moderate dust and severe dust);
- ⁇ is a hyperparameter
- K is the number of dust levels; in the embodiment of the present disclosure, K is 4, which means there are four levels: no dust, weak dust, moderate dust and severe dust.
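- Since formula 2 is not reproduced in this text, the sketch below uses the conventional label-smoothing form t_smooth = (1 − ε)·t_onehot + ε/K with K = 4; whether the disclosure uses exactly this variant is an assumption.

```python
def smooth_dust_level_label(t_onehot, eps=0.1, k=4):
    """t_onehot: one-hot dust level label of length k (no / weak / moderate / severe dust)."""
    return [(1.0 - eps) * t + eps / k for t in t_onehot]

# example: a "moderate dust" label [0, 0, 1, 0] becomes [0.025, 0.025, 0.925, 0.025]
```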
- the overall loss value is L_total = L_obj + L_cls + L_reg + L_level.
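- A short sketch of combining the four loss values into the weighted loss used for backpropagation is given below; the per-term weights are illustrative assumptions (the sum above is written unweighted), and the optimizer usage is indicated only as comments.

```python
def total_loss(l_obj, l_cls, l_reg, l_level,
               w_obj=1.0, w_cls=1.0, w_reg=1.0, w_level=1.0):   # assumed weights
    return w_obj * l_obj + w_cls * l_cls + w_reg * l_reg + w_level * l_level

# training-step usage sketch (loss tensors and optimizer assumed to exist):
# loss = total_loss(l_obj, l_cls, l_reg, l_level)
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
```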
- an embodiment of the disclosure also provides an environment detection and alarm device.
- Figure 5 is a schematic diagram of the environment detection and alarm device provided by the embodiment of the disclosure. As shown in Figure 5, it includes a collection module 51 , target detection module 52, alarm analysis module 53 and data storage module 54.
- the collection module 51 is used to obtain the video stream of the preset collection area, and sequentially collect frame images from the video stream as detection images.
- the target detection module 52 is used to input the detection image into the image recognition model to obtain a target detection result, wherein the target detection result includes dust level information; when the target detection result includes a dust detection frame, determine the dust status information based on the positioning information of the dust detection frame; record the dust status information of the detection image and the dust level information in the target detection result; and use the dust status information and the dust level information as detection data.
- the alarm analysis module 53 is used to issue a dust alarm when it is determined that dust alarm conditions are met based on the detection data and historical detection data sets.
- the data storage module 54 is used to update the historical detection data set based on the detection data when it is determined that the dust alarm conditions are met based on the detection data and the historical detection data set.
- the dust status information includes a first status value indicating the presence of dust and a second status value indicating the absence of dust
- the historical detection data set includes historical detection data corresponding to at least one frame of historical detection images collected in the past;
- the target detection module 52 is specifically used to determine whether the dust detection frame meets the first preset condition based on the positioning information of the dust detection frame; if the dust detection frame meets the first preset condition, the dust status information is determined to be the first status value; if the dust detection frame does not meet the first preset condition, the dust status information is determined to be the second status value; the alarm analysis module 53 includes an alarm condition judgment unit, which is used to accumulate the status value in the detection data and the status values in each of the historical detection data in the historical detection data set to obtain the sum of status values; if the sum of status values is greater than or equal to the first preset threshold, it is determined that the dust alarm condition is met.
- the historical detection data set can accommodate no more than a preset amount of historical detection data; the data storage module 54 is configured to, in the case where the data amount of the historical detection data in the historical detection data set is equal to the preset number, remove the historical detection data with the earliest storage time from the current historical detection data set and add the detection data to the historical detection data set as new historical detection data.
- the alarm analysis module 53 is also used to, when it is determined that the dust alarm condition is not met based on the historical detection data set alone but is met based on the detection data and the historical detection data set, use the time when the detection image is collected as the dust start time and generate dust alarm information.
- the detection data includes the dust level indicated by the dust level information
- the alarm analysis module 53 is used to, if the preset alarm mechanism is a real-time alarm, perform a dust alarm based on the dust level in the detection data and the dust level in the historical detection data set; and, if the preset alarm mechanism is an interval alarm, perform a dust alarm based on the dust level in the detection data and the dust level in the historical detection data set when the time difference between the current system time and the last alarm time after the dust alarm is greater than the interval alarm duration.
- the alarm analysis module 53 is also configured to, after the alarm is issued, determine that the dust in the preset collection area has ended and record the dust end time if the sum of the status values in the accumulated preset number of historical detection data is less than or equal to the second preset threshold.
- the first preset condition includes that the area of the dust detection frame is greater than or equal to a third preset threshold; and/or the intersection-over-union ratio between the dust detection frame and the preset dust reference frame is greater than or equal to the fourth preset threshold.
- the environment detection and alarm device also includes an exposed muck alarm module 55, which is used to, after a dust alarm is performed, if the target detection result also includes an exposed muck detection frame, determine whether the exposed muck detection frame meets the second preset condition based on the positioning information of the exposed muck detection frame; if, within the first preset frame number range, the exposed muck detection frame meets the second preset condition, an exposed muck alarm is performed and exposed muck alarm information is generated; the exposed muck alarm information includes the location of the exposed muck in the preset collection area.
- the second preset condition includes that the number of exposed muck detection frames is greater than or equal to the fifth preset threshold; and/or the area of the exposed muck detection frame is greater than or equal to the sixth preset threshold; and/or the intersection-over-union ratio between the exposed muck detection frame and the preset exposed muck reference frame is greater than or equal to the seventh preset threshold.
- the environment detection and alarm device also includes a personnel evacuation alarm module 56, which is used to, after the dust alarm is performed, send an instruction for personnel to evacuate the preset collection area when the dust level indicated by the dust alarm reaches the preset dust level;
- in response to receiving the instruction to track persons, return to the step of sequentially collecting frame images from the video stream as detection images, inputting them into the image recognition model, and obtaining the target detection result, where the target detection result also includes a person detection frame;
- the number of people in the preset evacuation reference frame is determined based on the positioning information of the person detection frame and the positioning information of the preset evacuation reference frame; when the number of people is greater than or equal to the eighth preset threshold and the evacuation duration is longer than the preset evacuation duration, a personnel evacuation alarm is issued; the evacuation duration is the difference between the current system time and the evacuation start time.
- the environment detection alarm device also includes a sign alarm module 57, configured to, after a dust alarm is performed and if the target detection result also includes a sign detection frame, determine the matching result of the sign detection frame and the preset sign reference frame based on the positioning information of the sign detection frame and the positioning information of the preset sign reference frame; and, if the sign detection frame does not match the preset sign reference frame within the third preset frame number range, perform a sign alarm and generate sign alarm information; the sign alarm information includes the location of the sign in the preset collection area.
- the environment detection alarm device also includes a model training module 58 for training the image recognition model.
- the model training module 58 is specifically used to obtain multi-frame sample images of the preset collection area and label the sample images with sample labels; the sample labels include position information of at least one reference frame corresponding to the preset collection area and category information of each reference frame; the category information includes one of a weather category, a person category, a sign category and an exposed muck category; the image recognition model to be trained is trained based on the sample images and the sample labels; a weighted loss value is constructed, and the image recognition model is continuously trained by performing weighted backpropagation on the weighted loss value until the weighted loss value converges, so as to obtain a trained image recognition model.
- FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
- an embodiment of the present disclosure provides a computer device including: one or more processors 61 , a memory 62 , and one or more I/O interfaces 63 .
- One or more programs are stored on the memory 62; when the one or more programs are executed by the one or more processors, the one or more processors implement the environment detection and alarm method of any of the above embodiments;
- One or more I/O interfaces 63 are connected between the processor and the memory, and are configured to implement information exchange between the processor and the memory.
- the processor 61 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
- the memory 62 is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM) and flash memory (FLASH); the I/O interface (read-write interface) 63 is connected between the processor 61 and the memory 62, and can realize information exchange between the processor 61 and the memory 62, including but not limited to via a data bus (Bus), etc.
- processor 61, memory 62, and I/O interface 63 are connected to each other and, in turn, to other components of the computing device via bus 64.
- a non-transitory computer-readable medium stores a computer program, wherein when the program is executed by the processor, the steps in the environment detection and alarm method in any of the above embodiments are implemented.
- embodiments of the present disclosure include a computer program product including a computer program carried on a machine-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
- non-transitory computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any non-transitory computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a non-transitory computer-readable medium may be transmitted using any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the foregoing.
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of specialized hardware and computer instructions.
- the circuits or sub-circuits described in the embodiments of the present disclosure may be implemented in software or hardware.
- the described circuit or sub-circuit can also be provided in a processor.
- for example, a processor may be described as including a receiving circuit and a processing circuit, where the processing circuit includes a writing sub-circuit and a reading sub-circuit.
- the names of these circuits or sub-circuits do not constitute a limitation on the circuit or sub-circuit itself under certain circumstances.
- a receiving circuit can also be described as "receiving video signals".
Abstract
本公开提供一种环境检测报警方法、装置、计算机设备及存储介质,属于目标检测技术领域,其中环境检测报警方法,获取预设采集区域的视频流,并从视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;在目标检测结果中包括扬尘检测框的情况下,根据扬尘检测框的定位信息,确定扬尘状态信息;将扬尘状态信息和扬尘等级信息,作为检测数据;在根据检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警,并根据检测数据更新历史检测数据集,并返回执行从视频流中依次采集帧图像作为检测图像的步骤。
Description
本公开人工智能及目标检测技术领域,具体涉及一种环境检测报警方法、装置、计算机设备及存储介质。
随着近年来城市建设发展进程加快,高楼大厦、轨道交通等铺设如雨后春笋,城市施工现场不断增加。在施工过程中建筑工地往往伴随着大量扬尘,对城市内的空气环境造成较大影响。人员在较为严重的扬尘环境内施工,极大地影响了人员的身体健康,因此,有必要在工地等场景中检测扬尘。
发明内容
本公开旨在至少解决现有技术中存在的技术问题之一,提供一种环境检测报警方法、装置、计算机设备及存储介质。
第一方面,本公开实施例提供了一种环境检测报警方法,包括:
获取预设采集区域的视频流,并从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;
在所述目标检测结果中包括扬尘检测框的情况下,根据所述扬尘检测框的定位信息,确定扬尘状态信息;
将所述扬尘状态信息和所述扬尘等级信息,作为检测数据;
在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警;在根据所述检测数据与所述历史检测数据集判断出满足扬尘报警条件时,根据所述检测数据更新所述历史检测数据集,并返回执行所述从所述视频流中依次采集帧图像作为检测图像的步骤。
在一些示例中,所述扬尘状态信息包括表征存在扬尘的第一状态值和表征无扬尘的第二状态值;所述历史检测数据集中包括历史采集到的至少一帧历史检测图像对应的历史检测数据;
所述根据所述扬尘检测框的定位信息,确定扬尘状态信息,包括:
根据所述扬尘检测框的定位信息,判断所述扬尘检测框是否满足第一预设条件;
若所述扬尘检测框满足所述第一预设条件,则确定所述扬尘状态信息为第一状态值;
若所述扬尘检测框不满足所述第一预设条件,则确定所述扬尘状态信息为第二状态值;
根据所述检测数据与历史检测数据集判断是否满足扬尘报警条件,包括:
累计所述检测数据中的状态值和所述历史检测数据集中的各个所述历史检测数据中的状态值的总和,得到状态值总和;
若所述状态值总和大于或等于第一预设阈值,确定满足扬尘报警条件。
在一些示例中,所述历史检测数据集能够容纳不超过预设数量的历史检测数据;
所述根据所述检测数据更新所述历史检测数据集,包括:
在所述历史检测数据集中的所述历史检测数据的数据量等于所述预设数量的情况下,从当前的所述历史检测数据集中剔除存储时间最早的一历史检测数据,并将所述检测数据作为新一历史检测数据,添加到所述历史检测数据集中。
在一些示例中,所述环境检测方法还包括:
在根据所述历史检测数据集判断出未满足所述扬尘报警条件,且根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,将采集所述检测图像的时间作为扬尘开始时间,并生成扬尘报警信息。
在一些示例中,所述检测数据包括所述扬尘等级信息指示的扬尘等级;
所述在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警,包括:
若预先设置的报警机制为实时报警,则根据所述检测数据中的扬尘等级
和所述历史检测数据集中的扬尘等级,进行扬尘报警;
若预先设置的报警机制为间隔报警,则在系统当前时间与进行扬尘报警后的最后一次报警时间的时间差值大于间隔报警时长的情况下,根据所述检测数据中的扬尘等级和所述历史检测数据集中的扬尘等级,进行扬尘报警。
在一些示例中,在进行报警之后,还包括:
若累计预设数量所述历史检测数据中的状态值总和小于或等于第二预设阈值,确定所述预设采集区域扬尘结束,并记录扬尘结束时间。
在一些示例中,所述第一预设条件包括所述扬尘检测框的区域面积大于或等于第三预设阈值;和/或,所述扬尘检测框与预设扬尘基准框之间的交并比大于或等于第四预设阈值。
在一些示例中,在进行扬尘报警之后,还包括:
返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括裸露渣土检测框的情况下,根据所述裸露渣土检测框的定位信息,判断所述裸露渣土检测框是否满足第二预设条件;
若在第一预设帧数范围内,所述裸露渣土检测框均满足所述第二预设条件,则进行裸露渣土报警,并生成裸露渣土报警信息;所述裸露渣土报警信息包括所述预设采集区域内裸露渣土所在位置。
在一些示例中,所述第二预设条件包括所述裸露渣土检测框的数量大于或等于第五预设阈值;和/或,所述裸露渣土检测框的区域面积大于或等于第六预设阈值;和/或,所述裸露渣土检测框与预设裸露渣土基准框之间的交并比大于或等于第七预设阈值。
在一些示例中,在进行扬尘报警之后,还包括:
在扬尘报警指示的扬尘等级达到预设扬尘等级时,发送人员撤离所述预设采集区域的指令;
响应于接收到跟踪人员的指令,返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所
述目标检测结果还包括人员检测框的情况下,根据所述人员检测框的定位信息和预设撤离基准框的定位信息,确定在所述预设撤离基准框中的人员数量;
在所述人员数量大于或等于第八预设阈值,且撤离时长大于或等于预设撤离时长的情况下,则进行人员撤离报警;所述撤离时长为系统的当前时间与撤离开始时间的差值;所述撤离开始时间为响应于接收到跟踪人员的指令的时间。
在一些示例中,在进行扬尘报警之后,还包括:
返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括指示牌检测框的情况下,根据所述指示牌检测框的定位信息和预设指示牌基准框的定位信息,确定所述指示牌检测框与所述预设指示牌基准框的匹配结果;
若在第三预设帧数范围内,所述指示牌检测框与所述预设指示牌基准框均不匹配,则进行指示牌报警,并生成指示牌报警信息;所述指示牌报警信息包括所述预设采集区域内指示牌所在位置。
在一些示例中,训练所述图像识别模型的步骤包括:
获取预设采集区域的多帧样本图像,并为所述样本图像标注样本标签;所述样本标签包括所述预设采集区域对应的至少一个基准框的位置信息,以及每个所述基准框的类别信息;所述类别信息包括天气类别、人员类别、指示牌类别和裸露渣土类别中的一种;
根据所述样本图像和所述样本标签,对待训练的图像识别模型进行训练;
构造加权损失值,通过对所述加权损失值进行加权反向传播以持续训练所述图像识别模型,直至所述加权损失值收敛,得到训练完成的图像识别模型。
第二方面,本公开实施例还提供了一种环境检测报警装置,其包括采集模块、目标检测模块、报警分析模块和数据存储模块;
所述采集模块,用于获取预设采集区域的视频流,并从所述视频流中依次采集帧图像作为检测图像;
所述目标检测模块,用于将所述检测图像输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;在所述目标检测结果中包括扬尘检测框的情况下,根据所述扬尘检测框的定位信息,确定扬尘状态信息;将所述扬尘状态信息和所述扬尘等级信息,作为检测数据;
所述报警分析模块,用于在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警;
所述数据存储模块,用于在根据所述检测数据与所述历史检测数据集判断出满足扬尘报警条件时,根据所述检测数据更新所述历史检测数据集。
第三方面,本公开实施例还提供了一种计算机设备,其中,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如第一方面或第一方面中任一种示例中的环境检测报警方法的步骤。
第四方面,本公开实施例还提供了一种计算机非瞬态可读存储介质,其中,该计算机非瞬态可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如第一方面或第一方面中任一种示例中的环境检测报警方法的步骤。
图1为本公开实施例提供的一种环境检测报警方法的流程图;
图2为本公开实施例提供的扬尘检测的具体流程示意图;
图3a为本公开实施例提供的裸露渣土检测的具体流程示意图;
图3b为本公开实施例提供的人员安全撤离检测的具体流程示意图;
图3c为本公开实施例提供的指示牌检测的具体流程示意图;
图4为本公开实施例提供的一种图像识别模型的网络结构示意图;
图5为本公开实施例提供的环境检测报警装置的示意图;
图6为本公开实施例的一种计算机设备的结构示意图。
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围,而是仅仅表示本公开的选定实施例。基于本公开的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外定义,本公开使用的技术术语或者科学术语应当为本公开所属领域内具有一般技能的人士所理解的通常意义。本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。同样,“一个”、“一”或者“该”等类似词语也不表示数量限制,而是表示存在至少一个。“包括”或者“包含”等类似的词语意指出现该词前面的对象涵盖出现在该词后面列举的对象及其等同,而不排除其他对象。
在本公开中提及的“多个或者若干个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
经研究发现,传统检测扬尘的检测设备,其检测扬尘实时性较差,且不能检测出扬尘的具体情况(例如扬尘等级等),因此也就不能制定出具有针对性的对施工现场进行较为合理的安全管理,可见传统扬尘检测普遍存在反馈不及时,监管困难等问题,从而影响安全施工与工程效率。
基于传统扬尘检测不能及时确定扬尘具体情况(例如扬尘等级等)的问题,本公开实施例提供了一种环境检测报警方法,其利用训练好的较为成熟的图像识别模型,对预设采集区域的图像进行目标检测,能够得到较为准确
的目标检测结果,该目标检测结果能够直接给出扬尘等级信息,通过对历史检测数据集中的历史检测数据以及当前采集到的检测图像的检测数据进行分析,在满足扬尘报警条件时,具有针对性的进行扬尘报警,能够为视频采集区域对应的现场环境进行合理化的安全管理,确保安全施工,提高工程效率。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种环境检测报警方法进行详细介绍,本公开实施例所提供的环境检测报警方法的执行主体一般为具有一定计算能力的计算机设备,该计算机设备例如包括:终端设备或服务器或其它处理设备,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、车载设备、可穿戴设备等。在一些可能的实现方式中,该环境检测报警方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。
下面以执行主体为服务器为例对本公开实施例提供的环境检测报警方法加以说明。
参见图1所示,为本公开实施例提供的一种环境检测报警方法的流程图,该方法包括步骤S101~S104,其中:
S101、获取预设采集区域的视频流,并从视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果。
本步骤中,预设采集区域可以是预先设置的、与扬尘检测关联的某一固定区域,比如感兴趣区域(Region of Interest,ROI)。通常情况下,预设采集区域根据检测任务进行设定,本公开实施例中预设采集区域可以包括施工工地等具有较大概率出现扬尘的区域。
本公开实施例中的视频流包括但不限于来自实时流传输协议传送的视频资源。从视频流中依次采集帧图像,具体地,可以是按照视频流播放顺序,从视频流中逐帧采集连续的帧图像,利用这种连续采集的方式,能够避免某一帧图像扬尘漏检的情况发生。或者,也可以是按照视频流播放顺序,以预设间隔帧数,从视频流中跳帧采集帧图像,通过跳帧获取检测图像,能够在
保证扬尘识别精度的情况下,减少图像识别过程中识别检测图像的数量,从而能够节省计算资源,减轻处理器进行图像识别的负担。具体实施方式,可以根据实际情况进行选择,本公开实施例不进行限定。
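As a rough illustration of the two sampling strategies above (reading every frame versus skipping a preset number of frames), the following Python sketch uses OpenCV to pull frames from a stream; the function name, the `skip` parameter and the use of `cv2.VideoCapture` are illustrative assumptions, not part of the disclosure.

```python
import cv2

def sample_frames(stream_url, skip=0):
    """Yield detection images from a video stream.

    skip=0 reads consecutive frames (no dust event is missed);
    skip>0 keeps one frame out of every (skip + 1), reducing the
    number of images the recognition model has to process.
    """
    cap = cv2.VideoCapture(stream_url)  # e.g. an RTSP address
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % (skip + 1) == 0:
            yield frame  # this frame is used as a detection image
        index += 1
    cap.release()
```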
对采集到的每帧检测图像分别进行检测,具体地,针对采集到的一帧图像,将其作为检测图像,并输入至图像识别模型中,得到目标检测结果。图像识别模型是预先训练好的图像识别模型,根据图像识别模型的算法架构,其输出的目标检测结果中包含多种检测信息,例如扬尘检测框的定位信息和扬尘等级信息等。需要说明的是,在目标检测结果中不包含扬尘检测框的情况下,也会输出扬尘检测框的定位信息,只不过该定位信息为空信息,或者为无效定位信息,其不能指示预设采集区域中的位置;在目标检测结果中不包含扬尘检测框的情况下,也会输出扬尘等级的信息,该扬尘等级为0,用以表征不存在扬尘。
S102、在目标检测结果中包括扬尘检测框的情况下,根据扬尘检测框的定位信息,确定扬尘状态信息。
目标检测结果中包括扬尘检测框,也即目标检测结果中包括扬尘检测框有效的定位信息。扬尘检测框指示的位置,为检测出的可能存在扬尘的位置。该定位信息能够指示位于预设检测区域的某一位置区域。
根据扬尘检测框的定位信息,可以确定扬尘状态信息。本步骤中在目标检测结果中包括扬尘检测框的情况下,可以根据扬尘检测框有效的定位信息,确定扬尘状态信息为存在扬尘。或者,在目标检测结果中包括扬尘检测框的情况下,也可以进一步判断扬尘检测框是否满足预设条件(也即下述的第一预设条件),在满足该预设条件下,确定扬尘状态信息为存在扬尘。
记录检测图像的扬尘状态信息和目标检测结果中的扬尘等级信息,并将二者作为一组数据进行存储,或者,设定二者的对应关系再进行存储。
S103、将扬尘状态信息和扬尘等级信息,作为检测数据。
扬尘状态信息可以是表征扬尘状态的状态值(是可以进行逻辑运算的数字,也即下述的第一状态值或第二状态值),扬尘等级信息可以是表征扬尘
等级的数字,确定二者对应关系,并同时作为检测数据。
S104、在根据检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警;在根据检测数据与历史检测数据集判断出满足扬尘报警条件时,根据检测数据更新历史检测数据集,并返回执行S101中的从视频流中依次采集帧图像作为检测图像的步骤。
本步骤中,历史检测数据集包括至少一个历史检测数据。历史检测数据是历史采集到的检测图像经过S101~S103的处理之后,得到的检测数据。
具体实施时，根据检测数据与历史检测数据集中的至少一个历史检测数据分别指示的扬尘状态信息，判断是否满足扬尘报警条件；在满足扬尘报警条件时进行扬尘报警。
在一些示例中,在满足扬尘报警条件时,可以根据扬尘等级信息进行扬尘报警。示例性的,扬尘等级信息指示的扬尘等级越高,表示当前环境扬尘污染越严重,可以设置较高的声音报警频率;或者,对应设置不同颜色指示灯,表征不同扬尘等级,按照对应扬尘等级显示对应的指示灯。或者,以发送信息的方式向相关人员报警,报警信息中标示扬尘等级。
根据检测数据更新历史检测数据集,具体地,可以直接将检测数据作为历史检测数据添加到历史检测数据集中,以用于判断下一检测图像是否满足扬尘报警条件。之后,返回执行S101中的从视频流中依次采集帧图像作为检测图像,以持续对预设采集区域进行扬尘检测。
本公开实施例利用训练好的图像识别模型,对预设采集区域的图像进行目标检测,根据图像识别模型的算法架构,能够直接得出扬尘检测的扬尘等级信息,并通过对历史检测数据集中的历史检测数据以及当前采集到的检测图像的检测数据进行分析,在满足扬尘报警条件时,根据扬尘等级信息具有针对性的进行扬尘报警,能够为视频采集区域对应的现场环境进行合理化的安全管理,确保安全施工,提高工程效率。
针对S102,扬尘状态信息包括表征存在扬尘的第一状态值和表征无扬尘的第二状态值。根据步骤S102-1~S102-2,确定扬尘状态信息,其中:
S102-1、根据扬尘检测框的定位信息,判断扬尘检测框是否满足第一预设条件。
扬尘检测框的定位信息,具体可以包括在预设采集区域出现扬尘的定位坐标,预设采集区域的定位坐标已知,扬尘检测框可以为矩形框,根据矩形框的某一顶点坐标或者中心坐标,以及矩形框的宽度和高度,可以确定扬尘检测框的区域面积、以及位于预设采集区域的具体区域。
第一预设条件包括扬尘检测框的区域面积大于或等于第三预设阈值;和/或,扬尘检测框与预设扬尘基准框之间的交并比大于或等于第四预设阈值。这里,预设扬尘基准框是预先设置的、预设采集区域内的某一固定检测区域,可以是整个预设采集区域,也可以是预设采集区域中的部分区域。扬尘检测框与预设扬尘基准框之间的交并比IOU1,即为扬尘检测框与预设扬尘基准框在预设采集区域内的交叠区域,与扬尘检测框与预设扬尘基准框分别在预设采集区域内覆盖的总区域的比值。需要说明的是,第三预设阈值和第四预设阈值可以根据经验设定,本公开实施例不进行具体限定。
示例性的,若扬尘检测框的区域面积大于或等于第三预设阈值,则可以确定扬尘检测框满足第一预设条件;和/或,根据扬尘检测框位于预设采集区域的具体区域,以及预设扬尘基准框,若扬尘检测框与预设扬尘基准框之间的交并比IOU1大于或等于第四预设阈值,则可以确定扬尘检测框满足第一预设条件。
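A minimal sketch of the area and IoU checks that make up the first preset condition, assuming detection and reference frames are given as (x1, y1, x2, y2) corner coordinates and that the third and fourth preset thresholds are passed in as parameters; the "or" combination shown is only one of the "and/or" variants the text allows.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def meets_first_condition(dust_box, dust_reference_box, area_thresh, iou_thresh):
    """First preset condition: box area and/or IoU with the preset dust reference frame."""
    area = (dust_box[2] - dust_box[0]) * (dust_box[3] - dust_box[1])
    return area >= area_thresh or iou(dust_box, dust_reference_box) >= iou_thresh
```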
需要说明的是,检测图像中可能存在多个扬尘检测框,针对每个扬尘检测框执行S102-1,只要其中一个扬尘检测框满足第一预设条件,即可确定检测图像的扬尘状态信息为存在扬尘。
S102-2、若扬尘检测框满足第一预设条件,则确定扬尘状态信息为第一状态值;若扬尘检测框不满足第一预设条件,则确定扬尘状态信息为第二状态值。
例如可以设置第一状态值为1;设置第二状态值为0。
在S102-1和S102-2确定了扬尘状态信息的具体状态值的情况下,根据
S103-1~S103-2判断是否满足扬尘报警条件,其中:
S103-1、累计检测数据中的状态值和历史检测数据集中的各个历史检测数据中的状态值的总和,得到状态值总和。
检测数据中的状态值是扬尘状态信息指示的状态值(也即第一状态值或第二状态值)。历史检测数据中的状态值是历史检测图像对应的扬尘状态信息指示的状态值(也即第一状态值或第二状态值)。
在一些示例中,为了降低数据存储量,提高系统运算效率,历史检测数据集中的历史检测数据的数据量可以预先设置固定值,也即历史检测数据集仅能存储一定量的历史检测数据,当更新的历史检测数据集中的历史检测数据量超过设定值时,可以剔除当前的历史检测数据集中的第一次所存储的历史检测数据,以确保历史检测数据集中的数据量保持不变。
以历史检测数据集包括N帧历史检测图像对应的历史检测数据为例累计状态值，比如N个历史检测数据的状态值分别为a1、a2、……、aN，其中，a1、a2、……、aN取1或0，“1”表示第一状态值（也即存在扬尘），“0”表示第二状态值（也即无扬尘）；从第一个状态值开始累加，确定N个历史检测数据中的状态值，以及检测数据中的状态值的总和，得到状态值总和M，即M=a1+a2+…+aN+a(N+1)，其中，a(N+1)表示检测数据中的状态值，取1或0，“1”表示第一状态值（也即存在扬尘），“0”表示第二状态值（也即无扬尘）。在确保检测一定帧数的图像可以得到准确扬尘检测结果的情况下，通过累计固定帧数的状态值能够降低数据存储量，从而提高系统运算效率。N为大于0的正整数。
当然,在一些示例中历史检测数据集可以不限存储历史检测数据的数据量。由于检测一定帧数的图像即可判断是否存在扬尘,因此为了能够提高运算效率,可以限定累计当前帧检测图像对应的状态值,以及历史检测数据集中当前帧检测图像之前的N帧历史检测图像对应的状态值,同样能够得到累计总和。
S103-2、若状态值总和大于或等于第一预设阈值,确定满足扬尘报警条件。
第一预设阈值可以是根据经验设定的,本公开实施例不进行具体限定。
针对S104，根据检测数据更新历史检测数据集，历史检测数据集能够容纳不超过预设数量的历史检测数据。具体地，以历史检测数据集存储能力有限为例，判断当前的历史检测数据集中的历史检测数据的数量是否达到历史检测数据集的存储上限，也即历史检测数据集中的历史检测数据的数据量是否等于预设数量，若未达到，则可以直接将检测数据作为新一历史检测数据，添加到历史检测数据集中。若历史检测数据集中的历史检测数据的数据量已经等于预设数量，则从当前的历史检测数据集中剔除存储时间最早的一历史检测数据，并将检测数据作为新一历史检测数据，添加到历史检测数据集中。这里，存储时间最早的历史检测数据，也即与当前的历史检测数据集中的其他历史检测数据相比，存储时间最长的历史检测数据。
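The fixed-capacity history set and the state-value accumulation described above can be sketched with a bounded deque, which drops the oldest entry automatically once the preset number is reached; class and attribute names here are illustrative.

```python
from collections import deque

class DustAlarmHistory:
    """Bounded set of historical detection data (state_value, dust_level)."""

    def __init__(self, capacity, first_threshold):
        self.history = deque(maxlen=capacity)   # oldest entry evicted when full
        self.first_threshold = first_threshold  # "first preset threshold"

    def alarm_condition_met(self, detection):
        """detection = (state_value, dust_level), state_value being 1 or 0."""
        state_sum = detection[0] + sum(state for state, _ in self.history)
        return state_sum >= self.first_threshold

    def update(self, detection):
        """Add the current detection data as the newest history entry."""
        self.history.append(detection)
```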
在一些示例中,在根据历史检测数据集判断出未满足扬尘报警条件,且根据检测数据与历史检测数据集判断出满足扬尘报警条件时,将采集检测图像的时间作为扬尘开始时间,并生成扬尘报警信息。
具体地,若累计历史检测数据集中的各个历史检测数据对应的状态值,得到的状态值总和小于第一预设阈值,且累计检测数据的状态值与历史检测数据集中的各个历史检测数据对应的状态值,得到的状态值总和大于或等于第一预设阈值,则将采集检测图像的时间作为扬尘开始时间,在扬尘报警信息包括向用户发送的文字消息时,扬尘报警信息包括扬尘开始时间。
在一些示例中,检测数据包括扬尘等级信息指示的扬尘等级。针对S104的扬尘报警,可以设置不同报警机制,例如实时报警机制和间隔报警机制等。
若预先设置的报警机制为实时报警,则可以根据检测数据中的扬尘等级和历史检测数据集中的扬尘等级,进行扬尘报警。这里,所谓实时报警机制,也即只要满足扬尘报警条件,就进行报警,若连续每帧检测图像均满足扬尘报警条件,就连续报警不间断。
若预先设置的报警机制为间隔报警,则在系统当前时间与进行扬尘报警后的最后一次报警时间的时间差值大于间隔报警时长的情况下,根据检测数据中的扬尘等级和历史检测数据集中的扬尘等级,进行扬尘报警。这里,所谓间隔报警机制,也即在一次扬尘报警后停止后,在预先设置的间隔报警时长内,无论是否满足扬尘报警条件,均不报警,等到停止报警后的时长超过
间隔报警时长后,再判断当前采集的检测图像是否满足扬尘报警条件,并循环执行间隔报警机制。
示例性的，可以直接按照检测数据中的扬尘等级进行扬尘报警；或者，也可以按照检测数据中的扬尘等级和历史检测数据集中的各个扬尘等级的平均等级进行扬尘报警；或者，也可以按照检测数据中的扬尘等级和历史检测数据集中的各个扬尘等级中的任一扬尘等级进行扬尘报警；或者，也可以按照检测数据中的扬尘等级和历史检测数据集中的各个扬尘等级中的部分扬尘等级的平均等级进行扬尘报警，等等。
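A possible way to express the real-time and interval alarm mechanisms in code, assuming time is measured in seconds and that `raise_alarm` is a caller-supplied callback; both names are assumptions for illustration.

```python
import time

def maybe_alarm(mechanism, last_alarm_time, interval_seconds, dust_level, raise_alarm):
    """Return the (possibly updated) last alarm time after applying the mechanism."""
    now = time.time()
    if mechanism == "realtime":
        raise_alarm(dust_level)                   # alarm on every qualifying frame
        return now
    if now - last_alarm_time > interval_seconds:  # interval mechanism
        raise_alarm(dust_level)
        return now
    return last_alarm_time                        # still inside the quiet interval
```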
在一些示例中,在进行报警之后,还可以检测扬尘何时结束。具体地,若累计预设数量历史检测数据中的状态值总和小于或等于第二预设阈值,则可以确定预设采集区域扬尘结束,并记录扬尘结束时间。
若以第二状态值为0为例,则第二预设阈值可以设置为0;或者,在误差允许范围内,可以设置为1或2(也即允许N帧检测图像中存在一到两帧检测图像的目标检测结果存在误差)。
这里,在已经进行扬尘报警之后,执行检测扬尘是否结束的步骤。通过不断更新历史检测数据集,在现实场景结束扬尘之后,检测图像对应的目标检测结果不再包括扬尘检测框,也即此时级联的扬尘状态信息为表征无扬尘的第二状态值,在累加N个状态值总和等于0时,可以确定N帧检测图像对应的目标检测结果均为无扬尘,即可确定预设采集区域内的扬尘结束。此时扬尘结束时间可以为当前的历史检测数据集中的、任意一个历史检测数据对应的历史检测图像的采集时间,或者,也可以是当前的历史检测数据集中的、第一次所存储的历史检测数据对应的历史检测图像的采集时间,或者,也可以是当前的历史检测数据集中的、最后一次所存储的历史检测数据对应的历史检测图像的采集时间。
图2为本公开实施例提供的扬尘检测的具体流程示意图。为了能够详细清楚地对本公开实施例提供的扬尘检测进行说明,基于上述实施例,下面通过S201~S217阐述扬尘报警的具体执行过程,如图2所示。
S201、采集帧图像作为检测图像。
S202、进行环境检测,并确定目标检测结果。这里,环境检测包括但不仅限于扬尘检测。
S203、判断目标检测结果中是否包括扬尘检测框，若是，则针对其中一个扬尘检测框执行S204；若否，则执行S201。
S204、判断扬尘检测框的区域面积是否大于或等于第三预设阈值,以及扬尘检测框与预设扬尘基准框之间的交并比IOU1是否大于或等于第四预设阈值;若均为是,则执行S205;否则,执行S206。
S205、确定当前的检测图像对应的扬尘检测结果为存在扬尘。
S206、判断检测图像中的扬尘检测框是否遍历完成,若遍历完成,则执行S207;否则,执行S203。需要说明的是,在未遍历完成返回执行S203时,针对目标检测结果中的其他扬尘检测框的个体,循环执行S204~S206。
S207、判断当前的检测图像对应的扬尘检测结果是否为存在扬尘,若是,则执行S208;否则,执行S215。
S208、确定扬尘状态信息为第一状态值。
S209、判断累计检测数据中的状态值和历史检测数据集中的各个历史检测数据中的状态值的累计总和,是否大于或等于第一预设阈值,若是,则执行S210;否则,执行S201。
S210、判断历史检测数据集中的各个历史检测数据中的状态值总和是否大于或等于第一预设阈值，若是，执行S212，若否，执行S211。
S211、记录扬尘开始时间。
S212、判断报警机制,若为实时报警,则执行S213,若为间隔报警机制,执行S214。
S213、进行扬尘报警。
S214、判断系统当前时间与进行扬尘报警后的最后一次报警时间的时间差值是否大于间隔报警时长，若是，则执行S213；否则，执行S201。
S215、确定扬尘状态信息为第二状态值。
S216、判断累计预设数量历史检测数据中的状态值总和是否小于或等于第二预设阈值,以及当前是否处于扬尘报警状态,若是,则执行S217,否则,执行S209。需要说明的是,这里判断当前是否处于扬尘报警状态,若已经处于报警状态中,则在判断为是的情况下,可以确定扬尘已经结束,如果未处于报警状态中,则说明一直未发生扬尘,也就不存在记录扬尘结束,只需要返回执行S209即可。
S217、扬尘结束,并记录扬尘结束时间。
本公开实施例提供的图像识别模型，除了能够检测扬尘，还可以检测与扬尘所在环境相关的其他环境检测任务，例如伴随扬尘可能会存在裸露渣土、指示牌翻倒、预设采集区域内的人员安全撤离等任务，通过扩展环境检测，能够提高人员施工的安全性。这里提到的裸露渣土、指示牌翻倒、人员安全转移等检测，均为检测出存在扬尘并进行扬尘报警后进行的。下面分别对裸露渣土、指示牌翻倒、人员安全转移的环境检测进行说明。
在一些示例中,在扬尘报警之后,还包括裸露渣土检测。具体地,返回执行S101,在目标检测结果还包括裸露渣土检测框的情况下,根据裸露渣土检测框的定位信息,判断裸露渣土检测框是否满足第二预设条件;若在第一预设帧数范围内,裸露渣土检测框均满足第二预设条件,则进行裸露渣土报警,并生成裸露渣土报警信息;裸露渣土报警信息包括预设采集区域内裸露渣土所在位置。
在扬尘报警之后,重新采集的检测图像,在输入到图像识别模型中进行目标检测,判断得到的目标检测结果是否包括裸露渣土检测框。裸露渣土检测框指示的位置为存在裸露渣土的位置。
第二预设条件包括裸露渣土检测框的数量大于或等于第五预设阈值;和/或,裸露渣土检测框的区域面积大于或等于第六预设阈值;和/或,裸露渣土检测框与预设裸露渣土基准框之间的交并比大于或等于第七预设阈值。预设裸露渣土基准框指示的位置为现实场景下待检测裸露渣土的位置。
裸露渣土检测框的区域面积可以根据裸露渣土检测框的定位信息计算得到。预设裸露渣土基准框与预设扬尘基准框的设置原理类似,也即在预设采集区域内的某一固定检测区域,可以是整个预设采集区域,也可以是预设采集区域中的部分区域。裸露渣土检测框与预设裸露渣土基准框之间的交并比IOU2,即为裸露渣土检测框与预设裸露渣土基准框在预设采集区域内的交叠区域,与裸露渣土检测框与预设裸露渣土基准框分别在预设采集区域内覆盖的总区域的比值。需要说明的是,第五预设阈值、第六预设阈值和第七预设阈值可以根据经验设定,本公开实施例不进行具体限定。
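One hedged reading of the second preset condition, reusing the `iou` helper from the earlier sketch; the fifth, sixth and seventh preset thresholds are parameters, and combining the sub-conditions as shown is only one of the "and/or" variants the text allows.

```python
def meets_second_condition(muck_boxes, muck_reference_box,
                           count_thresh, area_thresh, iou_thresh):
    """Second preset condition for exposed muck detection frames."""
    if len(muck_boxes) < count_thresh:                  # fifth preset threshold
        return False
    for box in muck_boxes:
        area = (box[2] - box[0]) * (box[3] - box[1])    # sixth preset threshold
        if area >= area_thresh and iou(box, muck_reference_box) >= iou_thresh:
            return True                                 # seventh preset threshold
    return False
```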
第一预设帧数范围是指连续采集到的一定帧数的检测图像。
本公开实施例不限定裸露渣土报警的形式,在以文字消息的形式提醒用户的情况下,生成的裸露渣土报警信息可以包括预设采集区域内裸露渣土所在位置。
图3a为本公开实施例提供的裸露渣土检测的具体流程示意图。为了能够详细清楚地对本公开实施例提供的裸露渣土检测进行说明,基于上述实施例,下面通过S301~S309阐述裸露渣土报警的具体执行过程,如图3a所示。
S301、扬尘告警和/或大风天气。这里,大风天气可以是基于天气预报的网络信息获取到的当前预设采集区域的天气情况。
S302、采集帧图像作为检测图像,并进行环境检测,确定目标检测结果。
S303、判断目标检测结果中是否包括裸露渣土检测框，若是，则执行S304；若否，则执行S302。
S304、判断裸露渣土检测框的数量是否大于或等于第五预设阈值,若是,则执行S305;若否,则执行S302。
S305、针对其中一个裸露渣土检测框,判断裸露渣土检测框的区域面积是否大于或等于第六预设阈值,以及裸露渣土检测框与预设裸露渣土基准框之间的交并比IOU2是否大于或等于第七预设阈值,若是,则执行S306;否则,执行S307。
S306、确定当前的检测图像对应的裸露渣土检测结果为存在裸露渣土,
并记录存在裸露渣土的检测帧数。
S307、判断检测图像中的裸露渣土检测框是否遍历完成,若遍历完成,则执行S308;否则,执行S305。需要说明的是,在未遍历完成返回执行S305时,针对目标检测结果中的其他裸露渣土检测框的个体,循环执行S305~S307。
S308、判断第一预设帧数范围内的检测图像是否均检测出裸露渣土,若是,则执行S309,否则,执行S302。
S309、进行裸露渣土报警,并上报裸露渣土所在位置。
在一些示例中,在扬尘报警之后,还包括人员撤离检测。具体地,在扬尘报警指示的扬尘等级达到预设扬尘等级时,发送人员撤离预设采集区域的指令;响应于接收到跟踪人员的指令,返回执行S101,在目标检测结果还包括人员检测框的情况下,根据人员检测框的定位信息和预设撤离基准框的定位信息,确定在预设撤离基准框中的人员数量;在人员数量大于或等于第八预设阈值,且撤离时长大于预设撤离时长的情况下,则进行人员撤离报警。
撤离时长为系统的当前时间与撤离开始时间的差值;撤离开始时间为响应于接收到跟踪人员的指令的时间。
预设撤离基准框可以是在预设采集区域内的某一固定检测区域，可以是整个预设采集区域，也可以是预设采集区域中的部分区域。人员检测框指示的位置为人员所在位置。预设撤离基准框指示的位置为待检测人员是否撤离的位置，一般情况下为指示牌附近或具有较大概率出现裸露渣土的位置。
根据人员检测框的定位信息和预设撤离基准框的定位信息，可以判断人员检测框的中心点是否位于预设撤离基准框内。在检测图像中可能存在多个人员检测框，针对每个人员检测框，若人员检测框的中心点位于预设撤离基准框内，则确定该预设撤离基准框存在人员，基于此，遍历各个人员检测框，确定在预设撤离基准框中的人员数量。在人员数量达到预设上限值（第八预设阈值）时，进一步判断撤离时长是否超时，也即撤离时长是否大于预设撤离时长，若超时，则向人员发送人员撤离报警的催促提示信息。
另外,在判断人员数量的同时,若在第二预设帧数范围内,人员数量均
为零,则生成撤离结束信息。也即一段时间内采集到的检测图像均未检测到人员检测框的情况下,可以确定人员已经撤离完毕。
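A sketch of the centre-point test and the evacuation-alarm decision described above, assuming corner-format boxes and timestamps in seconds; the function names and parameters are illustrative.

```python
def count_people_in_zone(person_boxes, evacuation_box):
    """Count person boxes whose centre point lies inside the evacuation reference frame."""
    count = 0
    for x1, y1, x2, y2 in person_boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if evacuation_box[0] <= cx <= evacuation_box[2] and \
           evacuation_box[1] <= cy <= evacuation_box[3]:
            count += 1
    return count

def evacuation_alarm_needed(person_boxes, evacuation_box, start_time, now,
                            count_thresh, max_duration):
    """Alarm when too many people remain after the preset evacuation duration."""
    remaining = count_people_in_zone(person_boxes, evacuation_box)
    return remaining >= count_thresh and (now - start_time) > max_duration
```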
图3b为本公开实施例提供的人员安全撤离检测的具体流程示意图。为了能够详细清楚地对本公开实施例提供的人员安全撤离检测进行说明,基于上述实施例,下面通过S401~S413阐述人员撤离报警的具体执行过程,如图3b所示。
S401、扬尘告警。
S402、根据历史检测数据集中的各个历史检测数据中的扬尘等级,以及检测数据的扬尘等级,判断扬尘等级为严重的帧数是否大于设定阈值,若是,则执行S403;否则,执行S401。
S403、采集帧图像作为检测图像,并进行环境检测,确定目标检测结果。
S404、判断目标检测结果中是否包括人员检测框,若是,则执行S405;否则,执行S403。
S405、针对检测图像中的其中一个人员检测框，判断人员检测框的中心点是否位于预设撤离基准框内，若是，则执行S406；否则，执行S407。
S406、记录预设撤离基准框中的人员数量。
S407、判断检测图像中的人员检测框是否遍历完成,若遍历完成,则执行S408;否则,执行S405。需要说明的是,在未遍历完成返回执行S405时,针对目标检测结果中的其他人员检测框的个体,循环执行S405~S407。
S408、判断人员数量是否大于或等于第八预设阈值,若是,则执行S409;否则,执行S411。
S409、判断撤离时长是否大于预设撤离时长,若是,则执行S410,否则,执行S403。
S410、进行人员撤离报警。
S411、判断人员数量是否等于0,若是,则执行S412;否则,执行S403。
S412、判断第二预设帧数范围内的检测图像中的人员数量是否均为0，若是，
则执行S413;否则,执行S403。
S413、生成撤离结束信息。
在一些示例中，在扬尘报警之后，还包括指示牌检测。具体地，返回执行S101，在目标检测结果还包括指示牌检测框的情况下，根据指示牌检测框的定位信息和预设指示牌基准框的定位信息，确定指示牌检测框与预设指示牌基准框的匹配结果；若在第三预设帧数范围内，指示牌检测框与预设指示牌基准框均不匹配，则进行指示牌报警，并生成指示牌报警信息；指示牌报警信息包括预设采集区域内指示牌所在位置。
在扬尘报警之后,重新采集的检测图像,在输入到图像识别模型中进行目标检测,判断得到的目标检测结果是否包括指示牌检测框。指示牌检测框指示的位置为检测图像中的指示牌所在的位置。预设指示牌基准框指示的位置为现实场景下规定的指示牌所在位置。
判断指示牌检测框与预设指示牌基准框是否匹配,具体地,判断指示牌检测框与预设指示牌基准框的交并比是否大于或等于预设阈值,这里预设阈值可以根据经验设定,例如一种情况下可以设置预设阈值为1,也即交并比为1,则确定指示牌检测框与预设指示牌基准框完全匹配;或者,在允许存在检测误差的情况下,可以设置预设阈值为0.95,若交并比大于0.95,则确定指示牌检测框与预设指示牌基准框匹配;否则,不匹配。
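The matching test above reduces to an IoU comparison against a threshold; a small sketch, again reusing the `iou` helper, with 0.95 as the example threshold mentioned in the text.

```python
def sign_matches(sign_box, sign_reference_box, match_thresh=0.95):
    """A sign is considered in place when its detected box overlaps the
    preset sign reference frame with an IoU at or above the threshold."""
    return iou(sign_box, sign_reference_box) >= match_thresh
```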
若连续采集的多帧检测图像（也即第三预设帧数范围内的检测图像）的目标检测结果中，对应的指示牌检测框与预设指示牌基准框均不匹配，则进行指示牌报警。需要说明的是，若指示牌检测框与预设指示牌基准框的匹配结果指示指示牌检测框与预设指示牌基准框不匹配，可以认为是扬尘（或其它因素）导致指示牌翻倒（或偏移预设位置），由于指示牌翻倒，造成该地区施工过程没有指向性信息，增大了人员施工的危险性，因此在检测到指示牌检测框与预设指示牌基准框不匹配时，及时组织安全维护人员对指示牌翻倒情况进行处理，以确保翻倒的指示牌所对应区域内的人员安全。
本公开实施例不限定指示牌报警的形式，在检测图像中存在指示牌的情况下，若以文字消息的形式提醒用户，则生成的指示牌报警信息可以包括预设采集区域内不匹配的指示牌所在位置。
图3c为本公开实施例提供的指示牌检测的具体流程示意图。为了能够详细清楚地对本公开实施例提供的指示牌检测进行说明,基于上述实施例,下面通过S501~S507阐述指示牌报警的具体执行过程,如图3c所示。
S501、扬尘告警。
S502、采集帧图像作为检测图像,并进行环境检测,确定目标检测结果。
S503、判断目标检测结果中是否包括指示牌检测框，若是，则执行S504；若否，则执行S502。
S504、根据指示牌检测框的定位信息和预设指示牌基准框的定位信息,判断指示牌检测框与预设指示牌基准框是否匹配,若不匹配,则执行S505;否则,执行S502。
S505、记录不匹配的帧数。
S506、判断第三预设帧数范围内的检测图像中，指示牌检测框与预设指示牌基准框是否均不匹配，若是，则执行S507；否则，执行S502。
S507、进行指示牌报警,并生成指示牌报警信息。
本公开实施例还提供了一种训练图像识别模型的方法，具体地，其执行主体可以为上述实施例中用于执行一种环境检测报警方法的服务器，或者，也可以是单独的一种服务器，本公开实施例以执行环境检测报警方法的服务器为例进行说明，具体训练步骤如S601~S603，其中：
S601、获取预设采集区域的多帧样本图像,并为样本图像标注样本标签。
样本图像可以是不同时间节点下的图像信息,主要包括不同天气、不同光照情况下的视频图像。需要说明的是,样本图像可以是在线采集的预设采集区域内的图像,或者,也可以是预先存储的预设采集区域内的图像。
样本标签包括预设采集区域对应的至少一个基准框的位置信息,以及每个基准框的类别信息;类别信息包括天气类别、人员类别、指示牌类别和裸
露渣土类别中的一种。其中,天气类别可以包括雨、雪、雾、扬尘和晴等。
本公开实施例通过设置不同天气类别的样本标签对图像识别模型进行训练,能够避免雨、雪、雾天气出现而影响扬尘误检的情况,也即本公开实施例通过训练好的图像识别模块能够得到较为精准的扬尘检测结果,进而提高环境检测精度。
S602、根据样本图像和样本标签,对待训练的图像识别模型进行训练。
待训练的图像识别模型可以为基于图像识别技术yolov5的目标检测深度神经网络。图4为本公开实施例提供的一种图像识别模型的网络结构示意图，如图4所示，base detector为yolov5的主体网络，cls表示类别分支、reg表示预测框坐标回归分支、obj表示前景置信度分支，level表示扬尘等级预测分支。base detector为特征提取的过程，具体由多层卷积实现，输入预处理后的样本图像，输出为长度为5的特征图列表，也即[f1,f2,f3,f4,f5]。针对类别分支cls、预测框坐标回归分支reg和前景置信度分支obj，其中，原始头有三个，分别应用于f3,f4和f5，具体地，由一层1×1卷积实现，卷积输入通道数等于输入特征图的通道数，输出通道数=na×no，其中，na为设置原始头anchor数，取3；no=7+5，其中，7为目标类别数量，表示雨、雪、雾、扬尘、人员、指示牌和裸露渣土七个类别；5为预测框坐标回归的五个分量，也即[x,y,w,h,p]，其中，x表示预测框中心点的横坐标、y表示预测框中心点的纵坐标、w表示预测框的宽度、h表示预测框的高度、p表示预测框所属类别的概率（也即前景置信度）。针对扬尘等级分支level，由一层1×1卷积构成，输出通道数为4，也即[c1,c2,c3,c4]，其中，c1,c2,c3,c4分别表示扬尘等级为无扬尘、微弱扬尘、中等扬尘以及严重扬尘的概率。
本公开实施例在目标类别以及预测框坐标回归的基础上,由尺度最小的特征图f5处,引入扬尘等级的分支,能够得到预测框对应扬尘的扬尘等级。
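Following the channel counts given above (three anchors, 7 + 5 outputs per anchor on f3/f4/f5, and a 4-channel dust-level branch on f5), the extra heads could be sketched in PyTorch as 1×1 convolutions; layer and class names are illustrative, and the backbone producing f1–f5 is assumed to exist elsewhere.

```python
import torch.nn as nn

NUM_CLASSES = 7   # rain, snow, fog, dust, person, sign, exposed muck
NUM_ANCHORS = 3   # na
NUM_LEVELS = 4    # no dust / light / medium / severe dust

class DetectionHeads(nn.Module):
    """1x1-conv heads on f3/f4/f5 plus a dust-level branch on f5."""

    def __init__(self, ch_f3, ch_f4, ch_f5):
        super().__init__()
        out_ch = NUM_ANCHORS * (NUM_CLASSES + 5)          # na x no, no = 7 + 5
        self.head_f3 = nn.Conv2d(ch_f3, out_ch, kernel_size=1)
        self.head_f4 = nn.Conv2d(ch_f4, out_ch, kernel_size=1)
        self.head_f5 = nn.Conv2d(ch_f5, out_ch, kernel_size=1)
        self.level_branch = nn.Conv2d(ch_f5, NUM_LEVELS, kernel_size=1)

    def forward(self, f3, f4, f5):
        return (self.head_f3(f3), self.head_f4(f4),
                self.head_f5(f5), self.level_branch(f5))
```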
根据样本图像和样本标签，利用图4所示图像识别模型进行训练，得到n个预测框σ。其中第i个预测框σ包括目标类别的七个分量[b1,b2,b3,b4,b5,b6,b7]、预测框坐标回归的五个分量[xi,yi,wi,hi,pi]、以及扬尘等级的四个分量[c1,c2,c3,c4]。其中，b1,b2,b3,b4,b5,b6,b7分别为雨、雪、雾、扬尘、人员、指示牌和裸露渣土的类别概率；xi,yi,wi,hi分别表示第i个预测框的中心点的横坐标和纵坐标、以及第i个预测框的宽度和高度；pi表示预测类别为第i个预测框所属类别的概率（也即第i个检测框的前景置信度）；ci表示扬尘等级。
具体地可以根据概率b1,b2,b3,b4,b5,b6,b7的大小,判断第i个预测框所属类别,例如b1,b2,b3,b4,b5,b6,b7中b4最大,则确定第i个预测框为扬尘预测框;同理,若b5最大,则确定第i个预测框为人员预测框;若b6最大,则
确定第i个预测框为指示牌预测框;若b7最大,则确定第i个预测框为裸露渣土预测框。0<i≤n,n为大于或等于1的整数。
S603、构造加权损失值,通过对加权损失值进行加权反向传播以持续训练图像识别模型,直至加权损失值收敛,得到训练完成的图像识别模型。
采用本公开实施例提供的图像识别模型的训练方法,基于yolov5以及扬尘等级分支架构组成的模型,通过构造加权损失值进行反向加权传播,解决了部分目标区域检测不到或者检测错的问题,提升了模型的准确率。同时,利用图像识别模块能够直接得出扬尘等级的检测结果,实现了扬尘严重等级的预测。
针对S603训练图像识别模型,具体参见S603-1~S603-3,其中:
S603-1、获取图像识别模型输出的多个预测框、每个预测框的类别信息、每个预测框的前景置信度、以及类别信息指示扬尘类别的预测扬尘等级。
S603-2、遍历计算每个预测框与对应的基准框的交并比,分别得到每个预测框对应的第一损失值;遍历计算每个预测框的类别信息与预设类别标签之间的第二损失值;遍历计算每个预测框的前景置信度与基准前景置信度之间的第三损失值;遍历计算扬尘类别对应的扬尘预测框的预测扬尘等级与基准扬尘等级之间的第四损失值。
这里,基准框是预先设置的、每个类别对应的基准框,也即雨、雪、雾、扬尘类别、人员类别、指示牌类别和裸露渣土七大类别分别对应的基准框。
遍历计算每个预测框与对应的基准框的交并比。以第i个预测框为例,计算第i个预测框与其对应的基准框的重合面积S1。之后,根据重合面积S1,计算第i个预测框与其对应的基准框的交并比IOU2,也即IOU2=S1/(S2+S3-S1),其中S2表示第i个预测框的面积,S3表示对应的基准框的面积。之后,可以将IOU2作为第i个预测框的第一损失值Lreg。
遍历计算每个预测框的类别信息与预设类别标签之间的第二损失值Lcls，参照公式1：
Lcls = -[t·log t′ + (1-t)·log(1-t′)]…………………公式1
其中,t可以表示预设类别标签,也即预测框真实的类别信息;t′可以表示预测框预测的类别信息,也即模型输出值/预测值。
遍历计算每个预测框的前景置信度与基准前景置信度之间的第三损失值Lobj,可以参照公式1。需要说明的是,利用公式1计算第三损失值时,其中,t可以表示基准前景置信度,也即预测框真实的前景置信度;t′可以表示预测框预测的前景置信度,也即模型输出值/预测值。
遍历计算扬尘类别对应的扬尘预测框的预测扬尘等级与基准扬尘等级之间的第四损失值Llevel,可以参照公式1。需要说明的是,利用公式1计算第四损失值时,其中,t可以表示基准扬尘等级,也即预测框真实的扬尘等级;t′可以表示预测框预测的扬尘等级,也即模型输出值/预测值。
由于不同扬尘等级边界处的界限模糊，扬尘等级标签在训练过程中采用标签平滑（label smooth）方式，避免图像识别模型对于正确标签过于自信，减小正负样本预测值差距。标签平滑参见公式2：
t_smooth = (1-α)·t_onehot + α/K…………………公式2
其中，t_onehot表示扬尘等级的标签编码（也即无扬尘、微弱扬尘、中等扬尘以及严重扬尘的标签编码）；α为超参数；K为扬尘等级个数，本公开实施例中扬尘等级个数K为4，也即无扬尘、微弱扬尘、中等扬尘以及严重扬尘四个等级。
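Formula 1 is the standard binary cross-entropy, and formula 2 the standard label-smoothing transform consistent with the symbols defined above; the sketch below assumes PyTorch tensors, and the value α = 0.1 is only an example since the disclosure does not fix the hyperparameter.

```python
import torch

def bce_loss(t, t_pred, eps=1e-7):
    """Formula 1: binary cross-entropy between label t and prediction t'."""
    t_pred = t_pred.clamp(eps, 1.0 - eps)
    return -(t * torch.log(t_pred) + (1.0 - t) * torch.log(1.0 - t_pred)).mean()

def smooth_dust_level_labels(t_onehot, alpha=0.1, k=4):
    """Formula 2: label smoothing over the K = 4 dust levels."""
    return t_onehot * (1.0 - alpha) + alpha / k
```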
S603-3、将第一损失值、第二损失值、第三损失值和第四损失值之和作为总体损失值Ltotal,并根据总体损失值Ltotal进行反向传播,以持续训练图像识别模型。
其中,总体损失值Ltotal=Lobj+Lcls+Lreg+Llevel。
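The overall loss in the line above is a plain sum of the four terms; a training-step sketch might look as follows, where per-term weights could be added as coefficients if a weighted combination is desired, and where `model` and `optimizer` are assumed to be defined elsewhere.

```python
def total_loss(l_obj, l_cls, l_reg, l_level):
    """Overall loss Ltotal = Lobj + Lcls + Lreg + Llevel (equal weights)."""
    return l_obj + l_cls + l_reg + l_level

# One training step (optimizer/model assumed to exist):
#   loss = total_loss(l_obj, l_cls, l_reg, l_level)
#   optimizer.zero_grad()
#   loss.backward()   # backpropagate; repeat until the loss converges
#   optimizer.step()
```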
第二方面,基于同一发明构思,本公开实施例还提供了一种环境检测报警装置,图5为本公开实施例提供的环境检测报警装置的示意图,如图5所示,其包括采集模块51、目标检测模块52、报警分析模块53和数据存储模块54。
采集模块51,用于获取预设采集区域的视频流,并从视频流中依次采集帧图像作为检测图像。
目标检测模块52,用于将检测图像输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;在目标检测结果中包括扬尘检测框的情况下,根据扬尘检测框的定位信息,确定扬尘状态信息;
记录检测图像的扬尘状态信息和目标检测结果中的扬尘等级信息;将扬尘状态信息和扬尘等级信息,作为检测数据。
报警分析模块53,用于在根据检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警。
数据存储模块54,用于在根据检测数据与历史检测数据集判断出满足扬尘报警条件时,根据检测数据更新历史检测数据集。
在一些实施例中,所述扬尘状态信息包括表征存在扬尘的第一状态值和表征无扬尘的第二状态值;所述历史检测数据集中包括历史采集到的至少一帧历史检测图像对应的历史检测数据;
目标检测模块52在确定扬尘状态时,具体用于根据所述扬尘检测框的定位信息,判断所述扬尘检测框是否满足第一预设条件;若所述扬尘检测框满足所述第一预设条件,则确定所述扬尘状态信息为第一状态值;若所述扬尘检测框不满足所述第一预设条件,则确定所述扬尘状态信息为第二状态值;报警分析模块53包括报警条件判断单元,报警条件判断单元用于累计所述检测数据中的状态值和所述历史检测数据集中的各个所述历史检测数据中的状态值的总和,得到状态值总和;若状态值总和大于或等于第一预设阈值,确定满足扬尘报警条件。
在一些实施例中,所述历史检测数据集能够容纳不超过预设数量的历史检测数据;数据存储模块54,用于在所述历史检测数据集中的所述历史检测数据的数据量等于所述预设数量的情况下,从当前的所述历史检测数据集中剔除存储时间最早的一历史检测数据,并将所述检测数据作为新一历史检测数据,添加到所述历史检测数据集中。
在一些实施例中,报警分析模块53,还用于在根据所述历史检测数据集判断出未满足所述扬尘报警条件,且根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,将采集所述检测图像的时间作为扬尘开始时间,并生成扬尘报警信息。
在一些实施例中,所述检测数据包括所述扬尘等级信息指示的扬尘等级;
报警分析模块53,用于若预先设置的报警机制为实时报警,则根据所述检测数据中的扬尘等级和所述历史检测数据集中的扬尘等级,进行扬尘报警;若预先设置的报警机制为间隔报警,则在系统当前时间与进行扬尘报警后的最后一次报警时间的时间差值大于间隔报警时长的情况下,根据所述检测数据中的扬尘等级和所述历史检测数据集中的扬尘等级,进行扬尘报警。
在一些实施例中,报警分析模块53,还用于在进行报警之后,若累计预设数量所述历史检测数据中的状态值总和小于或等于第二预设阈值,确定所述预设采集区域扬尘结束,并记录扬尘结束时间。
在一些实施例中,所述第一预设条件包括所述扬尘检测框的区域面积大于或等于第三预设阈值;和/或,所述扬尘检测框与预设扬尘基准框之间的交并比大于或等于第四预设阈值。
在一些实施例中,环境检测报警装置还包括裸露渣土报警模块55,用于在进行扬尘报警之后,在所述目标检测结果还包括裸露渣土检测框的情况下,根据所述裸露渣土检测框的定位信息,判断所述裸露渣土检测框是否满足第二预设条件;若在第一预设帧数范围内,所述裸露渣土检测框均满足所述第二预设条件,则进行裸露渣土报警,并生成裸露渣土报警信息;所述裸露渣土报警信息包括所述预设采集区域内裸露渣土所在位置。
在一些实施例中,所述第二预设条件包括所述裸露渣土检测框的数量大于或等于第五预设阈值;和/或,所述裸露渣土检测框的区域面积大于或等于第六预设阈值;和/或,所述裸露渣土检测框与预设裸露渣土基准框之间的交并比大于或等于第七预设阈值。
在一些实施例中,环境检测报警装置还包括人员撤离报警模块56,用于在进行扬尘报警之后,在扬尘报警指示的扬尘等级达到预设扬尘等级时,发送人员撤离所述预设采集区域的指令;响应于接收到跟踪人员的指令,返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括人员检测框的情况下,根据所述人员检测框的定位信息和预设撤离基准框的定位信息,确定在所述预设撤离基准框中的人员数量;在所述人员数量大于或等于第八预设阈值,
且撤离时长大于预设撤离时长的情况下,则进行人员撤离报警;所述撤离时长为系统的当前时间与撤离开始时间的差值;所述撤离开始时间为响应于接收到跟踪人员的指令的时间。
在一些实施例中,环境检测报警装置还包括指示牌报警模块57,用于在进行扬尘报警之后,在所述目标检测结果还包括指示牌检测框的情况下,根据所述指示牌检测框的定位信息和预设指示牌基准框的定位信息,确定所述指示牌检测框与所述预设指示牌基准框的匹配结果;若在第三预设帧数范围内,所述指示牌检测框与所述预设指示牌基准框均不匹配,则进行指示牌报警,并生成指示牌报警信息;所述指示牌报警信息包括所述预设采集区域内指示牌所在位置。
在一些实施例中,环境检测报警装置还包括模型训练模块58,用于训练图像识别模型。模型训练模块58具体用于获取预设采集区域的多帧样本图像,并为所述样本图像标注样本标签;所述样本标签包括所述预设采集区域对应的至少一个基准框的位置信息,以及每个所述基准框的类别信息;所述类别信息包括天气类别、人员类别、指示牌类别和裸露渣土类别中的一种;根据所述样本图像和所述样本标签,对待训练的图像识别模型进行训练;构造加权损失值,通过对所述加权损失值进行加权反向传播以持续训练所述图像识别模型,直至所述加权损失值收敛,得到训练完成的图像识别模型。
第三方面,图6为本公开实施例的一种计算机设备的结构示意图。如图6所示,本公开实施例提供一种计算机设备包括:一个或多个处理器61、存储器62、一个或多个I/O接口63。存储器62上存储有一个或多个程序,当该一个或多个程序被该一个或多个处理器执行,使得该一个或多个处理器实现如上述实施例中任一的环境检测报警方法;一个或多个I/O接口63连接在处理器与存储器之间,配置为实现处理器与存储器的信息交互。
其中,处理器61为具有数据处理能力的器件,其包括但不限于中央处理器(CPU)等;存储器62为具有数据存储能力的器件,其包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH);I/O接口(读写
接口)63连接在处理器61与存储器62间,能实现处理器61与存储器62的信息交互,其包括但不限于数据总线(Bus)等。
在一些实施例中,处理器61、存储器62和I/O接口63通过总线64相互连接,进而与计算设备的其它组件连接。
根据本公开的实施例,还提供一种非瞬态计算机可读介质。该非瞬态计算机可读介质上存储有计算机程序,其中,该程序被处理器执行时实现如上述实施例中任一的环境检测报警方法中的步骤。
特别地,根据本公开实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在机器可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理单元(CPU)执行时,执行本公开的系统中限定的上述功能。
需要说明的是,本公开所示的非瞬态计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何非瞬态计算机可读介质,该非瞬态计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用
或者与其结合使用的程序。非瞬态计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。
附图中的流程图和框图,图示了按照本公开各种实施例的装置、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,前述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的电路或子电路可以通过软件的方式实现,也可以通过硬件的方式来实现。所描述的电路或子电路也可以设置在处理器中,例如,可以描述为:一种处理器,包括:接收电路和处理电路,该处理模块包括写入子电路和读取子电路。其中,这些电路或子电路的名称在某种情况下并不构成对该电路或子电路本身的限定,例如,接收电路还可以被描述为“接收视频信号”。
可以理解的是,以上实施方式仅仅是为了说明本公开的原理而采用的示例性实施方式,然而本公开并不局限于此。对于本领域内的普通技术人员而言,在不脱离本公开的精神和实质的情况下,可以做出各种变型和改进,这些变型和改进也视为本公开的保护范围。
Claims (15)
- 一种环境检测报警方法,其中,包括:获取预设采集区域的视频流,并从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;在所述目标检测结果中包括扬尘检测框的情况下,根据所述扬尘检测框的定位信息,确定扬尘状态信息;将所述扬尘状态信息和所述扬尘等级信息,作为检测数据;在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警;在根据所述检测数据与所述历史检测数据集判断出满足扬尘报警条件时,根据所述检测数据更新所述历史检测数据集,并返回执行所述从所述视频流中依次采集帧图像作为检测图像的步骤。
- 根据权利要求1所述的环境检测报警方法,其中,所述扬尘状态信息包括表征存在扬尘的第一状态值和表征无扬尘的第二状态值;所述历史检测数据集中包括历史采集到的至少一帧历史检测图像对应的历史检测数据;所述根据所述扬尘检测框的定位信息,确定扬尘状态信息,包括:根据所述扬尘检测框的定位信息,判断所述扬尘检测框是否满足第一预设条件;若所述扬尘检测框满足所述第一预设条件,则确定所述扬尘状态信息为第一状态值;若所述扬尘检测框不满足所述第一预设条件,则确定所述扬尘状态信息为第二状态值;根据所述检测数据与历史检测数据集判断是否满足扬尘报警条件,包括:累计所述检测数据中的状态值和所述历史检测数据集中的各个所述历史检测数据中的状态值的总和,得到状态值总和;若所述状态值总和大于或等于第一预设阈值,确定满足扬尘报警条件。
- 根据权利要求1或2所述的环境检测报警方法,其中,所述历史检测数据集能够容纳不超过预设数量的历史检测数据;所述根据所述检测数据更新所述历史检测数据集,包括:在所述历史检测数据集中的所述历史检测数据的数据量等于所述预设数量的情况下,从当前的所述历史检测数据集中剔除存储时间最早的一历史检测数据,并将所述检测数据作为新一历史检测数据,添加到所述历史检测数据集中。
- 根据权利要求1所述的环境检测报警方法,其中,还包括:在根据所述历史检测数据集判断出未满足所述扬尘报警条件,且根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,将采集所述检测图像的时间作为扬尘开始时间,并生成扬尘报警信息。
- 根据权利要求1所述的环境检测报警方法,其中,所述检测数据包括所述扬尘等级信息指示的扬尘等级;所述在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警,包括:若预先设置的报警机制为实时报警,则根据所述检测数据中的扬尘等级和所述历史检测数据集中的扬尘等级,进行扬尘报警;若预先设置的报警机制为间隔报警,则在系统当前时间与进行扬尘报警后的最后一次报警时间的时间差值大于间隔报警时长的情况下,根据所述检测数据中的扬尘等级和所述历史检测数据集中的扬尘等级,进行扬尘报警。
- 根据权利要求2所述的环境检测报警方法,其中,在进行报警之后,还包括:若累计预设数量所述历史检测数据中的状态值总和小于或等于第二预设阈值,确定所述预设采集区域扬尘结束,并记录扬尘结束时间。
- 根据权利要求2所述的环境检测报警方法,其中,所述第一预设条件包括所述扬尘检测框的区域面积大于或等于第三预设阈值;和/或,所述扬尘检测框与预设扬尘基准框之间的交并比大于或等于第四预设阈值。
- 根据权利要求1所述的环境检测报警方法,其中,在进行扬尘报警之后,还包括:返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括裸露渣土检测框的情况下,根据所述裸露渣土检测框的定位信息,判断所述裸露渣土检测框是否满足第二预设条件;若在第一预设帧数范围内,所述裸露渣土检测框均满足所述第二预设条件,则进行裸露渣土报警,并生成裸露渣土报警信息;所述裸露渣土报警信息包括所述预设采集区域内裸露渣土所在位置。
- 根据权利要求8所述的环境检测报警方法,其中,所述第二预设条件包括所述裸露渣土检测框的数量大于或等于第五预设阈值;和/或,所述裸露渣土检测框的区域面积大于或等于第六预设阈值;和/或,所述裸露渣土检测框与预设裸露渣土基准框之间的交并比大于或等于第七预设阈值。
- 根据权利要求1所述的环境检测报警方法,其中,在进行扬尘报警之后,还包括:在扬尘报警指示的扬尘等级达到预设扬尘等级时,发送人员撤离所述预设采集区域的指令;响应于接收到跟踪人员的指令,返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括人员检测框的情况下,根据所述人员检测框的定位信息和预设撤离基准框的定位信息,确定在所述预设撤离基准框中的人员数量;在所述人员数量大于或等于第八预设阈值,且撤离时长大于或等于预设撤离时长的情况下,则进行人员撤离报警;所述撤离时长为系统的当前时间与撤离开始时间的差值;所述撤离开始时间为响应于接收到跟踪人员的指令的时间。
- 根据权利要求1所述的环境检测报警方法,其中,在进行扬尘报警之后,还包括:返回执行从所述视频流中依次采集帧图像作为检测图像,输入至图像识别模型中,得到目标检测结果的步骤,在所述目标检测结果还包括指示牌检测框的情况下,根据所述指示牌检测框的定位信息和预设指示牌基准框的定位信息,确定所述指示牌检测框与所述预设指示牌基准框的匹配结果;若在第三预设帧数范围内,所述指示牌检测框与所述预设指示牌基准框均不匹配,则进行指示牌报警,并生成指示牌报警信息;所述指示牌报警信息包括所述预设采集区域内指示牌所在位置。
- 根据权利要求1所述的环境检测报警方法,其中,训练所述图像识别模型的步骤包括:获取预设采集区域的多帧样本图像,并为所述样本图像标注样本标签;所述样本标签包括所述预设采集区域对应的至少一个基准框的位置信息,以及每个所述基准框的类别信息;所述类别信息包括天气类别、人员类别、指示牌类别和裸露渣土类别中的一种;根据所述样本图像和所述样本标签,对待训练的图像识别模型进行训练;构造加权损失值,通过对所述加权损失值进行加权反向传播以持续训练所述图像识别模型,直至所述加权损失值收敛,得到训练完成的图像识别模型。
- 一种环境检测报警装置,其包括采集模块、目标检测模块、报警分析模块和数据存储模块;所述采集模块,用于获取预设采集区域的视频流,并从所述视频流中依次采集帧图像作为检测图像;所述目标检测模块,用于将所述检测图像输入至图像识别模型中,得到目标检测结果,其中,所述目标检测结果包括扬尘等级信息;在所述目标检测结果中包括扬尘检测框的情况下,根据所述扬尘检测框的定位信息,确定扬尘状态信息;将所述扬尘状态信息和所述扬尘等级信息,作为检测数据;所述报警分析模块,用于在根据所述检测数据与历史检测数据集判断出满足扬尘报警条件时,进行扬尘报警;所述数据存储模块,用于在根据所述检测数据与所述历史检测数据集判断出满足扬尘报警条件时,根据所述检测数据更新所述历史检测数据集。
- 一种计算机设备,其中,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至12中任一所述的环境检测报警方法的步骤。
- 一种计算机非瞬态可读存储介质,其中,该计算机非瞬态可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至12中任一项所述的环境检测报警方法的步骤。
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210906240.2 | 2022-07-29 | | |
| CN202210906240.2A (CN115272656A) | 2022-07-29 | 2022-07-29 | 环境检测报警方法、装置、计算机设备及存储介质 |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2024022059A1 (zh) | 2024-02-01 |

Family ID: 83770932

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/105840 (WO2024022059A1) | | 2022-07-29 | 2023-07-05 |

Country Status (2)

| Country | Link |
|---|---|
| CN | CN115272656A (zh) |
| WO | WO2024022059A1 (zh) |