CN112580600A - Dust concentration detection method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112580600A
CN112580600A
Authority
CN
China
Prior art keywords
dust
image
detection
target
dust concentration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011607972.9A
Other languages
Chinese (zh)
Inventor
杨文博
姜来福
左来宝
马光辉
穆霄刚
齐若宇
陈贵林
薛森
马少华
孙远
马君
李伟
贾宁
张建亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qinhuangdao Yanda Binyuan Technology Development Co ltd
Shenhua Huanghua Port Co Ltd
Original Assignee
Qinhuangdao Yanda Binyuan Technology Development Co ltd
Shenhua Huanghua Port Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qinhuangdao Yanda Binyuan Technology Development Co ltd, Shenhua Huanghua Port Co Ltd filed Critical Qinhuangdao Yanda Binyuan Technology Development Co ltd
Priority to CN202011607972.9A priority Critical patent/CN112580600A/en
Publication of CN112580600A publication Critical patent/CN112580600A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The application relates to a dust concentration detection method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a dust detection image; inputting the dust detection image into a pre-trained dust labeling model and acquiring a target detection frame output by the dust labeling model, the target detection frame being used for marking the position of the dust; and calculating the transmittance of the target image within the target detection frame in the dust detection image, and determining the dust concentration according to the target image transmittance. With this method, dust features can be extracted and the dust concentration determined automatically, without manual intervention, avoiding the laborious work of extracting features by hand; detection efficiency and accuracy are thereby improved, detection cost is reduced, and operation is convenient.

Description

Dust concentration detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of dust detection technologies, and in particular, to a method and an apparatus for detecting dust concentration, a computer device, and a storage medium.
Background
Unlike container terminals, bulk cargo terminals generate a large amount of dust during loading and unloading operations with a ship loader, and chute unloading, the last link of the loading operation, is the main source of this dust. With growing environmental protection demands, environmental protection departments have made the dust generated during loading and unloading a key target of air pollution control. In particular, for bulk cargo wharfs at ports in the Beijing-Tianjin-Hebei region, if environmental protection requirements are not met, the wharfs risk having production suspended or even being shut down.
In order to ensure the efficiency and economic benefit of the dust removal process, the dust needs to be detected effectively, and the distribution and concentration level of the dust must be judged accurately. However, the traditional approach can only detect the dust concentration manually, which makes detection inefficient.
Disclosure of Invention
In view of the above, it is necessary to provide a dust concentration detection method, apparatus, computer device, and storage medium capable of improving detection efficiency and reducing cost.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a dust concentration detection method, including the following steps:
acquiring a dust detection image; inputting the dust detection image into a pre-trained dust labeling model, and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust; and calculating the transmissivity of a target image in a target detection frame in the dust detection image, and determining the dust concentration according to the transmissivity of the target image.
In one embodiment, before inputting the dust detection image into the pre-trained dust labeling model, the method further comprises the following steps: acquiring data of each sample; the sample data comprises a sample image and a detection frame corresponding to the position of the dust; and training the YOLOv3-SPP model through each sample data to obtain a dust labeling model.
In one embodiment, the step of training the YOLOv3-SPP model with each sample data to obtain a dust labeling model includes: and inputting the data of each sample into a YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain a dust labeling model.
In one embodiment, the step of determining the dust concentration from the target image transmittance comprises: and acquiring a mapping relation between the image transmittance and the dust concentration, and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
In one embodiment, the step of obtaining the mapping relationship between the image transmittance and the dust concentration includes: acquiring a plurality of test data; the test data comprises a test image and a corresponding dust concentration; and inputting the test image of the test data into the dust labeling model aiming at each test data to obtain a test detection frame of the test image, calculating the transmittance of the test image in the test detection frame in the test image, and generating a mapping relation according to the transmittance of the test image and the dust concentration of the test data.
In one embodiment, the method further comprises the steps of: and confirming the dust detection image as a target detection image when the dust concentration is greater than a concentration threshold value, and alarming when the number of the target detection images is greater than an alarm threshold value.
In one embodiment, the step of calculating the transmittance of the target image in the target detection frame in the dust detection image includes: processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame; and acquiring a gray-scale image of the dust detection image, and performing guide filtering on the estimated transmissivity by taking the gray-scale image as a guide image to obtain the transmissivity of the target image.
In one embodiment, the step of processing the dust detection image by using a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame includes: acquiring a dark primary color channel image of the dust detection image, and selecting each target pixel point with the maximum brightness value from the dark primary color channel image according to a pixel selection ratio; respectively determining the corresponding gray value of each target pixel point in the dust detection image, and determining the maximum gray value in each gray value as a global atmospheric light value; the estimated transmittance is obtained based on the following formula:
t(x) = 1 − ω · min_{y∈Ω(x)} min_{c∈{R,G,B}} ( I^c(y) / A )
where x is a spatial coordinate; I is the dust detection image; t(x) is the estimated transmittance of the pixel at spatial coordinate x in the dust detection image; ω is an adjustment parameter; Ω(x) is the window centered on the pixel at spatial coordinate x; y is a pixel within the window; c is any one of the three RGB channels; and A is the global atmospheric light value.
In a second aspect, an embodiment of the present application provides a dust detection apparatus, which includes an image acquisition module, an image labeling module, and a dust concentration determination module. The image acquisition module is used for acquiring a dust detection image. The image labeling module is used for inputting the dust detection image into a pre-trained dust labeling model and acquiring a target detection frame output by the dust labeling model, the target detection frame being used for marking the position of the dust. The dust concentration determination module is used for calculating the transmittance of the target image within the target detection frame in the dust detection image and determining the dust concentration according to the target image transmittance.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor, and the processor implements the steps of the method in any of the above embodiments when executing a computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the method in any one of the above embodiments.
In the dust concentration detection method, the dust concentration detection device, the computer equipment and the storage medium, the dust detection image is marked by adopting a pre-trained dust marking model, a target detection frame which is output by the model and used for marking the dust position is obtained, the transmissivity of the target image in the target detection frame in the dust detection image is calculated, and the dust concentration is determined according to the transmissivity of the target image. Therefore, the dust characteristics can be automatically extracted, the dust concentration can be automatically determined, manual intervention processing is not needed, the complex work of manually extracting the characteristics is avoided, the detection efficiency and the accuracy rate can be improved, the detection cost is reduced, and the dust concentration detection device has the advantage of convenience in operation.
Drawings
FIG. 1 is a diagram showing an environment where a dust concentration detection method is applied in one embodiment;
FIG. 2 is a schematic view of a first process of a dust concentration detection method according to an embodiment;
FIG. 3 is a schematic view showing a second flow of a dust concentration detection method according to an embodiment;
FIG. 4 is a flowchart illustrating obtaining a mapping relationship according to an embodiment;
FIG. 5 is a schematic diagram of a process for obtaining transmissivity of a target image in one embodiment;
FIG. 6 is a schematic flow chart illustrating obtaining an estimated transmittance in one embodiment;
FIG. 7 is a block diagram showing the structure of a dust concentration detection apparatus according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As mentioned in the background, the traditional approach detects the dust concentration manually, which makes detection inefficient. Under this constraint, traditional ways of handling fugitive dust at bulk cargo wharfs fall into two main types: one is to keep the dust removal device running throughout the operation, which yields poor economic benefit; the other is to decide manually when to turn the dust removal device on or off, which incurs high labor cost and hinders the automation upgrade of the port.
Therefore, there is a need for a dust concentration detection method, device, computer equipment and storage medium that detect the dust concentration automatically, so as to improve detection efficiency and accuracy, reduce detection cost, and remain convenient to operate. On the basis of such a detection method, the start and stop of the dust removal device can be controlled according to the automatically detected dust concentration during dust treatment, improving economic benefit and reducing cost.
The dust concentration detection method provided by the present application can be applied to a video monitoring system as shown in fig. 1. The system includes an NVR 110 (Network Video Recorder), a server 120, a network card 130, a first switch 140, a plurality of second switches 150, and a plurality of camera devices 160. The first switch 140 is the ship loader video network switch; each second switch 150 is a dock switch, arranged on a different dock; each camera device 160 can be mounted at a different position on the ship loader according to the dust monitoring demand, so as to monitor the wharf dust comprehensively. Taking the chute unloading process of a bulk cargo wharf as an example, when the video monitoring system is built, the type and installation position of the camera device 160 need to be determined so that images can be captured accurately. Due to the unique operating environment of the bulk terminal, the type of the camera device 160 and its mounting position on the ship loader have a certain influence on the detection result.
Therefore, to ensure detection accuracy, the selection of the camera device 160 generally needs to satisfy the following conditions: (1) the ship loader vibrates during operation, which can blur the captured image, so the camera device 160 needs strong shock resistance; (2) since it is installed outdoors and the weather is variable, the camera device 160 needs to be strongly waterproof; (3) the operating temperature range of the camera device 160 should cover -30 to 60 degrees Celsius; (4) the bulk cargo terminal has many high-power electrical devices, and strong electromagnetic interference can affect imaging, so the camera device 160 needs strong resistance to electromagnetic interference; (5) the camera device 160 should have low sensitivity to changes in illumination intensity. Furthermore, to obtain as good a viewing angle as possible, the camera device 160 may be mounted vertically on the chute platform, roughly parallel to the chute and aligned with the chute mouth. In this way it has a large field of view and can center its detection on the dust source at the chute mouth.
In one embodiment, as shown in fig. 2, a dust concentration detection method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
and step S210, acquiring a dust detection image.
Specifically, the server can collect operation videos of the bulk cargo wharf in different scenes, both in the daytime and at night, intercept video frames from the operation videos, and use these video frames as dust detection images to obtain the corresponding dust concentrations. In one embodiment, the video frames may be extracted with the software Free Video to JPG Converter.
Step S220, inputting the dust detection image into a pre-trained dust labeling model, and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust.
The dust labeling model determines the position of the dust in the input image and labels that position with a detection frame. The model may output the detection frame in various forms; the present application does not specifically limit this, as long as the detection frame labels the dust position. For example, the dust labeling model may output only the detection frame, which is then superimposed on the input image to complete the labeling; alternatively, the model may synthesize the input image and the detection frame into a single output image. The target detection frame marks the position of the dust in the dust detection image, and the region it frames shows the dust area in the image.
Specifically, the dust labeling model is obtained by pre-training, and when the dust detection image is input into the dust labeling model, the dust labeling model can output a target detection frame corresponding to the position of dust in the dust detection image. In one embodiment, as shown in fig. 3, before step S220, the method may further include the steps of:
step S310, obtaining each sample data; the sample data includes a sample image and a detection box corresponding to the position of the dust.
Specifically, the server can collect operation videos in different scenes in the daytime and at night, and intercept the required video frames with the software Free Video to JPG Converter to obtain a plurality of sample images, which together form a sample gallery. To save computation, images that include dust may be selected as the sample images. After the sample images are determined, they are annotated: the positions of the dust in the sample images are marked with detection frames, generating the sample data. In one example, the sample data may be in the VOC2007 format.
And step S320, training the YOLOv3-SPP model through each sample data to obtain a dust labeling model.
Specifically, a dust labeling model is obtained by training the pre-constructed YOLOv3-SPP model with the sample data. Since dust can be regarded as flying dust, which is a fluid with no fixed shape or size, the detection capability of the YOLOv3 algorithm needs to be strengthened as much as possible. By adding an SPP (Spatial Pyramid Pooling) module to the YOLOv3 convolutional neural network, 5 × 5, 9 × 9 and 13 × 13 maximum pooling can be performed on layer 77, yielding layer 78, layer 80 and layer 82 respectively. Layer 77, layer 78, layer 80 and layer 82 are then concatenated to obtain the feature map layer 84, whose dimensionality is reduced to 512 channels through a 1 × 1 convolution, giving the dust labeling model.
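The pooling-and-concatenation step described above can be sketched with stride-1, same-size max pooling over a feature map. This is only an illustrative numpy/scipy sketch of the SPP idea; the surrounding convolutions, layer numbering and training framework of the patent are not reproduced here:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def spp_block(feature_map):
    """Concatenate the input with its 5x5, 9x9 and 13x13 stride-1 max
    poolings, mimicking the SPP module inserted after layer 77; a 1x1
    convolution (not shown) would then reduce the channels back to 512."""
    # feature_map: (channels, H, W)
    pooled = [feature_map]
    for k in (5, 9, 13):
        # same-padding max pooling over the spatial dimensions only
        pooled.append(maximum_filter(feature_map, size=(1, k, k), mode="nearest"))
    return np.concatenate(pooled, axis=0)   # channel count grows 4x

fm = np.random.rand(512, 19, 19)
out = spp_block(fm)
print(out.shape)   # (2048, 19, 19)
```

Because the pooling keeps the spatial size, local features (the raw map) and increasingly global context (the larger pooling windows) are fused channel-wise at no extra learned cost.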
In this embodiment, the SPP module has a simple structure, and training the YOLOv3-SPP model to obtain the dust labeling model effectively fuses local and global features at the feature-map level without increasing the amount of computation, enriching the expressive power of the final feature map and thereby improving the mean average precision (mAP).
In one embodiment, the step of training the YOLOv3-SPP model with each sample data to obtain a dust labeling model includes: and inputting the data of each sample into a YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain a dust labeling model.
Specifically, the regression loss function of the YOLOv3 algorithm adopts the Complete Intersection over Union loss (CIoU Loss). The CIoU loss function L_CIoU is calculated as:
L_CIoU = 1 − IoU + ρ²(b, b^gt) / c² + αv
v = (4 / π²) · (arctan(w^gt / h^gt) − arctan(w / h))²
α = v / ((1 − IoU) + v)
the IOU is a cross-over ratio and is used for representing a prediction detection frame b and a real detection frame bgtThe degree of overlap of (c); rho2(b,bgt) For predicting the center point of the detection frame b and the real detection frame bgtThe square of the distance of the center point of (a); c. C2To just contain the prediction detection frame b and the real detection frame bgtThe square of the length of the diagonal of the minimum detection box of (1); v is used to measure the aspect ratio of the prediction detection box b and the real detection box bgtSimilarity of aspect ratios of (a); α is a weight coefficient, related to v; w is agtFor true detection of frame bgtThe width of (d); h isgtFor true detection of frame bgtThe height of (d); w is the width of the prediction detection frame b; h is the height of the prediction detection box b.
In this embodiment, the YOLOv3-SPP model is trained by using the CIoU loss function as the regression loss function, so that the predicted detection frame can better conform to the real detection frame, and the detection accuracy can be improved.
Further, since a large number of stacked detection frames are likely to occur in dust detection, redundant predicted detection frames can be suppressed with DIoU-NMS. DIoU-NMS uses DIoU (Distance Intersection over Union) as the criterion of NMS (Non-Maximum Suppression). In standard NMS, when several detection frames are very close in spatial position, the frame with the highest score is taken as the reference, and any other frame whose intersection over union with it exceeds a threshold is discarded. DIoU additionally considers the distance between the center points of the two detection frames on top of the IoU: when the center distance is large enough, both detection frames are retained. DIoU is calculated as follows:
R_DIoU = ρ²(b, b^gt) / c²
DIoU = IoU − R_DIoU
where b is the center point of the lower-scoring detection frame; b^gt is the center point of the highest-scoring detection frame; ρ²(b, b^gt) is the squared Euclidean distance between the two center points; c is the diagonal length of the smallest closure region that can contain both detection frames; and IoU is the intersection over union.
In this embodiment, the YOLOv3-SPP model is trained in combination with DIoU-NMS, so that redundant predicted detection frames are suppressed, the amount of later data processing is reduced, and detection efficiency is improved.
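The greedy suppression rule described above can be sketched as follows; the (x1, y1, x2, y2) box format and the 0.45 threshold are illustrative assumptions:

```python
def diou(box_a, box_b):
    """DIoU between two (x1, y1, x2, y2) boxes: IoU minus the squared
    centre distance normalised by the squared enclosing diagonal."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    rho2 = (cxa - cxb) ** 2 + (cya - cyb) ** 2
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - rho2 / c2

def diou_nms(boxes, scores, threshold=0.45):
    """Keep the highest-scoring box, drop every remaining box whose DIoU
    with it reaches the threshold, and repeat on what is left."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if diou(boxes[best], boxes[i]) < threshold]
    return keep
```

Because the centre-distance term pulls the criterion down for boxes that overlap but sit far apart, two dust regions whose frames merely touch are both retained, while near-duplicate frames over the same dust cloud are suppressed.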
And step S230, calculating the transmissivity of the target image in the target detection frame in the dust detection image, and determining the dust concentration according to the transmissivity of the target image.
The transmittance, an indicator of the dust concentration, is the percentage ratio of the luminous flux transmitted through the dusty environment to the incident luminous flux, and describes the blocking effect of the dust particles. The smaller the image transmittance, the more incident light is absorbed, i.e., the greater the dust concentration; the greater the image transmittance, the less incident light is absorbed, i.e., the smaller the dust concentration. The image transmittance can therefore represent the dust concentration and enables a vision-based measurement of it.
Specifically, after each target detection frame output by the dust labeling model is obtained, the transmittance in the region framed by each target detection frame in the dust detection image, that is, the transmittance of the target image, can be calculated. In one embodiment, the step of determining the dust concentration from the target image transmittance comprises: and acquiring a mapping relation between the image transmittance and the dust concentration, and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
Since the transmittance has a definite correspondence with the dust concentration, a mapping relationship between image transmittance and dust concentration can be determined in advance, and the dust concentration in the dust detection image can then be determined from the target image transmittance and the mapping relationship. This further improves detection efficiency and accuracy. In one example, the dust concentration can be divided into several levels, enabling real-time monitoring of the dust concentration level. During real-time detection, the number of target detection frames and the distances between them can serve as additional cues for judging the dust concentration, and are combined with the target image transmittance to determine the dust concentration level.
In one embodiment, as shown in fig. 4, the step of obtaining the mapping relationship between the image transmittance and the dust concentration includes:
step S410, acquiring a plurality of test data; the test data comprises a test image and a corresponding dust concentration;
step S420, inputting the test image of the test data into the dust labeling model according to each test data to obtain a test detection frame of the test image, calculating the transmittance of the test image in the test detection frame in the test image, and generating a mapping relation according to the transmittance of the test image and the dust concentration of the test data.
Each test image can cover different scenes and environments. It should be noted that the test images used to determine the mapping relationship need not be drawn from the sample images collected for training.
Specifically, for each test datum, the corresponding test image is input into the dust labeling model, a test detection frame corresponding to the test image is obtained, and the test image transmittance in the test detection frame is calculated. And testing the real dust concentration in a scene or environment corresponding to the test image by using a laser dust concentration detector, and generating a mapping relation between the dust concentration and the transmissivity of the test image. By performing the foregoing processing on each test data, a mapping relationship between the dust concentration and the transmittance can be determined from each dust concentration and each test image transmittance. Furthermore, the mapping relation between the dust concentration and the image transmittance in different environments can be obtained by applying test data of multiple environments, so that the detection accuracy is improved.
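One simple way to realize such a transmittance-to-concentration mapping is piecewise-linear interpolation over the calibration pairs. The numbers below are purely hypothetical placeholders for values that would come from the laser dust-concentration meter, not measurements from the patent:

```python
import numpy as np

# Hypothetical calibration pairs: each test image's transmittance paired with
# the dust concentration measured on site (units here assumed to be mg/m^3).
calib_transmittance = np.array([0.30, 0.50, 0.70, 0.90])
calib_concentration = np.array([8.0, 4.5, 2.0, 0.5])

def concentration_from_transmittance(t):
    """Interpolate the dust concentration for a target image transmittance.
    np.interp needs ascending x-values, and transmittance rises as the
    concentration falls, so the arrays above are sorted by transmittance;
    values outside the calibrated range are clamped to the end points."""
    return float(np.interp(t, calib_transmittance, calib_concentration))
```

Separate calibration tables could be kept per scene or environment, matching the document's note that mappings obtained under multiple environments improve accuracy.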
In one embodiment, as shown in fig. 5, the step of calculating the transmittance of the target image in the target detection frame in the dust detection image includes:
step S510, processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmissivity in the target detection frame;
and step S520, acquiring a gray-scale image of the dust detection image, and performing guide filtering on the estimated transmissivity by taking the gray-scale image as a guide image to obtain the transmissivity of the target image.
It is understood that the process of calculating the transmittance of the target image in the present embodiment can also be used to calculate the transmittance of the test image. The dark channel defogging algorithm is a classic algorithm in the defogging algorithm field, and the algorithm for estimating and calculating the transmittance is high in feasibility, simple in principle and small in calculation amount. The estimated transmittance can be calculated using the following formula when processing the dust detection image:
t(x) = 1 − ω · min_{y∈Ω(x)} min_{c∈{R,G,B}} ( I^c(y) / A )
where x is a spatial coordinate; I is the dust detection image (or test image); t(x) is the estimated transmittance of the pixel at spatial coordinate x in image I; ω is an adjustment parameter that can be set according to the scene, environment and required detection precision of the dust detection image, for example 0.95; Ω(x) is the window centered on the pixel at spatial coordinate x, whose side length can be chosen according to the detection requirement, for example 15 pixels; y is a pixel within the window; c is any one of the three RGB channels; and A is the global atmospheric light value.
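The estimate above is a per-channel normalisation followed by a channel minimum and a windowed minimum, which maps directly onto array operations. This is a sketch under the stated example parameters (ω = 0.95, 15-pixel window); the function and argument names are assumptions:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimated_transmittance(img, a_light, omega=0.95, window=15):
    """Dark-channel transmittance estimate
    t(x) = 1 - omega * min over the window and RGB channels of I^c(y) / A^c.
    img: (H, W, 3) RGB array scaled to [0, 1]; a_light: atmospheric light,
    broadcast per channel (a scalar grey value also works)."""
    norm = img / a_light                       # I^c(y) / A^c
    dark = norm.min(axis=2)                    # minimum over the channels
    local_min = minimum_filter(dark, size=window, mode="nearest")
    return 1.0 - omega * local_min             # t(x) per pixel
```

The transmittance of the target image is then the portion of this map inside the target detection frame, after the guided-filter refinement described next in the document.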
After the estimated transmittance within the target detection frame is obtained through the dark channel defogging algorithm, the gray-scale image of the original input image (i.e. the dust detection image) is used as the guide image, and guided filtering is applied to the estimated transmittance to obtain the transmittance within the target detection frame region, i.e. the target image transmittance.
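The guided-filtering step can be sketched with a minimal grey-scale guided filter in the spirit of He et al., built from box (mean) filters; the radius and regularisation eps below are illustrative choices, not values given in the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Smooth `src` (the estimated transmittance map) while following the
    edges of `guide` (the grey-scale dust detection image). Both inputs are
    2-D float arrays of the same shape."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)       # local linear coefficient
    b = mean_s - a * mean_g          # local linear offset
    # average the coefficients over each window, then apply them to the guide
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```

Because the output is locally a linear function of the guide, the refined transmittance inherits the grey image's edges while the block artifacts of the windowed minimum are smoothed away.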
In one embodiment, as shown in fig. 6, the step of processing the dust detection image by using a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame includes:
step S610, acquiring a dark primary color channel image of the dust detection image, and selecting each target pixel point with the maximum brightness value from the dark primary color channel image according to a pixel selection ratio;
step S620, respectively determining the corresponding gray value of each target pixel point in the dust detection image, and determining the maximum gray value in each gray value as a global atmospheric light value;
and step S630, processing the dust detection image by adopting a global atmospheric light value and a dark channel defogging algorithm to obtain the estimated transmittance.
The pixel selection proportion can be preset and is used for indicating the number of the target pixel points.
Specifically, taking a pixel selection proportion of 1% as an example: after the dark primary color channel image of the dust detection image is obtained, the 1% of pixels with the highest brightness values are selected from the dark primary color channel image as target pixel points; the gray value of each target pixel point in the dust detection image is then obtained, and the largest of these gray values is taken as the global atmospheric light value and substituted into the calculation formula of the dark channel defogging algorithm to obtain the estimated transmittance. This improves the defogging effect and, in turn, the accuracy of dust detection.
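The global atmospheric light estimation described above can be sketched as follows; converting to grayscale by the channel mean is an assumption, as the patent does not specify the conversion:

```python
import numpy as np

def estimate_atmospheric_light(image, dark_channel, ratio=0.01):
    """Select the brightest `ratio` fraction of dark-channel pixels and
    return the maximum grayscale value those pixels take in the image."""
    gray = image.astype(np.float64).mean(axis=2)  # assumed grayscale conversion
    flat_dark = dark_channel.ravel()
    n = max(1, int(flat_dark.size * ratio))       # e.g. top 1% of pixels
    idx = np.argsort(flat_dark)[-n:]              # brightest dark-channel pixels
    return gray.ravel()[idx].max()                # largest gray value -> A
```

The returned scalar A is then substituted into the transmittance formula of the dark channel defogging algorithm.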
In one embodiment, the dust detection method further comprises the steps of: and confirming the dust detection image as a target detection image when the dust concentration is greater than a concentration threshold value, and alarming when the number of the target detection images is greater than an alarm threshold value.
Specifically, when the dust concentration is greater than the concentration threshold value, or the dust concentration level is greater than the concentration level threshold value, the dust detection image may be determined as a target detection image, and if the number of target detection images is greater than the alarm threshold value, an alarm signal is generated. In one example, the alarm signal may be broadcast in real time on site and/or sent to a handheld device of a maintenance or management person to take appropriate action in time.
In this embodiment, the alarm threshold and/or the concentration threshold can be adjusted according to the detection index requirements, which improves the applicability and convenience of dust concentration detection. In one example, the alarm threshold may be greater than or equal to 2, so that an alarm signal is generated only when the dust concentration of multiple consecutive images exceeds the concentration threshold, thereby preventing false alarms.
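The alarm rule above can be sketched as a small stateful class; resetting the counter on a below-threshold frame is an assumed interpretation of "consecutive images":

```python
class DustAlarm:
    """Track target detection images and raise an alarm when their count
    exceeds the alarm threshold."""

    def __init__(self, conc_threshold, alarm_threshold=2):
        self.conc_threshold = conc_threshold
        self.alarm_threshold = alarm_threshold
        self.count = 0  # consecutive target detection images seen so far

    def update(self, concentration):
        """Feed the dust concentration of one image; return True to alarm."""
        if concentration > self.conc_threshold:
            self.count += 1      # image becomes a target detection image
        else:
            self.count = 0       # assumed: a clean frame resets the run
        return self.count > self.alarm_threshold
```

With an alarm threshold of 2, the alarm fires on the third consecutive image whose concentration exceeds the concentration threshold.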
In the dust concentration detection method, a pre-trained dust labeling model labels the dust detection image, the target detection frame output by the model to mark the dust position is obtained, the transmittance of the target image within the target detection frame is calculated, and the dust concentration is determined from that transmittance. The dust characteristics are thus extracted and the dust concentration determined automatically, without manual intervention; the tedious work of manually extracting features is avoided, detection efficiency and accuracy are improved, detection cost is reduced, and operation is convenient. The method can also serve dust suppression and dust removal at bulk cargo wharfs, which is important for meeting environmental protection requirements, and it supports area detection, giving a better grasp of the global dust distribution.
It should be understood that although the various steps in the flow charts of fig. 1-6 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1-6 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; nor are they necessarily performed in sequence, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, a dust concentration detection apparatus is provided, which includes an image acquisition module 710, an image labeling module 720, and a dust concentration determination module 730.
The image acquisition module 710 is configured to acquire a dust detection image; the image labeling module 720 is used for inputting the dust detection image into a pre-trained dust labeling model and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust; the dust concentration determination module 730 is configured to calculate a transmittance of a target image in a target detection frame in the dust detection image, and determine the dust concentration according to the transmittance of the target image.
In one embodiment, the apparatus further comprises a sample data acquisition module and a model training module. The sample data acquisition module is used for acquiring sample data, wherein the sample data comprises a sample image and a detection frame corresponding to the dust position; the model training module is used for training a YOLOv3-SPP model with each sample data to obtain the dust labeling model.
In one embodiment, the model training module is used for inputting each sample data into a YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain a dust labeling model.
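For reference, the CIoU regression loss mentioned here can be computed as follows for a pair of axis-aligned boxes; this is the standard published CIoU formulation, not code from the patent:

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for boxes given as (x1, y1, x2, y2):
    loss = 1 - IoU + rho^2 / c^2 + alpha * v."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # IoU term (assumes non-degenerate boxes)
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # squared centre distance over squared diagonal of the enclosing box
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    c2 = (max(px2, gx2) - min(px1, gx1)) ** 2 + (max(py2, gy2) - min(py1, gy1)) ** 2
    # aspect-ratio consistency term
    v = (4.0 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                                - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / (1.0 - iou + v) if iou < 1.0 else 0.0
    return 1.0 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, the extra terms penalize center offset and aspect-ratio mismatch, which improves regression on small, diffuse targets such as dust plumes.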
In one embodiment, the dust concentration determination module 730 includes a dust concentration acquisition unit. The dust concentration acquisition unit is used for acquiring a mapping relation between the image transmittance and the dust concentration and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
In one embodiment, the dust concentration acquisition unit includes a mapping relationship acquisition unit. The mapping relationship acquisition unit is used for acquiring a plurality of test data, wherein the test data comprises a test image and a corresponding dust concentration; it is further used for inputting, for each test data, the test image into the dust labeling model to obtain a test detection frame of the test image, calculating the test image transmittance within the test detection frame, and generating the mapping relation according to the test image transmittance and the dust concentration of the test data.
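One simple way to realize the mapping relation described above is to fit a curve to the (transmittance, concentration) pairs from the test data; the linear fit below is an assumption, since the patent does not fix the functional form of the mapping:

```python
import numpy as np

def build_mapping(transmittances, concentrations):
    """Fit concentration = f(transmittance) from test-data pairs and
    return a callable mapping (here a degree-1 polynomial)."""
    coeffs = np.polyfit(transmittances, concentrations, deg=1)
    return np.poly1d(coeffs)
```

With test pairs (0.9, 10), (0.7, 30), (0.5, 50), the fitted mapping returns roughly 40 for a target image transmittance of 0.6: the lower the transmittance, the higher the dust concentration.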
In one embodiment, the device further comprises an alarm module, wherein the alarm module is used for confirming the dust detection image as the target detection image when the dust concentration is greater than the concentration threshold value, and giving an alarm when the number of the target detection images is greater than the alarm threshold value.
In one embodiment, the dust concentration determination module 730 includes an estimated transmittance acquisition unit and a guide filter unit. The estimated transmissivity acquiring unit is used for processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmissivity in the target detection frame; and the guide filtering unit is used for acquiring a gray-scale image of the dust detection image, taking the gray-scale image as a guide image, and performing guide filtering on the estimated transmissivity to obtain the transmissivity of the target image.
In one embodiment, the estimated transmittance acquiring unit includes a global atmospheric light value determining unit and a calculating unit. The global atmospheric light value determining unit is used for obtaining a dark primary color channel image of the dust detection image and selecting the target pixel points with the maximum brightness values from the dark primary color channel image according to a pixel selection ratio; it is further used for determining the gray value of each target pixel point in the dust detection image and taking the maximum among these gray values as the global atmospheric light value. The calculating unit is used for obtaining the estimated transmittance based on the following formula:
$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right)$$
wherein x is a spatial coordinate value; i is a dust detection image; t (x) is the estimated transmittance of the pixel point with the space coordinate x in the dust detection image; omega is an adjusting parameter; omega (x) is a window taking a pixel point with a space coordinate of x as a center; y is a pixel point in the window; c is any channel in RGB three channels; and A is a global atmospheric light value.
For specific limitations of the dust concentration detection device, reference may be made to the limitations of the dust concentration detection method above, which are not repeated here. All or part of the modules in the dust concentration detection device can be implemented by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, the processor of the computer device, or can be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data such as operation videos and dust detection images. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a dust concentration detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a dust detection image;
inputting the dust detection image into a pre-trained dust labeling model, and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust;
and calculating the transmissivity of a target image in a target detection frame in the dust detection image, and determining the dust concentration according to the transmissivity of the target image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring data of each sample; the sample data comprises a sample image and a detection frame corresponding to the position of the dust; and training the YOLOv3-SPP model through each sample data to obtain a dust labeling model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and inputting the data of each sample into a YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain a dust labeling model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and acquiring a mapping relation between the image transmittance and the dust concentration, and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a plurality of test data; the test data comprises a test image and a corresponding dust concentration; and inputting the test image of the test data into the dust labeling model aiming at each test data to obtain a test detection frame of the test image, calculating the transmittance of the test image in the test detection frame in the test image, and generating a mapping relation according to the transmittance of the test image and the dust concentration of the test data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and confirming the dust detection image as a target detection image when the dust concentration is greater than a concentration threshold value, and alarming when the number of the target detection images is greater than an alarm threshold value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame; and acquiring a gray-scale image of the dust detection image, and performing guide filtering on the estimated transmissivity by taking the gray-scale image as a guide image to obtain the transmissivity of the target image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a dark primary color channel image of the dust detection image, and selecting each target pixel point with the maximum brightness value from the dark primary color channel image according to a pixel selection ratio;
respectively determining the corresponding gray value of each target pixel point in the dust detection image, and determining the maximum gray value in each gray value as a global atmospheric light value;
the estimated transmittance is obtained based on the following formula:
$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right)$$
wherein x is a spatial coordinate value; i is a dust detection image; t (x) is the estimated transmittance of the pixel point with the space coordinate x in the dust detection image; omega is an adjusting parameter; omega (x) is a window taking a pixel point with a space coordinate of x as a center; y is a pixel point in the window; c is any channel in RGB three channels; and A is a global atmospheric light value.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a dust detection image;
inputting the dust detection image into a pre-trained dust labeling model, and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust;
and calculating the transmissivity of a target image in a target detection frame in the dust detection image, and determining the dust concentration according to the transmissivity of the target image.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring data of each sample; the sample data comprises a sample image and a detection frame corresponding to the position of the dust; and training the YOLOv3-SPP model through each sample data to obtain a dust labeling model.
In one embodiment, the computer program when executed by the processor further performs the steps of: and inputting the data of each sample into a YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain a dust labeling model.
In one embodiment, the computer program when executed by the processor further performs the steps of: and acquiring a mapping relation between the image transmittance and the dust concentration, and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a plurality of test data; the test data comprises a test image and a corresponding dust concentration; and inputting the test image of the test data into the dust labeling model aiming at each test data to obtain a test detection frame of the test image, calculating the transmittance of the test image in the test detection frame in the test image, and generating a mapping relation according to the transmittance of the test image and the dust concentration of the test data.
In one embodiment, the computer program when executed by the processor further performs the steps of: and confirming the dust detection image as a target detection image when the dust concentration is greater than a concentration threshold value, and alarming when the number of the target detection images is greater than an alarm threshold value.
In one embodiment, the computer program when executed by the processor further performs the steps of: processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame; and acquiring a gray-scale image of the dust detection image, and performing guide filtering on the estimated transmissivity by taking the gray-scale image as a guide image to obtain the transmissivity of the target image.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a dark primary color channel image of the dust detection image, and selecting each target pixel point with the maximum brightness value from the dark primary color channel image according to a pixel selection ratio; respectively determining the corresponding gray value of each target pixel point in the dust detection image, and determining the maximum gray value in each gray value as a global atmospheric light value; the estimated transmittance is obtained based on the following formula:
$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right)$$
wherein x is a spatial coordinate value; i is a dust detection image; t (x) is the estimated transmittance of the pixel point with the space coordinate x in the dust detection image; omega is an adjusting parameter; omega (x) is a window taking a pixel point with a space coordinate of x as a center; y is a pixel point in the window; c is any channel in RGB three channels; and A is a global atmospheric light value.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A dust concentration detection method is characterized by comprising the following steps:
acquiring a dust detection image;
inputting the dust detection image into a pre-trained dust labeling model, and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust;
and calculating the transmissivity of a target image in the target detection frame in the dust detection image, and determining the dust concentration according to the transmissivity of the target image.
2. The dust concentration detection method according to claim 1, wherein before inputting the dust detection image into a pre-trained dust labeling model, the method further comprises the steps of:
acquiring data of each sample; the sample data comprises a sample image and a detection frame corresponding to the position of the dust;
and training a YOLOv3-SPP model through each sample data to obtain the dust labeling model.
3. The dust concentration detection method of claim 2, wherein the step of training a YOLOv3-SPP model through each sample data to obtain the dust labeling model comprises:
inputting each sample data into the YOLOv3-SPP model, and training the YOLOv3-SPP model by taking a CIoU loss function as a regression loss function to obtain the dust labeling model.
4. The dust concentration detection method according to claim 1, wherein the step of determining the dust concentration from the target image transmittance includes:
and acquiring a mapping relation between the image transmittance and the dust concentration, and determining the dust concentration corresponding to the target image transmittance based on the mapping relation.
5. The dust concentration detection method according to claim 4, wherein the step of obtaining the mapping relationship between the image transmittance and the dust concentration includes:
acquiring a plurality of test data; the test data comprises a test image and a corresponding dust concentration;
and inputting the test image of the test data into the dust labeling model aiming at each test data to obtain a test detection frame of the test image, calculating the test image transmissivity in the test detection frame in the test image, and generating the mapping relation according to the test image transmissivity and the dust concentration of the test data.
6. The dust concentration detection method according to claim 1, further comprising the steps of:
and confirming the dust detection image as a target detection image under the condition that the dust concentration is greater than a concentration threshold value, and alarming under the condition that the number of the target detection images is greater than an alarm threshold value.
7. The dust concentration detection method according to any one of claims 1 to 6, wherein the step of calculating a target image transmittance within the target detection frame in the dust detection image includes:
processing the dust detection image by adopting a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame;
and acquiring a gray-scale image of the dust detection image, and performing guide filtering on the estimated transmissivity by taking the gray-scale image as a guide image to obtain the transmissivity of the target image.
8. The dust concentration detection method according to claim 7, wherein the step of processing the dust detection image by using a dark channel defogging algorithm to obtain the estimated transmittance in the target detection frame comprises:
acquiring a dark primary color channel image of the dust detection image, and selecting each target pixel point with the maximum brightness value from the dark primary color channel image according to a pixel selection ratio;
respectively determining the corresponding gray value of each target pixel point in the dust detection image, and determining the maximum gray value in each gray value as a global atmospheric light value;
the estimated transmittance is obtained based on the following formula:
$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right)$$
wherein x is a spatial coordinate value; i is the dust detection image; t (x) is the estimated transmittance of a pixel point with a spatial coordinate x in the dust detection image; omega is an adjusting parameter; omega (x) is a window taking a pixel point with a space coordinate of x as a center; y is a pixel point in the window; c is any channel in RGB three channels; a is the global atmospheric light value.
9. A dust concentration detection apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a dust detection image;
the image labeling module is used for inputting the dust detection image into a pre-trained dust labeling model and acquiring a target detection frame output by the dust labeling model; the target detection frame is used for marking the position of dust;
and the dust concentration determining module is used for calculating the transmissivity of a target image in the target detection frame in the dust detection image and determining the dust concentration according to the transmissivity of the target image.
10. A computer device comprising a processor, characterized in that the processor realizes the steps of the method of any one of claims 1 to 8 when executing a computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202011607972.9A 2020-12-29 2020-12-29 Dust concentration detection method and device, computer equipment and storage medium Pending CN112580600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011607972.9A CN112580600A (en) 2020-12-29 2020-12-29 Dust concentration detection method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112580600A true CN112580600A (en) 2021-03-30

Family

ID=75144380


Country Status (1)

Country Link
CN (1) CN112580600A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610893A (en) * 2021-08-04 2021-11-05 江苏福鱼装饰材料有限公司 Dust tracing method and system based on computer vision
CN114130565A (en) * 2021-11-30 2022-03-04 国能神东煤炭集团有限责任公司 Spraying device control method and system, spraying device and storage medium
CN115797343A (en) * 2023-02-06 2023-03-14 山东大佳机械有限公司 Livestock and poultry breeding environment video monitoring method based on image data
CN117351426A (en) * 2023-10-24 2024-01-05 秦皇岛燕大滨沅科技发展有限公司 Bulk cargo port dust monitoring method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102353622A (en) * 2011-07-01 2012-02-15 黑龙江科技学院 Monitoring and measuring method for dust concentration in working faces in underground coal mine
CN106709903A (en) * 2016-11-22 2017-05-24 南京理工大学 PM2.5 concentration prediction method based on image quality
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN108460743A (en) * 2018-03-19 2018-08-28 西安因诺航空科技有限公司 A kind of unmanned plane image defogging algorithm based on dark
CN111292258A (en) * 2020-01-15 2020-06-16 长安大学 Image defogging method based on dark channel prior and bright channel prior
CN111950329A (en) * 2019-05-16 2020-11-17 长沙智能驾驶研究院有限公司 Target detection and model training method and device, computer equipment and storage medium
CN112101434A (en) * 2020-09-04 2020-12-18 河南大学 Infrared image weak and small target detection method based on improved YOLO v3


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG Xin, "Research on a Machine-Vision-Based Coal Dust Detection Algorithm", China Masters' Theses Full-text Database, Engineering Science & Technology I, no. 03, 15 March 2020 (2020-03-15), p. 5 *


CN117173933A (en) Ocean safety evaluation method, device, equipment and medium based on image recognition
CN116596902A (en) Registration and accurate detection method and system for distribution network component image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination