CN113139427A - Steam pipe network intelligent monitoring method, system and equipment based on deep learning - Google Patents
- Publication number: CN113139427A (application CN202110271225.0A)
- Authority: CN (China)
- Prior art keywords: steam pipe, deep learning, pipe network, target frame, image data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 — Pattern recognition; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06V20/52 — Scene-specific elements; surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The application relates to the technical field of steam pipe networks, and in particular to an intelligent steam pipe network monitoring method, system and device based on deep learning. The method comprises the following steps: monitoring a steam pipe network site in real time through a camera, collecting monitoring image data, and preprocessing the collected image data; transmitting the preprocessed image data into a trained deep learning model; the trained deep learning model processes the received preprocessed image data to obtain the positional relationship between a target frame and the corresponding detection area; and if the target frame is not in the detection area, detecting the next image data to be detected. The method and system can monitor targets such as pedestrians in real time; when a pedestrian target is found in the detection area around a steam pipe section, a warning message is sent to the control center so that operators can eliminate potential safety hazards in time.
Description
Technical Field
The application relates to the technical field of steam pipe networks, in particular to a method, a system and equipment for intelligently monitoring a steam pipe network based on deep learning.
Background
The fluid medium in a steam pipe network usually has high pressure and temperature. In enterprise production, potential safety hazards are mainly discovered through patrol inspections by operating personnel, but this approach cannot promptly detect emergencies such as personnel approaching the pipes or deliberate damage. The fluid conveyed by a steam pipe network is steam, and networks can be divided into low-pressure, medium-pressure and high-pressure steam pipe networks according to the pressure of the steam conveyed. As a high-temperature fluid under pressure, steam can cause serious damage once a leakage accident occurs.
In addition, a drainage device is arranged on the steam pipe network at regular intervals, mainly for discharging condensed water in the steam pipeline. Since the temperature of the steam condensate is still high, if a person approaches the drainage device, the person may be burned by high temperature.
Therefore, intelligently monitoring the steam pipe network and the area around the drainage devices, and promptly finding and reporting pedestrian targets to the control center, allows operators to eliminate the hazards of scalding and deliberate damage to the pipe network in time, and is of great importance to the production safety of enterprises.
Object detection is a basic visual recognition problem in computer vision, and has been widely studied in recent years. The purpose of visual object detection is to find and pinpoint objects in a given image that have a particular target class and assign a corresponding class label to each object instance. Because of the great success of image classification based on deep learning, in recent years, target detection technology using deep learning has been actively studied, and target detection algorithms are increasingly applied to the fields of public security, safety supervision and the like.
The real-time detection of the targets such as pedestrians can be realized by combining the existing deep learning target detection algorithm. If the algorithm finds a pedestrian target in the detection area around the steam pipe section, an alarm message is sent to the control center, so that operators can eliminate potential safety hazards in time.
The present application therefore proposes an improved method to at least partially solve the above technical problem.
Disclosure of Invention
In order to achieve the technical purpose, the application provides a steam pipe network intelligent monitoring method based on deep learning, which comprises the following steps:
monitoring in real time at a steam pipe network place through a camera, collecting monitoring image data, and preprocessing the collected image data;
transmitting the preprocessed image data into a trained deep learning model;
the trained deep learning model calculates the received preprocessed image data to obtain the position relation between the target frame and the corresponding detection area;
and if the target frame is not in the detection area, detecting the next image data to be detected.
Specifically, the deep learning model is a CNN model.
Further, the preprocessing of the image data is data enhancement, image scaling and/or normalization processing.
Specifically, the calculating, by the trained deep learning model, a position relationship between a target frame and a corresponding detection area on the received preprocessed image data includes: and calculating the coordinates of the central point of the target frame according to the coordinate data of the target frame.
Specifically, the detection area comprises areas on two sides of the steam pipe network and areas around the water drainage device.
Further, the coordinates of the center point of the target frame are calculated according to the coordinate data of the target frame, the number of intersection points of the ray emitted from the center point of the target frame and the polygon of the detection area is judged through a ray method, if the number of the intersection points is an odd number, the center point of the target frame is in the detection area, and if the number of the intersection points is not an odd number, the center point of the target frame is outside the detection area.
Further, the training process of the model comprises: establishing a pedestrian sample data set, preprocessing images, training and tuning algorithms, and testing a CNN model.
The application also provides a deep-learning-based steam pipe network intelligent monitoring system, comprising:
the data acquisition module is used for acquiring the image data of the steam pipe network in real time;
the preprocessing module is used for preprocessing the steam pipe network image data;
the deep learning model is used for receiving the image data transmitted by the preprocessing module and calculating the position relation between the target frame and the corresponding detection area of the preprocessed image data;
and the data output module reports the event information to the control center if the target frame is in the detection area.
Specifically, the data acquisition module comprises a camera, and the deep learning model is a CNN model.
Further, the calculating of the position relationship between the target frame and the corresponding detection area is specifically to calculate a center point coordinate of the target frame according to coordinate data of the target frame.
Further, the coordinates of the center point of the target frame are calculated according to the coordinate data of the target frame, the number of intersection points of the ray emitted from the center point of the target frame and the polygon of the detection area is judged through a ray method, if the number of the intersection points is an odd number, the center point of the target frame is in the detection area, and if the number of the intersection points is not an odd number, the center point of the target frame is outside the detection area.
The application also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor runs the computer program to realize the steps of the intelligent monitoring method for the steam pipe network based on deep learning:
monitoring in real time at a steam pipe network place through a camera, collecting monitoring image data, and preprocessing the collected image data;
transmitting the preprocessed image data into a trained deep learning model;
the trained deep learning model calculates the received preprocessed image data to obtain the position relation between the target frame and the corresponding detection area;
and if the target frame is not in the detection area, detecting the next image data to be detected.
Embodiments of the fourth aspect of the present application provide a computer-readable storage medium, on which a computer program is stored, where the program is executed by a processor to implement the steps of the method for intelligent monitoring of a steam pipe network based on deep learning of the first aspect.
The beneficial effects of the application are as follows:
the application provides a steam pipe network intelligent monitoring method, system and equipment based on deep learning, and can realize real-time monitoring of targets such as pedestrians, find the pedestrian target in the peripheral detection area of a steam pipe section, namely send an alarm message to a control center, so that operators can eliminate potential safety hazards in time, and the safety of the steam pipe section is guaranteed.
Drawings
FIG. 1 shows a schematic flow chart of the method of embodiment 1 of the present application;
FIG. 2 shows a schematic view of the detection of the steam pipe section area of embodiment 1 of the present application;
FIG. 3 is a schematic diagram showing a model training and online detection process in embodiment 1 of the present application;
fig. 4 shows a schematic structural diagram of a deep learning-based intelligent monitoring system for a steam pipe network in embodiment 2 of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 shows a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present application. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present application. It will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present application.
It should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to only the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and omitted for clarity. The shapes of various regions, layers, and relative sizes and positional relationships therebetween shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, as actually required.
Example 1:
in this embodiment, a deep learning-based intelligent monitoring method for a steam pipe network is implemented, as shown in fig. 1, including the following steps:
s1, monitoring in real time in a steam pipe network place through a camera, acquiring monitoring image data, and preprocessing the acquired image data;
s2, transmitting the preprocessed image data into a trained deep learning model;
s3, calculating the received preprocessed image data by the trained deep learning model to obtain the position relation between the target frame and the corresponding detection area;
and S4, if the target frame is in the detection area, reporting the event information to the control center, and if the target frame is not in the detection area, detecting the next image data to be detected.
Specifically, the deep learning model is a CNN model.
Further, the preprocessing of the image data comprises data enhancement, image scaling and/or normalization processing.
Specifically, the trained deep learning model calculates the position relationship between the target frame and the corresponding detection area for the received preprocessed image data, and includes: and calculating the coordinates of the central point of the target frame according to the coordinate data of the target frame.
Specifically, the detection area includes areas on both sides of the steam pipe network and areas around the drainage device, as shown in fig. 2.
Further, the coordinates of the center point of the target frame are calculated according to the coordinate data of the target frame, the number of intersection points of the ray emitted from the center point of the target frame and the polygon of the detection area is judged through a ray method, if the number of the intersection points is an odd number, the center point of the target frame is in the detection area, and if not, the center point of the target frame is out of the detection area.
Before it can be used for online detection, the deep learning model must go through a model training phase; the model training and online detection processes are shown in fig. 3.
In the model training phase, firstly, a pedestrian sample data set is established, and the method mainly comprises the following processes.
Firstly, data acquisition and screening: pedestrian sample pictures are acquired in actual scenes, and pedestrian data from the COCO dataset are used to ensure the diversity of the sample pictures;
secondly, data annotation: namely, marking the collected data sample picture, and marking the pedestrian category in the picture by using a marking tool.
And thirdly, dividing the data set into a training set and a testing set according to the ratio of 8:2, and respectively using the training set and the testing set for model training and evaluation.
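The 8:2 split described above can be sketched as follows; the helper name and fixed shuffle seed are illustrative assumptions, not part of the patent.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle the annotated sample list and split it 8:2 into a
    training set and a test set, as described in the step above.
    (Hypothetical helper; the patent does not name this function.)"""
    samples = list(samples)
    random.Random(seed).shuffle(samples)          # fixed seed for reproducibility
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

train_set, test_set = split_dataset(range(100))   # 80 training / 20 test samples
```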
In the data preprocessing stage of model training, the quality of the image directly affects the design of the recognition algorithm and the precision of its results, so preprocessing is required before image analysis. The main purposes of image preprocessing are to eliminate irrelevant information in an image, recover useful real information, enhance the detectability of relevant information and simplify the data to the maximum extent. Preprocessing mainly comprises data enhancement, image scaling and normalization: data enhancement adopts random horizontal flipping, random cropping and random rotation; image scaling resizes the image to the model input size; normalization divides the pixel values by 255.0 to normalize them to [0, 1].
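A minimal NumPy sketch of this preprocessing step, under stated assumptions: the image is an HWC uint8 array, the 416×416 input size is illustrative (the patent does not state the model input size), and only one of the listed augmentations (random horizontal flip) is shown.

```python
import numpy as np

def preprocess(image, size=(416, 416), training=False):
    """Augment (random horizontal flip, as one sample of the data
    enhancement step), scale to the model input size with nearest-
    neighbour sampling, and normalize pixel values to [0, 1]."""
    if training and np.random.rand() < 0.5:
        image = image[:, ::-1, :]                 # random horizontal flip
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]      # nearest-neighbour row indices
    cols = np.arange(size[1]) * w // size[1]      # nearest-neighbour column indices
    image = image[rows][:, cols]                  # image scaling
    return image.astype(np.float32) / 255.0       # normalization to [0, 1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in camera frame
x = preprocess(frame)
```

In a real pipeline a library resize (e.g. bilinear) would typically replace the nearest-neighbour indexing, but the normalization by 255.0 matches the step described above.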
In the model training stage, the algorithm training and tuning stage is then carried out: a CNN model is trained on the training set using the TensorFlow deep learning framework, evaluated on the test set, and tuned to obtain an optimal model.
In the model training phase, the tuned model is finally saved in pb format for use in the inference stage.
The on-line detection stage comprises the following four steps:
firstly, acquiring a detection picture: and installing a monitoring camera to cover the pipe section to be monitored, and acquiring a picture frame of the monitoring camera.
Secondly, image preprocessing: mainly comprises image scaling and normalization processing.
Thirdly, running the detection model: the pb model is loaded using the TensorFlow C++ API, and inference is performed to obtain the position information (target frame coordinates) and category (pedestrian) of the targets to be detected in the image frame.
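Downstream of the model, the raw outputs must be reduced to pedestrian target frames. A hedged pure-Python sketch of that filtering (the (x1, y1, x2, y2) box format, the 0.5 score threshold, and the class label string are illustrative assumptions; the patent only mentions target frame coordinate information and the pedestrian category):

```python
def filter_detections(boxes, scores, classes,
                      score_thresh=0.5, target_class="pedestrian"):
    """Keep only target frames classified as the target class whose
    confidence score meets the threshold. Box format and threshold
    are assumptions for illustration, not specified by the patent."""
    kept = []
    for box, score, cls in zip(boxes, scores, classes):
        if cls == target_class and score >= score_thresh:
            kept.append(box)
    return kept

kept = filter_detections(
    boxes=[(10, 10, 50, 90), (100, 20, 140, 60)],
    scores=[0.92, 0.31],                 # second detection is below threshold
    classes=["pedestrian", "pedestrian"],
)
```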
Fourthly, calculating the position relation between the target frame and the corresponding detection area, where the detection area mainly comprises the areas on both sides of the steam pipe network and the areas around the drainage device. The coordinates of the center point of the target frame are calculated from the coordinate data of the target frame, and whether that center point is in the detection area is judged: if so, the event message is reported to the control center; otherwise, the next frame continues to be detected. A ray method is adopted to judge whether the center point P of the target frame is in the region, i.e. the number of intersection points between a ray emitted from point P and the polygon of the detection region is counted: an odd number of intersection points indicates that P is inside the region, otherwise P is outside the region.
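The center-point calculation and ray method above can be sketched as a standard even-odd ray-casting test; the horizontal rightward ray direction and the (x1, y1, x2, y2) corner box format are illustrative assumptions.

```python
def box_center(x1, y1, x2, y2):
    """Center point P of a target frame given its corner coordinates."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def point_in_polygon(point, polygon):
    """Ray method: cast a horizontal ray from P to the right and count
    its intersections with the detection-area polygon's edges; an odd
    count means P is inside the area, otherwise P is outside."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):        # edge straddles the ray's line
            # x-coordinate where the edge crosses the horizontal line through P
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:              # crossing lies to the right of P
                inside = not inside       # toggle parity per intersection
    return inside

detection_area = [(0, 0), (10, 0), (10, 10), (0, 10)]  # assumed polygon
p = box_center(2, 2, 6, 6)
```

Using the center point rather than the whole box keeps the parity test simple; partially overlapping frames whose centers fall outside the polygon are treated as outside the area.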
Example 2:
as shown in fig. 4, this embodiment implements a steam pipe network intelligent monitoring system based on deep learning, including:
the data receiving module 501 is used for receiving the steam pipe network image data transmitted by the camera;
the preprocessing module 502 is used for preprocessing the steam pipe network image data;
the deep learning module 503 is used for receiving the image data transmitted from the preprocessing module and calculating the position relation between the target frame and the corresponding detection area on the preprocessed image data;
and the output data module 504 is used for reporting the event information to the control center if the target frame is in the detection area.
Specifically, the deep learning model is a CNN model.
Further, calculating the position relationship between the target frame and the corresponding detection area includes: and calculating the coordinates of the central point of the target frame according to the coordinate data of the target frame.
Further, the coordinates of the center point of the target frame are calculated according to the coordinate data of the target frame, the number of intersection points of the ray emitted from the center point of the target frame and the polygon of the detection area is judged through a ray method, if the number of the intersection points is an odd number, the center point of the target frame is in the detection area, and if not, the center point of the target frame is out of the detection area.
Please refer to fig. 5, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 5, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the method for intelligent monitoring of a steam pipe network based on deep learning provided by any one of the foregoing embodiments when executing the computer program.
The memory 201 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 203 (which may be wired or wireless), using the internet, a wide area network, a local area network, a metropolitan area network, or the like.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The electronic equipment provided by the embodiment of the application and the steam pipe network intelligent monitoring method based on deep learning provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 6, the computer readable storage medium is an optical disc 30, and a computer program (i.e., a program product) is stored on the optical disc, and when the computer program is executed by a processor, the computer program may execute the method for monitoring a steam pipe network based on deep learning provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the deep-learning-based steam pipe network intelligent monitoring method provided by the embodiments of the present application have the same inventive concept, and have the same beneficial effects as the method adopted, run, or implemented by the application program stored in the computer-readable storage medium.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the creation apparatus of a virtual machine according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description covers only preferred embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A deep-learning-based intelligent monitoring method for a steam pipe network, characterized by comprising the following steps:
monitoring a steam pipe network site in real time through a camera, collecting monitoring image data, and preprocessing the collected image data;
transmitting the preprocessed image data into a trained deep learning model;
the trained deep learning model calculates the received preprocessed image data to obtain the positional relationship between the target frame and the corresponding detection area;
and if the target frame is not in the detection area, detecting the next image data to be detected.
2. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 1, wherein the deep learning model is a CNN model.
3. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 1, wherein the preprocessing of the image data comprises data augmentation, image scaling and/or normalization.
4. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 1, wherein the step in which the trained deep learning model calculates the received preprocessed image data to obtain the positional relationship between the target frame and the corresponding detection area comprises: calculating the coordinates of the center point of the target frame from the coordinate data of the target frame.
5. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 4, wherein the detection area comprises the areas on both sides of the steam pipe network and the area around the water drainage device.
6. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 4 or 5, wherein, after the coordinates of the center point of the target frame are calculated from the coordinate data of the target frame, the number of intersection points between a ray emitted from the center point of the target frame and the polygon of the detection area is determined by a ray-casting method; if the number of intersection points is odd, the center point of the target frame is inside the detection area; otherwise, it is outside the detection area.
7. The deep-learning-based intelligent monitoring method for a steam pipe network according to claim 1, wherein the training process of the model comprises the following steps: establishing a pedestrian sample data set, image preprocessing, algorithm training and tuning, and CNN model testing.
8. A deep-learning-based intelligent monitoring system for a steam pipe network, characterized by comprising:
the data acquisition module is used for acquiring the image data of the steam pipe network in real time;
the preprocessing module is used for preprocessing the steam pipe network image data;
the deep learning model is used for receiving the image data transmitted by the preprocessing module and calculating the position relation between the target frame and the corresponding detection area of the preprocessed image data;
and a data output module, configured to report the event information to the control center if the target frame is within the detection area.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the deep-learning-based intelligent monitoring method for a steam pipe network according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the deep-learning-based intelligent monitoring method for a steam pipe network according to any one of claims 1 to 7.
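Claims 4 and 6 together describe a concrete geometric test: compute the center of the detected target frame, then decide whether that center lies inside the polygonal detection area by counting how many times a ray from the center crosses the polygon's edges (odd count means inside). A minimal sketch of that test follows; the box representation as opposite-corner coordinates and the polygon as a vertex list are illustrative assumptions, not formats fixed by the claims.

```python
def center_of_box(x1, y1, x2, y2):
    """Center point of a target frame given opposite-corner coordinates (claim 4)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def point_in_polygon(point, polygon):
    """Ray-casting test (claim 6): cast a horizontal ray from `point` to the
    right and count edge crossings; an odd count means the point is inside."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        # The edge is crossed only if its endpoints straddle the ray's y level.
        if (ay > py) != (by > py):
            # x coordinate where the edge meets the horizontal line y = py.
            x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
            if px < x_cross:
                inside = not inside
    return inside

# Example: center of a detected pedestrian box, tested against a
# quadrilateral detection area alongside the pipe (hypothetical values).
center = center_of_box(100, 50, 140, 170)
area = [(0, 0), (200, 0), (200, 200), (0, 200)]
print(point_in_polygon(center, area))  # → True
```

For a convex quadrilateral the crossing count is at most two, but the same test works unchanged for the arbitrary polygonal detection areas implied by claim 6, which is presumably why the claims specify the odd/even rule rather than a rectangle check.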
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110271225.0A CN113139427A (en) | 2021-03-12 | 2021-03-12 | Steam pipe network intelligent monitoring method, system and equipment based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113139427A true CN113139427A (en) | 2021-07-20 |
Family
ID=76811026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110271225.0A Pending CN113139427A (en) | 2021-03-12 | 2021-03-12 | Steam pipe network intelligent monitoring method, system and equipment based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113139427A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114399458A (en) * | 2021-11-30 | 2022-04-26 | 中国电子科技集团公司第十五研究所 | Crossing fence detection method and system based on deep learning target detection |
CN117372942A (en) * | 2023-02-03 | 2024-01-09 | 河海大学 | Reservoir floater automatic identification method based on improved SegNet model |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140118716A1 (en) * | 2012-10-31 | 2014-05-01 | Raytheon Company | Video and lidar target detection and tracking system and method for segmenting moving targets |
WO2019042728A2 (en) * | 2017-08-29 | 2019-03-07 | Osram Gmbh | Detection of road users on a traffic route |
CN109583396A (en) * | 2018-12-05 | 2019-04-05 | 广东亿迅科技有限公司 | A kind of region prevention method, system and terminal based on CNN two stages human testing |
CN109961009A (en) * | 2019-02-15 | 2019-07-02 | 平安科技(深圳)有限公司 | Pedestrian detection method, system, device and storage medium based on deep learning |
KR20200023221A (en) * | 2018-08-23 | 2020-03-04 | 서울대학교산학협력단 | Method and system for real-time target tracking based on deep learning |
CN111046797A (en) * | 2019-12-12 | 2020-04-21 | 天地伟业技术有限公司 | Oil pipeline warning method based on personnel and vehicle behavior analysis |
KR20200049277A (en) * | 2018-10-31 | 2020-05-08 | 정영규 | Method and Apparatus for Real-time Target Recognition Type Tracking |
US20200364863A1 (en) * | 2018-05-14 | 2020-11-19 | Tencent Technology (Shenzhen) Company Limited | Object recognition method and device, and storage medium |
CN112071058A (en) * | 2020-08-14 | 2020-12-11 | 深延科技(北京)有限公司 | Road traffic monitoring and vehicle abnormity, contraband and fire detection method and system based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xue et al. | A fast detection method via region‐based fully convolutional neural networks for shield tunnel lining defects | |
Kim et al. | Broken stitch detection method for sewing operation using CNN feature map and image-processing techniques | |
Akagic et al. | Pothole detection: An efficient vision based method using rgb color space image segmentation | |
CN113139427A (en) | Steam pipe network intelligent monitoring method, system and equipment based on deep learning | |
EP3243166A1 (en) | Structural masking for progressive health monitoring | |
Jadon et al. | Low-complexity high-performance deep learning model for real-time low-cost embedded fire detection systems | |
CN113642474A (en) | Hazardous area personnel monitoring method based on YOLOV5 | |
CN115751203A (en) | Natural gas pipeline leakage monitoring system based on thermal infrared imager | |
CN113255626A (en) | Intelligent tower crane structure state detection method and device based on scanned image analysis | |
Chen et al. | CrackEmbed: Point feature embedding for crack segmentation from disaster site point clouds with anomaly detection | |
Ji et al. | A high-performance framework for personal protective equipment detection on the offshore drilling platform | |
CN114926415A (en) | Steel rail surface detection method based on PCNN and deep learning | |
Guo et al. | Visual pattern recognition supporting defect reporting and condition assessment of wastewater collection systems | |
Feng et al. | Evaluation of feature-and pixel-based methods for deflection measurements in temporary structure monitoring | |
CN116229336A (en) | Video moving target identification method, system, storage medium and computer | |
WO2018110377A1 (en) | Video monitoring device | |
CN115497242A (en) | Intelligent monitoring system and monitoring method for foreign matter invasion in railway business line construction | |
CN116977249A (en) | Defect detection method, model training method and device | |
Vats et al. | An improved driver assistance system for detection of lane departure under urban and highway driving conditions | |
CN115221349A (en) | Target positioning method and system | |
CN113902742A (en) | TFT-LCD detection-based defect true and false judgment method and system | |
CN112347989A (en) | Reflective garment identification method and device, computer equipment and readable storage medium | |
CN111597954A (en) | Method and system for identifying vehicle position in monitoring video | |
Han et al. | Ceiling damage detection and safety assessment in large public buildings using semantic segmentation | |
CN113177452B (en) | Sample sealing method and device based on image processing and radio frequency technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210720 |