CN117560468B - Big data-based integrated fire-fighting equipment production monitoring system - Google Patents
- Publication number
- CN117560468B CN117560468B CN202311493172.2A CN202311493172A CN117560468B CN 117560468 B CN117560468 B CN 117560468B CN 202311493172 A CN202311493172 A CN 202311493172A CN 117560468 B CN117560468 B CN 117560468B
- Authority
- CN
- China
- Prior art keywords
- video
- monitoring
- area
- new
- pixel points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/65—Control of camera operation in relation to power supply
- H04N23/651—Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/915—Television signal processing therefor for field- or frame-skip recording or reproducing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Alarm Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention discloses a big-data-based integrated fire-fighting equipment production monitoring system comprising an area dividing unit, an equipment collecting unit, an equipment preprocessing unit, an acquisition and analysis unit and an equipment control unit. The cameras in the target place are set as common probes and dormant probes, and the common probes are analyzed to obtain the trigger signals that start the dormant probes, so the cameras in the single-shared area save a large amount of electric energy and the memory occupied by their video data is reduced. At the same time, the original monitoring video from the common probes is reduced to the segments containing dynamic objects, so the stored monitoring video occupies less memory, can meet the long-time storage requirement and the requirement of quick searching of existing monitoring video, and the time a supervisor needs to check an abnormal period of the monitoring video is effectively shortened.
Description
Technical Field
The invention relates to the technical field of monitoring, in particular to an integrated fire-fighting equipment production monitoring system based on big data.
Background
During the production of fire-fighting equipment, site conditions must be scheduled and safety-inspected in time. The existing anti-theft monitoring on a fire-fighting equipment production line mainly relies on installed cameras, or on safety inspection personnel walking the site to observe conditions, to achieve the aim of anti-theft monitoring. However, a complete monitoring system requires many monitoring devices and produces correspondingly many monitoring videos, so several security personnel are needed to ensure that all monitoring videos can be checked. Moreover, in some areas almost no people pass during certain periods, and real-time monitoring of those areas yields a large number of invalid monitoring videos; when checking for an abnormal event, monitoring personnel must therefore search step by step for the corresponding monitoring period, which wastes their searching time, and the large amount of invalid monitoring data occupies so much of the storage end that monitoring video outside a certain period cannot be checked;
therefore, the invention provides the big-data-based integrated fire-fighting equipment production monitoring system, which improves the interactive experience of the monitoring system, obtains more accurate information more efficiently through processing by the monitoring equipment, and occupies less video storage space.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an integrated fire-fighting equipment production monitoring system based on big data, which solves the problems in the background art.
In order to achieve the above purpose, the invention is realized by the following technical scheme: an integrated fire-fighting equipment production monitoring system based on big data, comprising:
The area dividing unit is used for dividing the target place into a shared area and a single-shared area, wherein the shared area is the area position where all users are located, and the single-shared area is the area position where only appointed users are located;
The equipment acquisition unit is used for acquiring the monitoring cameras of the shared area and of the single-shared area in the target place and marking them respectively as common probes and dormant probes, wherein a common probe is a monitoring camera that monitors continuously, and a dormant probe is a monitoring camera that works only when externally triggered;
The common probe is also used for filtering, storing and processing the monitoring video;
the equipment preprocessing unit is used for marking the monitoring area of the common probe in a partitioning way to obtain a non-triggering area and a triggering area;
The acquisition analysis unit is used for acquiring a first video frame of a dynamic object in a monitoring area in real time through a dynamic capture technology by a common probe, judging the position of the dynamic object in the monitoring area in the video frame at the same time, and acquiring a trigger signal according to a judgment result;
and the equipment control unit is used for triggering the start of the dormant probe according to the trigger signal.
Preferably, the partition marking mode of the device preprocessing unit is as follows:
Firstly, acquiring a monitoring area of a common probe, wherein the monitoring area comprises a part of shared places and a part of single shared places, and the joint positions of the part of shared places and the part of single shared places in the monitoring area are marked by separation marks;
Then, respectively marking a part of shared places and a part of single shared places in the monitoring area as a non-trigger area and a trigger area;
Preferably, the judgment mode of the acquisition and analysis unit is as follows:
if the dynamic object in the first video frame is in the non-trigger area, a plurality of subsequent video frames are acquired and the position of the dynamic object in the monitoring area is judged in each of them; a trigger signal is generated as soon as any one of the subsequent video frames shows the dynamic object in the trigger area;
if the dynamic object in the first video frame is in the trigger area, directly generating a trigger signal;
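The judgment flow above can be sketched as follows (an illustrative assumption: each frame is reduced to the region label, "trigger" or "non-trigger", of the detected dynamic object; this representation is not part of the patent text):

```python
# Hypothetical sketch of the acquisition/analysis unit's decision rule.

def trigger_signal(first_frame_region, subsequent_regions):
    """Return True when a dormant probe should be started.

    first_frame_region: region of the dynamic object in the first frame.
    subsequent_regions: regions of the object in the frames that follow.
    """
    if first_frame_region == "trigger":
        # Object already inside the trigger area: signal immediately.
        return True
    # Object started in the non-trigger area: keep judging later frames
    # and signal as soon as any one of them falls in the trigger area.
    return any(region == "trigger" for region in subsequent_regions)

print(trigger_signal("trigger", []))                              # True
print(trigger_signal("non-trigger", ["non-trigger", "trigger"]))  # True
print(trigger_signal("non-trigger", ["non-trigger"]))             # False
```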
Preferably, the filtering, storing and processing of the monitoring video by the common probe is realized by the following modules:
The data acquisition module is used for acquiring a monitoring video of a common probe, dividing the monitoring video into a plurality of short videos, extracting a plurality of video frames in the same time period of each short video, and transmitting the plurality of video frames extracted from each short video to the data analysis module;
The data analysis module is used for disassembling each short video into a plurality of video frames and importing them into the pre-training analysis model; the gray value of each pixel point in every video frame is differenced against the gray value of the same pixel point in an image sample to obtain the absolute difference for each pixel point; each absolute difference is compared with a preset threshold, the absolute differences larger than the preset threshold are obtained, and these are analyzed to obtain connected patch difference values; an abnormal ratio B is calculated according to the number of pixel points in each patch difference value and the original pixel count of the image sample; B is then compared with the preset value B0, and according to the comparison result the monitoring video containing a dynamic object is intercepted and sent to the data storage module;
The data storage module is used for storing the intercepted monitoring video containing the dynamic object;
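As one possible reading of the data acquisition module (the chunk length and per-chunk sampling count are illustrative assumptions; the patent does not fix them), the splitting and frame extraction might look like:

```python
# Illustrative sketch: cut a monitoring video (here a list of frame
# indices standing in for frames) into fixed-length short videos and
# extract a few evenly spaced frames from each short video.

def split_and_sample(frames, chunk_len, samples_per_chunk):
    """Split into short videos and sample frames from each one."""
    shorts = [frames[i:i + chunk_len]
              for i in range(0, len(frames), chunk_len)]
    step = max(1, chunk_len // samples_per_chunk)
    return [short[::step][:samples_per_chunk] for short in shorts]

video = list(range(10))  # 10 frames stand in for a monitoring video
print(split_and_sample(video, chunk_len=5, samples_per_chunk=2))
# [[0, 2], [5, 7]]
```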
preferably, the specific manner of analysis by the data analysis module is as follows:
E1, disassembling each short video into a plurality of video frames and importing the video frames into a corresponding analysis module of a pre-training analysis model, wherein the analysis module is a plurality of analysis modules which are formed by dividing the analysis model in advance through a blockchain technology, the plurality of analysis modules are used for simultaneously analyzing the plurality of short videos, the plurality of analysis modules comprise at least one same image sample, and the image sample is a video frame which is selected in advance from all video frames collected by a common probe and only comprises a background and does not comprise a dynamic object;
e2, selecting an analysis module to correspondingly analyze a short video, and then carrying out image graying treatment on the image sample and each video frame;
then, all pixel points of the image sample are acquired and each pixel point is marked as f, f = 1, 2, ……, m, wherein f is the index of the pixel point and m is the total number of pixel points;
E3, the gray values of the pixel points in the image sample and in each video frame are acquired, the gray value of each pixel point in a video frame is differenced against the gray value of the same pixel point in the image sample, and the absolute value of the difference for pixel point f is marked as CJf;
E4, each CJf is then compared with a preset threshold CJ0, and every CJf satisfying CJf > CJ0 is obtained and recorded as a new CJf;
E5, the adjacent pixel points of each new CJf are acquired from its position mark, and the adjacent pixel point positions of the new CJf are matched against the other new CJf;
If another new CJf matches one of the adjacent pixel point positions of the new CJf, the two are recorded as adjacent new CJf;
E6, all connected new CJf are then gathered from the adjacent new CJf and marked as one patch difference value; at the same time the number of pixel points in each patch difference value is recorded as td, d = 1, 2, ……, v, wherein v represents the number of patch difference values;
E7, the abnormal ratio B of the corresponding video frame is then calculated as B = (t1 + t2 + …… + tv) / m; B is compared with the preset value B0, and if B ≥ B0, a dynamic object exists in the video frame;
Then, within a group of short videos, runs of consecutive video frames whose B values are larger than or equal to B0 are acquired, and the video covering those consecutive frames is intercepted and sent to the data storage module;
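The steps E2-E7 above can be sketched in stdlib-only Python (an illustrative reconstruction, not the patent's implementation; the threshold CJ0, the toy 4x4 images, and treating single isolated pixels as rejected abnormal factors are assumptions drawn from the surrounding text):

```python
from collections import deque

def abnormal_ratio(sample, frame, cj0):
    """Difference a grayscale frame against the background sample,
    threshold, group surviving pixels into nine-grid-connected patches,
    and return B = (sum of patch pixel counts) / m."""
    h, w = len(sample), len(sample[0])
    # E3/E4: pixel points whose |gray difference| exceeds CJ0 ("new CJf").
    changed = {(r, c) for r in range(h) for c in range(w)
               if abs(frame[r][c] - sample[r][c]) > cj0}
    seen, patch_sizes = set(), []
    for start in changed:
        if start in seen:
            continue
        # E5/E6: flood-fill over the nine-grid (8-neighbour) positions
        # to collect one connected patch difference value.
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            size += 1
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb != (r, c) and nb in changed and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        # Isolated single pixels are rejected as abnormal factors.
        if size > 1:
            patch_sizes.append(size)
    # E7: B = (t1 + ... + tv) / m, m being the sample's pixel count.
    return sum(patch_sizes) / (h * w)

sample = [[10] * 4 for _ in range(4)]            # 4x4 background sample
frame = [row[:] for row in sample]
frame[1][1] = frame[1][2] = frame[2][1] = 200    # a 3-pixel moving blob
print(abnormal_ratio(sample, frame, cj0=30))     # 0.1875 (3 / 16)
```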
Preferably, the position marking acquires the adjacent pixel points around each pixel point in nine-grid form and marks the position of each pixel point and of its surrounding adjacent pixel points; in the nine grids, each grid contains only one pixel point;
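A small sketch of the nine-grid position marking, under the assumption that border pixels simply have fewer neighbours (the patent does not state how image borders are handled):

```python
def nine_grid_neighbors(r, c, height, width):
    """Return the in-bounds neighbour coordinates of pixel (r, c),
    i.e. the other cells of its nine-grid."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < height and 0 <= c + dc < width]

print(len(nine_grid_neighbors(1, 1, 3, 3)))  # interior pixel: 8
print(len(nine_grid_neighbors(0, 0, 3, 3)))  # corner pixel: 3
```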
Preferably, in step E6, if no other new CJf matches any of the adjacent pixel point positions of a new CJf, that new CJf is an abnormal factor and is rejected;
Preferably, in step E8, if B < B0, the video frame contains no dynamic object;
Then, within a group of short videos, runs of consecutive video frames whose B values are smaller than B0 are acquired, and the video covering those consecutive frames is intercepted and deleted.
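A minimal sketch of this interception step (the grouping rule, keep runs with B ≥ B0 and delete runs with B < B0, is taken from the text; the B values and threshold B0 are invented for illustration):

```python
def split_segments(b_values, b0):
    """Group frame indices into (keep?, [indices]) runs by B >= B0."""
    segments = []
    for i, b in enumerate(b_values):
        keep = b >= b0
        if segments and segments[-1][0] == keep:
            segments[-1][1].append(i)   # extend the current run
        else:
            segments.append((keep, [i]))  # start a new run
    return segments

b_values = [0.01, 0.02, 0.30, 0.40, 0.35, 0.00]
for keep, frames in split_segments(b_values, b0=0.20):
    action = "store" if keep else "delete"
    print(action, frames)
# "store" runs contain the dynamic object; "delete" runs are background only
```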
Advantageous effects
The invention provides an integrated fire-fighting equipment production monitoring system based on big data. Compared with the prior art, the method has the following beneficial effects:
By setting the cameras in the target place as common probes and dormant probes and analyzing the common probes to obtain the trigger signals that start the dormant probes, the invention lets the cameras in the single-shared area save a large amount of electric energy and reduces the memory occupied by their video data; the monitoring video with dynamic objects can meet the long-time storage requirement and the requirement of quick searching of existing monitoring video, and the time a supervisor needs to check an abnormal period of the monitoring video is effectively shortened;
After the monitoring video is acquired, it is imported into a pre-training analysis model to obtain the absolute difference of each identical pixel point; each absolute difference is compared with a preset threshold to obtain the patch difference values, an abnormal ratio B is calculated according to the number of pixel points in each patch difference value and the original pixel count of the image sample, and B is compared with the preset value B0; according to the analysis result, the monitoring video containing a dynamic object is intercepted, realizing the reduction and reconstruction of the monitoring video.
Drawings
FIG. 1 is a system block diagram of the present invention;
FIG. 2 is a block diagram of a system of probes commonly used in the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As embodiment I of the present invention
Referring to fig. 1, the present invention provides a technical solution: an integrated fire-fighting equipment production monitoring system based on big data, comprising:
The area dividing unit is used for dividing the target place into a shared area and a single-shared area, wherein the shared area is the area position where all users are located, and the single-shared area is the area position where only appointed users are located;
The equipment acquisition unit is used for acquiring a monitoring camera of a shared area and a camera of a single shared area in a target place and marking the monitoring camera and the camera of the single shared area as a common probe and a dormant probe respectively;
the common probe is a monitoring camera which continuously monitors, and the dormant probe is a monitoring camera which intermittently monitors;
The device preprocessing unit is used for marking the monitoring area of the common probe in a partitioning way to obtain a non-triggering area and a triggering area, wherein the partitioning marking mode is as follows:
Firstly, acquiring a monitoring area of a common probe, wherein the monitoring area comprises a part of shared places and a part of single shared places, and the joint positions of the part of shared places and the part of single shared places in the monitoring area are marked by separation marks;
Then, respectively marking a part of shared places and a part of single shared places in the monitoring area as a non-trigger area and a trigger area;
The acquisition analysis unit is used for acquiring a first video frame of a dynamic object in a monitoring area in real time through a dynamic capture technology by a common probe, judging the position of the dynamic object in the monitoring area in the video frame at the same time, and acquiring a trigger signal according to a judgment result; the judgment mode is as follows:
if the dynamic object in the first video frame is in the non-trigger area, a plurality of subsequent video frames are acquired and the position of the dynamic object in the monitoring area is judged in each of them; a trigger signal is generated as soon as any one of the subsequent video frames shows the dynamic object in the trigger area;
if the dynamic object in the first video frame is in the trigger area, directly generating a trigger signal;
The device control unit is used for triggering the start of the sleep probe according to the trigger signal, and triggering the start of the sleep device through the trigger signal is a common means in the technical field, so that details are not repeated here;
In this embodiment, the cameras in the target place are set as common probes and dormant probes, and the common probes are analyzed to obtain the trigger signals that start the dormant probes, so the cameras in the single-shared area save a large amount of electric energy and the memory occupied by their video data is reduced; the monitoring video with dynamic objects can be stored for long periods, the requirement of quick searching of existing monitoring video is met, and the time monitoring staff need to check an abnormal period of the monitoring video is effectively shortened.
As embodiment II of the present invention
Referring to fig. 2, this embodiment differs from embodiment I in that, on the basis of embodiment I, the common probe further includes:
The data acquisition module is used for acquiring a monitoring video of a common probe, dividing the monitoring video into a plurality of short videos, extracting a plurality of video frames in the same time period of each short video, and transmitting the plurality of video frames extracted from each short video to the data analysis module;
The data analysis module is used for disassembling each short video into a plurality of video frames and importing them into the pre-training analysis model; the gray value of each pixel point in every video frame is differenced against the gray value of the same pixel point in an image sample to obtain the absolute difference for each pixel point; each absolute difference is compared with a preset threshold, the absolute differences larger than the preset threshold are obtained, and these are analyzed to obtain connected patch difference values; an abnormal ratio B is calculated according to the number of pixel points in each patch difference value and the original pixel count of the image sample; B is then compared with the preset value B0, and according to the comparison result the monitoring video containing a dynamic object is intercepted and sent to the data storage module;
E1, disassembling each short video into a plurality of video frames and importing the video frames into a corresponding analysis module of a pre-training analysis model, wherein the analysis module is a plurality of analysis modules which are formed by dividing the analysis model in advance through a blockchain technology, the plurality of analysis modules are used for simultaneously analyzing the plurality of short videos, the plurality of analysis modules comprise at least one same image sample, and the image sample is a video frame which is selected in advance from all video frames collected by a common probe and only comprises a background and does not comprise a dynamic object;
E2, taking one analysis module analyzing one short video as an example, each video frame of the video is first marked as Si, i = 1, 2, ……, n, wherein i is the index of the video frame, Si denotes the i-th video frame, S1, S2, ……, Sn are ordered according to the shooting time of the video frames, and n represents the number of video frames;
E3, performing image graying treatment on the image sample and each video frame, wherein the technology is the prior art, so that the description is omitted here;
then, all pixel points of the image sample are acquired and each pixel point is marked as f, f = 1, 2, ……, m, wherein f is the index of the pixel point and m is the total number of pixel points;
Then, according to the nine-grid form, the adjacent pixel points around each pixel point are acquired, and each pixel point and its surrounding adjacent pixel points are position-marked; in the nine grids, each grid contains only one pixel point;
E4, the gray values of the pixel points in the image sample and in each video frame are acquired, the gray value of each pixel point in a video frame is differenced against the gray value of the same pixel point in the image sample, and the absolute value of the difference for pixel point f is marked as CJf;
E5, each CJf is then compared with the preset threshold CJ0, and every CJf satisfying CJf > CJ0 is obtained and recorded as a new CJf;
E6, the adjacent pixel points of each new CJf are acquired from its position mark, and the adjacent pixel point positions of the new CJf are matched against the other new CJf;
If no other new CJf matches any of the adjacent pixel point positions of a new CJf, that new CJf is an abnormal factor and is rejected;
If another new CJf matches one of the adjacent pixel point positions of the new CJf, the two are recorded as adjacent new CJf;
E7, all connected new CJf are then gathered from the adjacent new CJf and marked as one patch difference value; at the same time the number of pixel points in each patch difference value is recorded as td, d = 1, 2, ……, v, wherein v represents the number of patch difference values;
E8, the abnormal ratio B of the corresponding video frame is then calculated as B = (t1 + t2 + …… + tv) / m; B is compared with the preset value B0, and if B ≥ B0 a dynamic object exists in the video frame, while if B < B0 the video frame contains no dynamic object;
E9, then, within a group of short videos, runs of consecutive video frames whose B values are smaller than B0 are acquired, and the video covering those consecutive frames is intercepted and deleted;
Meanwhile, within a group of short videos, runs of consecutive video frames whose B values are larger than or equal to B0 are acquired, and the video covering those consecutive frames is intercepted and sent to the data storage module;
The data storage module is used for storing the intercepted monitoring video containing the dynamic object;
In this embodiment, after the monitoring video is acquired, it is imported into a pre-training analysis model to obtain the absolute difference of each identical pixel point; each absolute difference is compared with a preset threshold to obtain the patch difference values, an abnormal ratio B is calculated according to the number of pixel points in each patch difference value and the original pixel count of the image sample, and B is compared with the preset value B0; according to the analysis result, the monitoring video containing a dynamic object is intercepted, realizing the reduction and reconstruction of the monitoring video.
And all that is not described in detail in this specification is well known to those skilled in the art.
The foregoing describes one embodiment of the present invention in detail, but the disclosure is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.
Claims (5)
1. Big data-based integrated fire-fighting equipment production monitoring system, which is characterized by comprising:
The area dividing unit is used for dividing the target place into a shared area and a single-shared area, wherein the shared area is the area position where all users are located, and the single-shared area is the area position where only appointed users are located;
The equipment acquisition unit is used for acquiring the monitoring cameras of the shared area and of the single-shared area in the target place and marking them respectively as common probes and dormant probes, wherein a common probe is a monitoring camera that monitors continuously, and a dormant probe is a monitoring camera that works only when externally triggered;
The common probe is also used for filtering, storing and processing the monitoring video;
the equipment preprocessing unit is used for marking the monitoring area of the common probe in a partitioning way to obtain a non-triggering area and a triggering area;
The acquisition analysis unit is used for acquiring a first video frame of a dynamic object in a monitoring area in real time through a dynamic capture technology by a common probe, judging the position of the dynamic object in the monitoring area in the video frame at the same time, and acquiring a trigger signal according to a judgment result;
The equipment control unit is used for triggering the start of the dormant probe according to the trigger signal;
The partition marking mode of the device preprocessing unit is as follows:
Firstly, acquiring a monitoring area of a common probe, wherein the monitoring area comprises a part of shared places and a part of single shared places, and the joint positions of the part of shared places and the part of single shared places in the monitoring area are marked by separation marks;
Then, respectively marking a part of shared places and a part of single shared places in the monitoring area as a non-trigger area and a trigger area;
The judgment mode of the acquisition and analysis unit is as follows:
if the dynamic object in the first video frame is in the non-trigger area, a plurality of subsequent video frames are acquired and the position of the dynamic object in the monitoring area is judged in each of them; a trigger signal is generated as soon as any one of the subsequent video frames shows the dynamic object in the trigger area;
if the dynamic object in the first video frame is in the trigger area, directly generating a trigger signal;
the mode of filtering, storing and processing the monitoring video by the common probe is realized by the following modules:
The data acquisition module is used for acquiring a monitoring video of a common probe, dividing the monitoring video into a plurality of short videos, extracting a plurality of video frames in the same time period of each short video, and transmitting the plurality of video frames extracted from each short video to the data analysis module;
The data analysis module is used for disassembling each short video into a plurality of video frames and importing them into the pre-training analysis model; the gray value of each pixel point in every video frame is differenced against the gray value of the same pixel point in an image sample to obtain the absolute difference for each pixel point; each absolute difference is compared with a preset threshold, the absolute differences larger than the preset threshold are obtained, and these are analyzed to obtain connected patch difference values; an abnormal ratio B is calculated according to the number of pixel points in each patch difference value and the original pixel count of the image sample; B is then compared with the preset value B0, and according to the comparison result the monitoring video containing a dynamic object is intercepted and sent to the data storage module;
and the data storage module is used for storing the intercepted monitoring video containing the dynamic object.
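The acquisition step can be sketched as follows; this is a minimal illustration that models the monitoring video as a list of decoded frames — the segment length and sample offsets are illustrative assumptions, since the claim does not fix them:

```python
# Minimal sketch of the data acquisition module: cut the monitoring video
# into fixed-length short videos, then sample frames from the same relative
# time window of each short video.  segment_len and the offsets are
# hypothetical parameters.

def split_short_videos(frames, segment_len):
    """Divide the frame list into consecutive short videos."""
    return [frames[i:i + segment_len] for i in range(0, len(frames), segment_len)]

def sample_frames(short_video, offsets):
    """Pick the frames at the given offsets (the 'same time period' of each
    short video), ignoring offsets past the end of the segment."""
    return [short_video[i] for i in offsets if i < len(short_video)]

frames = list(range(10))                    # stand-in for decoded video frames
shorts = split_short_videos(frames, 4)      # three short videos
samples = [sample_frames(s, (0, 2)) for s in shorts]
```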
2. The integrated fire-fighting equipment production monitoring system based on big data according to claim 1, wherein the data analysis module performs analysis in the following specific manner:
E1, disassembling each short video into a plurality of video frames and importing the video frames into the corresponding analysis module of a pre-trained analysis model, wherein the analysis modules are a plurality of modules formed by partitioning the analysis model in advance through blockchain technology, the plurality of analysis modules are used for analyzing the plurality of short videos simultaneously, each analysis module contains at least one identical image sample, and the image sample is a video frame selected in advance from all video frames collected by the common probe that contains only the background and no dynamic object;
E2, selecting an analysis module to correspondingly analyze one short video, and then performing image graying processing on the image sample and each video frame;
then, all pixel points of the image sample are acquired and each pixel point is marked as f, where f = 1, 2, …, m, and m represents the total number of pixel points;
E3, acquiring the gray values of the pixel points in the image sample and in each video frame, performing difference calculation between the gray value of each pixel point in each video frame and that of the same pixel point in the image sample, obtaining the absolute value of each difference value, and marking the absolute value of the difference value at pixel point f as CJf;
E4, comparing each CJf with a preset threshold CJ0, obtaining every CJf for which CJf > CJ0, and recording it as a new CJf;
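Steps E3–E4 amount to background differencing against the image sample; a minimal sketch, assuming a grayscale frame is represented as a flat list of gray values (0–255) indexed by pixel number f:

```python
# Sketch of steps E3-E4: per-pixel absolute gray-value difference against
# the background image sample, thresholded by CJ0.  The flat-list frame
# representation is an illustrative assumption.

def new_cjf(sample, frame, cj0):
    """Return {pixel index f: |difference| CJf} for every pixel whose
    absolute gray-value difference against the image sample exceeds CJ0."""
    assert len(sample) == len(frame)
    diffs = {f: abs(frame[f] - sample[f]) for f in range(len(sample))}
    return {f: cjf for f, cjf in diffs.items() if cjf > cj0}

background = [10, 10, 10, 10]
frame      = [10, 90, 95, 12]
print(new_cjf(background, frame, cj0=30))   # {1: 80, 2: 85}
```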
E5, acquiring the adjacent pixel points of each new CJf from its position mark, and matching the adjacent pixel points of each new CJf against the other new CJf respectively;
if other new CJf are matched with the positions of the adjacent pixel points of a new CJf, adjacent new CJf are obtained;
E6, then obtaining all contiguous new CJf based on the adjacent new CJf and marking each contiguous group as a patch difference value, and simultaneously obtaining the number td of pixel points in each patch difference value, d = 1, 2, …, v, where v represents the number of patch difference values;
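Steps E5–E6 group the threshold-exceeding pixels into contiguous patches; a sketch using the nine-grid (8-neighbour) adjacency of claim 3, where the new CJf pixels are given as a set of (row, col) coordinates (an illustrative input format):

```python
from collections import deque

# Sketch of steps E5-E6: breadth-first grouping of the "new CJf" pixels
# into contiguous patch difference values under nine-grid adjacency.

def patch_sizes(new_pixels):
    """Return the pixel count t_d of every contiguous patch difference value."""
    remaining = set(new_pixels)
    sizes = []
    while remaining:
        queue = deque([remaining.pop()])     # seed a new patch
        size = 0
        while queue:
            r, c = queue.popleft()
            size += 1
            for dr in (-1, 0, 1):            # nine-grid neighbourhood
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        queue.append(n)
        sizes.append(size)
    return sizes

# one 3-pixel patch plus one isolated pixel
print(sorted(patch_sizes({(0, 0), (0, 1), (1, 1), (5, 5)})))   # [1, 3]
```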
E7, then calculating the abnormal ratio B of the corresponding video frame as B = (t1 + t2 + … + tv)/m, and comparing B with the preset value B0; if B ≥ B0, it indicates that a dynamic object exists in the video frame;
and then, within a group of short videos, the continuous video frames whose B values are greater than or equal to B0 are acquired, and the video spanning those continuous video frames is intercepted and sent to the data storage module.
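Step E7 can be sketched as follows, assuming (consistent with the quantities defined in E6 and the image sample's pixel count m) that the abnormal ratio is the total patch pixel count divided by m:

```python
# Sketch of step E7: abnormal ratio B = (t1 + ... + tv) / m, compared with
# the preset value B0.  The numeric inputs are illustrative assumptions.

def abnormal_ratio(patch_pixel_counts, m):
    """Share of the frame's m pixels covered by patch difference values."""
    return sum(patch_pixel_counts) / m

def has_dynamic_object(patch_pixel_counts, m, b0):
    """B >= B0 indicates a dynamic object exists in the video frame."""
    return abnormal_ratio(patch_pixel_counts, m) >= b0

print(abnormal_ratio([30, 10], m=400))              # 0.1
print(has_dynamic_object([30, 10], 400, b0=0.05))   # True
```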
3. The integrated fire-fighting equipment production monitoring system based on big data according to claim 2, wherein: the position marking obtains the adjacent pixel points around each pixel point in a nine-grid form, and marks the positions of each pixel point and of the adjacent pixel points around it; in the nine grids, each grid contains only one pixel point.
4. The integrated fire-fighting equipment production monitoring system based on big data according to claim 2, wherein: in step E6, if a new CJf is matched by no other new CJf at any of its adjacent pixel point positions, that new CJf represents an abnormal factor and is rejected.
5. The integrated fire-fighting equipment production monitoring system based on big data according to claim 2, wherein: in step E7, if B < B0, it indicates that no dynamic object exists in the video frame;
then, within a group of short videos, the continuous video frames whose B values are smaller than B0 are acquired, and the video spanning those continuous video frames is intercepted and deleted.
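The interception rule of claims 2 and 5 (keep runs of consecutive frames whose B reaches B0, delete the rest) can be sketched as a run-finder over per-frame B values; the example B values are illustrative:

```python
# Sketch of the interception rule: within one short video, runs of
# consecutive frames are kept when their abnormal ratio B reaches B0
# and deleted otherwise.

def runs_to_keep(b_values, b0):
    """Return (start, end) index ranges (inclusive) of consecutive frames
    whose abnormal ratio B is at least B0."""
    runs, start = [], None
    for i, b in enumerate(b_values):
        if b >= b0 and start is None:
            start = i                        # a qualifying run begins
        elif b < b0 and start is not None:
            runs.append((start, i - 1))      # the run just ended
            start = None
    if start is not None:
        runs.append((start, len(b_values) - 1))
    return runs

print(runs_to_keep([0.0, 0.2, 0.3, 0.0, 0.4], b0=0.1))   # [(1, 2), (4, 4)]
```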
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311493172.2A CN117560468B (en) | 2023-11-10 | 2023-11-10 | Big data-based integrated fire-fighting equipment production monitoring system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117560468A CN117560468A (en) | 2024-02-13 |
CN117560468B true CN117560468B (en) | 2024-05-14 |
Family
ID=89815800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311493172.2A Active CN117560468B (en) | 2023-11-10 | 2023-11-10 | Big data-based integrated fire-fighting equipment production monitoring system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117560468B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016082424A (en) * | 2014-10-17 | 2016-05-16 | セコム株式会社 | Object monitoring device |
WO2018076614A1 (en) * | 2016-10-31 | 2018-05-03 | 武汉斗鱼网络科技有限公司 | Live video processing method, apparatus and device, and computer readable medium |
CN109040583A (en) * | 2018-07-25 | 2018-12-18 | 深圳市共进电子股份有限公司 | Web camera energy-saving control method, device, web camera and storage medium |
CN115410324A (en) * | 2022-10-28 | 2022-11-29 | 山东世拓房车集团有限公司 | Car as a house night security system and method based on artificial intelligence |
CN115565330A (en) * | 2022-09-22 | 2023-01-03 | 中建八局发展建设有限公司 | Building construction site spot-distribution type fire monitoring system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107169426B (en) | Crowd emotion abnormality detection and positioning method based on deep neural network | |
CN109615572B (en) | Personnel intimacy degree analysis method and system based on big data | |
CN114040003B (en) | Emergency disposal system and method for emergency events in personnel dense area | |
CN112396658B (en) | Indoor personnel positioning method and system based on video | |
CN105208528B (en) | A kind of system and method for identifying with administrative staff | |
CN101834989B (en) | Helicopter electric power inspection real-time data acquisition and storage system | |
CN112257632A (en) | Transformer substation monitoring system based on edge calculation | |
CN107241572A (en) | Student's real training video frequency tracking evaluation system | |
CN112637568B (en) | Distributed security monitoring method and system based on multi-node edge computing equipment | |
CN112528825A (en) | Station passenger recruitment service method based on image recognition | |
CN108776453B (en) | Building safety monitoring system based on computer | |
CN114565845A (en) | Intelligent inspection system for underground tunnel | |
CN112911219B (en) | Method, system and equipment for identifying routing inspection route of power equipment | |
CN117560468B (en) | Big data-based integrated fire-fighting equipment production monitoring system | |
CN116824460B (en) | Face recognition-based examinee track tracking method, system and medium | |
CN117498225A (en) | Unmanned aerial vehicle intelligent power line inspection system | |
CN206948499U (en) | The monitoring of student's real training video frequency tracking, evaluation system | |
CN114898100A (en) | Point cloud data extraction method, device, system, equipment and storage medium | |
CN113111847A (en) | Automatic monitoring method, device and system for process circulation | |
CN113192042A (en) | Engineering main body structure construction progress identification method based on opencv | |
CN118015737B (en) | Intelligent door lock joint control system based on Internet of Things | |
CN113573169B (en) | Unmanned aerial vehicle distribution box data reading and detecting method and system | |
CN116863196A (en) | Distance measurement and alarm method and system for electric power operation training and production safety supervision | |
CN118366099A (en) | Intelligent security monitoring system based on face recognition | |
CN114898516A (en) | Operation and maintenance monitoring service management system based on Internet of things |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||