CN116055690B - Method and equipment for processing machine room monitoring video - Google Patents

Method and equipment for processing machine room monitoring video

Info

Publication number
CN116055690B
CN116055690B (application CN202310339657.XA)
Authority
CN
China
Prior art keywords
monitoring
image
machine room
preset
boundary
Prior art date
Legal status
Active
Application number
CN202310339657.XA
Other languages
Chinese (zh)
Other versions
CN116055690A (en)
Inventor
单辉
解汉忠
贾海港
Current Assignee
Shandong Jinyu Information Technology Group Co Ltd
Original Assignee
Shandong Jinyu Information Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Jinyu Information Technology Group Co Ltd
Priority: CN202310339657.XA
Published as CN116055690A
Application granted
Published as CN116055690B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/36Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nonlinear Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Alarm Systems (AREA)

Abstract

The embodiments of this specification disclose a method and a device for processing machine room monitoring video, relate to the technical field of image processing, and address the problem of high computing pressure when the analysis of monitoring video is concentrated in a single supervision device. The method comprises the following steps: acquiring the monitoring video recorded in the machine room and extracting monitoring images from it; acquiring the monitoring overlapping area corresponding to each monitoring camera according to its monitoring basic information, so as to filter the overlapping area; extracting boundary pixels of each monitoring image to obtain a monitoring boundary image, which is used to determine whether the monitoring image is a machine room static image or a machine room dynamic image; inputting the static and dynamic images into different recognition models so as to identify the safety state of the machine room equipment and obtain a label for each monitoring image; and, according to a preset code rate adaptive strategy, transmitting the monitoring video and the labels of all its monitoring images to the corresponding supervision device at the corresponding transmission code rate, thereby improving the efficiency of processing the monitoring video.

Description

Method and equipment for processing machine room monitoring video
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a machine room monitoring video.
Background
A machine room (server room) is typically operated by a telecommunications, network communication, mobile, dual-line, electric power, or enterprise organization and houses various servers and minicomputers, such as IBM, HP, or SUN minicomputers. Strict requirements are placed on the temperature, humidity, and anti-static measures of the machine room, and non-professional project personnel are generally not allowed to enter it. Therefore, in order to find potential safety hazards in the machine room in time and ensure that every device in it operates in a safe environment, video monitoring of the machine room is required.
In existing machine room video monitoring, the monitoring cameras arranged at various positions in the machine room collect video and send it to a supervision device for analysis, so as to determine whether the machine room is currently under the management of professional project personnel or operating in a safe environment. However, in large-scale scenarios, concentrating the analysis of the video collected by multiple source cameras in the supervision device places extreme pressure on that device's computing memory and easily leads to video analysis errors. In addition, existing processing usually analyzes all the monitoring images in the video in the same way to determine whether potential safety hazards exist in the machine room; processing in this way cannot distinguish the usage scenarios of the monitoring video and makes it difficult to effectively identify the operating condition of the equipment in the machine room.
Disclosure of Invention
In order to solve the technical problems, an embodiment of the present disclosure provides a method and an apparatus for processing a machine room monitoring video.
One or more embodiments of the present disclosure adopt the following technical solutions:
one or more embodiments of the present disclosure provide a method for processing a machine room monitoring video, where the method includes:
acquiring a monitoring video recorded by a preset monitoring camera in a machine room to extract a monitoring image in the monitoring video;
acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera, so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera and obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera;
extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a computer room static image and a computer room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image;
inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment;
And transmitting the monitoring video recorded by the preset monitoring cameras in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to corresponding monitoring equipment based on the corresponding transmission code rate according to the preset code rate self-adaptive strategy.
Optionally, in one or more embodiments of the present disclosure, after obtaining a monitoring video recorded by a preset monitoring camera in a machine room, the method further includes:
according to the basic information of each device in the machine room, the flicker frequency of the status lamp of each device in the machine room is obtained to determine the highest flicker frequency of the status lamp of each device; the basic information is factory standard parameters of each device;
sequencing the highest flicker frequency of the status lights of the equipment covered by each preset monitoring camera to obtain a corresponding frequency sequence; the covered equipment is equipment contained in the monitoring range of the preset monitoring camera;
setting the monitoring image acquisition frequency of each preset monitoring camera according to the highest frequency in the frequency sequence; wherein the image acquisition frequency is greater than the highest frequency in the sequence of frequencies.
Optionally, in one or more embodiments of the present disclosure, according to monitoring basic information of each monitoring camera, a monitoring overlapping area corresponding to each monitoring camera is obtained, which specifically includes:
Acquiring the installation position of the monitoring camera and the effective acquisition range of the monitoring camera based on the basic information of the monitoring camera;
determining a visual field boundary of the monitoring camera according to the installation position of the monitoring camera;
determining an effective monitoring area of the monitoring camera according to the visual field boundary and the effective acquisition range of the monitoring camera;
and acquiring a monitoring overlapping area corresponding to each monitoring camera according to the effective monitoring area of each monitoring camera.
Optionally, in one or more embodiments of the present disclosure, filtering the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera specifically includes:
determining the associated monitoring cameras corresponding to the monitoring overlapping areas to acquire the distribution information of the overlapping areas in the effective monitoring areas of the associated monitoring cameras;
acquiring historical monitoring images of each associated monitoring camera, and dividing the effective monitoring area based on the image quality of the historical monitoring images to obtain a high-quality monitoring area and a standard monitoring area;
determining, according to the distribution information, the distribution areas of the overlapping area in the high-quality monitoring area and in the standard monitoring area respectively;
and filtering the monitoring overlapping area according to the ratio of the distribution area in each high-quality monitoring area to that in the standard monitoring area.
Optionally, in one or more embodiments of the present disclosure, before extracting the monitoring image in the monitoring video, the method further includes:
acquiring an initial monitoring image in the monitoring video, and converting the initial monitoring image into a gray level image based on a preset weighted average value of each color component in the initial monitoring image;
filtering the gray image according to a preset Gaussian filter operator to obtain a denoising image of the gray image;
performing discrete wavelet transformation on the denoising image to obtain image information of a plurality of frequency ranges of the denoising image;
according to the image information of each frequency range, determining an image contrast value corresponding to each frequency range, and determining an adaptive enhancement function of each denoising image in each frequency range based on the image contrast value and a preset contrast threshold;
Respectively carrying out enhancement processing on the denoising images in each frequency range according to the self-adaptive enhancement function so as to obtain decomposition enhancement images in each frequency range of the denoising images;
and carrying out inverse discrete wavelet transform on the decomposed enhanced image to obtain an enhanced image of the denoising image, so as to extract the enhanced image as a monitoring image.
Optionally, in one or more embodiments of the present disclosure, extracting boundary pixels of the to-be-analyzed monitoring image to obtain a corresponding monitoring boundary image, so as to determine a machine room static image and a machine room dynamic image of the to-be-analyzed monitoring image based on the monitoring boundary image, which specifically includes:
extracting boundary pixels of the monitoring image to be analyzed in the boundary range according to a preset boundary range, so as to generate a corresponding monitoring boundary image based on the boundary pixels;
dividing the monitoring boundary image based on the boundary to obtain a plurality of monitoring boundary sub-images;
constructing a boundary pixel point matrix of each monitoring boundary sub-image according to the positions of each boundary pixel point in the monitoring boundary sub-image;
extracting the monitoring boundary sub-images according to a preset interval to obtain an initial background of the monitoring boundary sub-images;
convolving the initial background with a preset convolution kernel to obtain a boundary background model corresponding to the monitoring boundary sub-image;
determining whether a mutation target object exists in the boundary sub-image according to the difference value between the boundary background model and the boundary pixel point matrix;
if the mutation target object exists, dividing the monitoring image to be analyzed corresponding to the boundary sub-image into a machine room dynamic image;
and if the mutation target object does not exist and the mutation target object does not exist in other boundary sub-images which are positioned in the same monitoring image to be analyzed with the boundary sub-images, determining that the monitoring image to be analyzed is a machine room static image.
Optionally, in one or more embodiments of the present disclosure, before inputting the static image of the machine room into the preset signal lamp status recognition model to output an operation status label of the machine room device, the method further includes:
acquiring a static image of a historical machine room based on multiple channels to construct a data set of a signal lamp state identification model; wherein the dataset comprises: a history computer room static image sample in a normal running state and a history computer room static image sample in a fault state;
Performing image expansion processing on each sample image in the static image sample of the fault state machine room to obtain an expanded data set; wherein the image expansion processing includes: image rotation, image clipping and image color adjustment;
inputting the expanded data set into an initial signal lamp state identification model to obtain a prediction result; the initial signal lamp state identification model is a target detection model taking a MobileNet network as a backbone network;
acquiring a loss function of the initial signal lamp state identification model according to the prediction result; wherein the loss function comprises: a coordinate predictive loss function, a bounding box confidence predictive loss function containing a detected object, a bounding box confidence predictive loss function not containing a detected object, and an object category predictive loss function;
and adjusting the loss function to iteratively train the initial signal lamp state recognition model, and obtaining the initial signal lamp state recognition model meeting the requirements as the preset signal lamp state recognition model.
Optionally, in one or more embodiments of the present disclosure, before inputting the machine room dynamic image into a preset target hidden danger identification model to output a security status tag of machine room equipment, the method further includes:
Determining target hidden danger types of the machine room to obtain training sample images corresponding to the target hidden danger types; wherein, the target hidden trouble type includes: human hidden danger and environmental hidden danger;
training an initial deep learning network according to the training sample image to obtain a deep learning network model meeting the requirements;
acquiring staff information of the machine room, determining face input images of staff based on the staff information, and training a preset initial face recognition model based on the face input images to obtain a face recognition model meeting requirements;
and taking the face recognition model meeting the requirements as an access layer of the artificial hidden danger output end of the deep learning network model meeting the requirements so as to generate a preset target hidden danger recognition model.
Optionally, in one or more embodiments of the present disclosure, a monitoring video recorded by a preset monitoring camera in a machine room and a label of each monitoring image to be analyzed of the monitoring video are transmitted to a corresponding monitoring device based on a corresponding transmission code rate according to a preset code rate adaptive policy, which specifically includes:
monitoring the network condition of the corresponding monitoring equipment in real time to acquire an available bandwidth change curve of the corresponding monitoring equipment in a preset time period;
Inputting relevant parameters of the available bandwidth change curve into a preset random forest model to obtain the current predicted available bandwidth of the supervision equipment;
determining a current receivable video code rate level of the monitoring equipment according to the current predicted available bandwidth, and determining coding parameters corresponding to the monitoring video recorded by the preset monitoring camera and the labels of all monitoring images to be analyzed of the monitoring video based on the current receivable video code rate level;
and carrying out video coding on the monitoring video recorded by the preset monitoring camera and the labels of the monitoring images to be analyzed of the monitoring video according to the coding parameters to obtain transmission data of corresponding transmission code rates, so as to transmit the transmission data of the corresponding transmission code rates to corresponding monitoring equipment.
One or more embodiments of the present disclosure provide a processing device for monitoring video in a machine room, where the device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
Acquiring a monitoring video recorded by a preset monitoring camera in a machine room to extract a monitoring image in the monitoring video;
acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera to obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera;
extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a computer room static image and a computer room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image;
inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment;
and transmitting the monitoring video recorded by the preset monitoring cameras in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to corresponding monitoring equipment based on the corresponding transmission code rate according to the preset code rate self-adaptive strategy.
The at least one technical solution adopted in the embodiments of this specification can achieve the following beneficial effects:
Filtering the monitoring overlapping areas according to the monitoring video acquisition quality of each monitoring camera to obtain the monitoring images to be analyzed reduces the analysis of redundant, repeated images and avoids the increase in computing cost and time caused by repeatedly analyzing the overlapping areas. After the boundary pixels are determined, each monitoring image to be analyzed is divided into a machine room static image or a machine room dynamic image, and the two kinds of image are analyzed by different models; this monitors the operating state of the machine room equipment, effectively identifies the operating condition of the devices in the machine room, also identifies and monitors the target hidden dangers in the machine room, and improves the utilization rate of the monitoring video. In addition, the monitoring images are analyzed before being sent to the supervision device, which avoids the high computing pressure caused by concentrating the analysis in the supervision device.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. In the drawings:
Fig. 1 is a schematic flow chart of a method for processing a machine room monitoring video according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an internal structure of a processing device for monitoring video in a machine room according to an embodiment of the present disclosure.
Detailed Description
The embodiment of the specification provides a processing method and equipment for a machine room monitoring video.
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present disclosure.
As shown in fig. 1, the embodiment of the present disclosure provides a flow chart of a method for processing a machine room monitoring video. As can be seen from fig. 1, a method for processing a machine room monitoring video includes the following steps:
s101: and acquiring a monitoring video recorded by a preset monitoring camera in the machine room to extract a monitoring image in the monitoring video.
In order to process the monitoring video of the machine room and transmit it to the corresponding supervision device, the embodiments of this specification first obtain the monitoring video recorded by the plurality of preset monitoring cameras in the machine room, and then extract the corresponding monitoring images from that video for subsequent analysis and processing.
Further, in order to accurately collect effective monitoring images from the monitoring video and guarantee the accuracy of the image analysis, in one or more embodiments of the present disclosure the method further includes the following process after the monitoring video recorded by the preset monitoring cameras in the machine room is obtained:
First, according to the basic information of each device in the machine room, that is, the factory standard parameters given in each device's factory specification, the blink frequency of the status light of each device is looked up so as to determine the highest blink frequency of each device's status light. Each device has multiple states, and the status-light blink frequencies corresponding to those states may differ, so the highest blink frequency among the states of a device is determined and used as the highest blink frequency of that device's status light.
The highest status-light blink frequencies of the devices covered by each preset monitoring camera are then sorted to obtain a corresponding frequency sequence. For example, if the highest blink frequency of the status light of device A in the machine room is 4 Hz, that of device B is 1 Hz, and that of device C is 3 Hz, the frequency sequence is {4, 3, 1}. It should be noted that the covered devices are the devices contained within the monitoring range of the preset monitoring camera, that is, the devices the camera can observe. Then, to ensure that the collected image sequence allows the operating state of the equipment in the machine room to be judged accurately, namely that the current operating state can be determined from the observed changes and blink frequencies of the status lights, the monitoring-image acquisition frequency of each preset monitoring camera is set according to the highest frequency in its frequency sequence, as sketched below. It will be appreciated that, to guarantee the completeness of the acquired information, the image acquisition frequency is greater than the highest frequency in the frequency sequence.
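A minimal sketch of this frequency-selection step is given below in Python. It is not taken from the original disclosure; the device data and the 2x sampling margin are illustrative assumptions.

```python
# Minimal sketch: choosing a per-camera image acquisition frequency from the
# status-light blink frequencies of the devices that camera covers. The device
# data and the 2x safety margin are illustrative assumptions.

def max_blink_frequency(device_states: dict[str, float]) -> float:
    """Highest blink frequency (Hz) across all states of one device."""
    return max(device_states.values())

def acquisition_frequency(covered_devices: list[dict[str, float]],
                          margin: float = 2.0) -> float:
    """Sort the per-device maxima and sample faster than the highest one."""
    freq_sequence = sorted((max_blink_frequency(d) for d in covered_devices),
                           reverse=True)          # e.g. [4.0, 3.0, 1.0]
    return margin * freq_sequence[0]              # strictly greater than the maximum

# Example from the description: devices A, B, C with maxima 4 Hz, 1 Hz, 3 Hz.
devices = [{"run": 4.0, "fault": 2.0}, {"run": 1.0}, {"run": 3.0, "fault": 2.5}]
print(acquisition_frequency(devices))             # 8.0 Hz for this camera
```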
Further, in order to improve the quality of the monitoring image and thereby the accuracy of the subsequent recognition of the monitoring boundary image, in one or more embodiments of the present disclosure the method further includes the following steps before the monitoring image is extracted from the monitoring video:
First, initial monitoring images are acquired from the monitoring video recorded by the monitoring camera at the monitoring-image acquisition frequency, and each initial monitoring image is converted into a grayscale image according to a preset weighted average of its color components. Then, to remove interference noise from the grayscale image, it is filtered with a preset Gaussian filter operator to obtain a denoised image. The denoised image is subjected to a discrete wavelet transform to obtain its image information in a plurality of frequency ranges, i.e., frequency layers. An image contrast value is determined for each frequency range from the corresponding image information, and the adaptive enhancement function of the denoised image in each frequency range is obtained from the contrast value and a preset contrast threshold. The denoised image in each frequency range is then enhanced with its adaptive enhancement function to obtain decomposition-enhanced images in each frequency range. Finally, an inverse discrete wavelet transform is applied to the decomposition-enhanced images to obtain an enhanced version of the denoised image, which is extracted as the monitoring image. This preprocessing removes noise from the initial monitoring image, reduces the influence of interference information, and strengthens the information in the monitoring image through enhancement, thereby improving the accuracy of subsequent image processing and recognition. A sketch of this preprocessing chain is given below.
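The chain can be sketched in Python with OpenCV and PyWavelets as an assumed implementation; the color weights, Gaussian kernel size, contrast threshold, and gain values below are illustrative and not specified in the original disclosure.

```python
# Sketch of the preprocessing chain: weighted-average grayscale conversion,
# Gaussian denoising, single-level DWT, per-band contrast-driven gain, and
# inverse DWT. All numeric parameters are illustrative assumptions.
import cv2
import numpy as np
import pywt

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    # Weighted average of the color components (assumed 0.299/0.587/0.114).
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    gray = 0.299 * r + 0.587 * g + 0.114 * b

    # Gaussian filtering to suppress interference noise.
    denoised = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)

    # Discrete wavelet transform: one low-frequency and three high-frequency bands.
    ll, (lh, hl, hh) = pywt.dwt2(denoised, "haar")

    def enhance(band: np.ndarray, threshold: float = 15.0) -> np.ndarray:
        # Contrast value of the band; boost weak-contrast bands more strongly.
        contrast = float(np.std(band))
        gain = 1.5 if contrast < threshold else 1.1
        return band * gain

    # Enhance each high-frequency band, then reconstruct the enhanced image.
    enhanced = pywt.idwt2((ll, (enhance(lh), enhance(hl), enhance(hh))), "haar")
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```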
S102: acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera to obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera.
After the monitoring images in the monitoring video are obtained in step S101, the plurality of monitoring cameras may have cross-overlapping monitoring areas, because dead-angle-free coverage of the machine room has to be guaranteed, and repeatedly analyzing these overlapping areas increases the computing cost. Therefore, in order to reduce the analysis of redundant, repeated images, the embodiments of this specification determine the monitoring overlapping area corresponding to each monitoring camera from its monitoring basic information, for example the installation position of the camera and its effective acquisition range. The determined overlapping areas are then filtered according to the monitoring video acquisition quality of the cameras concerned: after the camera that will cover an overlapping area has been selected, the same overlapping area collected by the other cameras is not collected or analyzed further.
Further, in one or more embodiments of the present disclosure, according to the monitoring basic information of each monitoring camera, a monitoring overlapping area corresponding to each monitoring camera is obtained, which specifically includes the following procedures:
First, the installation position of the monitoring camera and the effective acquisition range of the monitoring camera are obtained from the basic information of the camera. The field-of-view boundary of the camera is then determined from its installation position, and the effective monitoring area of the camera is determined from that boundary together with the effective acquisition range. Finally, the monitoring overlapping area corresponding to each monitoring camera is obtained from the effective monitoring areas of all the cameras, as sketched below.
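One possible way to represent effective monitoring areas and compute their overlaps is sketched below. Modeling each area as an axis-aligned floor-plan rectangle is a simplifying assumption made for illustration only; the original disclosure does not prescribe a particular geometric representation.

```python
# Illustrative sketch: effective monitoring areas modeled as axis-aligned
# rectangles derived from each camera's installation position and effective
# acquisition range, with pairwise intersection as the monitoring overlapping
# area. The rectangle model is an assumption for this sketch only.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Camera:
    cam_id: str
    x: float          # installation position on the floor plan (meters)
    y: float
    reach_x: float    # effective acquisition range along each axis
    reach_y: float

    def effective_area(self) -> tuple[float, float, float, float]:
        # (x_min, y_min, x_max, y_max) bounded by the field-of-view boundary.
        return (self.x, self.y, self.x + self.reach_x, self.y + self.reach_y)

def overlap(a: Camera, b: Camera):
    ax0, ay0, ax1, ay1 = a.effective_area()
    bx0, by0, bx1, by1 = b.effective_area()
    x0, y0 = max(ax0, bx0), max(ay0, by0)
    x1, y1 = min(ax1, bx1), min(ay1, by1)
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def overlapping_areas(cameras: list[Camera]) -> dict[tuple[str, str], tuple]:
    # Monitoring overlapping area for every pair of cameras whose areas intersect.
    return {(a.cam_id, b.cam_id): region
            for a, b in combinations(cameras, 2)
            if (region := overlap(a, b)) is not None}
```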
Further, in one or more embodiments of the present disclosure, according to the monitoring video acquisition quality of each monitoring camera, filtering the monitoring overlapping area to obtain a monitored image to be analyzed specifically includes the following steps:
First, the associated monitoring cameras corresponding to each monitoring overlapping area are determined, and the distribution information of the overlapping area within the effective monitoring area of each associated camera is acquired. Historical monitoring images of each associated camera are then obtained, and its effective monitoring area is divided into a high-quality monitoring area and a standard monitoring area according to the image quality of those historical images; the division may, for example, be based on the distance between the monitored region and the camera. Next, the distribution areas of the overlapping area in the high-quality monitoring area and in the standard monitoring area are determined from the acquired distribution information. The ratio of the distribution area in each high-quality monitoring area to that in the standard monitoring area is calculated, the camera with the highest ratio is selected to monitor the overlapping area, and the overlapping area captured by the other cameras is filtered out, yielding the monitoring images to be analyzed collected by each camera. A sketch of this selection rule follows.
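The quality-ratio rule can be illustrated with the short sketch below; the area figures and the handling of a zero standard-area distribution are assumptions.

```python
# Sketch of the quality-ratio rule: for each camera covering an overlapping
# area, compare how much of that area falls in the camera's high-quality
# region versus its standard region, and keep only the best-placed camera.
# The numbers and the zero-division behavior are illustrative assumptions.

def keep_camera_for_overlap(distribution: dict[str, dict[str, float]]) -> str:
    """distribution maps camera id -> {'high_quality': m2, 'standard': m2}."""
    def ratio(areas: dict[str, float]) -> float:
        standard = areas["standard"]
        return areas["high_quality"] / standard if standard > 0 else float("inf")
    # The camera with the highest high-quality/standard ratio keeps the overlap;
    # the same region in every other camera's stream is filtered out.
    return max(distribution, key=lambda cam: ratio(distribution[cam]))

# Example: camera 2 sees the overlapping area mostly in its high-quality zone.
print(keep_camera_for_overlap({
    "cam_1": {"high_quality": 0.8, "standard": 2.2},
    "cam_2": {"high_quality": 2.5, "standard": 0.5},
}))  # -> "cam_2"
```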
S103: and extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a computer room static image and a computer room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image.
After the overlapping areas in the collected monitoring images to be analyzed have been filtered as described above, the state of the equipment shown in the images still needs to be identified, because the machine room is maintained and managed by professionals and economic losses can be caused both by human safety hazards, such as intruders, and by sudden environmental safety hazards, such as aged or broken wires or sparks at wire connections. When such hazards occur, the motion state of objects in the monitoring video changes. Therefore, in order to analyze the monitoring images to be analyzed in a more targeted and efficient manner, the embodiments of this specification extract the boundary pixels of each monitoring image to be analyzed to obtain the corresponding monitoring boundary image, and quickly determine from that boundary image whether the monitoring image is a machine room static image or a machine room dynamic image, so that the two kinds of image can then be analyzed separately.
Specifically, in one or more embodiments of the present disclosure, extracting boundary pixels of a monitored image to be analyzed to obtain a corresponding monitored boundary image, so as to determine a machine room static image and a machine room dynamic image of the monitored image to be analyzed based on the monitored boundary image, specifically including the following steps:
First, according to a preset boundary range of the monitoring images to be analyzed, the boundary pixels of each monitoring image to be analyzed within that range are extracted, and the corresponding monitoring boundary image is generated from them. The monitoring boundary image is then divided according to the boundary on which each boundary pixel lies, giving several monitoring boundary sub-images, for example sub-images corresponding to the upper boundary A, lower boundary B, left boundary C, and right boundary D. A boundary pixel-point matrix is constructed for each monitoring boundary sub-image from the positions of its boundary pixel points. The monitoring boundary sub-images are sampled at a preset interval to obtain an initial background for each sub-image, and the initial background is convolved with a preset convolution kernel to obtain the boundary background model corresponding to that sub-image. Whether a mutation target object exists in a boundary sub-image is then determined from the difference between the boundary background model and the boundary pixel-point matrix. If a mutation target object exists, the monitoring image to which the sub-image belongs is classified as a machine room dynamic image. Otherwise, if no mutation target object exists in this sub-image or in any other boundary sub-image of the same monitoring image, that is, none of the sub-images corresponding to boundaries A, B, C, and D contains a mutation target object, the monitoring image is determined to be a machine room static image. A minimal sketch of this check is given below.
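The boundary check can be sketched as follows; the strip width, smoothing kernel, and difference threshold are illustrative assumptions rather than values taken from the disclosure.

```python
# Minimal sketch of the boundary check: build a pixel matrix for each boundary
# strip (top/bottom/left/right), maintain a smoothed background model from
# frames sampled at a preset interval, and flag the frame as a machine room
# dynamic image when a strip deviates from that background. Strip width,
# kernel, and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

STRIP = 8                             # assumed boundary width in pixels
KERNEL = np.full((3, 3), 1 / 9.0)     # assumed smoothing (convolution) kernel
THRESHOLD = 25.0                      # assumed mean-absolute-difference threshold

def boundary_strips(image: np.ndarray) -> dict[str, np.ndarray]:
    return {"top": image[:STRIP, :], "bottom": image[-STRIP:, :],
            "left": image[:, :STRIP], "right": image[:, -STRIP:]}

def boundary_background(samples: list[np.ndarray]) -> np.ndarray:
    # Initial background from strips sampled at the preset interval, smoothed
    # with the convolution kernel to form the boundary background model.
    return convolve(np.mean(samples, axis=0), KERNEL, mode="nearest")

def is_dynamic(image: np.ndarray, backgrounds: dict[str, np.ndarray]) -> bool:
    for name, strip in boundary_strips(image).items():
        diff = np.abs(strip.astype(np.float32) - backgrounds[name])
        if diff.mean() > THRESHOLD:   # mutation target object on this boundary
            return True               # -> machine room dynamic image
    return False                      # all four boundaries quiet -> static image
```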
S104: and inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment.
After the monitoring images to be analyzed have been divided into machine room static images and machine room dynamic images in step S103, the embodiments of this specification input the static images into a preset signal lamp state identification model, which monitors the operating state of the equipment in the machine room and outputs the operating state label of that equipment. At the same time, the dynamic images are input into a preset target hidden danger identification model, which outputs the security state label of the machine room equipment. Analyzing the static and dynamic images with different models both monitors the operating state of the machine room equipment and identifies and monitors the target hidden dangers in the machine room, thereby improving the utilization rate of the monitoring video.
Further, before inputting the static image of the machine room into the preset signal lamp state identification model to output the running state label of the machine room equipment, the method further comprises the following steps:
First, historical machine room static images are collected through multiple channels, for example from public websites and the local database of the machine room, to construct the data set of the signal lamp state identification model. The data set contains historical static image samples in the normal operating state and historical static image samples in the fault state. Because the fault-state samples are few, they are expanded to enrich the data set: image expansion processing, including image rotation, image cropping, and image color adjustment, is applied to each fault-state sample image to obtain an expanded data set (a sketch of this expansion step is given after this paragraph). The expanded data set is then input into an initial signal lamp state identification model to obtain its prediction results; the initial model is a target detection model with a MobileNet network as its backbone. The loss function of the initial model is obtained from the prediction results and comprises a coordinate prediction loss function, a confidence prediction loss function for bounding boxes containing a detected object, a confidence prediction loss function for bounding boxes not containing a detected object, and an object category prediction loss function. The model is adjusted according to the loss function and trained iteratively, and the model that meets the requirements is taken as the preset signal lamp state identification model.
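The fault-sample expansion step can be sketched with OpenCV as follows; the rotation angles, crop margin, and color-adjustment gains are illustrative assumptions, and the detector and its loss terms are not reproduced here.

```python
# Sketch of the image expansion step (rotation, cropping, color adjustment)
# applied to a fault-state sample image. All parameter values are assumptions.
import cv2
import numpy as np

def expand(sample: np.ndarray) -> list[np.ndarray]:
    h, w = sample.shape[:2]
    out = []
    # Image rotation (assumed +/-10 degrees around the image center).
    for angle in (-10, 10):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(sample, m, (w, h)))
    # Image cropping (assumed 10% border crop, resized back to the original size).
    crop = sample[h // 10: h - h // 10, w // 10: w - w // 10]
    out.append(cv2.resize(crop, (w, h)))
    # Image color adjustment (assumed brightness/contrast tweak).
    out.append(cv2.convertScaleAbs(sample, alpha=1.2, beta=15))
    return out
```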
Further, before inputting the machine room dynamic image into the preset target hidden danger identification model to output the security state label of the machine room equipment, the method further comprises the following steps:
First, the target hidden danger types of the machine room are determined, and training sample images corresponding to each type are obtained. The target hidden danger types include human hazards, such as personnel who are not machine room operators entering the room, and environmental hazards, such as smoke and sparks. An initial deep learning network is then trained on the training sample images to obtain a deep learning network model that meets the requirements. The staff information of the machine room is acquired, the face input images of the staff are determined from it, and a preset initial face recognition model is trained on those images to obtain a face recognition model that meets the requirements. Finally, the qualified face recognition model is used as an access layer on the human-hazard output of the qualified deep learning network model, generating the preset target hidden danger identification model; this arrangement is sketched below.
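The access-layer arrangement can be illustrated with the sketch below. Here detect_hazards and match_staff_face are hypothetical stand-ins for the trained deep learning network and the face recognition model, not APIs from the disclosure.

```python
# Illustrative sketch of the face recognition model acting as an "access
# layer" on the human-hazard output: a detected person only becomes a human
# hazard if the face cannot be matched to registered machine room staff.
from typing import Callable

def classify_dynamic_image(image,
                           detect_hazards: Callable,      # image -> list of (type, region)
                           match_staff_face: Callable):   # region -> staff id or None
    labels = []
    for hazard_type, region in detect_hazards(image):
        if hazard_type == "human":
            # Access layer: registered staff are not reported as a human hazard.
            if match_staff_face(region) is None:
                labels.append("human_hazard")
        else:                                   # smoke, sparks, aged wiring, ...
            labels.append(f"environmental_hazard:{hazard_type}")
    return labels or ["safe"]
```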
S105: and transmitting the monitoring video recorded by the preset monitoring cameras in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to corresponding monitoring equipment based on the corresponding transmission code rate according to the preset code rate self-adaptive strategy.
In order to transmit the monitoring video to the corresponding supervision device at a suitable transmission code rate, the embodiments of this specification transmit the monitoring video recorded by the preset monitoring cameras in the machine room, together with the labels of its monitoring images to be analyzed, to the corresponding supervision device at the corresponding transmission code rate according to a preset code rate adaptive strategy.
Specifically, transmitting the monitoring video recorded by the preset monitoring cameras in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to the corresponding supervision device at the corresponding transmission code rate according to the preset code rate adaptive strategy includes the following steps:
First, the network condition of the corresponding supervision device is monitored in real time to obtain its available-bandwidth curve over a preset time period. Relevant parameters of that curve, such as the available bandwidth at each point, the slope of the bandwidth change at each point, and the timestamp corresponding to each point, are input into a preset random forest model to obtain the currently predicted available bandwidth of the supervision device. The currently receivable video code rate level of the supervision device is then determined from the predicted available bandwidth, and the encoding parameters for the monitoring video recorded by the preset monitoring cameras and the labels of its monitoring images are determined from that level. Finally, the monitoring video and the labels of its monitoring images are encoded with those parameters into transmission data at the corresponding transmission code rate, and the transmission data are transmitted to the corresponding supervision device. A sketch of the bandwidth prediction and code rate selection is given below.
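The bandwidth prediction and code rate selection can be sketched as follows; the curve features, bitrate ladder, and 0.8 headroom factor are illustrative assumptions, and the random forest is assumed to have been trained on historical bandwidth curves.

```python
# Sketch of the code-rate adaptation step: features of the recent available-
# bandwidth curve feed a random forest that predicts the next-window bandwidth,
# which is then mapped to a receivable bitrate level. Values are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

BITRATE_LADDER_KBPS = [800, 1500, 3000, 6000, 12000]     # assumed levels

def curve_features(timestamps: np.ndarray, bandwidth_kbps: np.ndarray) -> np.ndarray:
    slopes = np.gradient(bandwidth_kbps, timestamps)      # per-point change slope
    return np.array([[bandwidth_kbps.mean(), bandwidth_kbps.min(),
                      bandwidth_kbps[-1], slopes.mean(), slopes[-1]]])

def pick_bitrate(model: RandomForestRegressor,
                 timestamps: np.ndarray, bandwidth_kbps: np.ndarray) -> int:
    predicted = model.predict(curve_features(timestamps, bandwidth_kbps))[0]
    budget = 0.8 * predicted                              # headroom for label data
    affordable = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return affordable[-1] if affordable else BITRATE_LADDER_KBPS[0]
```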
As shown in fig. 2, the embodiment of the present disclosure provides a processing device for monitoring video in a machine room, where as can be seen from fig. 2, the device includes:
at least one processor 201; and
a memory 202 communicatively coupled to the at least one processor 201; wherein
the memory 202 stores instructions executable by the at least one processor 201 to enable the at least one processor 201 to:
acquiring a monitoring video recorded by a preset monitoring camera in a machine room to extract a monitoring image in the monitoring video;
acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera to obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera;
extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a computer room static image and a computer room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image;
Inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment;
and transmitting the monitoring video recorded by the preset monitoring cameras in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to corresponding monitoring equipment based on the corresponding transmission code rate according to the preset code rate self-adaptive strategy.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the descriptions of the apparatus, device, and non-volatile computer storage medium embodiments are relatively brief because they are substantially similar to the method embodiments; for the relevant parts, reference may be made to the description of the method embodiments.
Finally, it should be understood that the above embodiments of the present invention are only used to illustrate the technical solution of the present invention, and not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. The method for processing the machine room monitoring video is characterized by comprising the following steps:
acquiring a monitoring video recorded by a preset monitoring camera in a machine room to extract a monitoring image in the monitoring video;
acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera to obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera;
extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a computer room static image and a computer room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image;
inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment;
according to a preset code rate self-adaptive strategy, transmitting the monitoring video recorded by the preset monitoring camera in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to corresponding monitoring equipment based on the corresponding transmission code rate; wherein the filtering of the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera specifically comprises:
Determining the associated monitoring cameras corresponding to the monitoring overlapping areas to acquire the distribution information of the overlapping areas in the effective monitoring areas of the associated monitoring cameras;
acquiring historical monitoring images of each associated monitoring camera, and dividing the effective monitoring area based on the image quality of the historical monitoring images to obtain a high-quality monitoring area and a standard monitoring area;
determining, according to the distribution information, the distribution areas of the overlapping area in the high-quality monitoring area and in the standard monitoring area respectively;
and filtering the monitoring overlapping area according to the ratio of the distribution area in each high-quality monitoring area to that in the standard monitoring area.
2. The method for processing a monitoring video of a machine room according to claim 1, wherein after the monitoring video recorded by a preset monitoring camera in the machine room is obtained, the method further comprises:
according to the basic information of each device in the machine room, the flicker frequency of the status lamp of each device in the machine room is obtained to determine the highest flicker frequency of the status lamp of each device; the basic information is factory standard parameters of each device;
Sequencing the highest flicker frequency of the status lights of the equipment covered by each preset monitoring camera to obtain a corresponding frequency sequence; the covered equipment is equipment contained in the monitoring range of the preset monitoring camera;
setting the monitoring image acquisition frequency of each preset monitoring camera according to the highest frequency in the frequency sequence; wherein the image acquisition frequency is greater than the highest frequency in the sequence of frequencies.
3. The method for processing a machine room monitoring video according to claim 1, wherein the obtaining, according to the monitoring basic information of each monitoring camera, a monitoring overlapping area corresponding to each monitoring camera specifically includes:
acquiring the installation position of the monitoring camera and the effective acquisition range of the monitoring camera based on the basic information of the monitoring camera;
determining a visual field boundary of the monitoring camera according to the installation position of the monitoring camera;
determining an effective monitoring area of the monitoring camera according to the visual field boundary and the effective acquisition range of the monitoring camera;
and acquiring a monitoring overlapping area corresponding to each monitoring camera according to the effective monitoring area of each monitoring camera.
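An illustrative sketch of how the overlapping areas of claim 3 could be computed, assuming each camera's effective monitoring area is approximated by an axis-aligned rectangle around its installation position; the patent does not prescribe this geometry, and the camera ids and ranges are invented for the example.

```python
# Illustrative sketch (assumed geometry): each camera's effective monitoring area is
# approximated by an axis-aligned rectangle derived from its installation position and
# effective acquisition range; overlaps are pairwise rectangle intersections.
from itertools import combinations

Rect = tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def effective_area(install_xy: tuple[float, float], reach: float) -> Rect:
    x, y = install_xy
    return (x - reach, y - reach, x + reach, y + reach)

def intersection(a: Rect, b: Rect) -> Rect | None:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def monitoring_overlaps(cameras: dict[str, Rect]) -> dict[tuple[str, str], Rect]:
    overlaps = {}
    for (id_a, rect_a), (id_b, rect_b) in combinations(cameras.items(), 2):
        region = intersection(rect_a, rect_b)
        if region is not None:
            overlaps[(id_a, id_b)] = region
    return overlaps

if __name__ == "__main__":
    cams = {"cam_a": effective_area((0, 0), 5), "cam_b": effective_area((6, 0), 5)}
    print(monitoring_overlaps(cams))  # {('cam_a', 'cam_b'): (1, -5, 5, 5)}
```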
4. The method for processing a machine room monitoring video according to claim 1, wherein before extracting the monitoring image in the monitoring video, the method further comprises:
acquiring an initial monitoring image in the monitoring video, and converting the initial monitoring image into a grayscale image based on a preset weighted average of each color component in the initial monitoring image;
filtering the grayscale image according to a preset Gaussian filter operator to obtain a denoised image of the grayscale image;
performing discrete wavelet transform on the denoised image to obtain image information of the denoised image in a plurality of frequency ranges;
determining, according to the image information of each frequency range, an image contrast value corresponding to each frequency range, and determining an adaptive enhancement function of the denoised image in each frequency range based on the image contrast value and a preset contrast threshold;
performing enhancement processing on the denoised image in each frequency range according to the adaptive enhancement function, so as to obtain a decomposed enhanced image for each frequency range of the denoised image;
and performing inverse discrete wavelet transform on the decomposed enhanced images to obtain an enhanced image of the denoised image, so as to extract the enhanced image as the monitoring image.
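A sketch of the preprocessing chain of claim 4, assuming OpenCV, NumPy and PyWavelets are available. The channel weights, Gaussian kernel size, wavelet, contrast measure (standard deviation per sub-band) and gain values are assumptions chosen for illustration; only the sequence of steps follows the claim.

```python
# Illustrative preprocessing sketch; parameter choices are assumptions, not claimed values.
import cv2
import numpy as np
import pywt

def enhance_monitoring_frame(frame_bgr: np.ndarray,
                             weights=(0.114, 0.587, 0.299),   # preset B, G, R weights
                             contrast_threshold: float = 20.0) -> np.ndarray:
    # 1. Weighted-average grayscale conversion.
    gray = np.tensordot(frame_bgr.astype(np.float64), weights, axes=([2], [0]))
    # 2. Gaussian filtering to obtain the denoised image.
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    # 3. Discrete wavelet transform into several frequency sub-bands.
    approx, *details = pywt.wavedec2(denoised, "haar", level=2)
    # 4.-5. Per-sub-band adaptive enhancement driven by a simple contrast value.
    enhanced_details = []
    for level_bands in details:
        enhanced_details.append(tuple(
            band * (1.5 if np.std(band) < contrast_threshold else 1.0)  # adaptive gain
            for band in level_bands))
    # 6. Inverse transform back to the enhanced monitoring image.
    enhanced = pywt.waverec2([approx, *enhanced_details], "haar")
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```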
5. The method for processing a machine room monitoring video according to claim 1, wherein the extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a machine room static image and a machine room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image, specifically comprises:
extracting, according to a preset boundary range, the boundary pixels of the monitoring image to be analyzed that fall within the boundary range, so as to generate a corresponding monitoring boundary image based on the boundary pixels;
dividing the monitoring boundary image based on the boundary to obtain a plurality of monitoring boundary sub-images;
constructing a boundary pixel point matrix of each monitoring boundary sub-image according to the positions of each boundary pixel point in the monitoring boundary sub-image;
sampling the monitoring boundary sub-images at a preset interval to obtain an initial background of the monitoring boundary sub-images;
convolving the initial background with a preset convolution kernel to obtain a boundary background model corresponding to the monitoring boundary sub-image;
determining whether an abrupt-change target object exists in the boundary sub-image according to the difference between the boundary background model and the boundary pixel point matrix;
if the abrupt-change target object exists, classifying the monitoring image to be analyzed corresponding to the boundary sub-image as a machine room dynamic image;
and if no abrupt-change target object exists in the boundary sub-image or in any other boundary sub-image belonging to the same monitoring image to be analyzed, determining that the monitoring image to be analyzed is a machine room static image.
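A sketch of the boundary-based static/dynamic split of claim 5. The boundary width, sampling interval, smoothing kernel and decision threshold are assumptions, the history is assumed non-empty, and the mean absolute difference stands in for the claimed comparison between the boundary background model and the boundary pixel point matrix.

```python
# Illustrative sketch of the boundary-based static/dynamic classification.
import numpy as np
from scipy.ndimage import uniform_filter

def boundary_pixels(gray: np.ndarray, width: int = 8) -> np.ndarray:
    """Keep only a preset-width ring of boundary pixels of the frame."""
    mask = np.zeros_like(gray, dtype=bool)
    mask[:width, :] = mask[-width:, :] = True
    mask[:, :width] = mask[:, -width:] = True
    return np.where(mask, gray, 0)

def is_dynamic(frame_gray: np.ndarray, history: list[np.ndarray],
               interval: int = 5, threshold: float = 12.0) -> bool:
    """True if an abrupt-change target appears in the frame's boundary region."""
    # Initial background: boundary images sampled from the history at a preset interval.
    sampled = [boundary_pixels(f) for f in history[::interval]]
    background = np.mean(sampled, axis=0)
    # Boundary background model: smoothed (box-convolved) initial background.
    model = uniform_filter(background, size=3)
    diff = np.abs(boundary_pixels(frame_gray) - model)
    return float(diff.mean()) > threshold   # dynamic -> machine room dynamic image
```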
6. The method for processing a machine room monitoring video according to claim 1, wherein before inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of machine room equipment, the method further comprises:
acquiring historical machine room static images through multiple channels to construct a data set for the signal lamp state identification model; wherein the data set comprises: historical machine room static image samples in a normal running state and historical machine room static image samples in a fault state;
performing image expansion processing on each sample image among the fault-state historical machine room static image samples to obtain an expanded data set; wherein the image expansion processing comprises: image rotation, image cropping and image color adjustment;
inputting the expanded data set into an initial signal lamp state identification model to obtain a prediction result; wherein the initial signal lamp state identification model is a target detection model taking a MobileNet network as a backbone network;
acquiring a loss function of the initial signal lamp state identification model according to the prediction result; wherein the loss function comprises: a coordinate predictive loss function, a bounding box confidence predictive loss function containing a detected object, a bounding box confidence predictive loss function not containing a detected object, and an object category predictive loss function;
and iteratively training the initial signal lamp state identification model by adjusting the loss function, and taking the initial signal lamp state identification model meeting the requirements as the preset signal lamp state identification model.
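The image-expansion step of claim 6 maps naturally onto standard augmentation transforms; the sketch below assumes torchvision, and the specific rotation angle, crop size and jitter strengths are illustrative only.

```python
# Illustrative expansion pipeline for the fault-state samples (assumed parameters).
from torchvision import transforms

expand_fault_samples = transforms.Compose([
    transforms.RandomRotation(degrees=15),                                 # image rotation
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),              # image cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # color adjustment
    transforms.ToTensor(),
])
```

The expanded set would then be fed to the MobileNet-backboned detector, whose total loss sums the coordinate, with-object confidence, without-object confidence and class prediction terms listed in the claim.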
7. The method for processing a machine room monitoring video according to claim 1, wherein before inputting the machine room dynamic image into a preset target hidden danger identification model to output a safety state label of the machine room equipment, the method further comprises:
determining target hidden danger types of the machine room to obtain training sample images corresponding to the target hidden danger types; wherein the target hidden danger types comprise: human hidden danger and environmental hidden danger;
training an initial deep learning network according to the training sample images to obtain a deep learning network model meeting the requirements;
acquiring staff information of the machine room, determining face input images of staff based on the staff information, and training a preset initial face recognition model based on the face input images to obtain a face recognition model meeting requirements;
and taking the face recognition model meeting the requirements as an access layer at the human hidden danger output end of the deep learning network model meeting the requirements, so as to generate the preset target hidden danger identification model.
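A purely structural sketch of the access-layer idea in claim 7: the detector and face recogniser below are hypothetical stand-ins (any trained models with these call shapes would do), and the rule that a recognised staff member suppresses the human-hidden-danger label is one plausible reading of the claim, not its literal wording.

```python
# Structural sketch only: detect_hazards and is_registered_staff are hypothetical callables.
from typing import Callable

def hidden_danger_pipeline(frame,
                           detect_hazards: Callable,       # frame -> list of (label, crop)
                           is_registered_staff: Callable   # face model: crop -> bool
                           ) -> list[str]:
    tags = []
    for label, crop in detect_hazards(frame):
        if label == "human" and is_registered_staff(crop):
            continue                      # known staff member: not reported as a hazard
        tags.append(label)                # environmental hazard or unknown person
    return tags
```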
8. The method for processing a machine room monitoring video according to claim 1, wherein the transmitting, according to the preset code rate self-adaptive strategy, the monitoring video recorded by the preset monitoring camera in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to the corresponding monitoring equipment based on corresponding transmission code rates specifically comprises the following steps:
monitoring the network condition of the corresponding monitoring equipment in real time to acquire an available bandwidth change curve of the corresponding monitoring equipment in a preset time period;
inputting relevant parameters of the available bandwidth change curve into a preset random forest model to obtain a current predicted available bandwidth of the monitoring equipment;
determining a current receivable video code rate level of the monitoring equipment according to the current predicted available bandwidth, and determining, based on the current receivable video code rate level, coding parameters corresponding to the monitoring video recorded by the preset monitoring camera and the labels of the monitoring images to be analyzed of the monitoring video;
and carrying out video coding on the monitoring video recorded by the preset monitoring camera and the labels of the monitoring images to be analyzed of the monitoring video according to the coding parameters to obtain transmission data of corresponding transmission code rates, so as to transmit the transmission data of the corresponding transmission code rates to corresponding monitoring equipment.
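A sketch of the code rate adaptation of claim 8, assuming scikit-learn for the random forest. The curve features, bitrate ladder, headroom factor and toy training data are invented for illustration; the claim only requires predicting the available bandwidth from the recent bandwidth curve and mapping it to a receivable code rate level.

```python
# Illustrative sketch: predict available bandwidth from curve features, pick a rate level.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

BITRATE_LADDER_KBPS = [800, 1500, 3000, 6000]   # assumed receivable video rate levels

def curve_features(bandwidth_kbps: np.ndarray) -> np.ndarray:
    """Simple summary features of the recent available-bandwidth curve."""
    return np.array([bandwidth_kbps.mean(), bandwidth_kbps.min(),
                     bandwidth_kbps.max(), bandwidth_kbps[-1]])

def pick_transmission_bitrate(model: RandomForestRegressor,
                              recent_curve_kbps: np.ndarray,
                              headroom: float = 0.8) -> int:
    predicted = model.predict(curve_features(recent_curve_kbps).reshape(1, -1))[0]
    usable = predicted * headroom
    candidates = [r for r in BITRATE_LADDER_KBPS if r <= usable]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy training set: features of past curves -> bandwidth actually observed next.
    X = rng.uniform(500, 8000, size=(200, 4))
    y = X[:, 0] * rng.uniform(0.9, 1.1, size=200)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    print(pick_transmission_bitrate(model, rng.uniform(2000, 4000, size=30)))
```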
9. Equipment for processing a machine room monitoring video, the equipment comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a monitoring video recorded by a preset monitoring camera in a machine room to extract a monitoring image in the monitoring video;
acquiring a monitoring overlapping area corresponding to each monitoring camera according to the monitoring basic information of each monitoring camera so as to filter the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera to obtain a monitoring image to be analyzed; wherein the monitoring basic information includes: the installation position of the monitoring camera and the effective acquisition range of the monitoring camera;
extracting boundary pixels of the monitoring image to be analyzed to obtain a corresponding monitoring boundary image, so as to determine a machine room static image and a machine room dynamic image of the monitoring image to be analyzed based on the monitoring boundary image;
inputting the static image of the machine room into a preset signal lamp state identification model to output an operation state label of the machine room equipment, and inputting the dynamic image of the machine room into a preset target hidden danger identification model to output a safety state label of the machine room equipment;
transmitting, according to a preset code rate self-adaptive strategy, the monitoring video recorded by the preset monitoring camera in the machine room and the labels of the monitoring images to be analyzed of the monitoring video to the corresponding monitoring equipment based on corresponding transmission code rates; wherein the filtering of the monitoring overlapping area based on the monitoring video acquisition quality of each monitoring camera specifically comprises the following steps:
determining the associated monitoring cameras corresponding to the monitoring overlapping areas to acquire the distribution information of the overlapping areas in the effective monitoring areas of the associated monitoring cameras;
acquiring historical monitoring images of each associated monitoring camera, and dividing the effective monitoring area based on the image quality of the historical monitoring images to obtain a high-quality monitoring area and a standard monitoring area;
determining, according to the distribution information, the distribution areas of the overlapping area in the high-quality monitoring area and in the standard monitoring area of each associated monitoring camera respectively;
and filtering the monitoring overlapping area according to the ratio of the distribution area in each high-quality monitoring area to the distribution area in the corresponding standard monitoring area.
CN202310339657.XA (priority date 2023-04-03 · filing date 2023-04-03) · Method and equipment for processing machine room monitoring video · Status: Active · Granted publication: CN116055690B (en)

Priority Applications (1)

Application Number: CN202310339657.XA · Priority Date: 2023-04-03 · Filing Date: 2023-04-03 · Title: Method and equipment for processing machine room monitoring video

Applications Claiming Priority (1)

Application Number: CN202310339657.XA · Priority Date: 2023-04-03 · Filing Date: 2023-04-03 · Title: Method and equipment for processing machine room monitoring video

Publications (2)

Publication Number: CN116055690A (en) · Publication Date: 2023-05-02
Publication Number: CN116055690B (en) · Publication Date: 2023-06-09

Family

ID=86118579

Family Applications (1)

Application Number: CN202310339657.XA · Granted publication: CN116055690B (en) · Status: Active · Priority Date: 2023-04-03 · Filing Date: 2023-04-03 · Title: Method and equipment for processing machine room monitoring video

Country Status (1)

Country: CN · Publication: CN116055690B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN116582693B * · 2023-05-15 · 2024-02-02 · 广州劲源科技发展股份有限公司 · Camera calling control method based on video resource pool
CN116668859B * · 2023-07-31 · 2023-11-14 · 深圳市汇龙净化技术有限公司 · GIS equipment machine room environment monitoring device
CN117333981B * · 2023-10-31 · 2024-03-22 · 浙江泰源科技有限公司 · Machine room integrated monitoring management method and system

Citations (1)

Publication number · Priority date · Publication date · Assignee · Title
CN113312957A * · 2021-03-11 · 2021-08-27 · 上海东普信息科技有限公司 · Off-shift identification method, device, equipment and storage medium based on video image

Family Cites Families (10)

Publication number · Priority date · Publication date · Assignee · Title
CN1294724C * · 2003-09-19 · 2007-01-10 · 北京航星网讯技术开发有限公司 · Monitored object
JP2006050308A * · 2004-08-05 · 2006-02-16 · Matsushita Electric Ind Co Ltd · Image pickup device
JP2006197102A * · 2005-01-12 · 2006-07-27 · Matsushita Electric Ind Co Ltd · Remote monitor device
DE102011075412A1 * · 2011-05-06 · 2012-11-08 · Deutsche Telekom Ag · A method and system for detecting a static scene, determining raw events, and detecting free areas in an observation area
CN102663743B * · 2012-03-23 · 2016-06-08 · 西安电子科技大学 · Method for collaborative multi-camera person tracking in a complex scene
CN110458090A * · 2019-08-08 · 2019-11-15 · 成都睿云物联科技有限公司 · Excavator working state detection method, device, equipment and storage medium
CN110989594A * · 2019-12-02 · 2020-04-10 · 交控科技股份有限公司 · Intelligent robot inspection system and method
CN113741527B * · 2021-09-13 · 2024-01-19 · 德仕能源科技集团股份有限公司 · Oil well inspection method, equipment and medium based on multiple unmanned aerial vehicles
CN114550037A * · 2022-02-18 · 2022-05-27 · 福建水口发电集团有限公司 · Hydropower station video data digital safety early warning method and device
CN115661753A * · 2022-11-03 · 2023-01-31 · 郑州中科集成电路与系统应用研究院 · Machine room video detection method based on YOLOv3

Also Published As

Publication Number: CN116055690A (en) · Publication Date: 2023-05-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant