CN113435371A - Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence - Google Patents

Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence

Info

Publication number
CN113435371A
CN113435371A
Authority
CN
China
Prior art keywords
state
image
equipment
video
animal invasion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110747744.XA
Other languages
Chinese (zh)
Inventor
李南南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen New Material Technology Co ltd
Original Assignee
Shenzhen New Material Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen New Material Technology Co ltd filed Critical Shenzhen New Material Technology Co ltd
Priority to CN202110747744.XA priority Critical patent/CN113435371A/en
Publication of CN113435371A publication Critical patent/CN113435371A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00Economic sectors
    • G16Y10/35Utilities, e.g. electricity, gas or water
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00Information sensed or collected by the things
    • G16Y20/10Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/20Analytics; Diagnosis
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/30Control
    • G16Y40/35Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Toxicology (AREA)
  • Mathematical Physics (AREA)
  • Accounting & Taxation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Development Economics (AREA)
  • Software Systems (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device and equipment for managing and controlling the surrounding environment of equipment based on the electric power internet of things and artificial intelligence, belonging to the technical field of smart power grids. The method comprises the following steps: acquiring a video of the equipment to be monitored, analyzing the video, extracting the equipment running state and the environmental state from the video, and sending them to a state processor; when the environmental state is a fire fighting state, acquiring a pre-stored fire fighting rectification strategy, automatically generating hidden danger processing data according to the strategy, and sending the data to each fire fighting subsystem; when the environmental state is an animal invasion state, carrying out animal invasion prevention and control according to the type recognition result; and judging the equipment running state against a preset equipment threshold, and sending alarm information when the running state reaches the threshold. The method can discover potential accident points, give early warning and dispose of them in time, and ensure the safety of personnel and equipment.

Description

Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence
Technical Field
The invention belongs to the technical field of smart power grids, and particularly relates to a method, a device and equipment for managing and controlling the peripheral environment of equipment based on an electric power internet of things and artificial intelligence.
Background
The inventor finds that in the prior art, communication machine rooms are often inspected manually, and the safety of the machine room depends on the competence and sense of responsibility of the inspection personnel, so that the working efficiency is low and there is a risk of missed inspections.
Disclosure of Invention
To address at least the above technical problems, the invention provides a method, a device and equipment for managing and controlling the surrounding environment of equipment based on the power internet of things and artificial intelligence.
According to a first aspect of the invention, a method for managing and controlling the surrounding environment of equipment based on an electric power internet of things and artificial intelligence is provided, which comprises the following steps:
acquiring a video image of a device to be monitored;
performing frame extraction on the video image to obtain a video frame sequence;
extracting video features and performing unsupervised learning on them, using as supervision the binary classification task of whether the video frame sequence contains a pixel mutation, where the positive samples are non-mutated video frame sequences and the negative samples are mutated video frame sequences;
mapping the result of the unsupervised learning to the equipment running state and the environmental state, with non-mutated video frames corresponding to a normal state and mutated video frames to an abnormal state, and sending the equipment running state and the environmental state to a state processor;
and judging the running state of the equipment according to a preset threshold value of the equipment, and sending alarm information under the condition that the running state of the equipment reaches the preset threshold value.
Further, under the condition that the environment running state is a fire fighting state, a pre-stored fire fighting rectification strategy is obtained, hidden danger processing data are automatically generated according to the fire fighting rectification strategy, and the hidden danger processing data are sent to all fire fighting subsystems;
further, under the condition that the environment running state is an animal invasion state, animal invasion prevention and control are carried out according to the type recognition result of the animal invasion state.
Further, mapping the result of the unsupervised learning to the equipment running state and the environmental state, with non-mutated video frames corresponding to a normal state and mutated video frames to an abnormal state, and sending the equipment running state and the environmental state to the state processor includes:
performing image processing according to the result of the unsupervised learning; performing image recognition on the processed image to obtain a character recognition result, mark identification, picture and image background; analyzing these to obtain the equipment running state and the environmental state, with non-mutated video frames corresponding to a normal state and mutated video frames to an abnormal state; and sending the equipment running state and the environmental state to the state processor in the form of state codes.
Further, the corresponding the result to the device running state and the environmental state according to the result of the unsupervised learning includes: and corresponding the result of the unsupervised learning to a power supply voltage state, a switch state, an indicator light state, a fire fighting state and an animal invasion state.
Further, the acquiring a video image of the device to be monitored includes: an intelligent camera collects the video of the equipment to be monitored in real time, and when the brightness of the ambient light is lower than a set brightness, supplementary lighting is provided for the intelligent camera used to collect the video.
Further, the image processing includes: automatically rotating the image direction, identifying the image edge and beautifying the image.
Further, the identifying the image edge includes: filtering images in the video by adopting a Gaussian function to obtain a smooth data array; performing gradient calculation according to the smooth data array;
carrying out non-maximum suppression, and calculating the gradient amplitude of two adjacent pixels in the pixel gradient direction for each pixel on the image; and carrying out double-threshold detection and edge connection, and connecting the discontinuities of the edges in the strong edge points of the high-threshold value image to obtain an edge image.
Further, in the case that the environmental operation state is an animal invasion state, the animal invasion prevention and treatment according to the type recognition result of the animal invasion state includes:
selecting a corresponding animal invasion prevention and control device according to the type recognition result of the animal invasion state, recognizing the distance and direction of the intruding animal relative to the device, and adjusting the position and posture of the device according to that distance and direction.
Further, the method further comprises: and detecting the invasion state of the animal based on the millimeter wave radar.
According to a second aspect of the present invention, a machine room monitoring device based on a ubiquitous power internet of things includes:
the image acquisition module is used for acquiring a video image of the equipment to be monitored;
the video compression module is used for extracting frames of the video image to obtain a video frame sequence;
the learning module is used for extracting video features, performing unsupervised learning on the extracted video features, and learning by taking a binary task of whether the video frame sequence has pixel mutation as supervision, wherein a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence;
the state analysis module is used for corresponding the result to the running state and the environmental state of the equipment according to the result of unsupervised learning, corresponding the video frame without mutation to the normal state, corresponding the video frame with mutation to the abnormal state and sending the running state and the environmental state of the equipment to the state processor;
and the equipment control module is used for judging the running state of the equipment according to a preset threshold value of the equipment and sending alarm information under the condition that the running state of the equipment reaches the preset threshold value.
Further, the apparatus further comprises: the fire control module is used for acquiring a pre-stored fire control rectification strategy under the condition that the environment running state is a fire control state, automatically generating hidden danger processing data according to the fire control rectification strategy and sending the hidden danger processing data to each fire control subsystem;
further, the apparatus further comprises: and the pest control module is used for carrying out animal invasion prevention and control according to the type recognition result of the animal invasion state under the condition that the environment operation state is the animal invasion state.
According to a third aspect of the invention, an electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any of the first aspect when executing the program.
According to a fourth aspect of the invention, a computer readable storage medium stores a program which, when executed, is capable of implementing the method of any one of the first aspects.
The invention has the beneficial effects that: by analyzing the video of the equipment to be monitored and extracting the equipment running state and the environmental state, potential accident points can be discovered in time, early warning and disposal can be carried out, the safety of personnel and equipment is ensured, equipment abnormality is discovered at the first moment, and an alarm is given in time to nip accidents in the bud. In addition, the number of inspection personnel can be reduced, operating costs are saved, working efficiency is higher, and safety is better.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which,
FIG. 1 is a flow chart of a method for managing and controlling the surrounding environment of equipment based on the Internet of things and artificial intelligence of electric power provided by the invention;
fig. 2 is a schematic structural diagram of a machine room monitoring device based on a ubiquitous power internet of things, provided by the invention;
fig. 3 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of illustrating the present invention and are not to be construed as limiting the present invention.
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
In a first aspect of the present invention, a method for managing and controlling a device peripheral environment based on an electric power internet of things and artificial intelligence is provided, as shown in fig. 1, the method includes:
step 101: acquiring a video image of a device to be monitored;
in the invention, the intelligent camera is adopted to collect the video image of the equipment to be monitored in real time, and the uploaded existing video image can also be obtained, so that the two methods can be more convenient for the user to carry out related operation.
Further, adopt intelligent camera to gather in real time the video image of treating supervisory equipment, still include: when the brightness of the external environment light is smaller than the set brightness, the intelligent camera for collecting the video of the equipment to be monitored is supplemented with light.
When the video image of the device to be monitored is obtained, the method further comprises the step of carrying out image processing on the image in the video, wherein the image processing comprises the steps of automatically rotating the image direction, identifying the image edge and beautifying the image.
Further, identifying the image edge may include:
and filtering the image in the video by adopting a Gaussian function to obtain a smooth data array.
Figure 192124DEST_PATH_IMAGE001
Wherein the content of the first and second substances,
Figure 44673DEST_PATH_IMAGE002
for the purpose of the image in the video,
Figure 909861DEST_PATH_IMAGE003
in order to smooth out the data array(s),
Figure 766827DEST_PATH_IMAGE004
is a dispersion parameter of a gaussian function to reflect the degree of smoothing.
Performing gradient calculations based on the smoothed data array
Figure 84676DEST_PATH_IMAGE005
Is calculated by 2 x 2 first order finite difference approximation
Figure 803234DEST_PATH_IMAGE006
And
Figure 257349DEST_PATH_IMAGE007
two arrays of partial derivatives
Figure 301528DEST_PATH_IMAGE008
And
Figure 588897DEST_PATH_IMAGE009
. Wherein the content of the first and second substances,
Figure 111145DEST_PATH_IMAGE010
Figure 685346DEST_PATH_IMAGE011
-
Figure 634847DEST_PATH_IMAGE012
Figure 645397DEST_PATH_IMAGE013
Figure 236916DEST_PATH_IMAGE014
Figure 400044DEST_PATH_IMAGE005
-
Figure 520446DEST_PATH_IMAGE015
Figure 34604DEST_PATH_IMAGE013
finite differences were averaged over 2 x 2 squares and the partial derivative gradients of x and y were calculated at the same point in the image. The amplitude and azimuth are respectively:
Figure 649388DEST_PATH_IMAGE016
θ
Figure 932601DEST_PATH_IMAGE017
=arctan(
Figure 489485DEST_PATH_IMAGE018
) Wherein, the arctangent function contains two parameters, and the calculated result is an angle, and the value range is the circumference range.
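The smoothing and gradient steps above can be sketched in NumPy as follows. This is an illustrative sketch, not part of the patent: the kernel radius (3σ) and the default σ are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel; radius of 3*sigma is an assumed cutoff."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(image, sigma=1.4):
    """Separable Gaussian smoothing: 1-D convolution along rows, then columns."""
    k = gaussian_kernel(sigma)
    s = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                            image.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, s)

def gradient(s):
    """2x2 first-order finite differences averaged over the 2x2 square,
    matching P and Q above; returns magnitude M and azimuth theta."""
    p = (s[:-1, 1:] - s[:-1, :-1] + s[1:, 1:] - s[1:, :-1]) / 2.0
    q = (s[:-1, :-1] - s[1:, :-1] + s[:-1, 1:] - s[1:, 1:]) / 2.0
    # arctan2 takes both components and covers the full circle of angles
    return np.hypot(p, q), np.arctan2(q, p)
```

A vertical intensity step produces a strong magnitude response along the step and near-zero response in flat regions.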
Non-maximum suppression is then carried out: for each pixel of the gradient magnitude image M(x, y), the gradient magnitudes of the two adjacent pixels along the pixel's gradient direction are computed; if the current pixel's magnitude is not less than both of these values, it is kept as an edge point, otherwise it is marked as a non-edge pixel. This thins the image edges to a width of one pixel, and the image NMS(x, y) is obtained from the gradient magnitude image M(x, y) through non-maximum suppression.
Finally, double-threshold detection and edge connection are performed. Specifically, a high threshold Th and a low threshold Tl are used to extract edges: thresholding each pixel of the NMS(x, y) image with the high and low thresholds yields the strong and weak edge points of the edge image respectively. Edges are tracked through the strong edge points; where an edge breaks, edge points are searched for in the 8-neighborhood of the corresponding weak edge points so that the discontinuity among the strong edge points is bridged, and searching and tracking continue until the discontinuities of the edges among the strong edge points of the high-threshold image are connected, giving the edge image.
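The non-maximum suppression and double-threshold steps can be sketched as follows. The quantization of the gradient direction into four sectors and the concrete threshold values are assumptions for illustration; the patent only names the steps.

```python
import numpy as np

def nms_and_hysteresis(mag, theta, t_low, t_high):
    """Non-maximum suppression along the quantized gradient direction,
    then double-threshold hysteresis linking weak points found in the
    8-neighborhood of strong points."""
    h, w = mag.shape
    nms = np.zeros_like(mag)
    angle = np.rad2deg(theta) % 180  # quantize to 0/45/90/135 degrees
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # 45-degree direction
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # 135-degree direction
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    strong = nms >= t_high
    weak = (nms >= t_low) & ~strong
    edges = strong.copy()
    changed = True                           # grow edges into connected weak points
    while changed:
        changed = False
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if weak[i, j] and not edges[i, j] \
                        and edges[i - 1:i + 2, j - 1:j + 2].any():
                    edges[i, j] = True
                    changed = True
    return edges
```

For a magnitude ridge flanked by weaker responses, only the one-pixel-wide ridge survives suppression and thresholding.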
The image processing in the present invention may further include: edge sharpening, pseudo-color processing, contrast enhancement, etc., to facilitate subsequent identification. Wherein, the edge sharpening specifically comprises:
An edge image is extracted from the image in the video using a second-order difference operator, and the edge image is added to the source image to enhance the edges. The edge sharpening operation may also be performed using a first-order difference operator, i.e. the gradient.
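A minimal sketch of the second-order-difference sharpening described above, assuming a 4-neighbor Laplacian and an adjustable weight (the weight is not specified in the patent):

```python
import numpy as np

def laplacian_sharpen(image, weight=1.0):
    """Extract a second-order difference (Laplacian) edge image and add
    it back to the source image to enhance edges."""
    f = image.astype(float)
    lap = np.zeros_like(f)
    # 4-neighbor Laplacian on the interior pixels
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1]
                       + f[1:-1, :-2] + f[1:-1, 2:]
                       - 4 * f[1:-1, 1:-1])
    # subtracting the Laplacian overshoots at edges, enhancing them
    return f - weight * lap
```

Flat regions are left unchanged, while intensity steps gain over/undershoot, which is the visual edge-enhancement effect.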
After the image direction of the image in the video is automatically rotated, the method further includes correcting the image direction, identifying the edges of the rotated image, performing normalization, and carrying out denoising, smoothing and gray-level histogram processing on the normalized image.
Step 102: performing frame extraction on a video image to obtain a video frame sequence;
in the invention, a video frame sequence is obtained by extracting a plurality of frames from the interval preset frames of the video image so as to shoot a picture at intervals and combine the pictures to form a video.
Furthermore, in the process of frame extraction of the video image, the key frame is reserved, and the non-key frame is deleted, so that the storage capacity of the video image can be reduced, and the definition of the obtained image can be ensured to be unchanged after frame extraction.
Furthermore, the key frames are extracted based on the frame-difference Euclidean distance method, using the formula:

EulerDistDiff(i) = sqrt( Σ_{x,y} [f_i(x,y) − f_{i+1}(x,y)]² ) + sqrt( Σ_{x,y} [f_{i+1}(x,y) − f_{i+2}(x,y)]² )

where EulerDistDiff(i) represents the frame-difference Euclidean distance of the i-th frame image, N is the number of frame images in one shot of the video, and f_i, f_{i+1} and f_{i+2} are the gray values of the i-th, (i+1)-th and (i+2)-th frame images respectively.
The frame-difference Euclidean distance is calculated for each image frame by frame; one shot of N frame images yields N − 2 such distances in total. The extreme values of the N − 2 frame-difference Euclidean distances and the function values corresponding to the extreme values are calculated, the mean of the function values is computed, the function values corresponding to the extreme values are compared with the mean, and the points greater than the mean are taken out; the corresponding frame images are the candidate key frame images.
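The key-frame selection steps above can be sketched as follows, under the assumption that each distance combines frames i, i+1 and i+2 as two successive pairwise Euclidean distances (the patent's original formula is not reproduced legibly):

```python
import numpy as np

def frame_diff_distance(frames):
    """Frame-difference Euclidean distances for a shot of N gray frames:
    one distance per i = 0..N-3, built from frames i, i+1 and i+2."""
    f = np.asarray(frames, dtype=float)
    d01 = np.sqrt(((f[:-2] - f[1:-1]) ** 2).sum(axis=(1, 2)))
    d12 = np.sqrt(((f[1:-1] - f[2:]) ** 2).sum(axis=(1, 2)))
    return d01 + d12

def candidate_keyframes(frames):
    """Local maxima of the distance curve that exceed its mean mark the
    candidate key frames, as described above."""
    d = frame_diff_distance(frames)
    mean = d.mean()
    keep = []
    for i in range(1, len(d) - 1):
        if d[i] >= d[i - 1] and d[i] >= d[i + 1] and d[i] > mean:
            keep.append(i + 1)  # distance i is centred on frame i+1
    return keep
```

A shot with one abrupt content change yields distances peaking around the change, so the frames bracketing it become the candidates.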
Step 103: extracting video features, performing unsupervised learning on the extracted video features, and learning by taking a binary task of whether the video frame sequence has pixel mutation as supervision, wherein a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence;
in the invention, an unsupervised learning method is adopted for processing a sample set which is not classified and marked when a classifier is designed, and automatically classifying the extracted video features. Further, when the video feature classification is carried out, the extraction of static features and the extraction of motion features are included. Wherein the static features include: color features, texture features, shape features, and the like. The motion characteristics include: local motion generated by object objects in the scene and global motion based on movement of the video image collector. And performing unsupervised learning on the video characteristics, and performing learning classification by taking a binary classification task of whether the video frame sequence has pixel mutation as supervision, so that a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence.
Step 104: according to the result of unsupervised learning, enabling the result to correspond to the running state and the environmental state of equipment, enabling a non-mutation video frame to correspond to the normal state, enabling the mutation video frame to correspond to the abnormal state, and sending the running state and the environmental state of the equipment to a state processor;
according to the method, image processing is carried out according to the result of unsupervised learning, image recognition is carried out on the image obtained by the image processing, a character recognition result, a mark identification, a picture and an image background are obtained, the character recognition result, the mark identification, the picture and the image background are analyzed, the running state and the environment state of equipment are obtained, a video frame without mutation corresponds to a normal state, the video frame with mutation corresponds to an abnormal state, and the running state and the environment state of the equipment are sent to a state processor in a state code mode.
Device state analysis includes analysis of the supply voltage state, the switch state and the indicator light state. Supply-voltage state analysis, i.e. digital monitoring of the supply voltage, includes: identifying the information contained in the image by analyzing the image layout to obtain the character region in the image, cutting out the characters in the character region, recognizing them, and analyzing the character recognition result to obtain the supply voltage state of the device in the image. The device in the image is the device to be monitored; by the same method, contents such as the mark identification, the picture and the image background in the image can be recognized. When the device running state is extracted, it can be pushed, in the form of a state code, to each user and staff member connected to the system.
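The patent only states that states are pushed "in the form of a state code" without defining the codes themselves; the following mapping is therefore hypothetical and purely illustrative of the idea:

```python
# Hypothetical state-code table; the concrete codes are assumptions.
STATE_CODES = {
    ("supply_voltage", "normal"): 0x10,
    ("supply_voltage", "abnormal"): 0x11,
    ("switch", "normal"): 0x20,
    ("switch", "abnormal"): 0x21,
    ("indicator_light", "normal"): 0x30,
    ("indicator_light", "abnormal"): 0x31,
    ("environment", "fire"): 0x41,
    ("environment", "animal_invasion"): 0x42,
}

def encode_states(observations):
    """Map (state name, value) observations to the state codes pushed
    to the state processor and to connected users and staff."""
    return [STATE_CODES[o] for o in observations]
```

For example, an abnormal switch plus a detected fire would be pushed as two codes.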
Extracting the environmental state in the video includes extracting the fire fighting state and the animal invasion state in the video. The animals in the invasion state include mice, rabbits, squirrels, weasels, wild cats and the like.
Extracting the fire fighting state in the video includes: identifying the building type of each building in the video, performing natural language processing on the building type to obtain text data, automatically comparing the text data against the applicable standards, and identifying whether a fire hazard exists in the video; if a fire hazard exists, marking it in the video of the equipment to be monitored, analyzing the hazard problem, generating an environmental state result, and sending the result to the state processor in the form of a state code. Furthermore, the building type of each building in the video can be identified, a fire protection evaluation list loaded according to the building type, the fire protection conditions examined against the list, and whether a fire hazard exists judged accordingly.
In another embodiment of the invention, sensors such as a laser radar, a heat sensor or a smoke sensor may be configured to detect smoke and to extinguish the fire source within a predetermined local range when smoke is detected, preventing the fire from spreading. Furthermore, when fire, smoke or temperature accumulates to a certain degree, the fire extinguishing system is started: the spray head is oriented toward the fire source and opened, so that it automatically sprays water, or fire-fighting water from the fire hydrant, toward the place with the highest temperature, extinguishing the fire, lowering the temperature and clearing the smoke. In this way, a small fire that visual detection might miss is still handled. Personal injury and financial loss caused by fire are effectively avoided, and fires are truly nipped in the bud.
When the spray head is installed, the position giving the spray head its maximum spray coverage can be calculated from the shape and structure of the installation space; the area where flammable materials are located can also be obtained, and the spray head installed at the position where it covers the flammable materials over the largest area.
In the invention, extracting the environmental state in the video includes extracting the animal invasion state in the video: after image processing, the type of the pest in the image is identified, a corresponding animal invasion alarm signal is generated according to the obtained type recognition result, and the signal is sent to the state processor as the animal invasion state within the environmental state, so as to warn the staff in advance.
In the embodiment of the invention, under the condition that the environment running state is the fire fighting state, the pre-stored fire fighting rectification strategy is obtained, the hidden danger processing data is automatically generated according to the fire fighting rectification strategy, and the hidden danger processing data is sent to each fire fighting subsystem.
Furthermore, the hidden-danger processing data is pushed to experts in the relevant specialties (building, fire protection, electrical, HVAC, fire water supply, and so on) for review, and the expert feedback is stored in the fire fighting rectification strategy.
The system also comprises a pre-stored fire safety assessment system model, fire law standard specifications, an assessment system index detection method, a fire hazard base and a fire correction strategy, and hidden hazard processing data are generated according to basic building information, equipment monitoring videos, the fire correction strategy and expert feedback opinions.
In the embodiment of the invention, when the environment running state is the animal invasion state, animal invasion prevention and control is carried out according to the type identification result. The invading animal is typically a mouse, rabbit, squirrel, weasel, wild cat, or the like.
In the present invention, the distance between the intruding animal and the animal invasion prevention and control device, and the direction of the intruding animal relative to the device, are identified, wherein the animal invasion prevention and control device includes at least one of a medicine spraying device, a radiation generating device, or a light/odor/color control device. The position and posture of the device are then adjusted according to the identified distance and direction. Target animals include, but are not limited to, cockroaches and rats.
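The distance-and-direction adjustment described above amounts to computing a bearing and a range from the control device to the detected target. A minimal sketch, assuming planar coordinates (the coordinate frame and function name are illustrative):

```python
import math

def aim(device_xy, target_xy):
    """Pan angle (degrees, counterclockwise from the x-axis) and distance
    from the animal invasion prevention and control device to the target."""
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)
```

The returned angle would drive the device's pan posture, while the distance could select, for example, the spraying intensity.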
In another embodiment of the invention, the invasion state of the animal can be detected based on the millimeter wave radar, the operation is simple, and the detection result is more timely and accurate.
Step 105: and judging the running state of the equipment according to the preset threshold value of the equipment, and sending alarm information under the condition that the running state of the equipment reaches the preset threshold value.
In the invention, the system can acquire the numerical value input by the user as the preset threshold value of the equipment, so that the user can set the threshold value by himself.
In the invention, the system can also detect an animal invasion based on infrared imaging technology, and can eliminate the intruding animal at the first opportunity when an invasion is detected.
In another embodiment of the invention, a temperature sensor, a pressure sensor and a combustible gas sensor can be used for environment monitoring, and the temperature can be adjusted and alarmed according to a preset temperature regulation strategy under the condition that the detection value of the temperature sensor reaches a temperature threshold value; and under the condition that the detection value of the pressure sensor reaches the pressure threshold value, adjusting the pressure according to a preset pressure regulation strategy and giving an alarm. And under the condition that the concentration of the combustible gas reaches a preset threshold value, alarming, closing a combustible gas switch, and driving a door and window switch to open the door and window.
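The sensor-driven regulation just described can be sketched as a simple rule table. The reading/threshold keys and action names are illustrative assumptions:

```python
def regulate(readings, thresholds):
    """Map sensor readings to the actions described above: adjust and alarm
    for temperature and pressure, and alarm, close the combustible gas
    switch, and open the doors and windows for combustible gas."""
    actions = []
    if readings["temperature"] >= thresholds["temperature"]:
        actions += ["adjust_temperature", "alarm"]
    if readings["pressure"] >= thresholds["pressure"]:
        actions += ["adjust_pressure", "alarm"]
    if readings["gas"] >= thresholds["gas"]:
        actions += ["alarm", "close_gas_switch", "open_doors_windows"]
    return actions
```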
In the embodiment of the invention, the global map can be drawn based on the equipment running state and the environment state, so that the whole monitoring result is displayed more clearly and intuitively.
In a second aspect of the present invention, there is provided a machine room monitoring device based on a ubiquitous power internet of things, as shown in fig. 2, including:
an image obtaining module 201, configured to obtain a video image of a device to be monitored;
in the present invention, the image acquisition module 201 is configured to acquire a video image of a device to be monitored in real time by using an intelligent camera; and the method can also be used for acquiring the uploaded existing video image, and the related operation can be more convenient for the user through the two methods.
Further, when the image obtaining module 201 collects video of the device to be monitored in real time with an intelligent camera, the method further includes: when the brightness of the ambient light is lower than a set brightness, providing supplemental light for the intelligent camera that collects the video of the device to be monitored.
When the video of the device to be monitored is acquired, image processing is carried out on the image in the video, wherein the image processing comprises automatic image direction rotation, image edge identification and image beautification.
Further, identifying the image edge may include:
and filtering the image in the video by adopting a Gaussian function to obtain a smooth data array.
The smoothing step can be written as:

S(x, y) = G(x, y; σ) * I(x, y)

where I(x, y) is the image in the video, S(x, y) is the smoothed data array, and σ is the dispersion parameter of the Gaussian function, reflecting the degree of smoothing.

Gradient calculation is then performed on the smoothed data array. The partial derivatives with respect to x and y are approximated by 2 × 2 first-order finite differences, yielding two partial-derivative arrays P(i, j) and Q(i, j):

P(i, j) ≈ [S(i, j + 1) − S(i, j) + S(i + 1, j + 1) − S(i + 1, j)] / 2

Q(i, j) ≈ [S(i, j) − S(i + 1, j) + S(i, j + 1) − S(i + 1, j + 1)] / 2

The finite differences are averaged over the 2 × 2 square, so that the partial-derivative gradients with respect to x and y are computed at the same point in the image. The gradient magnitude and azimuth are, respectively:

M(i, j) = √(P(i, j)² + Q(i, j)²)

θ(i, j) = arctan(Q(i, j), P(i, j))

where the arctangent function takes two parameters and the computed result is an angle whose value range covers the full circle.
Non-maximum suppression is then performed. For each pixel of the gradient magnitude image M(x, y), the gradient magnitudes of the two adjacent pixels along that pixel's gradient direction are computed; if the current pixel's gradient magnitude is not less than both of these values, the current pixel is kept as an edge point, otherwise it is marked as a non-edge pixel. This thins the image edges to a width of one pixel, and the image NMS(x, y) is obtained from the gradient magnitude image M(x, y) through non-maximum suppression.
Finally, double-threshold detection and edge connection are carried out. Specifically, a high threshold Th and a low threshold Tl are used to extract edges: each pixel of the NMS(x, y) image is compared against the high and low thresholds to obtain the strong edge points and weak edge points of the edge image. Edges are tracked through the strong edge points; when an edge breaks, edge points are searched for in the 8-neighborhood of the corresponding position among the weak edge points to bridge the gap. Searching and tracking continue until the discontinuities among the strong edge points of the high-threshold image are connected, producing the final edge image.
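The 2 × 2 finite-difference gradient and the double-threshold classification described above can be sketched in pure Python as follows. Non-maximum suppression and edge tracking are omitted for brevity, and the input is a nested list of gray values:

```python
import math

def gradient_2x2(S):
    """2 x 2 first-order finite-difference gradient of a smoothed image S.

    Returns the partial-derivative arrays P and Q, the gradient magnitude M,
    and the direction theta, each of size (rows - 1) x (cols - 1), with the
    differences averaged over each 2 x 2 square.
    """
    rows, cols = len(S), len(S[0])
    P = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    Q = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    M = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    theta = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            P[i][j] = (S[i][j + 1] - S[i][j] + S[i + 1][j + 1] - S[i + 1][j]) / 2.0
            Q[i][j] = (S[i][j] - S[i + 1][j] + S[i][j + 1] - S[i + 1][j + 1]) / 2.0
            M[i][j] = math.hypot(P[i][j], Q[i][j])
            theta[i][j] = math.atan2(Q[i][j], P[i][j])  # two-argument arctangent
    return P, Q, M, theta

def double_threshold(M, tl, th):
    """Classify each magnitude as strong edge (2), weak edge (1) or none (0)."""
    return [[2 if m >= th else (1 if m >= tl else 0) for m in row] for row in M]

# A tiny image with a vertical brightness step between columns 1 and 2.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
P, Q, M, theta = gradient_2x2(img)
edges = double_threshold(M, tl=2.0, th=8.0)  # step column becomes a strong edge
```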
Image processing in the present invention may further include edge sharpening, pseudo-color processing, contrast enhancement, and the like, to facilitate subsequent recognition. An edge image is extracted from each video frame using a second-order difference operator, and the edge image is added to the source image to enhance its edges. The edge sharpening operation may also be performed using a first-order difference operator, i.e., the gradient.
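A minimal sketch of the second-order-difference edge enhancement just described, using the 4-neighbour Laplacian on interior pixels. Since the edge image is the negated Laplacian, adding it back to the source amounts to subtracting the Laplacian:

```python
def laplacian_sharpen(img):
    """Enhance edges by adding the second-order-difference edge image
    (the negated 4-neighbour Laplacian) back to the source image."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            lap = (img[i - 1][j] + img[i + 1][j]
                   + img[i][j - 1] + img[i][j + 1] - 4 * img[i][j])
            out[i][j] = img[i][j] - lap  # source minus Laplacian sharpens
    return out
```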
After the image direction is automatically rotated for the image in the video, the method also comprises the steps of correcting the image direction, identifying the edge of the image after automatic rotation, carrying out normalization processing, and carrying out denoising, smoothing and gray level histogram processing on the image after normalization processing.
The video compression module 202 is configured to perform frame extraction on the video image to obtain a video frame sequence;
in the present invention, the video compression module 202 is configured to extract a plurality of frames at preset intervals from the video image to obtain the video frame sequence; that is, pictures are taken at intervals and combined to form the video.
Further, the video compression module 202 retains the key frames and deletes the non-key frames during the process of frame extraction of the video image, which not only can reduce the storage capacity of the video image, but also can ensure that the definition of the obtained image is unchanged after frame extraction.
Furthermore, the key frames are extracted based on the frame-difference Euclidean distance method, using the formula:

EulerDistDiff(i) = √( Σ [f_i − f_{i+1}]² ) + √( Σ [f_{i+1} − f_{i+2}]² )

where EulerDistDiff(i) denotes the frame-difference Euclidean distance of the i-th frame image, N is the number of frame images in one shot of the video, f_i, f_{i+1}, and f_{i+2} are the gray values of the i-th, (i+1)-th, and (i+2)-th frame images respectively, and the sums are taken over all pixels.
The frame-difference Euclidean distance is computed frame by frame; one shot of N frame images therefore has N − 2 such distances. The extrema of these N − 2 distances and the function values corresponding to the extrema are computed, the mean of the function values is taken, each extremum's function value is compared with the mean, and the points greater than the mean are selected; the corresponding frame images are the candidate key frame images.
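A sketch of this key-frame selection. Since the original formula image is unavailable, the sketch assumes the distance for frame i combines the Euclidean differences between frames i, i + 1 and i + 1, i + 2, and that frames are flat lists of gray values:

```python
import math

def frame_diff_distances(frames):
    """Frame-difference Euclidean distances: N frames yield N - 2 values."""
    dists = []
    for i in range(len(frames) - 2):
        d1 = math.sqrt(sum((a - b) ** 2 for a, b in zip(frames[i], frames[i + 1])))
        d2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(frames[i + 1], frames[i + 2])))
        dists.append(d1 + d2)
    return dists

def candidate_key_frames(dists):
    """Indices of local extrema whose distance value exceeds the mean."""
    mean = sum(dists) / len(dists)
    keys = []
    for i in range(1, len(dists) - 1):
        is_max = dists[i - 1] <= dists[i] >= dists[i + 1]
        is_min = dists[i - 1] >= dists[i] <= dists[i + 1]
        if (is_max or is_min) and dists[i] > mean:
            keys.append(i)
    return keys
```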
The learning module 203 is configured to extract video features, perform unsupervised learning on the extracted video features, and perform learning by taking a binary task of whether the video frame sequence has a pixel mutation as a supervision, where a positive sample is the non-mutated video frame sequence and a negative sample is the mutated video frame sequence;
in the present invention, the learning module 203 adopts an unsupervised learning method for processing the sample set that is not classified and labeled when designing the classifier, and automatically classifies the extracted video features.
Further, the learning module 203, when performing the video feature classification, includes extracting static features and extracting motion features. Wherein the static features include: color features, texture features, shape features, and the like. The motion characteristics include: local motion generated by object objects in the scene and global motion based on movement of the video image collector. And performing unsupervised learning on the video characteristics, and performing learning classification by taking a binary classification task of whether the video frame sequence has pixel mutation as supervision, so that a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence.
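The binary supervision signal described above, i.e., whether a frame sequence contains a pixel mutation, can be generated automatically. A minimal sketch, with the mutation threshold as an assumed parameter:

```python
def label_sequences(sequences, diff_threshold):
    """Label each video-frame sequence for the binary task: 1 (negative
    sample) if any adjacent pair of frames differs by more than the
    threshold at some pixel (a mutation), else 0 (positive sample)."""
    labels = []
    for seq in sequences:
        mutated = any(
            max(abs(a - b) for a, b in zip(f1, f2)) > diff_threshold
            for f1, f2 in zip(seq, seq[1:])
        )
        labels.append(1 if mutated else 0)
    return labels
```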
The state analysis module 204 is configured to, according to a result of the unsupervised learning, correspond the result to an equipment running state and an environmental state, correspond a non-mutation video frame to a normal state, correspond the mutation video frame to an abnormal state, and send the equipment running state and the environmental state to the state processor;
in the present invention, the state analysis module 204 is configured to perform image processing according to a result of unsupervised learning, perform image recognition on an image obtained by the image processing to obtain a character recognition result, a logo, a picture, and an image background, analyze the character recognition result, the logo, the picture, and the image background to obtain an equipment operation state and an environment state, correspond a non-mutation video frame to a normal state, correspond the mutation video frame to an abnormal state, and send the equipment operation state and the environment state to the state processor in a state code manner.
Device state analysis includes analysis of the power supply voltage state, the switch state, and the indicator light state. Analyzing the power supply voltage state, that is, digitally monitoring the supply voltage, includes: identifying the information contained in the image by analyzing the image layout to obtain the character region, cutting out the characters in the character region, recognizing the characters, and analyzing the character recognition result to obtain the power supply voltage state of the equipment in the image. The equipment in the image is the equipment to be monitored; with this method, content such as the logo, picture, and image background in the image can also be identified. Once the device operating state is extracted, it can be pushed, in the form of state codes, to each user and state processor connected to the apparatus.
Taking the analysis of the power supply voltage state as an example, the collected electric meter image is subjected to image processing, including image graying, removal of the meter's outer frame, image enhancement, edge detection, and inclination correction, after which the supply voltage value is extracted.
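The graying step of the meter-image pipeline can be sketched as a weighted luma conversion. The BT.601 weights used here are a common choice; the patent does not specify them:

```python
def to_gray(rgb_img):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to gray
    values using the ITU-R BT.601 luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_img]
```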
Extracting an environmental state in a video, comprising: and extracting the fire fighting state and the animal invasion state in the video.
Wherein, extracting the fire protection state from the video comprises: identifying the building type of each building in the video, obtaining text data from the building type through natural language processing, automatically comparing the text data against the applicable standards, and identifying whether a fire hazard exists in the video. If a fire hazard exists, the hazard is marked in the video of the equipment to be monitored, the hazard problem is analyzed, an environmental state result is generated, and the result is sent to the state processor in the form of a state code. Furthermore, the building type of each building in the video can be identified, a fire protection evaluation list loaded according to the building type, and a fire protection inspection carried out against the list to judge whether a fire hazard exists.
In another embodiment of the present invention, a sensor such as a lidar or a smoke sensor may be configured to detect smoke and extinguish the fire source within a predetermined local range as soon as smoke is detected, thereby limiting fire damage. Furthermore, when fire, smoke, or temperature accumulates to a certain level, the fire extinguishing system is started: the nozzle is oriented toward the fire source and opened, so that it automatically sprays water (or fire-fighting water from a hydrant) toward the location of highest temperature to extinguish the fire, lower the temperature, and clear the smoke. This approach avoids missing a fire that is too small to be detected visually, effectively prevents personal injury and financial loss, and stamps out the fire at its earliest stage.
When the nozzle is installed, the position that gives the maximum coverage area during spraying can be calculated from the spatial shape and structure of the installation site. The area where flammable materials are located within the site can also be obtained, and the nozzle installed at the position where it covers the largest possible area of flammable materials.
In the invention, extracting the environmental state from the video includes extracting the animal invasion state: after image processing, the type of the intruding animal (for example, an insect) in the image is identified, a corresponding animal invasion alarm signal is generated according to the type identification result, and the alarm signal is sent to the state processor as the animal invasion state within the environmental state, so as to warn the workers.
And the fire control module 205 is configured to, in a case that the environment operation state is the fire protection state, acquire a pre-stored fire protection rectification strategy, automatically generate hidden danger processing data according to the fire protection rectification strategy, and send the hidden danger processing data to each fire protection subsystem.
Further, the fire control module 205 is configured to push the hidden-danger processing data to experts in the relevant specialties (building, fire protection, electrical, HVAC, fire water supply, and so on) for review, and to store the expert feedback in the fire fighting rectification strategy.
In the device, the fire control module 205 further comprises a pre-stored fire safety assessment system model, fire law standard specifications, assessment system index detection method, fire hazard library and fire correction strategy, and generates hazard processing data according to basic building information, equipment monitoring video, fire correction strategy and expert feedback.
And the pest control module 206 is used for carrying out animal invasion control according to the type recognition result under the condition that the environment operation state is an animal invasion state.
In the present invention, the pest control module 206 is configured to select a corresponding animal intrusion control device according to the type recognition result of the animal intrusion state, recognize a distance between the animal intrusion state and the animal intrusion control device and a direction of the animal intrusion state relative to the animal intrusion control device, and adjust a position and a posture of the animal intrusion control device according to the distance between the animal intrusion state and the animal intrusion control device and the direction of the animal intrusion state relative to the animal intrusion control device.
Further, the distance between the intruding animal and the animal invasion prevention and control device, and the direction of the intruding animal relative to the device, are identified, wherein the animal invasion prevention and control device includes at least one of a medicine spraying device, a radiation generating device, or a light/odor/color control device. The position and posture of the device are then adjusted according to the identified distance and direction. Target animals include, but are not limited to, cockroaches and rats.
In another embodiment of the invention, the invasion state of the animal can be detected based on the millimeter wave radar, the operation is simple, and the detection result is more timely and accurate.
The device control module 207 is configured to determine the device operating state according to a device preset threshold, and send an alarm message when the device operating state reaches the preset threshold.
In the present invention, the device control module 207 may acquire a numerical value input by a user as a device preset threshold, so that the user may set the threshold by himself.
In the present invention, the device can also detect an animal invasion based on infrared imaging technology, and can eliminate the intruding animal at the first opportunity when an invasion is detected.
In another embodiment of the invention, a temperature sensor, a pressure sensor and a combustible gas sensor can be used for environment monitoring, and the temperature can be adjusted and alarmed according to a preset temperature regulation strategy under the condition that the detection value of the temperature sensor reaches a temperature threshold value; and under the condition that the detection value of the pressure sensor reaches the pressure threshold value, adjusting the pressure according to a preset pressure regulation strategy and giving an alarm. And under the condition that the concentration of the combustible gas reaches a preset threshold value, alarming, closing a combustible gas switch, and driving a door and window switch to open the door and window.
In the embodiment of the invention, the global map can be drawn based on the equipment running state and the environment state, so that the whole monitoring result is displayed more clearly and intuitively.
According to a third aspect of the present invention, there is provided an electronic device, as shown in fig. 3, including a memory 301, a processor 302, and a computer program stored on the memory 301 and executable on the processor 302, wherein the processor 302 executes the program to implement a method including:
acquiring a video image of a device to be monitored;
performing frame extraction on the video image to obtain a video frame sequence;
extracting video features, performing unsupervised learning on the extracted video features, and learning by taking a binary task of whether the video frame sequence has pixel mutation as supervision, wherein a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence;
according to the result of unsupervised learning, enabling the result to correspond to the running state and the environmental state of equipment, enabling a non-mutation video frame to correspond to the normal state, enabling the mutation video frame to correspond to the abnormal state, and sending the running state and the environmental state of the equipment to a state processor;
under the condition that the environment running state is a fire fighting state, acquiring a pre-stored fire fighting rectification strategy, automatically generating hidden danger processing data according to the fire fighting rectification strategy, and sending the hidden danger processing data to each fire fighting subsystem;
under the condition that the environment running state is an animal invasion state, carrying out animal invasion prevention and control according to the type identification result of the animal invasion state;
and judging the running state of the equipment according to a preset threshold value of the equipment, and sending alarm information under the condition that the running state of the equipment reaches the preset threshold value.
Further, the acquiring a video image of the device to be monitored includes: the intelligent camera is adopted to collect the video image of the equipment to be monitored in real time, and when the brightness of the external environment light is smaller than the set brightness, the intelligent camera for collecting the video of the equipment to be monitored is supplemented with light.
Further, the corresponding the result to the device running state and the environmental state according to the result of the unsupervised learning includes:
and according to the result of unsupervised learning, carrying out image processing, carrying out image recognition on the image obtained by image processing to obtain a character recognition result, a mark identifier, a picture and an image background, and analyzing the character recognition result, the mark identifier, the picture and the image background to obtain the running state and the environment state of the equipment.
Further, the device operating state includes:
power supply voltage state, switch state, indicator light state;
the environmental states include: fire fighting conditions and animal invasion conditions.
Further, the image processing includes: automatically rotating the image direction, identifying the image edge and beautifying the image.
Further, in the case that the environmental operation state is an animal invasion state, the animal invasion prevention and treatment according to the type recognition result of the animal invasion state includes:
selecting a corresponding animal invasion prevention and control device according to the type recognition result of the animal invasion state, recognizing the distance between the animal invasion state and the animal invasion prevention and control device and the direction of the animal invasion state relative to the animal invasion prevention and control device, and adjusting the position and the posture of the animal invasion prevention and control device according to the distance between the animal invasion state and the animal invasion prevention and control device and the direction of the animal invasion state relative to the animal invasion prevention and control device.
Further, the method further comprises: and detecting the invasion state of the animal based on the millimeter wave radar.
As used herein, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the above detailed description of the technical solution of the present invention with the help of preferred embodiments is illustrative and not restrictive. On the basis of reading the description of the invention, a person skilled in the art can modify the technical solutions described in the embodiments, or make equivalent substitutions for some technical features; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for managing and controlling the peripheral environment of equipment based on the Internet of things of electric power and artificial intelligence is characterized by comprising the following steps:
acquiring a video image of a device to be monitored;
performing frame extraction on the video image to obtain a video frame sequence;
extracting video features, performing unsupervised learning on the extracted video features, and learning by taking a binary task of whether the video frame sequence has pixel mutation as supervision, wherein a positive sample is the non-mutated video frame sequence, and a negative sample is the mutated video frame sequence;
according to the result of unsupervised learning, enabling the result to correspond to an equipment running state and an environment running state, enabling a non-mutation video frame to correspond to a normal state, enabling the mutation video frame to correspond to an abnormal state, and sending the equipment running state and the environment state to a state processor;
and judging the running state of the equipment according to a preset threshold value of the equipment, and sending alarm information under the condition that the running state of the equipment reaches the preset threshold value.
2. The method of claim 1,
the acquiring of the video image of the device to be monitored comprises the following steps: the intelligent camera is adopted to collect the video image of the equipment to be monitored in real time, and when the brightness of the external environment light is smaller than the set brightness, the intelligent camera for collecting the video of the equipment to be monitored is supplemented with light.
3. The method of claim 1,
the step of enabling the result to correspond to the running state and the environmental state of the equipment according to the result of the unsupervised learning comprises the following steps:
and according to the result of unsupervised learning, carrying out image processing, carrying out image recognition on the image obtained by image processing to obtain a character recognition result, a mark identifier, a picture and an image background, and analyzing the character recognition result, the mark identifier, the picture and the image background to obtain the running state and the environment state of the equipment.
4. The method of claim 1,
and under the condition that the environment running state is a fire fighting state, acquiring a pre-stored fire fighting rectification strategy, automatically generating hidden danger processing data according to the fire fighting rectification strategy, and sending the hidden danger processing data to each fire fighting subsystem.
5. The method of claim 3,
the image processing includes: automatically rotating the image direction, identifying the image edge and beautifying the image.
6. The method of claim 5,
the identifying of the image edge comprises:
filtering images in the video by adopting a Gaussian function to obtain a smooth data array;
performing gradient calculation according to the smooth data array;
carrying out non-maximum suppression, and calculating the gradient amplitude of two adjacent pixels in the pixel gradient direction for each pixel on the image;
and carrying out double-threshold detection and edge connection, and connecting the discontinuities of the edges in the strong edge points of the high-threshold value image to obtain an edge image.
7. The method according to claim 1, wherein in the case where the environmental operation state is an animal invasion state, animal invasion control is performed based on a type recognition result of the animal invasion state.
8. The method of claim 7, wherein
the performing animal invasion prevention and control according to the type recognition result of the animal invasion state when the environmental operating state is an animal invasion state comprises the following steps:
selecting a corresponding animal invasion control device according to the type recognition result; identifying the distance of the invading animal from the control device and its direction relative to the control device; and adjusting the position and posture of the control device according to that distance and direction.
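The selection and aiming logic of claim 8 can be sketched in a 2-D site coordinate frame. The animal-to-device mapping and coordinate convention below are hypothetical illustrations, not specified by the claim:

```python
import math

# Hypothetical mapping from the recognized animal type to a control device.
CONTROL_DEVICES = {
    "bird": "ultrasonic repeller",
    "rodent": "sonic deterrent",
    "snake": "vibration deterrent",
}

def select_device(animal_type: str) -> str:
    """Pick the control device matching the type recognition result."""
    return CONTROL_DEVICES[animal_type]

def aim(device_xy, target_xy):
    """Return (bearing_deg, distance) from the control device to the
    detected animal; a pan actuator would be driven to bearing_deg and
    the device's output scaled with distance to adjust its posture."""
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0, math.hypot(dx, dy)
```

Bearing is measured counter-clockwise from the positive x-axis; any consistent convention works as long as the actuator uses the same one.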
9. A machine room monitoring apparatus based on the ubiquitous power Internet of Things, characterized by comprising:
an image acquisition module, configured to acquire a video image of the device to be monitored;
a video compression module, configured to extract frames from the video image to obtain a video frame sequence;
a learning module, configured to extract video features and perform unsupervised learning on the extracted video features, using as supervision the binary task of whether the video frame sequence contains a pixel mutation, wherein a positive sample is a video frame sequence without mutation and a negative sample is a video frame sequence with mutation;
a state analysis module, configured to map the result of the unsupervised learning to the operating state and environmental state of the device, mapping video frames without mutation to a normal state and video frames with mutation to an abnormal state, and to send the operating state and environmental state of the device to a state processor; and
a device control module, configured to evaluate the operating state of the device against a preset threshold of the device and to send alarm information when the operating state of the device reaches the preset threshold.
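The learning module's supervision signal, a binary "did the frame sequence mutate" label derived from the video itself, can be sketched with a simple frame-difference criterion. The mean-absolute-difference statistic and its threshold are assumed choices; the patent does not specify how a pixel mutation is detected:

```python
import numpy as np

def has_mutation(frames, thresh=30.0):
    """A frame sequence is treated as containing a pixel mutation if any
    pair of consecutive frames differs by more than `thresh` in mean
    absolute pixel value."""
    return any(
        np.abs(b.astype(float) - a.astype(float)).mean() > thresh
        for a, b in zip(frames, frames[1:])
    )

def label_sequence(frames, thresh=30.0):
    """Binary supervision label: 1 = positive sample (no mutation,
    normal state), 0 = negative sample (mutation, abnormal state)."""
    return 0 if has_mutation(frames, thresh) else 1
```

Because the label comes from the frames themselves, no manual annotation is needed, which is what makes the scheme self-supervised.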
10. The apparatus of claim 9, further comprising:
a fire control module, configured to, when the environmental operating state is a fire state, acquire a pre-stored fire rectification strategy, automatically generate hidden-danger handling data according to the fire rectification strategy, and send the hidden-danger handling data to each fire subsystem; and
a pest control module, configured to, when the environmental operating state is an animal invasion state, perform animal invasion prevention and control according to the type recognition result of the animal invasion state.
CN202110747744.XA 2021-07-02 2021-07-02 Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence Withdrawn CN113435371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110747744.XA CN113435371A (en) 2021-07-02 2021-07-02 Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110747744.XA CN113435371A (en) 2021-07-02 2021-07-02 Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence

Publications (1)

Publication Number Publication Date
CN113435371A true CN113435371A (en) 2021-09-24

Family

ID=77758557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110747744.XA Withdrawn CN113435371A (en) 2021-07-02 2021-07-02 Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence

Country Status (1)

Country Link
CN (1) CN113435371A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114999097A (en) * 2022-06-07 2022-09-02 中联科锐消防科技有限公司 Method and system for evaluating effectiveness of smoke fire detector in grille suspended ceiling

Similar Documents

Publication Publication Date Title
CN112418069B (en) High-altitude parabolic detection method and device, computer equipment and storage medium
CN109461168B (en) Target object identification method and device, storage medium and electronic device
US9224278B2 (en) Automated method and system for detecting the presence of a lit cigarette
CN108710126B (en) Automatic target detection and eviction method and system
US10195008B2 (en) System, device and method for observing piglet birth
KR101716365B1 (en) Module-based intelligent video surveillance system and antitheft method for real-time detection of livestock theft
CN107911653A (en) The module of intelligent video monitoring in institute, system, method and storage medium
Xia et al. In situ detection of small-size insect pests sampled on traps using multifractal analysis
CN104969875A (en) Pet behavior detection system based on image change
CN111428681A (en) Intelligent epidemic prevention system
US20190096066A1 (en) System and Method for Segmenting Out Multiple Body Parts
CN114664048B (en) Fire monitoring and fire early warning method based on satellite remote sensing monitoring
JP5042177B2 (en) Image sensor
CN115880598B (en) Ground image detection method and related device based on unmanned aerial vehicle
Tan et al. Embedded human detection system based on thermal and infrared sensors for anti-poaching application
CN116704411A (en) Security control method, system and storage medium based on Internet of things
CN114120171A (en) Fire smoke detection method, device and equipment based on video frame and storage medium
CN116403377A (en) Abnormal behavior and hidden danger detection device in public place
CN113435371A (en) Equipment surrounding environment control method, device and equipment based on power internet of things and artificial intelligence
Jayashree et al. System to detect fire under surveillanced area
CN114283367B (en) Artificial intelligent open fire detection method and system for garden fire early warning
Rajan et al. Forest fire detection using machine learning
WO2020172546A1 (en) Machine vision sensor system for optimal growing conditions
US20190096045A1 (en) System and Method for Realizing Increased Granularity in Images of a Dataset
CN112837471A (en) Security monitoring method and device for internet contract room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210924