CN113902993A - Environmental state analysis method and system based on environmental monitoring


Info

Publication number
CN113902993A
Authority
CN
China
Prior art keywords
target; monitoring video; environment monitoring; frame; environmental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111186450.0A
Other languages
Chinese (zh)
Inventor
李薇薇
郑信江
赵茜茜
郑皖予
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111186450.0A
Publication of CN113902993A
Legal status: Withdrawn

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Abstract

The invention provides an environmental state analysis method and system based on environmental monitoring, relating to the technical field of environmental monitoring. In the invention, a plurality of environment monitoring videos are subjected to de-duplication screening to obtain at least one corresponding target environment monitoring video. For each target environment monitoring video, action recognition processing is performed to obtain target action characteristic information of the monitored object corresponding to that video. The target action characteristic information corresponding to each target environment monitoring video is then analyzed to obtain environment safety state information of the target monitoring area, where the environment safety state information represents the degree of environmental safety of the target monitoring area. On this basis, the method can solve the prior-art problems of low computing efficiency and wasted computing resources that easily arise when determining the environmental safety state of a monitored area.

Description

Environmental state analysis method and system based on environmental monitoring
Technical Field
The invention relates to the technical field of environmental monitoring, in particular to an environmental state analysis method and system based on environmental monitoring.
Background
Environmental monitoring technology is applied in many fields, such as safety assurance, accident tracing and accident prediction. In the prior art, many monitoring cameras are therefore deployed in important areas, such as shopping malls and traffic intersections, to perform image monitoring.
In the prior art, the monitoring video frames acquired by a monitoring camera are generally processed directly; for example, every frame is used to analyze the corresponding environmental security state. As a result, the analysis suffers from low computing efficiency and wasted computing resources.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an environmental status analysis method and system based on environmental monitoring, so as to solve the problems in the prior art that the computing efficiency is low and the computing resources are wasted when determining the environmental security status of the monitored area.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
an environmental state analysis method based on environmental monitoring is applied to an environmental monitoring background server, the environmental monitoring background server is in communication connection with a plurality of environmental monitoring devices, and the environmental state analysis method based on environmental monitoring comprises the following steps:
based on object identification results corresponding to a plurality of environment monitoring videos sent by a plurality of environment monitoring devices, performing duplicate removal screening processing on the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video, wherein each environment monitoring device is respectively arranged at different positions of a target monitoring area, each environment monitoring device is used for performing object monitoring on at least part of positions of the target monitoring area to obtain the corresponding environment monitoring video, each target environment monitoring video comprises at least one environment monitoring video frame, and each environment monitoring video frame in the at least one environment monitoring video frame has the same monitored object;
for each target environment monitoring video in the at least one target environment monitoring video, performing action recognition processing on the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
analyzing the target action characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain environment safety state information of the target monitoring area, wherein the environment safety state information is used for representing environment safety degree information of the target monitoring area.
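For illustration only, the three steps of the method above can be sketched as a minimal pipeline. The function names and the callable parameters below are assumptions; the patent names the steps but does not prescribe implementations:

```python
def analyze_environment(videos, deduplicate, recognize_action, assess_safety):
    """Sketch of the claimed three-step flow (all callables are placeholders).

    videos: the environment monitoring videos sent by the devices.
    """
    # Step 1: de-duplication screening based on object identification results.
    target_videos = deduplicate(videos)
    # Step 2: action recognition per target environment monitoring video.
    features = [recognize_action(v) for v in target_videos]
    # Step 3: analyze the action characteristic information into a safety state.
    return assess_safety(features)
```

With applied stubs this reduces the workload exactly as described: only the de-duplicated target videos reach the recognition and analysis stages.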
In some preferred embodiments, in the environmental status analysis method based on environmental monitoring, the step of performing motion recognition processing on each target environment monitoring video of the at least one target environment monitoring video to obtain target motion characteristic information of a monitored object corresponding to the target environment monitoring video includes:
obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model;
and, for each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
In some preferred embodiments, in the environmental state analysis method based on environmental monitoring, the step of performing, for each target environment monitoring video, motion recognition processing on the target environment monitoring video based on the motion recognition model to obtain target motion characteristic information of a monitored object corresponding to the target environment monitoring video includes:
counting, for each target environment monitoring video, the number of environment monitoring video frames included in the target environment monitoring video to obtain a statistical frame number corresponding to the target environment monitoring video, and determining the relative size relationship between the statistical frame number and a preset statistical frame number threshold;
for each target environment monitoring video, if the statistical frame number corresponding to the target environment monitoring video is less than or equal to the statistical frame number threshold, sequentially performing action recognition processing on each frame of environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of environment monitoring video frame included in the target environment monitoring video, and sequentially combining action recognition results corresponding to each frame of environment monitoring video frame included in the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
for each target environment monitoring video, if the number of the statistical frames corresponding to the target environment monitoring video is greater than the statistical frame number threshold, traversing each frame of environment monitoring video frame included in the target environment monitoring video, and judging whether the interframe difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame is less than a predetermined difference threshold, and when the interframe difference value is less than the difference threshold, screening out the previous frame of environment monitoring video frame to obtain at least one target environment monitoring video frame corresponding to the target environment monitoring video;
and for each target environment monitoring video, sequentially performing action recognition processing on each frame of target environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video, and sequentially combining action recognition results corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
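The frame-screening logic above (keep every frame for short videos, otherwise drop a frame when its successor barely differs from it) can be sketched as follows. Retaining the final frame, which has no successor to compare against, is an assumption the patent leaves open:

```python
def screen_frames(frames, frame_threshold, diff_threshold, inter_frame_diff):
    """Return the frames to feed to action recognition.

    If the video has at most frame_threshold frames, keep every frame.
    Otherwise traverse the frames and screen out each previous frame whose
    inter-frame difference to the current frame is below diff_threshold,
    as described in the embodiment above.
    """
    if len(frames) <= frame_threshold:
        return list(frames)
    kept = []
    for i in range(1, len(frames)):
        # Keep the previous frame only if the current frame differs enough.
        if inter_frame_diff(frames[i], frames[i - 1]) >= diff_threshold:
            kept.append(frames[i - 1])
    kept.append(frames[-1])  # assumed: the last frame is always retained
    return kept
```

For example, with integer "frames" and an absolute-difference function, a run of near-identical frames collapses to its final representative.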
In some preferred embodiments, in the method for analyzing an environmental status based on environmental monitoring, for each target environmental monitoring video, if the statistical frame number corresponding to the target environmental monitoring video is greater than the statistical frame number threshold, traversing each frame of environmental monitoring video frames included in the target environmental monitoring video, and determining whether an inter-frame difference value between a currently traversed environmental monitoring video frame and a previous frame of environmental monitoring video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, screening the previous frame of environmental monitoring video frame to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video, the method includes:
for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
calculating, for each corresponding pixel position, the difference between the pixel values of the currently traversed environment monitoring video frame and the previous environment monitoring video frame to obtain the pixel difference value of each corresponding pixel position, and summing the pixel difference values of all pixel positions to obtain the inter-frame difference value between the currently traversed environment monitoring video frame and the previous environment monitoring video frame;
and judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
In some preferred embodiments, in the method for analyzing an environmental status based on environmental monitoring, for each target environmental monitoring video, if the statistical frame number corresponding to the target environmental monitoring video is greater than the statistical frame number threshold, traversing each frame of environmental monitoring video frames included in the target environmental monitoring video, and determining whether an inter-frame difference value between a currently traversed environmental monitoring video frame and a previous frame of environmental monitoring video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, screening the previous frame of environmental monitoring video frame to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video, the method includes:
for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
calculating, for a subset of corresponding pixel positions, the difference between the pixel values of the currently traversed environment monitoring video frame and the previous environment monitoring video frame to obtain the pixel difference value of each such pixel position, and summing these pixel difference values to obtain the inter-frame difference value between the currently traversed environment monitoring video frame and the previous environment monitoring video frame, wherein the subset of pixel positions is selected from all pixel positions according to a preset pixel interval;
and judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
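Both variants of the inter-frame difference above (comparing all pixel positions, or only positions sampled at a preset pixel interval) can be sketched as one function, where a stride of 1 gives the full-pixel variant. The flat-list frame representation is an assumption for illustration; the patent does not fix one:

```python
def inter_frame_diff(frame_a, frame_b, stride=1):
    """Sum of absolute pixel-value differences at corresponding positions.

    frame_a, frame_b: equal-length flat sequences of pixel values
    (an assumed representation).
    stride: 1 compares every pixel position; N > 1 samples every Nth
    position, matching the 'preset pixel interval' variant above.
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same number of pixels")
    return sum(abs(a - b) for a, b in zip(frame_a[::stride], frame_b[::stride]))
```

Sampling with a larger stride trades a coarser difference estimate for less computation per frame pair, which is the point of the second embodiment.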
In some preferred embodiments, in the environmental status analysis method based on environmental monitoring, the step of analyzing the target action characteristic information corresponding to each of the at least one target environmental monitoring video to obtain the environmental safety status information of the target monitoring area includes:
determining, for each target environment monitoring video in the at least one target environment monitoring video, whether the target action characteristic information corresponding to the target environment monitoring video belongs to multiple kinds of preset standard action characteristic information, to obtain a corresponding action characteristic comparison result;
and determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
In some preferred embodiments, in the environmental status analysis method based on environmental monitoring, the step of determining the environmental safety status information of the target monitoring area based on the action characteristic comparison result corresponding to each target environmental monitoring video includes:
judging, for the action characteristic comparison result corresponding to each target environment monitoring video, whether the action characteristic comparison result belongs to a first action characteristic comparison result, wherein the first action characteristic comparison result is used for representing that the corresponding target action characteristic information does not belong to any standard action characteristic information in the multiple kinds of standard action characteristic information;
and counting the proportion of action characteristic comparison results that belong to the first action characteristic comparison result, and determining the environmental safety state information of the target monitoring area based on that proportion.
The embodiment of the invention also provides an environmental state analysis system based on environmental monitoring, which is applied to an environmental monitoring background server, wherein the environmental monitoring background server is in communication connection with a plurality of environmental monitoring devices, and the environmental state analysis system based on environmental monitoring comprises:
the monitoring video duplicate removal screening module is used for carrying out duplicate removal screening processing on a plurality of environment monitoring videos sent by a plurality of environment monitoring devices based on obtained object identification results corresponding to the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video, wherein each environment monitoring device is respectively arranged at different positions of a target monitoring area, each environment monitoring device is used for carrying out object monitoring on at least part of positions of the target monitoring area to obtain the corresponding environment monitoring video, each target environment monitoring video comprises at least one environment monitoring video frame, and monitoring objects of each environment monitoring video frame in the at least one environment monitoring video frame are the same;
the monitoring video action recognition module is used for performing, for each target environment monitoring video in the at least one target environment monitoring video, action recognition processing on the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
and the environment safety state determining module is used for analyzing and processing the target action characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain environment safety state information of the target monitoring area, wherein the environment safety state information is used for representing environment safety degree information of the target monitoring area.
In some preferred embodiments, in the environmental status analysis system based on environmental monitoring, the monitoring video motion recognition module is specifically configured to:
obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model;
and, for each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
In some preferred embodiments, in the environmental state analysis system based on environmental monitoring, the environmental security state determination module is specifically configured to:
determining, for each target environment monitoring video in the at least one target environment monitoring video, whether the target action characteristic information corresponding to the target environment monitoring video belongs to multiple kinds of preset standard action characteristic information, to obtain a corresponding action characteristic comparison result;
and determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
According to the environmental state analysis method and system based on environmental monitoring, after a plurality of environment monitoring videos are obtained, they are subjected to de-duplication screening to obtain at least one corresponding target environment monitoring video. Action recognition processing is then performed on each target environment monitoring video to obtain the target action characteristic information of the corresponding monitored object, and that information is analyzed to obtain the environment safety state information of the target monitoring area. Because the de-duplication screening reduces the data volume of the videos, the prior-art problems of low computing efficiency and wasted computing resources when determining the environmental safety state of a monitored area can be solved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a background server for environment monitoring according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart illustrating steps included in an environmental status analysis method based on environmental monitoring according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of functional modules included in an environmental status analysis system based on environmental monitoring according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an environment monitoring background server. Wherein the environment monitoring background server may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have stored therein at least one software function (computer program) which can be present in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the environmental monitoring-based environmental status analysis method provided by the embodiment of the present invention (described later).
Alternatively, in some possible implementations, the Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Optionally, in some possible implementations, the Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
Optionally, in some possible implementations, the structure shown in fig. 1 is only an illustration, and the environment monitoring backend server may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., an environment monitoring device for object monitoring, such as a camera, etc.).
With reference to fig. 2, an embodiment of the present invention further provides an environmental status analysis method based on environmental monitoring, which is applicable to the environmental monitoring background server. The method steps defined by the flow related to the environmental state analysis method based on environmental monitoring can be realized by the environmental monitoring background server, and the environmental monitoring background server is in communication connection with a plurality of environmental monitoring devices.
The specific process shown in FIG. 2 will be described in detail below.
Step S100, based on the obtained object identification results corresponding to the plurality of environment monitoring videos sent by the plurality of environment monitoring devices, performing duplicate removal screening processing on the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video.
In the embodiment of the present invention, the environmental monitoring background server may perform duplicate removal screening processing on the plurality of environmental monitoring videos based on the obtained object identification results corresponding to the plurality of environmental monitoring videos sent by the plurality of environmental monitoring devices, so as to obtain the corresponding at least one target environmental monitoring video. The environment monitoring device is respectively arranged at different positions of a target monitoring area, and is used for carrying out object monitoring on at least part of positions of the target monitoring area to obtain corresponding environment monitoring videos, each target environment monitoring video comprises at least one frame of environment monitoring video frame, and each frame of environment monitoring video frame in the at least one frame of environment monitoring video frame has the same monitoring object.
Step S300, for each target environment monitoring video of the at least one target environment monitoring video, performing motion recognition processing on the target environment monitoring video to obtain target motion characteristic information of a monitored object corresponding to the target environment monitoring video.
In the embodiment of the present invention, the environment monitoring background server may perform, for each of the obtained at least one target environment monitoring video, action recognition processing on the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
Step S500, analyzing the target motion characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain the environmental safety state information of the target monitoring area.
In the embodiment of the present invention, the environment monitoring background server may analyze and process the target action characteristic information corresponding to each of the at least one target environment monitoring video to obtain the environmental safety state information of the target monitoring area. And the environment safety state information is used for representing the environment safety degree information of the target monitoring area.
Based on the environmental state analysis method based on environmental monitoring, after a plurality of environment monitoring videos are obtained, they can be subjected to de-duplication screening to obtain at least one corresponding target environment monitoring video. Action recognition processing can then be performed on each target environment monitoring video to obtain the target action characteristic information of the corresponding monitored object, and that information is analyzed to obtain the environment safety state information of the target monitoring area. In this way, the de-duplication screening reduces the data volume of the videos, so the prior-art problems of low calculation efficiency and wasted calculation resources when determining the environmental safety state of a monitored area can be solved.
Optionally, in some possible implementations, step S100 may include the following step S110, step S130, and step S150, and specific contents may refer to the following.
Step S110, obtaining the environmental monitoring videos respectively sent by the plurality of environmental monitoring devices, and obtaining a plurality of corresponding environmental monitoring videos.
In the embodiment of the present invention, the environment monitoring background server may obtain the environment monitoring videos respectively sent by the plurality of environment monitoring devices, so as to obtain a plurality of corresponding environment monitoring videos. Each environment monitoring device is respectively arranged at different positions of a target monitoring area, and each environment monitoring device is used for monitoring objects at least at partial positions of the target monitoring area to obtain a corresponding environment monitoring video.
Step S130, performing object identification on each of the plurality of environment monitoring videos to obtain an object identification result corresponding to each of the environment monitoring videos.
In the embodiment of the present invention, the environmental monitoring background server may perform object identification on each of the plurality of acquired environmental monitoring videos, so as to obtain an object identification result corresponding to each of the environmental monitoring videos.
Step S150, the multiple environment monitoring videos are subjected to duplication elimination screening processing based on the object identification result corresponding to each environment monitoring video, and at least one corresponding target environment monitoring video is obtained.
In the embodiment of the present invention, the environmental monitoring background server may perform duplicate removal screening processing on the plurality of environmental monitoring videos based on the object identification result corresponding to each environmental monitoring video obtained through identification, so as to obtain at least one corresponding target environmental monitoring video.
According to the environmental state analysis method based on environmental monitoring, after a plurality of environment monitoring videos sent by a plurality of environment monitoring devices are obtained, object recognition can be performed on each environment monitoring video to obtain an object recognition result corresponding to each environment monitoring video. Duplicate removal screening processing is then performed on the plurality of environment monitoring videos based on the object recognition result corresponding to each environment monitoring video, to obtain at least one corresponding target environment monitoring video. In this way, on the basis of guaranteeing data reliability, the quantity of monitoring video data that needs to be processed and stored can be reduced, thereby alleviating the resource waste that easily occurs in the existing environment monitoring technology.
Optionally, in some possible implementations, step S110 may include the following steps:
firstly, judging whether object monitoring request information for indicating object monitoring on the target monitoring area is received or not, and generating corresponding object monitoring notification information when the object monitoring request information for indicating object monitoring on the target monitoring area is received;
secondly, synchronously sending the object monitoring notification information to each environment monitoring device in the plurality of environment monitoring devices, wherein each environment monitoring device is used for carrying out object monitoring on at least part of positions of the target monitoring area based on the object monitoring notification information after receiving the object monitoring notification information to obtain a corresponding environment monitoring video;
and then, respectively acquiring environment monitoring videos which are respectively sent by the plurality of environment monitoring devices based on the object monitoring notification information to obtain a plurality of corresponding environment monitoring videos.
Optionally, in some possible implementation manners, the step of determining whether to receive object monitoring request information for instructing to perform object monitoring on the target monitoring area, and when receiving the object monitoring request information for instructing to perform object monitoring on the target monitoring area, generating corresponding object monitoring notification information may include the following steps:
firstly, judging whether object monitoring request information for indicating object monitoring on the target monitoring area is received or not, and when the object monitoring request information for indicating the object monitoring on the target monitoring area is received, analyzing the object monitoring request information to obtain corresponding target time information for requesting the start of object monitoring;
secondly, generating corresponding object monitoring notification information based on the target time information, wherein the object monitoring notification information is sent to each of the plurality of environment monitoring devices before the target time information, and each of the environment monitoring devices starts to perform object monitoring on at least part of positions of the target monitoring area after receiving the object monitoring notification information, so as to obtain a corresponding environment monitoring video.
Optionally, in some possible implementations, step S130 may include the following steps:
firstly, for each frame of environment monitoring video frame in each of the plurality of environment monitoring videos, determining whether the environment monitoring video frame includes a plurality of monitored objects;
secondly, determining each frame of environment monitoring video frame that includes a plurality of monitored objects as an environment monitoring video frame to be processed;
then, for each frame of the to-be-processed environmental monitoring video frame, decomposing the to-be-processed environmental monitoring video frame based on the number of monitoring objects included in the to-be-processed environmental monitoring video frame to obtain a corresponding multi-frame environmental monitoring sub-video frame, wherein the number of the multi-frame environmental monitoring sub-video frame is the same as the number of the monitoring objects included in the corresponding to-be-processed environmental monitoring video frame, and one frame of the environmental monitoring sub-video frame includes one monitoring object;
then, for each environmental monitoring video in the plurality of environmental monitoring videos, if the environmental monitoring video includes the to-be-processed environmental monitoring video frame, using the multi-frame environmental monitoring sub-video frame corresponding to the to-be-processed environmental monitoring video frame as a multi-frame new environmental monitoring video frame to replace the to-be-processed environmental monitoring video frame, so as to update the environmental monitoring video;
finally, for each of the environmental monitoring videos, performing object recognition (for example, recognition may be performed based on a recognition model obtained by training) on each frame of environmental monitoring video frame included in the environmental monitoring video, so as to obtain an object recognition result corresponding to the environmental monitoring video.
Optionally, in some possible implementation manners, the step of performing object identification on each frame of environment monitoring video frame included in each environment monitoring video for each environment monitoring video to obtain an object identification result corresponding to the environment monitoring video may include the following steps:
firstly, for each environmental monitoring video, performing object recognition processing on every two frames of environmental monitoring video frames included in the environmental monitoring video based on an object recognition model obtained through pre-training to obtain a recognition result of whether the monitored objects between every two frames of environmental monitoring video frames are the same (wherein the object recognition model can be a neural network model obtained through pre-training and can be a binary neural network model to determine whether the monitored objects between the two frames of environmental monitoring video frames are the same);
secondly, for each of the environmental monitoring videos, based on an identification result of whether a monitored object between every two frames of environmental monitoring video frames included in the environmental monitoring video is the same, numbering the monitored object of each frame of environmental monitoring video frame included in the environmental monitoring video to obtain monitored object number information corresponding to each frame of environmental monitoring video frame included in the environmental monitoring video, and taking the monitored object number information as a corresponding object identification result (if corresponding monitored objects are the same, the monitored objects can be identified by the same object number information, if corresponding monitored objects are different, the monitored objects can be identified by different object number information).
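The numbering step above can be sketched as follows. This is an illustrative Python sketch (not part of the original disclosure), assuming one monitored object per frame and pairwise same-object decisions already produced by the recognition model; a union-find structure keeps the numbering consistent when the pairwise results chain together (frame 0 same as frame 1, frame 1 same as frame 2):

```python
# Illustrative sketch: assigning object number information to per-frame
# monitored objects from pairwise "same object?" decisions, using
# union-find so that transitively linked frames share one number.

def number_monitored_objects(num_frames, same_object_pairs):
    """num_frames: number of video frames (one monitored object per frame here).
    same_object_pairs: iterable of (i, j) frame-index pairs whose monitored
    objects the recognition model judged to be the same."""
    parent = list(range(num_frames))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in same_object_pairs:
        parent[find(i)] = find(j)

    # Map each union-find root to a compact object number (0, 1, 2, ...).
    numbers, next_id, result = {}, 0, []
    for i in range(num_frames):
        root = find(i)
        if root not in numbers:
            numbers[root] = next_id
            next_id += 1
        result.append(numbers[root])
    return result

# Frames 0, 1, 2 show the same object; frames 3 and 4 show another.
print(number_monitored_objects(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 1, 1]
```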
Optionally, in some possible implementations, step S150 may include the following steps:
firstly, for each environment monitoring video, based on the object identification result corresponding to the environment monitoring video, carrying out duplication elimination screening processing on a plurality of frames of environment monitoring video frames included in the environment monitoring video to obtain an environment monitoring screening video corresponding to the environment monitoring video;
secondly, the environmental monitoring screening videos corresponding to the environmental monitoring videos are subjected to duplicate removal screening processing, and at least one corresponding target environmental monitoring video is obtained.
Optionally, in some possible implementation manners, the step of, for each environmental monitoring video, performing deduplication screening processing on multiple frames of environmental monitoring video frames included in the environmental monitoring video based on the object identification result corresponding to the environmental monitoring video to obtain an environmental monitoring screening video corresponding to the environmental monitoring video may include the following steps:
firstly, aiming at each environmental monitoring video, carrying out target duplicate removal screening processing on a plurality of frames of environmental monitoring video frames included in the environmental monitoring video based on the object identification result corresponding to the environmental monitoring video to obtain an environmental monitoring screening video corresponding to the environmental monitoring video;
wherein the target de-duplication screening process comprises:
for each frame of the environmental monitoring video frame in the environmental monitoring video, performing image segmentation on the environmental monitoring video frame based on a monitoring object of the environmental monitoring video frame to obtain multiple frames of environmental monitoring sub-video frames, and constructing the multiple frames of the environmental monitoring sub-video frames to form a sub-video frame set corresponding to the environmental monitoring video frame, wherein each frame of the environmental monitoring sub-video frame comprises partial image information of the monitoring object of the corresponding environmental monitoring video frame, and partial image information of each environmental monitoring sub-video frame included in one sub-video frame set is constructed to obtain all image information of the monitoring object of the corresponding environmental monitoring video frame;
and then, based on the sub-video frame set corresponding to each frame of the environmental monitoring video frame in the environmental monitoring video, screening each environmental monitoring video frame included in the environmental monitoring video to obtain an environmental monitoring screening video corresponding to the environmental monitoring video.
Optionally, in some possible implementation manners, the step of, for each frame of the environmental monitoring video frame in the environmental monitoring video, performing image segmentation on the environmental monitoring video frame based on a monitoring object possessed by the environmental monitoring video frame to obtain multiple frames of environmental monitoring sub-video frames corresponding to the environmental monitoring video frame, and constructing the multiple frames of environmental monitoring sub-video frames to form a sub-video frame set corresponding to the environmental monitoring video frame may include the following steps:
firstly, for each frame of the environmental monitoring video frame in the environmental monitoring video, performing image segmentation on the environmental monitoring video frame based on each object part (such as a head and neck part, a trunk part, a left hand part, a right hand part, a left foot part and a right foot part) included by a monitored object of the environmental monitoring video frame to obtain corresponding multi-frame environmental monitoring sub-video frames, wherein each frame of the environmental monitoring sub-video frames includes image information of one object part of the monitored object, and different environmental monitoring sub-video frames include different object parts of the monitored object;
secondly, aiming at each frame of the environmental monitoring video frame in the environmental monitoring video, the multi-frame environmental monitoring sub-video frame corresponding to the environmental monitoring video frame is constructed to form a sub-video frame set corresponding to the environmental monitoring video frame.
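As an illustration only (not part of the original disclosure), the segmentation into per-part sub-video frames might be sketched as below; the part bounding boxes here are hypothetical inputs that would in practice come from a part or pose detector:

```python
# Illustrative sketch: splitting one monitoring video frame into per-part
# sub-video frames. Frames are modeled as 2-D lists of grayscale values;
# the part bounding boxes are hypothetical example inputs.

def segment_frame_by_parts(frame, part_boxes):
    """frame: 2-D list of pixel rows.
    part_boxes: {part_name: (top, left, bottom, right)} half-open bounds.
    Returns the sub-video frame set: {part_name: cropped sub-frame}."""
    return {
        name: [row[left:right] for row in frame[top:bottom]]
        for name, (top, left, bottom, right) in part_boxes.items()
    }

# A 6x6 toy frame whose pixel at (r, c) has value r*10 + c.
frame = [[r * 10 + c for c in range(6)] for r in range(6)]
subframes = segment_frame_by_parts(frame, {
    "head_neck": (0, 2, 2, 4),   # rows 0-1, cols 2-3
    "torso":     (2, 1, 5, 5),   # rows 2-4, cols 1-4
})
print(subframes["head_neck"])  # [[2, 3], [12, 13]]
```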
Optionally, in some possible implementation manners, the step of filtering, based on the corresponding sub-video frame set corresponding to each frame of the environmental monitoring video frame in the environmental monitoring video, each environmental monitoring video frame included in the environmental monitoring video to obtain an environmental monitoring filtered video corresponding to the environmental monitoring video may include the following steps:
the method comprises the steps that firstly, aiming at each frame of the environmental monitoring video frame in the environmental monitoring video, corresponding index relations are respectively established between each frame of the environmental monitoring sub-video frame in the sub-video frame set corresponding to the environmental monitoring video frame and the environmental monitoring video frame;
secondly, classifying the environment monitoring sub-video frames corresponding to each frame of the environment monitoring video frames included in the environment monitoring video to obtain at least one video frame classification, wherein partial image information of monitoring images of the environment monitoring sub-video frames included in each video frame classification is the same, and partial image information of monitoring images of environment monitoring sub-video frames included in different video frame classifications is different;
thirdly, determining each frame of the environmental monitoring video frame corresponding to each environmental monitoring sub-video frame based on the index relation aiming at each environmental monitoring sub-video frame to obtain an environmental monitoring video frame set corresponding to each environmental monitoring sub-video frame;
fourthly, regarding each environmental monitoring sub-video frame, taking the environmental monitoring video frame with the earliest collection time in the environmental monitoring video frame set corresponding to the environmental monitoring sub-video frame as a first environmental monitoring video frame, and adding the first environmental monitoring video frame into a first video frame set constructed in advance;
fifthly, traversing each frame of the environmental monitoring video frame in the environmental monitoring video frame set, and determining whether monitored objects existing between the currently traversed environmental monitoring video frame and the environmental monitoring video frame in the first video frame set are the same or not based on the object identification result corresponding to the currently traversed environmental monitoring video frame and the environmental monitoring video frame in the first video frame set;
a sixth step of, if the monitored objects existing between the currently traversed environment monitoring video frame and the environment monitoring video frame in the first video frame set are different, adding the currently traversed environment monitoring video frame to the first video frame set, traversing the environment monitoring video frame of a next frame in the environment monitoring video frame set, and if the monitored objects existing between the currently traversed environment monitoring video frame and the environment monitoring video frame in the first video frame set are the same, determining whether a time difference value between a collection time of the currently traversed environment monitoring video frame and a collection time of the environment monitoring video frame in the first video frame set is greater than or equal to a preset time difference value threshold value;
seventhly, if the time difference value between the acquisition time of the currently traversed environment monitoring video frame and the acquisition time of the environment monitoring video frame in the first video frame set is greater than or equal to the time difference value threshold, adding the currently traversed environment monitoring video frame into the first video frame set and traversing the next environment monitoring video frame in the environment monitoring video frame set; and if the time difference value is less than the time difference value threshold, directly traversing the next environment monitoring video frame in the environment monitoring video frame set;
and eighthly, after traversing all the environmental monitoring video frames in the environmental monitoring video frame set, taking the union of the first video frame set corresponding to each environmental monitoring sub-video frame as the environmental monitoring screening video corresponding to the environmental monitoring video.
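Steps five to seven above can be condensed into the following illustrative sketch (not part of the original disclosure), where each frame is modeled as a dict with an `object` number and an acquisition `time`. The text does not fully specify which frame in the first video frame set the time difference is measured against; the sketch assumes the latest frame with the same monitored object:

```python
# Hedged sketch of the screening traversal: keep a frame if it shows a new
# monitored object, or if enough time has passed since the last kept frame
# showing the same object; otherwise screen it out as a near-duplicate.

def screen_frames(frames, time_threshold):
    frames = sorted(frames, key=lambda f: f["time"])   # earliest first
    first_set = [frames[0]]                            # step four: earliest frame
    for frame in frames[1:]:
        same = [f for f in first_set if f["object"] == frame["object"]]
        if not same:
            first_set.append(frame)                    # new monitored object
        elif frame["time"] - max(f["time"] for f in same) >= time_threshold:
            first_set.append(frame)                    # enough time has passed
        # otherwise: near-duplicate frame, screened out
    return first_set

frames = [
    {"object": 0, "time": 0}, {"object": 0, "time": 1},
    {"object": 0, "time": 5}, {"object": 1, "time": 2},
]
kept = screen_frames(frames, time_threshold=3)
print([(f["object"], f["time"]) for f in kept])  # [(0, 0), (1, 2), (0, 5)]
```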
Optionally, in some possible implementation manners, the step of performing de-duplication screening processing on the environment monitoring screening video corresponding to each environment monitoring video to obtain at least one corresponding target environment monitoring video may include the following steps:
firstly, classifying each frame of environment monitoring video frame included in the environment monitoring screening video according to whether the included monitored object is the same, to obtain at least one environment monitoring video frame classification set corresponding to the environment monitoring screening video; then, for each environment monitoring video frame classification set, sorting the environment monitoring video frames included in the classification set according to the sequence of their acquisition times to form a corresponding new environment monitoring screening video, which replaces the corresponding environment monitoring screening video (that is, the new environment monitoring screening video is stored and the corresponding original environment monitoring screening video is screened out);
secondly, performing duplication elimination screening processing on each current environment monitoring screening video to obtain at least one corresponding target environment monitoring video.
Optionally, in some possible implementation manners, the step of performing duplicate removal screening processing on each current environment monitoring screening video to obtain at least one corresponding target environment monitoring video may include the following steps:
firstly, classifying each current environment monitoring screening video based on whether the included monitoring objects are the same or not to obtain at least one corresponding screening video classification set, and respectively judging whether a plurality of environment monitoring screening videos are included in each screening video classification set or not;
secondly, for each screening video classification set, if the environment monitoring screening videos included in the screening video classification set are not multiple (namely one), determining the environment monitoring screening videos included in the screening video classification set as target environment monitoring videos;
then, for each of the screening video category sets, if the environment monitoring screening videos included in the screening video category set are multiple, action feature extraction processing (which may be extracted based on an action recognition model obtained through pre-training) is performed on a monitored object of each of the environment monitoring screening videos included in the screening video category set, so as to obtain action feature information corresponding to each of the environment monitoring screening videos included in the screening video category set, and a part of the plurality of environment monitoring screening videos having the same corresponding action feature information is screened (for example, only one of the plurality of environment monitoring screening videos is reserved, which may be any one, or one having the largest number of corresponding video frames), and each of the environment monitoring screening videos that are not screened is determined as a target environment monitoring video.
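The de-duplication across screening videos can be illustrated as follows (an illustrative sketch, not part of the original disclosure). Videos are grouped by monitored object and action feature information, and among duplicates the video with the most frames is kept, which is one of the tie-breaking options mentioned above:

```python
# Hedged sketch of the final de-duplication step: collapse videos that
# share both the monitored object and the action feature information,
# keeping the one with the largest number of video frames.

def dedupe_videos(videos):
    """videos: list of dicts with 'object', 'action', and 'num_frames'."""
    best = {}
    for v in videos:
        key = (v["object"], v["action"])
        if key not in best or v["num_frames"] > best[key]["num_frames"]:
            best[key] = v
    return list(best.values())  # the target environment monitoring videos

videos = [
    {"object": 0, "action": "jump", "num_frames": 40},
    {"object": 0, "action": "jump", "num_frames": 55},  # duplicate, more frames
    {"object": 1, "action": "walk", "num_frames": 30},
]
targets = dedupe_videos(videos)
print(sorted((v["object"], v["num_frames"]) for v in targets))  # [(0, 55), (1, 30)]
```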
Optionally, in some possible implementations, step S300 may include the following steps:
firstly, obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model;
secondly, for each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
Optionally, in some possible implementation manners, the step of performing, for each target environment monitoring video, motion recognition processing on the target environment monitoring video based on the motion recognition model to obtain target motion characteristic information of a monitored object corresponding to the target environment monitoring video may include the following steps:
firstly, counting the frame number of an environment monitoring video frame included in each target environment monitoring video to obtain a counting frame number corresponding to the target environment monitoring video, and determining the relative size relationship between the counting frame number and a preset counting frame number threshold;
secondly, for each target environment monitoring video, if the statistical frame number corresponding to the target environment monitoring video is less than or equal to the statistical frame number threshold, sequentially performing action recognition processing on each frame of environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of environment monitoring video frame, and sequentially combining the action recognition results to obtain target action characteristic information of the monitored object corresponding to the target environment monitoring video (for example, "on the ground", "in the air", and "on the ground" may be combined to obtain a "jump");
then, for each target environment monitoring video, if the number of the statistical frames corresponding to the target environment monitoring video is greater than the statistical frame number threshold, traversing each frame of environment monitoring video frame included in the target environment monitoring video, and judging whether the interframe difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame is less than a predetermined difference threshold, and when the interframe difference value is less than the difference threshold, screening out the previous frame of environment monitoring video frame to obtain at least one target environment monitoring video frame corresponding to the target environment monitoring video;
and finally, for each target environment monitoring video, sequentially performing action recognition processing on each frame of target environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video, and sequentially combining action recognition results corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
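The sequential combining of per-frame action recognition results can be sketched as below (illustrative only; the pattern table is invented, as the text only gives the "on the ground / in the air / on the ground" combining into a "jump" as an example):

```python
# Illustrative sketch: merging per-frame action recognition results into
# target action characteristic information. Consecutive duplicate results
# are collapsed first, then the sequence is looked up in a pattern table.

def combine_actions(frame_results):
    patterns = {
        ("on_ground", "in_air", "on_ground"): "jump",  # example from the text
    }
    # Collapse runs: ["g", "g", "air", "g"] -> ["g", "air", "g"]
    collapsed = []
    for r in frame_results:
        if not collapsed or collapsed[-1] != r:
            collapsed.append(r)
    return patterns.get(tuple(collapsed), "unknown")

print(combine_actions(["on_ground", "on_ground", "in_air", "on_ground"]))  # jump
```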
Optionally, in some possible implementation manners, for each target environment monitoring video, if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold, traversing each frame of environment monitoring video frame included in the target environment monitoring video, and determining whether an inter-frame difference value between a currently traversed environment monitoring video frame and a previous frame of environment monitoring video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, removing the previous frame of environment monitoring video frame to obtain at least one target environment monitoring video frame corresponding to the target environment monitoring video, may include the following steps:
firstly, for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
secondly, calculating the difference value of the pixel values at each corresponding pixel position between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame to obtain the pixel difference value of each corresponding pixel position, and calculating the sum of the pixel difference values of each pixel position to obtain the inter-frame difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame;
and then, judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
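The inter-frame difference computation described above can be sketched minimally as follows (illustrative only, with frames modeled as equally sized 2-D lists of grayscale values; the threshold value is an assumed example):

```python
# Minimal sketch: per-pixel absolute differences summed over all pixel
# positions, yielding the inter-frame difference value between two frames.

def interframe_difference(frame_a, frame_b):
    return sum(
        abs(pa - pb)
        for row_a, row_b in zip(frame_a, frame_b)
        for pa, pb in zip(row_a, row_b)
    )

prev_frame = [[10, 10], [10, 10]]
curr_frame = [[12, 10], [9, 10]]
diff = interframe_difference(prev_frame, curr_frame)
print(diff)  # 3

DIFF_THRESHOLD = 5  # assumed example value for the predetermined threshold
if diff < DIFF_THRESHOLD:
    pass  # the previous frame would be screened out as a near-duplicate
```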
Optionally, in another possible implementation manner, for each target environment monitoring video, if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold, traversing each frame of environment monitoring video frame included in the target environment monitoring video, and determining whether an inter-frame difference value between a currently traversed environment monitoring video frame and a previous frame of environment monitoring video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, removing the previous frame of environment monitoring video frame to obtain at least one target environment monitoring video frame corresponding to the target environment monitoring video may include the following steps:
firstly, for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
secondly, calculating the difference value of the pixel values at each corresponding pixel position in a part of the pixel positions between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame to obtain the pixel difference value of each pixel position in the part of the pixel positions, and calculating the sum of these pixel difference values to obtain the inter-frame difference value between the two frames, wherein the part of the pixel positions is determined among all pixel positions according to a preset pixel interval (for example, one or more pixel positions may be selected at every other pixel position, or at intervals of several pixel positions);
and then, judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
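The sampled variant can be sketched as below (illustrative only; the stride value standing in for the preset pixel interval is an assumed parameter, not taken from the text):

```python
# Sketch of the sampled inter-frame difference: only every `stride`-th
# pixel position contributes, trading accuracy for computation speed.

def interframe_difference_sampled(frame_a, frame_b, stride=2):
    return sum(
        abs(frame_a[r][c] - frame_b[r][c])
        for r in range(0, len(frame_a), stride)
        for c in range(0, len(frame_a[0]), stride)
    )

a = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
b = [[1, 1, 2], [3, 4, 5], [6, 7, 10]]
print(interframe_difference_sampled(a, b, stride=2))  # 3
```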
Optionally, in some possible implementations, step S500 may include the following steps:
firstly, for each target environment monitoring video in the at least one target environment monitoring video, determining whether the target action characteristic information corresponding to the target environment monitoring video belongs to any of multiple kinds of preset standard action characteristic information, to obtain a corresponding action characteristic comparison result;
and secondly, determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
Optionally, in some possible implementation manners, the step of determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environmental monitoring video may include the following steps:
firstly, judging whether an action characteristic comparison result belongs to a first action characteristic comparison result or not according to the action characteristic comparison result corresponding to each target environment monitoring video, wherein the first action characteristic comparison result is used for representing that the corresponding target action characteristic information does not belong to any standard action characteristic information (such as jumping, heel turning, reciprocating movement and the like) in the multiple standard action characteristic information;
secondly, counting the quantity ratio of the action characteristic comparison results belonging to the first action characteristic comparison result, and determining the environmental safety state information of the target monitoring area based on the quantity ratio (wherein the larger the quantity ratio is, the higher the safety degree of the target monitoring area is).
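The final decision of step S500 can be sketched as follows (illustrative only). It assumes, as stated above, that a larger share of first action characteristic comparison results maps to a higher safety degree; the concrete ratio bands and state names are invented for illustration:

```python
# Hedged sketch: mapping the quantity ratio of first comparison results
# (target action matched none of the standard action features) to an
# environmental safety state. The bands below are assumed examples.

def environmental_safety_state(comparison_results):
    """comparison_results: list of booleans, True = first comparison result."""
    ratio = sum(comparison_results) / len(comparison_results)
    if ratio >= 0.8:
        return "safe"
    if ratio >= 0.5:
        return "attention"
    return "alert"

print(environmental_safety_state([True, True, True, True, False]))  # safe
```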
With reference to fig. 3, an embodiment of the present invention further provides an environmental status analysis system based on environmental monitoring, which is applicable to the environmental monitoring background server. The environmental state analysis system based on environmental monitoring can comprise the following functional modules:
the monitoring video duplicate removal screening module is used for carrying out duplicate removal screening processing on a plurality of environment monitoring videos sent by a plurality of environment monitoring devices based on obtained object identification results corresponding to the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video, wherein each environment monitoring device is respectively arranged at different positions of a target monitoring area, each environment monitoring device is used for carrying out object monitoring on at least part of positions of the target monitoring area to obtain the corresponding environment monitoring video, each target environment monitoring video comprises at least one environment monitoring video frame, and monitoring objects of each environment monitoring video frame in the at least one environment monitoring video frame are the same;
the monitoring video action recognition module is used for carrying out action recognition processing on the target environment monitoring video aiming at each target environment monitoring video in the at least one target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
and the environment safety state determining module is used for analyzing and processing the target action characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain environment safety state information of the target monitoring area, wherein the environment safety state information is used for representing environment safety degree information of the target monitoring area.
Optionally, in some possible implementations, the monitoring video motion recognition module is specifically configured to: obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model; and aiming at each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
Optionally, in some possible implementations, the environment security state determination module is specifically configured to: determining whether the target action characteristic information corresponding to the target environment monitoring video belongs to multiple preset standard action characteristic information or not aiming at each target environment monitoring video in the at least one target environment monitoring video to obtain a corresponding action characteristic comparison result; and determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
In summary, according to the environmental status analysis method and system based on environmental monitoring provided by the present invention, after a plurality of environmental monitoring videos are obtained, a duplicate removal screening process may be performed on the plurality of environmental monitoring videos to obtain at least one corresponding target environmental monitoring video, so that a motion recognition process may be performed on the target environmental monitoring video to obtain target motion characteristic information of a corresponding monitored object, and then, the target motion characteristic information corresponding to each target environmental monitoring video is analyzed to obtain environmental safety status information of a target monitored area.
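The three-stage pipeline summarized above (de-duplication screening, action recognition, safety analysis) can be sketched end to end. This is only a structural sketch: the object-identifier-based de-duplication stands in for the object identification results the patent relies on, and `recognize_action` is a placeholder for the pre-trained neural-network recognition model.

```python
def analyze_environment(videos, recognize_action, standard_actions):
    """Return the ratio of monitored objects whose recognized action
    matches no standard action pattern (larger means safer, per the method).

    videos: list of (object_id, frames) tuples, one per environment
    monitoring device; recognize_action maps frames to an action label.
    """
    # Stage 1: de-duplication screening -- keep one video per monitored object.
    seen, target_videos = set(), []
    for object_id, frames in videos:
        if object_id not in seen:
            seen.add(object_id)
            target_videos.append(frames)
    # Stage 2: action recognition on each target environment monitoring video.
    actions = [recognize_action(frames) for frames in target_videos]
    # Stage 3: environmental safety state from the non-matching quantity ratio.
    non_standard = sum(1 for a in actions if a not in standard_actions)
    return non_standard / len(actions) if actions else 0.0
```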
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An environmental state analysis method based on environmental monitoring, characterized by being applied to an environmental monitoring background server, wherein the environmental monitoring background server is in communication connection with a plurality of environmental monitoring devices, and the environmental state analysis method based on environmental monitoring comprises the following steps:
based on object identification results corresponding to a plurality of environment monitoring videos sent by a plurality of environment monitoring devices, performing duplicate removal screening processing on the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video, wherein each environment monitoring device is respectively arranged at different positions of a target monitoring area, each environment monitoring device is used for performing object monitoring on at least part of positions of the target monitoring area to obtain the corresponding environment monitoring video, each target environment monitoring video comprises at least one environment monitoring video frame, and each environment monitoring video frame in the at least one environment monitoring video frame has the same monitored object;
for each target environment monitoring video in the at least one target environment monitoring video, performing action recognition processing on the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
analyzing the target action characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain environment safety state information of the target monitoring area, wherein the environment safety state information is used for representing environment safety degree information of the target monitoring area.
2. The environmental status analysis method based on environmental monitoring according to claim 1, wherein the step of performing motion recognition processing on the target environment monitoring video for each of the at least one target environment monitoring video to obtain target motion characteristic information of a monitored object corresponding to the target environment monitoring video includes:
obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model;
and aiming at each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
3. The environmental status analysis method based on environmental monitoring according to claim 2, wherein the step of performing, for each target environmental monitoring video, action recognition processing on the target environmental monitoring video based on the action recognition model to obtain target action characteristic information of the monitored object corresponding to the target environmental monitoring video includes:
counting the number of frames of environment monitoring video frames included in the target environment monitoring video aiming at each target environment monitoring video to obtain a statistical frame number corresponding to the target environment monitoring video, and determining the relative size relationship between the statistical frame number and a preset statistical frame number threshold;
for each target environment monitoring video, if the statistical frame number corresponding to the target environment monitoring video is less than or equal to the statistical frame number threshold, sequentially performing action recognition processing on each frame of environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of environment monitoring video frame included in the target environment monitoring video, and sequentially combining action recognition results corresponding to each frame of environment monitoring video frame included in the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
for each target environment monitoring video, if the number of the statistical frames corresponding to the target environment monitoring video is greater than the statistical frame number threshold, traversing each frame of environment monitoring video frame included in the target environment monitoring video, and judging whether the interframe difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame is less than a predetermined difference threshold, and when the interframe difference value is less than the difference threshold, screening out the previous frame of environment monitoring video frame to obtain at least one target environment monitoring video frame corresponding to the target environment monitoring video;
and for each target environment monitoring video, sequentially performing action recognition processing on each frame of target environment monitoring video frame included in the target environment monitoring video based on the action recognition model to obtain an action recognition result corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video, and sequentially combining action recognition results corresponding to each frame of target environment monitoring video frame included in the target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
4. The method of claim 3, wherein the step of, for each of the target environment surveillance videos, traversing each frame of the environment surveillance video frames included in the target environment surveillance video if the number of the statistical frames corresponding to the target environment surveillance video is greater than the statistical frame number threshold, and determining whether an inter-frame difference value between a currently traversed environment surveillance video frame and a previous frame of the environment surveillance video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, removing the previous frame of the environment surveillance video frame to obtain at least one target environment surveillance video frame corresponding to the target environment surveillance video comprises:
for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
calculating the difference value of the pixel values at the corresponding pixel positions between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame to obtain the pixel difference value of each corresponding pixel position, and calculating the sum of the pixel difference values of the pixel positions to obtain the interframe difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame;
and judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
5. The method of claim 3, wherein the step of, for each of the target environment surveillance videos, traversing each frame of the environment surveillance video frames included in the target environment surveillance video if the number of the statistical frames corresponding to the target environment surveillance video is greater than the statistical frame number threshold, and determining whether an inter-frame difference value between a currently traversed environment surveillance video frame and a previous frame of the environment surveillance video frame is less than a predetermined difference threshold, and when the inter-frame difference value is less than the difference threshold, removing the previous frame of the environment surveillance video frame to obtain at least one target environment surveillance video frame corresponding to the target environment surveillance video comprises:
for each target environment monitoring video, traversing each frame of environment monitoring video frame included in the target environment monitoring video if the statistical frame number corresponding to the target environment monitoring video is greater than the statistical frame number threshold;
calculating the difference value of the pixel values at corresponding partial pixel positions between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame to obtain the pixel difference value of each pixel position in the corresponding partial pixel positions, and calculating the sum of the pixel difference values of all the pixel positions to obtain the inter-frame difference value between the currently traversed environment monitoring video frame and the previous frame of environment monitoring video frame, wherein the partial pixel positions are determined among all the pixel positions according to a preset pixel interval;
and judging whether the interframe difference value between the currently traversed environmental monitoring video frame and the previous frame of environmental monitoring video frame is smaller than a predetermined difference threshold value, and screening out the previous frame of environmental monitoring video frame when the interframe difference value is smaller than the difference threshold value to obtain at least one target environmental monitoring video frame corresponding to the target environmental monitoring video.
6. The environmental monitoring-based environmental status analysis method according to any one of claims 1 to 5, wherein the step of analyzing the target action characteristic information corresponding to each of the at least one target environmental monitoring video to obtain the environmental safety status information of the target monitoring area includes:
determining whether the target action characteristic information corresponding to the target environment monitoring video belongs to multiple preset standard action characteristic information or not aiming at each target environment monitoring video in the at least one target environment monitoring video to obtain a corresponding action characteristic comparison result;
and determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
7. The environmental monitoring-based environmental status analysis method according to claim 6, wherein the step of determining the environmental safety status information of the target monitoring area based on the action characteristic comparison result corresponding to each target environmental monitoring video includes:
judging whether the action characteristic comparison result belongs to a first action characteristic comparison result or not aiming at the action characteristic comparison result corresponding to each target environment monitoring video, wherein the first action characteristic comparison result is used for representing that the corresponding target action characteristic information does not belong to any standard action characteristic information in the multiple standard action characteristic information;
and counting the quantity ratio of the action characteristic comparison results belonging to the first action characteristic comparison result, and determining the environmental safety state information of the target monitoring area based on the quantity ratio.
8. An environmental state analysis system based on environmental monitoring, characterized by being applied to an environmental monitoring background server, wherein the environmental monitoring background server is in communication connection with a plurality of environmental monitoring devices, and the environmental state analysis system based on environmental monitoring comprises:
the monitoring video duplicate removal screening module is used for carrying out duplicate removal screening processing on a plurality of environment monitoring videos sent by a plurality of environment monitoring devices based on obtained object identification results corresponding to the plurality of environment monitoring videos to obtain at least one corresponding target environment monitoring video, wherein each environment monitoring device is respectively arranged at different positions of a target monitoring area, each environment monitoring device is used for carrying out object monitoring on at least part of positions of the target monitoring area to obtain the corresponding environment monitoring video, each target environment monitoring video comprises at least one environment monitoring video frame, and monitoring objects of each environment monitoring video frame in the at least one environment monitoring video frame are the same;
the monitoring video action recognition module is used for carrying out action recognition processing on the target environment monitoring video aiming at each target environment monitoring video in the at least one target environment monitoring video to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video;
and the environment safety state determining module is used for analyzing and processing the target action characteristic information corresponding to each target environment monitoring video in the at least one target environment monitoring video to obtain environment safety state information of the target monitoring area, wherein the environment safety state information is used for representing environment safety degree information of the target monitoring area.
9. The environmental status analysis system based on environmental monitoring of claim 8, wherein the monitoring video action recognition module is specifically configured to:
obtaining a motion recognition model obtained based on pre-training of a plurality of sample videos, wherein the motion recognition model is a neural network model;
and aiming at each target environment monitoring video, performing action recognition processing on the target environment monitoring video based on the action recognition model to obtain target action characteristic information of a monitored object corresponding to the target environment monitoring video.
10. The environmental monitoring-based environmental status analysis system of claim 8, wherein the environmental security status determination module is specifically configured to:
determining whether the target action characteristic information corresponding to the target environment monitoring video belongs to multiple preset standard action characteristic information or not aiming at each target environment monitoring video in the at least one target environment monitoring video to obtain a corresponding action characteristic comparison result;
and determining the environmental safety state information of the target monitoring area based on the action characteristic comparison result corresponding to each target environment monitoring video.
CN202111186450.0A 2021-10-12 2021-10-12 Environmental state analysis method and system based on environmental monitoring Withdrawn CN113902993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111186450.0A CN113902993A (en) 2021-10-12 2021-10-12 Environmental state analysis method and system based on environmental monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111186450.0A CN113902993A (en) 2021-10-12 2021-10-12 Environmental state analysis method and system based on environmental monitoring

Publications (1)

Publication Number Publication Date
CN113902993A true CN113902993A (en) 2022-01-07

Family

ID=79191561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111186450.0A Withdrawn CN113902993A (en) 2021-10-12 2021-10-12 Environmental state analysis method and system based on environmental monitoring

Country Status (1)

Country Link
CN (1) CN113902993A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114090828A (en) * 2022-01-24 2022-02-25 一道新能源科技(衢州)有限公司 Big data processing method and system applied to light photovoltaic module production
CN114090828B (en) * 2022-01-24 2022-04-22 一道新能源科技(衢州)有限公司 Big data processing method and system applied to light photovoltaic module production
CN114581856A (en) * 2022-05-05 2022-06-03 广东邦盛北斗科技股份公司 Agricultural unit motion state identification method and system based on Beidou system and cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220107