CN114639035A - Intelligent environment detection system and method thereof - Google Patents


Info

Publication number
CN114639035A
CN114639035A
Authority
CN
China
Prior art keywords
environment detection
video
target
terminal device
detection terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210141710.0A
Other languages
Chinese (zh)
Inventor
陈文瑞
刘彦明
刘伟
李波
李红杰
李思雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202210141710.0A
Publication of CN114639035A
Current legal status: Withdrawn

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 Measuring or testing not otherwise provided for
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent environment detection system and method, relating to the technical field of environment detection. In the invention, at least one environment detection terminal device is determined as a target environment detection terminal device. For each target environment detection terminal device, video frame screening processing is performed on the environment detection video frames included in the environment detection video obtained by that device, to obtain a corresponding target environment detection video. For each target environment detection video, target object identification processing is performed on each frame of environment detection video frame included in the video to obtain a corresponding environment detection result, where the environment detection result represents the object danger degree of the corresponding environment area, and the target object identification processing identifies whether a target dangerous object exists. On this basis, the prior-art problem that environment detection resources are easily wasted can be alleviated.

Description

Intelligent environment detection system and method thereof
Technical Field
The invention relates to the technical field of environment detection, in particular to an intelligent environment detection system and an intelligent environment detection method.
Background
Effective detection of the environment is one of the important means of ensuring environmental safety; for example, by detecting the environment, it can be determined whether a danger exists in it. In the prior art, to ensure detection reliability, a relatively large number of environment detection terminal devices are generally deployed to detect each environment area, and all environment detection video frames are identified in every detection pass, which easily wastes environment detection (identification processing) resources.
Disclosure of Invention
In view of the above, an objective of the present invention is to provide an intelligent environment detection system and method thereof, so as to solve the problem that environment detection resources are easily wasted in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
an intelligent environment detection method is applied to an environment detection server, the environment detection server is in communication connection with a plurality of environment detection terminal devices, and the method comprises the following steps:
determining at least one environment detection terminal device as a target environment detection terminal device from the plurality of environment detection terminal devices, wherein the target environment detection terminal device is used for detecting a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, and the environment detection video comprises a plurality of frames of environment detection video frames;
for each target environment detection terminal device, performing video frame screening processing on an environment detection video frame included in an environment detection video obtained by the target environment detection terminal device to obtain a target environment detection video corresponding to the target environment detection terminal device, wherein the target environment detection video includes at least one frame of environment detection video frame;
and aiming at each target environment detection video, carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video, wherein the environment detection result is used for representing the object danger degree of a corresponding environment area, and the target object identification processing is used for identifying whether a target dangerous object exists.
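The three steps above can be sketched end to end as follows. This is a minimal illustration, not the patent's actual processing: the device selection rule, the keep-every-Nth screening, and the per-frame danger predicate are all hypothetical stand-ins.

```python
def select_target_devices(devices, min_activity=0.5):
    """Step 1: pick target environment detection terminal devices.
    A hypothetical 'activity' score stands in for the patent's
    history-based selection criterion."""
    return [d for d in devices if d["activity"] >= min_activity]

def screen_frames(frames, keep_every=2):
    """Step 2: video frame screening -- a trivial keep-every-Nth rule,
    always retaining at least one frame."""
    kept = frames[::keep_every]
    return kept if kept else frames[:1]

def identify_dangerous(frames, is_dangerous):
    """Step 3: per-frame target object identification, summarized as the
    fraction of frames in which a target dangerous object appears."""
    hits = sum(1 for frame in frames if is_dangerous(frame))
    return hits / len(frames)

devices = [
    {"id": "cam-1", "activity": 0.9, "frames": list(range(10))},
    {"id": "cam-2", "activity": 0.2, "frames": list(range(10))},
]
for device in select_target_devices(devices):
    result = identify_dangerous(screen_frames(device["frames"]),
                                lambda frame: frame >= 8)
    print(device["id"], result)
```

The resulting fraction plays the role of the environment detection result representing the object danger degree of the corresponding environment area.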
In some preferred embodiments, in the above intelligent environment detection method, the step of, for each target environment detection terminal device, performing video frame screening processing on an environment detection video frame included in an environment detection video obtained by the target environment detection terminal device to obtain a target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
and for each target environment detection terminal device, determining whether video frame screening processing needs to be performed on environment detection video frames included in the environment detection video obtained by the target environment detection terminal device based on the video frame rate information and the video frame number corresponding to the target environment detection terminal device, and performing video frame screening processing on the environment detection video frames included in the environment detection video when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device.
In some preferred embodiments, in the above intelligent environment detection method, the step of determining, for each target environment detection terminal device, based on video frame rate information and a number of video frames corresponding to the target environment detection terminal device, whether video frame screening processing needs to be performed on an environment detection video frame included in an environment detection video obtained by the target environment detection terminal device, and when it is determined that video frame screening processing needs to be performed on an environment detection video frame included in the environment detection video, performing video frame screening processing on an environment detection video frame included in the environment detection video to obtain a target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, determining a relative size relationship between video frame rate information corresponding to the target environment detection terminal device and pre-configured video frame rate threshold information, and determining a relative size relationship between the number of video frames corresponding to the target environment detection terminal device and a pre-configured video frame number threshold;
for each target environment detection terminal device, if the video frame rate information corresponding to the target environment detection terminal device is greater than or equal to the video frame rate threshold information and the number of video frames corresponding to the target environment detection terminal device is greater than or equal to the video frame number threshold, determining that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device; if the video frame rate information corresponding to the target environment detection terminal device is smaller than the video frame rate threshold information and/or the number of video frames corresponding to the target environment detection terminal device is less than the video frame number threshold, determining that video frame screening processing does not need to be performed on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device;
and for each target environment detection terminal device, when determining that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device, performing video frame screening processing on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device.
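The decision rule above amounts to a conjunction of two threshold checks, as in the sketch below. The concrete threshold values are illustrative placeholders, since the text only requires them to be pre-configured.

```python
def needs_screening(frame_rate, frame_count,
                    rate_threshold=25.0, count_threshold=100):
    """Screen only when BOTH the frame rate and the frame count reach their
    pre-configured thresholds; otherwise the video is identified as-is.
    The threshold values here are illustrative assumptions."""
    return frame_rate >= rate_threshold and frame_count >= count_threshold
```

Failing either check (the "and/or" branch in the text) means the device's video skips screening.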
In some preferred embodiments, in the above intelligent environment detection method, when it is determined that video frame screening processing needs to be performed on an environment detection video frame included in an environment detection video corresponding to each target environment detection terminal device, the step of performing video frame screening processing on the environment detection video frame included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, when it is determined that video frame screening processing needs to be performed on environment detection video frames included in an environment detection video corresponding to the target environment detection terminal device, determining a corresponding first screening proportion based on video frame rate information corresponding to the target environment detection terminal device, and determining a corresponding second screening proportion based on the number of video frames corresponding to the target environment detection terminal device, wherein the video frame rate information and the first screening proportion have a positive correlation, and the number of video frames and the second screening proportion have a positive correlation;
for each target environment detection terminal device, acquiring each historical environment detection result corresponding to the target environment detection video, performing fusion processing on each historical environment detection result corresponding to the target environment detection video to obtain a historical environment detection fusion result corresponding to the target environment detection video, and determining a third screening proportion based on the object danger degree represented by the historical environment detection fusion result, wherein the third screening proportion and the object danger degree represented by the historical environment detection fusion result have a negative correlation;
for each target environment detection terminal device, performing weighted summation calculation on the first screening proportion, the second screening proportion and a third screening proportion corresponding to the target environment detection terminal device to obtain a weighted screening proportion corresponding to the target environment detection terminal device, wherein a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the first screening proportion, and a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the second screening proportion;
for each target environment detection terminal device, performing similarity calculation on every two adjacent frames of environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device to obtain video frame similarity between the two adjacent frames of environment detection video frames;
for each target environment detection terminal device, performing segmentation processing on the environment detection video corresponding to the target environment detection terminal device based on a pre-configured first quantity value to obtain a plurality of video segments corresponding to the environment detection video, wherein the number of environment detection video frames included in each video segment is equal to the first quantity value; respectively determining the average value of the video frame similarity between every two adjacent frames of environment detection video frames in each video segment; and determining, as target video segments, the video segments with the largest average values, the number of target video segments being determined by the weighted screening proportion corresponding to the target environment detection terminal device;
and for each target environment detection terminal device, screening out part of the environment detection video frames in each target video segment corresponding to the target environment detection terminal device, and obtaining the target environment detection video corresponding to the target environment detection terminal device based on the environment detection video frames that are not screened out of the target video segments together with the environment detection video frames included in the other video segments.
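The weighting and segment-thinning steps above can be sketched as follows. The proportion mapping functions, the weight values, the number of thinned segments, and the keep-every-other-frame drop rule are all illustrative assumptions; the text only requires that the first and second proportions correlate positively with frame rate and frame count, that the third correlates negatively with the historical danger degree, and that the third weight exceeds the other two.

```python
def weighted_screening_ratio(frame_rate, frame_count, history_danger,
                             w1=0.25, w2=0.25, w3=0.5):
    """Combine the first, second, and third screening proportions into the
    weighted screening proportion; mappings and weights are illustrative."""
    p1 = min(frame_rate / 60.0, 1.0)     # positively correlated with frame rate
    p2 = min(frame_count / 1000.0, 1.0)  # positively correlated with frame count
    p3 = 1.0 - history_danger            # negatively correlated with danger degree
    return w1 * p1 + w2 * p2 + w3 * p3   # w3 > w1 and w3 > w2 as required

def screen_by_similarity(frames, segment_len, weighted_ratio, similarity):
    """Split the video into fixed-length segments, rank segments by the mean
    similarity of adjacent frames, and thin out the most redundant ones."""
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]

    def mean_sim(seg):
        if len(seg) < 2:
            return 0.0
        return sum(similarity(a, b) for a, b in zip(seg, seg[1:])) / (len(seg) - 1)

    # How many segments become target segments follows the weighted ratio.
    n_target = max(1, round(weighted_ratio * len(segments)))
    ranked = sorted(range(len(segments)),
                    key=lambda i: mean_sim(segments[i]), reverse=True)
    target_ids = set(ranked[:n_target])

    kept = []
    for i, seg in enumerate(segments):
        # Screen out part of the frames only in the target (most similar) segments.
        kept.extend(seg[::2] if i in target_ids else seg)
    return kept
```

With highly similar frames concentrated in one segment, only that segment is thinned while the others pass through unchanged.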
In some preferred embodiments, in the above intelligent environment detection method, the step of performing, for each target environment detection video, target object identification processing on each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video includes:
respectively carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video based on a pre-trained neural network model aiming at each target environment detection video to obtain a target object identification result corresponding to each frame of environment detection video frame;
and aiming at each target environment detection video, fusing target object identification results corresponding to each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
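The per-frame identification step can be sketched as below; `model` is a placeholder for the pre-trained neural network, whose architecture the text does not specify, treated here as any callable returning a truth value per frame.

```python
def identify_frames(frames, model):
    """Apply a pre-trained model to every environment detection video frame;
    each result flags whether a target dangerous object was identified.
    `model` is a hypothetical stand-in for the deployed network."""
    return [bool(model(frame)) for frame in frames]
```

The list of per-frame results produced here is what the fusion step of the method then aggregates.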
In some preferred embodiments, in the above intelligent environment detection method, the step of performing fusion processing on the target object identification result corresponding to each frame of environment detection video frame included in each target environment detection video to obtain the environment detection result corresponding to the target environment detection video includes:
counting the number of first target object identification results in target object identification results corresponding to environment detection video frames included in each target environment detection video to obtain the number of identification result statistics corresponding to the target environment detection video, wherein the first target object identification results are used for representing that target dangerous objects exist in the corresponding environment detection video frames;
and calculating the ratio of the identification result statistical number corresponding to each target environment detection video to the number of the environment detection video frames included in the target environment detection video, and obtaining the environment detection result corresponding to the target environment detection video based on the ratio.
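The counting-and-ratio fusion just described can be sketched as follows. Thresholding the ratio into a final verdict is an illustrative assumption, since the text only says the result is obtained "based on the ratio".

```python
def fuse_results(frame_results, danger_threshold=0.5):
    """Fuse per-frame identification results into an environment detection result.
    frame_results holds one boolean per frame; True means a target dangerous
    object was identified in that frame (a 'first' identification result)."""
    hits = sum(1 for flagged in frame_results if flagged)
    ratio = hits / len(frame_results)  # statistics count / number of frames
    # Illustrative: treat the ratio as the object danger degree and threshold it.
    return {"danger_ratio": ratio, "dangerous": ratio >= danger_threshold}
```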
In some preferred embodiments, in the above intelligent environment detection method, the step of determining, among the plurality of environment detection terminal devices, at least one environment detection terminal device as a target environment detection terminal device includes:
respectively acquiring a historical video set obtained by each environment detection terminal device in the plurality of environment detection terminal devices detecting its corresponding environment area in history, to obtain a plurality of historical video sets corresponding to the plurality of environment detection terminal devices, wherein each historical video set comprises a plurality of historical environment detection videos, and each historical environment detection video comprises multiple frames of historical environment detection video frames that are continuous in time sequence and are obtained by detecting the corresponding environment area;
for each historical video set in the plurality of historical video sets, performing video content comparison analysis on the historical environment detection videos included in the historical video set to obtain environment change characteristic information historically formed by the environment area corresponding to the historical video set;
and determining at least one environment detection terminal device as a target environment detection terminal device in the plurality of environment detection terminal devices based on the environment change characteristic information corresponding to each historical video set to obtain at least one target environment detection terminal device.
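The text leaves the exact selection criterion open; one hedged reading is to score each device's historically formed environment change and keep the devices whose score exceeds a threshold, as sketched below. The scoring and the threshold value are assumptions, not taken from the patent.

```python
def select_by_change(change_scores, change_threshold=0.3):
    """Pick as target devices those whose historical environment-change
    feature score exceeds a threshold (both score and threshold are
    illustrative stand-ins for the patent's unspecified criterion)."""
    return [device for device, score in change_scores.items()
            if score > change_threshold]
```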
The embodiment of the invention also provides an intelligent environment detection system, which is applied to an environment detection server, wherein the environment detection server is in communication connection with a plurality of environment detection terminal devices, and the system comprises:
a target detection device determining module, configured to determine, among the multiple environment detection terminal devices, at least one environment detection terminal device as a target environment detection terminal device, where the target environment detection terminal device is configured to detect a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, where the environment detection video includes multiple frames of environment detection video frames;
the video frame screening processing module is used for performing, for each target environment detection terminal device, video frame screening processing on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device, to obtain the target environment detection video corresponding to the target environment detection terminal device, wherein the target environment detection video comprises at least one frame of environment detection video frame;
and the video frame identification processing module is used for carrying out target object identification processing on each frame of environment detection video frame included in each target environment detection video aiming at each target environment detection video to obtain an environment detection result corresponding to the target environment detection video, wherein the environment detection result is used for representing the object danger degree of a corresponding environment area, and the target object identification processing is used for identifying whether a target dangerous object exists or not.
In some preferred embodiments, in the above intelligent environment detection system, the video frame screening processing module is specifically configured to:
for each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
and for each target environment detection terminal device, determining whether video frame screening processing needs to be performed on environment detection video frames included in the environment detection video obtained by the target environment detection terminal device based on the video frame rate information and the video frame number corresponding to the target environment detection terminal device, and performing video frame screening processing on the environment detection video frames included in the environment detection video when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device.
In some preferred embodiments, in the above intelligent environment detection system, the video frame identification processing module is specifically configured to:
for each target environment detection video, respectively performing target object identification processing on each frame of environment detection video frame included in the target environment detection video based on a pre-trained neural network model to obtain a target object identification result corresponding to each frame of environment detection video frame;
and aiming at each target environment detection video, fusing target object identification results corresponding to each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
The intelligent environment detection system and method provided by the embodiment of the invention can determine at least one environment detection terminal device as a target environment detection terminal device. Then, for each target environment detection terminal device, video frame screening processing is performed on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device, to obtain a corresponding target environment detection video. Thus, for each target environment detection video, target object identification processing can be performed on each frame of environment detection video frame included in the target environment detection video to obtain a corresponding environment detection result, thereby alleviating the prior-art problem that environment detection resources are easily wasted (that is, screening the environment detection videos reduces the number of identification operations to a certain extent).
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of an environment detection server according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of steps included in the intelligent environment detection method according to the embodiment of the present invention.
Fig. 3 is a schematic diagram of modules included in the environment detection system according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an environment detection server. Wherein the environment detection server may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have at least one software functional module (computer program) stored therein, which can be in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the intelligent environment detection method provided by the embodiment of the present invention (as described later).
For example, in an application example, the Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
For example, in an application example, the Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
Also, the structure shown in fig. 1 is only an illustration, and the environment detection server may further include more or less components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices. For example, the environment detection server may be communicatively connected with a plurality of environment detection terminal devices (e.g., image capture devices).
With reference to fig. 2, an embodiment of the present invention further provides an intelligent environment detection method, which is applicable to the above environment detection server. The method steps defined by the flow of the intelligent environment detection method can be implemented by the environment detection server.
The specific process shown in FIG. 2 will be described in detail below.
Step S100, determining at least one environment detection terminal device among the plurality of environment detection terminal devices as a target environment detection terminal device.
In this embodiment of the present invention, the environment detection server may determine, from the plurality of environment detection terminal devices, at least one environment detection terminal device as a target environment detection terminal device. The target environment detection terminal device is used for detecting a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, and the environment detection video comprises a plurality of frames of environment detection video frames.
Step S200, aiming at each target environment detection terminal device, carrying out video frame screening processing on environment detection video frames included in the environment detection video obtained by the target environment detection terminal device to obtain a target environment detection video corresponding to the target environment detection terminal device.
In the embodiment of the present invention, the environment detection server may, for each target environment detection terminal device, perform video frame screening processing on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device, to obtain the target environment detection video corresponding to the target environment detection terminal device. The target environment detection video comprises at least one frame of environment detection video frame (i.e., at least one frame is retained by the screening).
Step S300, aiming at each target environment detection video, carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
In the embodiment of the present invention, the environment detection server may perform, for each target environment detection video, target object identification processing on each frame of environment detection video frame included in the target environment detection video, so as to obtain an environment detection result corresponding to the target environment detection video. The environment detection result is used for representing the object danger degree of the corresponding environment area, and the target object identification processing is used for identifying whether a target dangerous object exists or not.
Based on the steps S100, S200, and S300 included in the above method, at least one environment detection terminal device may be determined as a target environment detection terminal device. Then, for each target environment detection terminal device, video frame screening processing may be performed on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device, so as to obtain a corresponding target environment detection video. Thus, for each target environment detection video, target object identification processing may be performed on each frame of environment detection video frame included in the target environment detection video, so as to obtain a corresponding environment detection result, thereby alleviating the problem that environment detection resources are easily wasted in the prior art (i.e., screening the environment detection videos reduces the number of identification operations to a certain extent).
For example, in an application example, the step S100 may include the following steps, such as step S110, step S120, and step S130.
Step S110, respectively obtaining a historical video set obtained by detecting, in history, a corresponding environment area by each of the plurality of environment detection terminal devices, and obtaining a plurality of historical video sets corresponding to the plurality of environment detection terminal devices.
In this embodiment of the present invention, the environment detection server may respectively obtain a historical video set obtained by detecting, by each of the plurality of environment detection terminal devices, a corresponding environment area in history, so as to obtain a plurality of historical video sets corresponding to the plurality of environment detection terminal devices. Each historical video set comprises a plurality of historical environment detection videos, and each historical environment detection video comprises a plurality of frames of historical environment detection video frames which are obtained by detecting corresponding environment areas and are continuous in time sequence.
Step S120, for each historical video set in the plurality of historical video sets, performing video content comparison analysis on the historical environment detection videos included in the historical video set to obtain environment change characteristic information historically formed by the environment area corresponding to the historical video set.
In this embodiment of the present invention, the environment detection server may perform, for each historical video set in the plurality of historical video sets, video content comparison analysis on the historical environment detection videos included in the historical video set, so as to obtain environment change feature information historically formed in an environment area corresponding to the historical video set.
Step S130, determining, based on the environment change characteristic information corresponding to each historical video set, at least one environment detection terminal device among the plurality of environment detection terminal devices as a target environment detection terminal device.
In this embodiment of the present invention, the environment detection server may determine, based on the environment change feature information corresponding to each historical video set, at least one environment detection terminal device as a target environment detection terminal device from among the plurality of environment detection terminal devices. The target environment detection terminal device is used for detecting a corresponding environment area under the control of the environment detection server to obtain a current environment detection video.
Based on the steps S110, S120, and S130 included in the above method, a historical video set obtained by detecting a corresponding environmental area historically by each environmental detection terminal device may be obtained, respectively, to obtain a plurality of historical video sets corresponding to the plurality of environmental detection terminal devices, then, for each historical video set in the plurality of historical video sets, video content comparison analysis may be performed on a historical environmental detection video included in the historical video set, so as to obtain environmental change characteristic information historically formed by the environmental area corresponding to the historical video set, so that, based on the environmental change characteristic information corresponding to each historical video set, in the plurality of environmental detection terminal devices, at least one environmental detection terminal device may be determined as a target environmental detection terminal device, that is, the target environmental detection terminal device may be determined by combining with the historical environmental detection video, and then, environment detection is carried out based on the target environment detection terminal equipment, and compared with a conventional scheme of directly carrying out detection based on all the environment detection terminal equipment, the resource consumption of environment detection can be reduced to a certain extent, so that the problem of resource waste easily occurring in the environment detection in the prior art is solved.
For example, in an application example, the step S110 may include the following steps:
firstly, aiming at each environment detection terminal device in the plurality of environment detection terminal devices, obtaining each historical environment detection video obtained by detecting a corresponding environment area historically by the environment detection terminal device, and counting the number of the historical environment detection videos to obtain the statistical number of the historical videos corresponding to the environment detection terminal device;
secondly, determining a relative size relation between the historical video statistic quantity corresponding to each environment detection terminal device and a preset video statistic quantity threshold value for each environment detection terminal device in the plurality of environment detection terminal devices;
then, for each environment detection terminal device in the plurality of environment detection terminal devices, if the historical video statistics number corresponding to the environment detection terminal device is smaller than the video statistics number threshold, a historical video set corresponding to the environment detection terminal device is constructed based on each historical environment detection video detected by the environment detection terminal device, if the historical video statistics number corresponding to the environment detection terminal device is larger than or equal to the video statistics number threshold, all the historical environment detection videos detected by the environment detection terminal device are screened, and a corresponding historical video set is constructed based on the screened historical environment detection videos.
For example, in an application example, the step of, for each of the plurality of environment detection terminal devices, if the historical video statistics number corresponding to the environment detection terminal device is smaller than the video statistics number threshold, constructing a historical video set corresponding to the environment detection terminal device based on each historical environment detection video detected by the environment detection terminal device, and if the historical video statistics number corresponding to the environment detection terminal device is greater than or equal to the video statistics number threshold, screening all the historical environment detection videos detected by the environment detection terminal device, and constructing a corresponding historical video set based on the screened historical environment detection videos may include the following steps:
firstly, for each environment detection terminal device in the plurality of environment detection terminal devices, if the historical video statistic number corresponding to the environment detection terminal device is smaller than the video statistic number threshold, constructing a historical video set corresponding to the environment detection terminal device based on each historical environment detection video detected by the environment detection terminal device;
secondly, for each environment detection terminal device in the plurality of environment detection terminal devices, if the historical video statistic quantity corresponding to the environment detection terminal device is larger than or equal to the video statistic quantity threshold value, respectively acquiring the historical detection time information of each historical environment detection video detected by the environment detection terminal device, screening based on the historical detection time information, and constructing a corresponding historical video set based on the screened historical environment detection videos, wherein the quantity of the historical environment detection videos included in the historical video set thus constructed is equal to the video statistic quantity threshold value, and the historical detection time information of each screened-in historical environment detection video is later than that of each historical environment detection video that is screened out.
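The selection rule above can be sketched as follows; the `HistVideo` record and its field names are illustrative assumptions, not part of the disclosed method:

```python
from dataclasses import dataclass

@dataclass
class HistVideo:
    video_id: str
    detection_time: float  # hypothetical field: when the video was detected

def build_history_set(videos, count_threshold):
    """Keep every historical video when the count is under the threshold;
    otherwise keep only the `count_threshold` most recently detected ones,
    so that each kept video is later than each discarded one."""
    if len(videos) < count_threshold:
        return list(videos)
    # Sort newest-first and truncate to the threshold.
    return sorted(videos, key=lambda v: v.detection_time, reverse=True)[:count_threshold]
```

A usage note: with five videos and a threshold of three, only the three latest are kept; with two videos, all are kept.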
For example, in an application example, the step S120 may include the following steps:
firstly, for each historical video set in the plurality of historical video sets, calculating the video similarity between every two historical environment detection videos included in the historical video set;
secondly, aiming at each historical video set in the plurality of historical video sets, based on the video similarity between every two historical environment detection videos included in the historical video set, environment change characteristic information formed historically by the environment area corresponding to the historical video set is obtained.
For example, in an application example, the step of calculating, for each historical video set in the plurality of historical video sets, a video similarity between every two historical environment detection videos included in the historical video set may include the following steps:
firstly, for each historical video set in the plurality of historical video sets, performing similarity calculation operation on every two historical environment detection videos included in the historical video set to obtain the video similarity between every two historical environment detection videos.
Wherein the similarity calculation operation may include the following steps:
firstly, respectively determining the two historical environment detection videos as a first historical environment detection video and a second historical environment detection video (any one of the two historical environment detection videos is used as the first historical environment detection video, and the other one is used as the second historical environment detection video), respectively determining the number of the historical environment detection video frames included in the first historical environment detection video and the number of the historical environment detection video frames included in the second historical environment detection video to obtain a corresponding first video frame number and a second video frame number, respectively carrying out video interception on the first historical environment detection video and the second historical environment detection video based on the smaller value of the first video frame number and the second video frame number (retaining the historical environment detection video frames that come earlier in the time sequence) to obtain a corresponding first video clip and a corresponding second video clip, and then, carrying out one-to-one correspondence processing on the historical environment detection video frames included in the first video clip and the second video clip according to the corresponding time sequences;
secondly, performing feature point extraction processing on a first frame historical environment detection video frame included in the first video clip (for example, based on the existing ORB algorithm, Oriented FAST and Rotated BRIEF), obtaining a plurality of image feature points corresponding to the first frame historical environment detection video frame, determining a plurality of target image feature points of which the number is not more than the smaller value of the first video frame number and the second video frame number from the plurality of image feature points, and sequencing the plurality of target image feature points based on the positions of the plurality of target image feature points in the first frame historical environment detection video frame (for example, sequencing from top left to bottom right and the like), so as to obtain a sequencing serial number of each target image feature point;
then, according to the corresponding sequence numbers, carrying out one-to-one correspondence processing on the multiple target image feature points and the corresponding number of multiple frames of historical environment detection video frames included in the first video segment, and carrying out one-to-one correspondence processing on the multiple target image feature points and the corresponding number of multiple frames of historical environment detection video frames included in the second video segment, wherein the multiple frames of historical environment detection video frames included in the first video segment and corresponding to the multiple target image feature points have a correspondence relation in time sequence with the multiple frames of historical environment detection video frames included in the second video segment and corresponding to the multiple target image feature points;
then, for each frame of historical environment detection video frame included in the first video clip, if the historical environment detection video frame corresponds to one target image feature point, extracting the pixel value of the target image feature point in the historical environment detection video frame, and constructing a first pixel value sequence of the plurality of target image feature points in the first video clip based on the pixel values respectively corresponding to the plurality of target image feature points;
then, for each frame of historical environment detection video frame included in the second video clip, if the historical environment detection video frame corresponds to one target image feature point, extracting the pixel value of the target image feature point in the historical environment detection video frame, and constructing a second pixel value sequence of the plurality of target image feature points in the second video clip based on the pixel values respectively corresponding to the plurality of target image feature points;
and finally, calculating the sequence similarity of the first pixel value sequence and the second pixel value sequence, and taking the sequence similarity as the video similarity between the two historical environment detection videos, wherein the sequence similarity is obtained based on the similarity between the pixel values of the corresponding sequence positions between the first pixel value sequence and the second pixel value sequence (such as performing mean calculation).
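A minimal sketch of the tail of the similarity calculation operation, reduced to the truncation and pixel-value-sequence steps. Grayscale pixel values in 0..255 and the per-position measure `1 - |a - b| / 255` are illustrative assumptions; the text only requires that the sequence similarity be obtained from per-position similarities (e.g. by mean calculation):

```python
def truncate_to_common_length(frames_a, frames_b):
    """Intercept both videos to the smaller frame count, keeping the
    frames that come earlier in the time sequence."""
    n = min(len(frames_a), len(frames_b))
    return frames_a[:n], frames_b[:n]

def sequence_similarity(seq_a, seq_b):
    """Mean of per-position similarities between two equal-length pixel-value
    sequences; used as the video similarity between the two videos."""
    assert len(seq_a) == len(seq_b) and seq_a
    return sum(1.0 - abs(a - b) / 255.0 for a, b in zip(seq_a, seq_b)) / len(seq_a)
```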
For example, in an application example, the step of obtaining, for each historical video set in the plurality of historical video sets, environment change feature information historically formed by an environment area corresponding to the historical video set based on video similarity between every two historical environment detection videos included in the historical video set may include the following steps:
firstly, for each historical video set in the plurality of historical video sets, screening out, as first video similarities, the video similarities between every two temporally adjacent historical environment detection videos from the video similarities between every two historical environment detection videos included in the historical video set, so as to obtain a plurality of first video similarities corresponding to the historical video set;
secondly, respectively calculating the average value of the historical detection time information of two historical environment detection videos corresponding to each of the multiple first video similarities corresponding to the historical video set aiming at each of the multiple historical video sets to obtain the historical detection time average value corresponding to each of the first video similarities;
then, for each historical video set in the plurality of historical video sets, respectively performing update processing on each first video similarity, other than the first one, in the plurality of first video similarities corresponding to the historical video set, wherein the update processing is based on the video similarity between the earlier of the two historical environment detection videos corresponding to the first video similarity and the historical environment detection video adjacent before it: if that video similarity is greater than the first video similarity, the first video similarity is increased during the update; if it is equal to the first video similarity, the first video similarity is maintained; and if it is less than the first video similarity, the first video similarity is reduced;
finally, for each historical video set in the multiple historical video sets, performing coordinate point mapping processing based on multiple first video similarities corresponding to the historical video set at present and a historical detection time average value corresponding to each first video similarity, and performing curve fitting processing based on coordinate points obtained through mapping processing to obtain environment change characteristic information, formed historically, of an environment area corresponding to the historical video set, wherein each coordinate point comprises one corresponding first video similarity and one corresponding historical detection time average value.
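The coordinate-point mapping over adjacent-pair similarities can be sketched as below; the subsequent curve fitting (e.g. least-squares over these points) is left open, as in the text. Representing each video as a dict with a `detection_time` key is an assumption for illustration:

```python
def change_curve_points(videos, pairwise_similarity):
    """Map each temporally adjacent pair (v_i, v_{i+1}) to one coordinate
    point: (mean of the two detection times, their video similarity).
    `videos` is assumed to be sorted by detection time."""
    points = []
    for a, b in zip(videos, videos[1:]):
        t_mean = (a["detection_time"] + b["detection_time"]) / 2.0
        points.append((t_mean, pairwise_similarity(a, b)))
    return points
```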
For example, in an application example, the step S130 may include the following steps:
firstly, for each environment detection terminal device in the plurality of environment detection terminal devices, determining a magnitude relation between the change amplitude represented by the environment change characteristic information corresponding to the historical video set corresponding to the environment detection terminal device (as described above, the change amplitude of the curve obtained through the fitting processing may be calculated, for example, by calculating the difference between every adjacent peak and trough and taking the accumulated sum of the differences as the corresponding change amplitude) and a pre-configured change amplitude threshold;
secondly, for each environment detection terminal device in the plurality of environment detection terminal devices, if the variation amplitude represented by the environment variation characteristic information corresponding to the historical video set corresponding to the environment detection terminal device is greater than or equal to the variation amplitude threshold, determining the environment detection terminal device as a target environment detection terminal device.
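The change-amplitude test can be sketched as below over a discretely sampled fitted curve; treating the samples' local extrema (plus the two endpoints) as the peaks and troughs is an assumption about how the fitted curve is evaluated:

```python
def change_amplitude(curve_samples):
    """Accumulated sum of the differences between every adjacent peak and
    trough of a sampled similarity-over-time curve."""
    extrema = [curve_samples[0]]
    for prev, cur, nxt in zip(curve_samples, curve_samples[1:], curve_samples[2:]):
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            extrema.append(cur)  # local peak or trough
    extrema.append(curve_samples[-1])
    return sum(abs(b - a) for a, b in zip(extrema, extrema[1:]))

def exceeds_amplitude_threshold(curve_samples, amplitude_threshold):
    # The device is determined as a target device when the amplitude
    # reaches the pre-configured threshold.
    return change_amplitude(curve_samples) >= amplitude_threshold
```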
For example, in an application example, the step S130 may further include the following steps:
firstly, for each environment detection terminal device in the plurality of environment detection terminal devices, if the change amplitude represented by the environment change feature information corresponding to the historical video set corresponding to the environment detection terminal device is smaller than the change amplitude threshold, determining the latest historical detection time information in the historical detection time information corresponding to each historical environment detection video in the historical video set corresponding to the environment detection terminal device, and determining the relative size relationship between the difference between the historical detection time information and the current time information and a preset time difference threshold;
secondly, for each environment detection terminal device in the plurality of environment detection terminal devices, if a difference value between the historical detection time information and the current time information corresponding to the environment detection terminal device is greater than or equal to the time difference value threshold, determining the environment detection terminal device as a target environment detection terminal device, and if the difference value between the historical detection time information and the current time information corresponding to the environment detection terminal device is less than the time difference value threshold, determining that the environment detection terminal device is not determined as the target environment detection terminal device.
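Combining the two branches of step S130, the target-device decision reduces to a two-rule predicate; the parameter names are illustrative:

```python
def is_target_device(change_amplitude, amplitude_threshold,
                     latest_detection_time, current_time, time_gap_threshold):
    """A device is a target device when its environment area changes
    strongly enough, or, failing that, when its latest detection is
    stale enough relative to the current time."""
    if change_amplitude >= amplitude_threshold:
        return True
    return (current_time - latest_detection_time) >= time_gap_threshold
```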
For example, in an application example, the step S200 may include the following steps:
firstly, aiming at each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
secondly, for each target environment detection terminal device, determining whether video frame screening processing needs to be performed on environment detection video frames included in an environment detection video obtained by the target environment detection terminal device based on video frame rate information and the number of video frames corresponding to the target environment detection terminal device, and performing video frame screening processing on the environment detection video frames included in the environment detection video when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video, so as to obtain the target environment detection video corresponding to the target environment detection terminal device.
For example, in an application example, the step of determining, for each target environment detection terminal device, whether to perform video frame screening processing on an environment detection video frame included in an environment detection video detected by the target environment detection terminal device based on video frame rate information and a video frame number corresponding to the target environment detection terminal device, and when determining that video frame screening processing is required to be performed on an environment detection video frame included in the environment detection video, performing video frame screening processing on an environment detection video frame included in the environment detection video to obtain a target environment detection video corresponding to the target environment detection terminal device may include the following steps:
firstly, for each target environment detection terminal device, determining a relative size relationship between video frame rate information corresponding to the target environment detection terminal device and pre-configured video frame rate threshold information, and determining a relative size relationship between the number of video frames corresponding to the target environment detection terminal device and a pre-configured video frame number threshold;
secondly, for each target environment detection terminal device, if the video frame rate information corresponding to the target environment detection terminal device is greater than or equal to the video frame rate threshold information, and the number of video frames corresponding to the target environment detection terminal device is greater than or equal to the threshold value of the number of video frames, determining that video frame screening processing needs to be performed on environment detection video frames included in the environment detection video detected by the target environment detection terminal device, if video frame rate information corresponding to the target environment detection terminal device is smaller than the video frame rate threshold information, and/or the number of video frames corresponding to the target environment detection terminal device is less than the threshold value of the number of video frames, determining that video frame screening processing is not required to be performed on environment detection video frames included in the environment detection video obtained by the target environment detection terminal equipment;
then, for each target environment detection terminal device, when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device, video frame screening processing is performed on the environment detection video frames included in the environment detection video to obtain a target environment detection video corresponding to the target environment detection terminal device.
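The screening decision above is a simple conjunction of the two threshold comparisons; a minimal sketch:

```python
def needs_frame_screening(frame_rate, frame_count,
                          frame_rate_threshold, frame_count_threshold):
    """Screening is required only when BOTH the frame rate and the frame
    count reach their thresholds; otherwise the video is used as-is."""
    return frame_rate >= frame_rate_threshold and frame_count >= frame_count_threshold
```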
For example, in an application example, when it is determined that video frame screening processing needs to be performed on an environment detection video frame included in an environment detection video corresponding to each target environment detection terminal device, the step of performing video frame screening processing on the environment detection video frame included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device may include the following steps:
firstly, for each target environment detection terminal device, when it is determined that video frame screening processing needs to be performed on environment detection video frames included in an environment detection video corresponding to the target environment detection terminal device, determining a corresponding first screening proportion based on video frame rate information corresponding to the target environment detection terminal device, and determining a corresponding second screening proportion based on a number of video frames corresponding to the target environment detection terminal device, wherein the video frame rate information and the first screening proportion have a positive correlation, and the number of video frames and the second screening proportion have a positive correlation;
secondly, for each target environment detection terminal device, obtaining each historical environment detection result corresponding to the target environment detection terminal device, performing fusion processing (such as mean value calculation) on each of the historical environment detection results to obtain a historical environment detection fusion result corresponding to the target environment detection terminal device, and determining a third screening proportion based on the object risk degree represented by the historical environment detection fusion result, wherein the third screening proportion and the object risk degree represented by the historical environment detection fusion result have a negative correlation;
then, for each target environment detection terminal device, performing weighted summation calculation on a first screening proportion, a second screening proportion and a third screening proportion corresponding to the target environment detection terminal device to obtain a weighted screening proportion corresponding to the target environment detection terminal device, wherein a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the first screening proportion, and a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the second screening proportion;
then, for each target environment detection terminal device, performing similarity calculation on every two adjacent frames of environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device to obtain video frame similarity between the two adjacent frames of environment detection video frames (which may be calculating image similarity between the two frames of environment detection video frames);
then, for each target environment detection terminal device, performing segmentation processing on the environment detection video corresponding to the target environment detection terminal device based on a pre-configured first quantity value to obtain a plurality of video segments corresponding to the environment detection video, respectively determining the average value of the video frame similarities between every two adjacent frames of environment detection video frames in each video segment, and, based on the weighted screening proportion corresponding to the target environment detection terminal device, determining the corresponding proportional number of video segments with the largest average values as target video segments, wherein the number of the environment detection video frames included in each video segment is equal to the first quantity value;
and finally, for each target environment detection terminal device, screening out a part of the environment detection video frames in each target video segment corresponding to the target environment detection terminal device.
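The weighted-proportion and segment-selection steps above can be sketched as follows. The concrete weights are illustrative (the text only requires w3 > w1 and w3 > w2), rounding the "corresponding proportional number" of segments up with `ceil` is an assumption, and the sketch segments the adjacent-frame similarity list directly, a simplification of segmenting the frames themselves:

```python
import math

def weighted_screening_ratio(p_rate, p_count, p_risk, w1=0.25, w2=0.25, w3=0.5):
    """Weighted sum of the three screening proportions; the third
    (risk-based) proportion carries the largest weight, as required."""
    assert w3 > w1 and w3 > w2
    return w1 * p_rate + w2 * p_count + w3 * p_risk

def select_target_segments(adjacent_frame_sims, segment_size, weighted_ratio):
    """Split the per-adjacent-frame similarities into fixed-size segments,
    then pick the proportional number of segments with the largest mean
    similarity (most redundant content) as target segments for screening."""
    segments = [adjacent_frame_sims[i:i + segment_size]
                for i in range(0, len(adjacent_frame_sims), segment_size)]
    means = [sum(s) / len(s) for s in segments]
    k = max(1, math.ceil(weighted_ratio * len(means)))
    ranked = sorted(range(len(means)), key=lambda i: means[i], reverse=True)
    return sorted(ranked[:k])  # indices of the target segments
```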
For example, in an application example, the step S300 may include the following steps:
firstly, respectively carrying out target object identification processing on each frame of environment detection video frame included in each target environment detection video based on a pre-trained neural network model aiming at each target environment detection video to obtain a target object identification result corresponding to each frame of environment detection video frame;
and secondly, for each target environment detection video, fusing target object identification results corresponding to each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
For example, in an application example, the step of performing fusion processing on the target object identification result corresponding to each frame of the environment detection video included in each target environment detection video to obtain the environment detection result corresponding to the target environment detection video may include the following steps:
firstly, counting the number of first target object identification results in target object identification results corresponding to environment detection video frames included in each target environment detection video to obtain the number of identification results corresponding to the target environment detection video, wherein the first target object identification results are used for representing that target dangerous objects exist in the corresponding environment detection video frames (different definitions can be provided according to different application scenes);
then, for each target environment detection video, calculating a ratio between the identification result statistical number corresponding to the target environment detection video and the number of the environment detection video frames included in the target environment detection video, and obtaining an environment detection result corresponding to the target environment detection video based on the ratio (for example, taking the ratio as the environment detection result).
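The fusion step reduces the per-frame recognition results to a single ratio; a minimal sketch, assuming one boolean per frame indicating whether a target dangerous object was recognized in it:

```python
def fuse_detection_results(frame_has_danger):
    """Ratio of frames flagged as containing a target dangerous object to
    the total frame count; the ratio is used directly as the environment
    detection result (a higher ratio represents a higher object danger
    degree for the corresponding environment area)."""
    if not frame_has_danger:
        return 0.0
    return sum(1 for flagged in frame_has_danger if flagged) / len(frame_has_danger)
```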
With reference to fig. 3, an embodiment of the present invention further provides an intelligent environment detection system, which can be applied to the above environment detection server. The intelligent environment detection system may include the following modules:
a target detection device determining module, configured to determine, among the multiple environment detection terminal devices, at least one environment detection terminal device as a target environment detection terminal device, where the target environment detection terminal device is configured to detect a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, where the environment detection video includes multiple frames of environment detection video frames;
a video frame screening processing module, configured to perform, for each target environment detection terminal device, video frame screening processing on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device, so as to obtain a target environment detection video corresponding to the target environment detection terminal device, where the target environment detection video includes at least one frame of environment detection video frame;
and a video frame identification processing module, configured to perform, for each target environment detection video, target object identification processing on each frame of environment detection video frame included in the target environment detection video, so as to obtain an environment detection result corresponding to the target environment detection video, where the environment detection result represents the object danger degree of the corresponding environment area, and the target object identification processing identifies whether a target dangerous object exists.
For example, in an application example, the video frame screening processing module is specifically configured to:
for each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
and determine, based on the video frame rate information and the number of video frames corresponding to the target environment detection terminal device, whether video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device, and, when it is determined that such screening is needed, perform video frame screening processing on the environment detection video frames included in the environment detection video, so as to obtain the target environment detection video corresponding to the target environment detection terminal device.
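A minimal sketch of the decision this module makes is given below. The rule follows the two-threshold criterion spelled out in claim 3; the threshold values themselves are illustrative assumptions, since the disclosure only describes them as "pre-configured":

```python
def needs_frame_screening(
    frame_rate: float,
    frame_count: int,
    frame_rate_threshold: float = 25.0,  # pre-configured value (assumed)
    frame_count_threshold: int = 500,    # pre-configured value (assumed)
) -> bool:
    """Return True when the environment detection video of one terminal
    device requires video frame screening: both the video frame rate and
    the number of video frames must reach their thresholds. Otherwise
    the video is passed to identification as-is."""
    return (frame_rate >= frame_rate_threshold
            and frame_count >= frame_count_threshold)
```

A long, high-frame-rate video is screened because it carries many redundant frames; a short or low-frame-rate video is not, so no detection information is lost.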
For example, in an application example, the video frame identification processing module is specifically configured to:
perform, for each target environment detection video, target object identification processing, based on a pre-trained neural network model, on each frame of environment detection video frame included in the target environment detection video, so as to obtain a target object identification result corresponding to each frame of environment detection video frame;
and fuse, for each target environment detection video, the target object identification results corresponding to the frames of environment detection video frames included in the target environment detection video, so as to obtain an environment detection result corresponding to the target environment detection video.
In summary, according to the intelligent environment detection system and the method thereof provided by the present invention, at least one environment detection terminal device is first determined as a target environment detection terminal device. Then, for each target environment detection terminal device, video frame screening processing is performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device, so as to obtain a corresponding target environment detection video. Finally, for each target environment detection video, target object identification processing is performed on each frame of environment detection video frame included in the target environment detection video to obtain a corresponding environment detection result. Because the environment detection videos are screened before identification, the number of identification operations is reduced to a certain extent, which alleviates the problem in the prior art that environment detection resources are easily wasted.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An intelligent environment detection method is applied to an environment detection server, the environment detection server is in communication connection with a plurality of environment detection terminal devices, and the method comprises the following steps:
determining at least one environment detection terminal device as a target environment detection terminal device from the plurality of environment detection terminal devices, wherein the target environment detection terminal device is used for detecting a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, and the environment detection video comprises a plurality of frames of environment detection video frames;
for each target environment detection terminal device, performing video frame screening processing on an environment detection video frame included in an environment detection video obtained by the target environment detection terminal device to obtain a target environment detection video corresponding to the target environment detection terminal device, wherein the target environment detection video includes at least one frame of environment detection video frame;
and aiming at each target environment detection video, carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video, wherein the environment detection result is used for representing the object danger degree of a corresponding environment area, and the target object identification processing is used for identifying whether a target dangerous object exists.
2. The intelligent environment detection method according to claim 1, wherein the step of, for each target environment detection terminal device, performing video frame screening processing on an environment detection video frame included in an environment detection video obtained by the target environment detection terminal device to obtain a target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
and for each target environment detection terminal device, determining whether video frame screening processing needs to be performed on environment detection video frames included in the environment detection video obtained by the target environment detection terminal device based on the video frame rate information and the video frame number corresponding to the target environment detection terminal device, and performing video frame screening processing on the environment detection video frames included in the environment detection video when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device.
3. The intelligent environment detection method according to claim 2, wherein the step of determining, for each target environment detection terminal device, whether video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video obtained by the target environment detection terminal device based on the video frame rate information and the number of video frames corresponding to the target environment detection terminal device, and when it is determined that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video, performing video frame screening processing on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, determining a relative size relationship between video frame rate information corresponding to the target environment detection terminal device and pre-configured video frame rate threshold information, and determining a relative size relationship between the number of video frames corresponding to the target environment detection terminal device and a pre-configured video frame number threshold;
for each target environment detection terminal device, if the video frame rate information corresponding to the target environment detection terminal device is greater than or equal to the video frame rate threshold information and the number of video frames corresponding to the target environment detection terminal device is greater than or equal to the video frame number threshold, determining that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device; and if the video frame rate information corresponding to the target environment detection terminal device is smaller than the video frame rate threshold information and/or the number of video frames corresponding to the target environment detection terminal device is smaller than the video frame number threshold, determining that video frame screening processing does not need to be performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device;
and for each target environment detection terminal device, when determining that video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device, performing video frame screening processing on the environment detection video frames included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device.
4. The intelligent environment detection method according to claim 3, wherein, when it is determined that video frame screening processing needs to be performed on an environment detection video frame included in an environment detection video corresponding to each of the target environment detection terminal devices, the step of performing video frame screening processing on the environment detection video frame included in the environment detection video to obtain the target environment detection video corresponding to the target environment detection terminal device includes:
for each target environment detection terminal device, when it is determined that video frame screening processing needs to be performed on environment detection video frames included in an environment detection video corresponding to the target environment detection terminal device, determining a corresponding first screening proportion based on video frame rate information corresponding to the target environment detection terminal device, and determining a corresponding second screening proportion based on the number of video frames corresponding to the target environment detection terminal device, wherein the video frame rate information and the first screening proportion have a positive correlation, and the number of video frames and the second screening proportion have a positive correlation;
for each target environment detection terminal device, acquiring each historical environment detection result corresponding to the target environment detection video, performing fusion processing on each historical environment detection result corresponding to the target environment detection video to obtain a historical environment detection fusion result corresponding to the target environment detection video, and determining a third screening proportion based on the object danger degree represented by the historical environment detection fusion result, wherein the third screening proportion and the object danger degree represented by the historical environment detection fusion result have a negative correlation;
for each target environment detection terminal device, performing weighted summation calculation on the first screening proportion, the second screening proportion and a third screening proportion corresponding to the target environment detection terminal device to obtain a weighted screening proportion corresponding to the target environment detection terminal device, wherein a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the first screening proportion, and a weighting coefficient corresponding to the third screening proportion is greater than a weighting coefficient corresponding to the second screening proportion;
for each target environment detection terminal device, performing similarity calculation on every two adjacent frames of environment detection video frames included in the environment detection video corresponding to the target environment detection terminal device to obtain video frame similarity between the two adjacent frames of environment detection video frames;
for each target environment detection terminal device, performing segmentation processing on the environment detection video corresponding to the target environment detection terminal device based on a pre-configured first quantity value to obtain a plurality of video segments corresponding to the environment detection video, respectively determining the average value of the video frame similarities between every two adjacent frames of environment detection video frames in each video segment, and determining, based on the weighted screening proportion corresponding to the target environment detection terminal device, a corresponding proportional number of video segments having the largest average values as target video segments, wherein the number of environment detection video frames included in each video segment is equal to the first quantity value;
and screening out partial environment detection video frames in each target video segment corresponding to each target environment detection terminal device, and obtaining a target environment detection video corresponding to the target environment detection terminal device based on the environment detection video frames which are not screened out in the target video segment and the environment detection video frames included in other video segments.
5. The intelligent environment detection method according to claim 1, wherein the step of performing target object recognition processing on each frame of environment detection video frame included in the target environment detection video for each target environment detection video to obtain the environment detection result corresponding to the target environment detection video comprises:
respectively carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video based on a pre-trained neural network model aiming at each target environment detection video to obtain a target object identification result corresponding to each frame of environment detection video frame;
and aiming at each target environment detection video, fusing target object identification results corresponding to each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
6. The intelligent environment detection method according to claim 5, wherein the step of fusing, for each target environment detection video, the target object identification results corresponding to the environment detection video frames included in the target environment detection video to obtain the environment detection result corresponding to the target environment detection video comprises:
counting the number of first target object identification results in target object identification results corresponding to environment detection video frames included in each target environment detection video to obtain the number of identification result statistics corresponding to the target environment detection video, wherein the first target object identification results are used for representing that target dangerous objects exist in the corresponding environment detection video frames;
and calculating the ratio of the identification result statistical number corresponding to each target environment detection video to the number of the environment detection video frames included in the target environment detection video, and obtaining the environment detection result corresponding to the target environment detection video based on the ratio.
7. The intelligent environment detection method according to any one of claims 1 to 6, wherein the step of determining at least one environment detection terminal device among the plurality of environment detection terminal devices as a target environment detection terminal device comprises:
respectively acquiring a historical video set obtained by each environment detection terminal device of the plurality of environment detection terminal devices through historically detecting a corresponding environment area, to obtain a plurality of historical video sets corresponding to the plurality of environment detection terminal devices, wherein each historical video set includes a plurality of historical environment detection videos, and each historical environment detection video includes multiple frames of historical environment detection video frames that are consecutive in time sequence and are obtained by detecting the corresponding environment area;
for each historical video set in the plurality of historical video sets, performing video content comparison analysis on the historical environment detection videos included in the historical video set to obtain environment change characteristic information historically formed by the environment area corresponding to the historical video set;
and determining at least one environment detection terminal device as a target environment detection terminal device in the plurality of environment detection terminal devices based on the environment change characteristic information corresponding to each historical video set to obtain at least one target environment detection terminal device.
8. An intelligent environment detection system, applied to an environment detection server, wherein the environment detection server is in communication connection with a plurality of environment detection terminal devices, and the system comprises:
a target detection device determining module, configured to determine, among the multiple environment detection terminal devices, at least one environment detection terminal device as a target environment detection terminal device, where the target environment detection terminal device is configured to detect a corresponding environment area under the control of the environment detection server to obtain a current environment detection video, where the environment detection video includes multiple frames of environment detection video frames;
a video frame screening processing module, configured to perform video frame screening processing on an environment detection video frame included in an environment detection video detected by each target environment detection terminal device, to obtain a target environment detection video corresponding to the target environment detection terminal device, where the target environment detection video includes at least one frame of environment detection video frame;
and the video frame identification processing module is used for carrying out target object identification processing on each frame of environment detection video frame included in each target environment detection video aiming at each target environment detection video to obtain an environment detection result corresponding to the target environment detection video, wherein the environment detection result is used for representing the object danger degree of a corresponding environment area, and the target object identification processing is used for identifying whether a target dangerous object exists or not.
9. The intelligent environment detection system of claim 8, wherein the video frame screening processing module is specifically configured to:
for each target environment detection terminal device, performing frame rate identification processing on an environment detection video obtained by detection of the target environment detection terminal device to obtain video frame rate information corresponding to the target environment detection terminal device, and counting the number of environment detection video frames included in the environment detection video to obtain the number of video frames corresponding to the target environment detection terminal device;
and determining, based on the video frame rate information and the number of video frames corresponding to the target environment detection terminal device, whether video frame screening processing needs to be performed on the environment detection video frames included in the environment detection video detected by the target environment detection terminal device, and, when it is determined that such screening is needed, performing video frame screening processing on the environment detection video frames included in the environment detection video, so as to obtain the target environment detection video corresponding to the target environment detection terminal device.
10. The intelligent environment detection system of claim 8, wherein the video frame recognition processing module is specifically configured to:
respectively carrying out target object identification processing on each frame of environment detection video frame included in the target environment detection video based on a pre-trained neural network model aiming at each target environment detection video to obtain a target object identification result corresponding to each frame of environment detection video frame;
and aiming at each target environment detection video, fusing target object identification results corresponding to each frame of environment detection video frame included in the target environment detection video to obtain an environment detection result corresponding to the target environment detection video.
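The segment-based screening recited in claim 4 above can be sketched as follows. Every concrete choice here is an assumption: the weights merely satisfy the inequality the claim imposes (the history-based third weight dominates the other two), the similarity measure is caller-supplied, and "screening out partial frames" is realized as dropping every other frame, since the claim does not fix the dropped share:

```python
from typing import Callable, List, Sequence

def weighted_screening_ratio(first: float, second: float, third: float,
                             w1: float = 0.25, w2: float = 0.25,
                             w3: float = 0.5) -> float:
    """Weighted sum of the three screening proportions; the weight of the
    third (history-based) proportion exceeds each of the other two."""
    return w1 * first + w2 * second + w3 * third

def screen_video(
    frames: Sequence,
    frame_similarity: Callable[[object, object], float],
    weighted_ratio: float,  # weighted screening proportion, in (0, 1]
    segment_size: int,      # the pre-configured "first quantity value"
) -> List:
    """Split the video into segments of segment_size frames, rank the
    segments by mean adjacent-frame similarity, mark the top
    weighted_ratio share as target segments (high similarity means the
    frames are redundant), and drop every other frame inside each target
    segment; all other segments are kept whole."""
    segments = [list(frames[i:i + segment_size])
                for i in range(0, len(frames), segment_size)]

    def mean_similarity(segment: List) -> float:
        if len(segment) < 2:
            return 0.0
        sims = [frame_similarity(a, b)
                for a, b in zip(segment, segment[1:])]
        return sum(sims) / len(sims)

    n_target = max(1, round(weighted_ratio * len(segments)))
    ranked = sorted(range(len(segments)),
                    key=lambda i: mean_similarity(segments[i]),
                    reverse=True)
    targets = set(ranked[:n_target])

    kept: List = []
    for i, segment in enumerate(segments):
        # Target segments lose every other frame; others stay intact.
        kept.extend(segment[::2] if i in targets else segment)
    return kept
```

With two 4-frame segments and a weighted ratio of 0.5, only the more self-similar segment is thinned, which matches the intent of the claim: the most redundant footage is screened hardest, while footage from historically dangerous areas (a large third proportion lowers the ratio) is preserved.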
CN202210141710.0A 2022-02-16 2022-02-16 Intelligent environment detection system and method thereof Withdrawn CN114639035A (en)

Publications (1)

Publication Number Publication Date
CN114639035A (en) 2022-06-17

Family

ID=81945687


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220617