CN117173614A - Internet-based monitoring method and system - Google Patents
- Publication number: CN117173614A
- Application number: CN202311213694.2A
- Authority: CN (China)
- Prior art keywords: video frame, risk, target, monitoring, monitoring video
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides an internet-based monitoring method and system, and relates to the technical field of data processing. In the invention, a risk behavior supervision operation is performed on a target monitoring video acquired by a target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and each target monitoring object behavior corresponds to one frame in the target monitoring video; target monitoring video frames corresponding to the risk behavior combination are extracted from the target monitoring video to form a corresponding candidate risk monitoring video frame set; and video frame identification processing is performed on the candidate risk monitoring video frame set by using a target risk video frame identification neural network to obtain a corresponding target risk identification result. On this basis, the reliability of risk monitoring can be improved to a certain extent.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a monitoring method and system based on the Internet.
Background
With the continuous maturation of deep learning technology, the range of applications of neural networks based on deep learning has gradually widened; for example, a trained neural network can be used to perform video frame recognition on a monitoring video to obtain a corresponding risk recognition result. In the prior art, however, the monitoring video is generally identified in its entirety, or identified only after repeated video frames have been screened out, so the invalid video frames included in the monitoring video easily introduce interference, and the reliability is consequently not high.
Disclosure of Invention
Accordingly, the present invention is directed to an internet-based monitoring method and system, so as to improve the reliability of risk monitoring to a certain extent.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical scheme:
an internet-based monitoring method, comprising:
performing risk behavior supervision operation on a target monitoring video acquired by a target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video;
extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set;
and carrying out video frame identification processing on the candidate risk monitoring video frame set by utilizing a target risk video frame identification neural network so as to obtain a target risk identification result corresponding to the target monitoring video.
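The three claimed steps can be illustrated with a minimal pure-Python sketch. All function names, the behavior labels, and the count threshold standing in for the neural network are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch of the claimed pipeline; all names are assumptions.

def supervise_risk_behaviors(behavior_sequence, risky_labels):
    """Step 1: determine the risk behavior combination as the
    (frame index, behavior) pairs whose behavior is judged risky."""
    return [(i, b) for i, b in enumerate(behavior_sequence) if b in risky_labels]

def extract_candidate_frames(video_frames, risk_combination):
    """Step 2: extract the monitoring video frames corresponding to
    the risk behavior combination."""
    return [video_frames[i] for i, _ in risk_combination]

def recognize_risk(candidate_frames):
    """Step 3: stand-in for the target risk video frame identification
    neural network; here simply a count threshold."""
    return "risk" if len(candidate_frames) >= 2 else "no_risk"

behaviors = ["walk", "climb", "walk", "fall", "walk"]  # one behavior per frame
frames = [f"frame_{i}" for i in range(len(behaviors))]
combination = supervise_risk_behaviors(behaviors, {"climb", "fall"})
candidates = extract_candidate_frames(frames, combination)
result = recognize_risk(candidates)
```

The point of the ordering is that only the screened candidate frames, not the whole video, reach the recognition step.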
In some preferred embodiments, in the above internet-based monitoring method, the step of performing risk behavior supervision on the target monitoring video collected by the target internet terminal to determine a corresponding risk behavior combination includes:
Determining a target monitoring object behavior sequence corresponding to a target monitoring video based on a target monitoring video acquired by a target internet terminal, and then carrying out segmented extraction processing on the target monitoring object behavior sequence to form a first object behavior combination cluster corresponding to the target monitoring object behavior sequence;
analyzing and outputting a behavior screening coefficient corresponding to each first object behavior combination included in the first object behavior combination cluster, and screening a corresponding second object behavior combination cluster from the first object behavior combination cluster according to the behavior screening coefficient corresponding to each first object behavior combination, wherein each second object behavior combination included in the second object behavior combination cluster is a first object behavior combination in the first object behavior combination cluster whose corresponding behavior screening coefficient matches a preconfigured target screening rule;
outputting a corresponding third object behavior combination cluster based on the second object behavior combination cluster, wherein a third object behavior combination included in the third object behavior combination cluster belongs to a low-frequency object behavior combination;
and performing a comparative analysis of the third object behavior combinations based on the third object behavior combination cluster, and, in the case that the output data of the comparative analysis is determined to match a preconfigured target risk behavior rule, performing the risk behavior supervision operation to determine the corresponding risk behavior combination.
In some preferred embodiments, in the above internet-based monitoring method, the step of extracting the target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set includes:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in each risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in each risk behavior combination to form a corresponding adjacent behavior set;
extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
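A minimal sketch of this extraction step, under the assumption that "adjacent" means the immediately preceding and following behaviors in index order (the patent does not fix a definition of adjacency):

```python
# Hypothetical construction of the candidate risk monitoring video frame set:
# each risky behavior contributes its own frame plus its adjacent frames.

def build_candidate_set(video_frames, risk_indices):
    candidates = set()
    for idx in risk_indices:
        candidates.add(idx)                 # frame of the risky behavior itself
        for adj in (idx - 1, idx + 1):      # frames of the adjacent behaviors
            if 0 <= adj < len(video_frames):
                candidates.add(adj)
    return sorted(candidates)

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
candidate_indices = build_candidate_set(frames, [2, 5])
```

Using a set deduplicates frames that are adjacent to more than one risky behavior.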
In some preferred embodiments, in the above internet-based monitoring method, the step of using a target risk video frame recognition neural network to perform video frame recognition processing on the candidate risk monitoring video frame set to obtain a target risk recognition result corresponding to the target monitoring video includes:
Utilizing a target risk video frame identification neural network to respectively encode each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set so as to form a video frame encoding characteristic representation corresponding to each frame of candidate risk monitoring video frame;
carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and performing risk prediction on the fusion video frame coding feature representation by using the target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video.
In some preferred embodiments, in the above-mentioned internet-based monitoring method, the step of performing fusion processing on the video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations includes:
classifying each frame of candidate risk monitoring video frames included in the candidate risk monitoring video frame set according to whether the target monitoring object behaviors corresponding to the candidate risk monitoring video frames of each frame belong to the risk behavior combination or not so as to form a multi-frame first candidate risk monitoring video frame and a multi-frame second candidate risk monitoring video frame, wherein the target monitoring object behaviors corresponding to the first candidate risk monitoring video frame of each frame belong to the risk behavior combination, and the target monitoring object behaviors corresponding to the second candidate risk monitoring video frame of each frame do not belong to the risk behavior combination;
For each frame of the first candidate risk monitoring video frame, adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame to form an adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame;
and carrying out fusion processing on the adjusted video frame coding characteristic representation corresponding to the first candidate risk monitoring video frame of each frame to form a corresponding fusion video frame coding characteristic representation.
In some preferred embodiments, in the above internet-based monitoring method, the step of, for each frame, adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame to form an adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame includes:
for each frame of first candidate risk monitoring video frame, determining the second candidate risk monitoring video frames, among the multi-frame second candidate risk monitoring video frames, that are adjacent to the first candidate risk monitoring video frame; performing dot multiplication processing on the video frame coding feature representation corresponding to the first candidate risk monitoring video frame and the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame, so as to obtain a feature representation dot product between the two; and performing weighting processing on the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame according to the corresponding feature representation dot product, so as to obtain a weighted video frame coding feature representation corresponding to that adjacent second candidate risk monitoring video frame;
And superposing the video frame coding feature representation corresponding to the first candidate risk monitoring video frame and the weighted video frame coding feature representation corresponding to each frame adjacent to the second candidate risk monitoring video frame corresponding to the first candidate risk monitoring video frame so as to form the adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame.
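The adjustment described above resembles an unnormalized attention step: each adjacent second-frame representation is weighted by its dot product with the first-frame representation, and the weighted representations are superposed onto the original. A minimal sketch, assuming plain list-based vectors and no normalization of the dot-product weights (the patent mentions none):

```python
# Sketch of the adjustment step for one first candidate frame; the vector
# values are illustrative assumptions.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def adjust_representation(first_rep, adjacent_second_reps):
    adjusted = list(first_rep)
    for rep in adjacent_second_reps:
        weight = dot(first_rep, rep)        # feature representation dot product
        for k in range(len(adjusted)):
            adjusted[k] += weight * rep[k]  # superpose the weighted representation
    return adjusted

first = [1.0, 0.0]                          # first candidate frame's features
neighbours = [[0.5, 0.5], [0.0, 1.0]]       # adjacent second candidate frames
adjusted = adjust_representation(first, neighbours)
```

With these numbers only the first neighbour contributes, since the second is orthogonal to the first-frame representation.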
In some preferred embodiments, in the above-mentioned internet-based monitoring method, the step of performing fusion processing on the adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame for each frame to form a corresponding fused video frame coding feature representation includes:
sequentially splicing the adjusted video frame coding feature representations corresponding to the first candidate risk monitoring video frame of each frame according to the time sequence of the first candidate risk monitoring video frame to form corresponding spliced video frame coding feature representations, wherein the spliced video frame coding feature representations are multi-dimensional feature representations and comprise a plurality of sub-spliced video frame coding feature representations;
for each sub-stitched video frame coding feature representation, performing dot multiplication processing on the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation respectively to obtain a sub-feature representation dot product between the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation, performing weighting processing on the corresponding other sub-stitched video frame coding feature representations according to the corresponding sub-feature representation dot product to obtain weighted sub-stitched video frame coding feature representations corresponding to the other sub-stitched video frame coding feature representations, and superposing the sub-stitched video frame coding feature representation and the weighted sub-stitched video frame coding feature representations corresponding to each other sub-stitched video frame coding feature representation to form a superposed sub-stitched video frame coding feature representation corresponding to the sub-stitched video frame coding feature representation;
And carrying out mean value fusion on each sub-spliced video frame coding feature representation corresponding to the superimposed sub-spliced video frame coding feature representation to form a corresponding fusion video frame coding feature representation, wherein the fusion video frame coding feature representation comprises a feature representation of one dimension.
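The splice-and-fuse step likewise resembles unnormalized self-attention followed by mean pooling: each sub-spliced representation is superposed with every other one weighted by their mutual dot product, and the results are mean-fused into one representation. A minimal sketch, again assuming plain list-based vectors and unnormalized dot-product weights:

```python
# Sketch of the splice-and-fuse step; the sub-spliced representations below
# are illustrative assumptions.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fuse(sub_reps):
    superposed = []
    for i, rep_i in enumerate(sub_reps):
        acc = list(rep_i)
        for j, rep_j in enumerate(sub_reps):
            if i == j:
                continue
            weight = dot(rep_i, rep_j)       # sub-feature representation dot product
            for k in range(len(acc)):
                acc[k] += weight * rep_j[k]  # weighted superposition
        superposed.append(acc)
    dim = len(superposed[0])
    # mean fusion into a single one-dimensional feature representation
    return [sum(rep[k] for rep in superposed) / len(superposed) for k in range(dim)]

sub_spliced = [[1.0, 0.0], [0.0, 1.0]]       # adjusted reps in time order
fused = fuse(sub_spliced)
```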
The embodiment of the invention also provides an internet-based monitoring system, which comprises:
the risk behavior combination determining module is used for performing risk behavior supervision operation on the target monitoring video acquired by the target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video;
the monitoring video frame extraction module is used for extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video so as to form a corresponding candidate risk monitoring video frame set;
and the video frame identification module is used for utilizing the target risk video frame identification neural network to carry out video frame identification processing on the candidate risk monitoring video frame set so as to obtain a target risk identification result corresponding to the target monitoring video.
In some preferred embodiments, in the above-mentioned internet-based monitoring system, the monitoring video frame extraction module is specifically configured to:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in each risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in each risk behavior combination to form a corresponding adjacent behavior set;
extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
In some preferred embodiments, in the above-mentioned internet-based monitoring system, the video frame identification module is specifically configured to:
utilizing a target risk video frame identification neural network to respectively encode each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set so as to form a video frame encoding characteristic representation corresponding to each frame of candidate risk monitoring video frame;
Carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and performing risk prediction on the fusion video frame coding feature representation by using the target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video.
According to the monitoring method and system based on the Internet, risk behavior supervision operation can be performed on the target monitoring video collected by the target Internet terminal so as to determine corresponding risk behavior combinations, wherein the risk behavior combinations comprise at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video; extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set; and carrying out video frame identification processing on the candidate risk monitoring video frame set by utilizing the target risk video frame identification neural network so as to obtain a corresponding target risk identification result. Based on the method, before the video frame identification processing, the determination of the risk behavior combination is performed, so that the candidate risk monitoring video frame set can be screened out based on the determination, and then the video frame identification processing is performed, so that the basis of the video frame identification processing is more reliable and effective, the reliability of risk monitoring can be improved to a certain extent, and the problem of low reliability of risk monitoring in the prior art is solved.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of an internet-based monitoring platform according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of steps included in the internet-based monitoring method according to the embodiment of the present invention.
Fig. 3 is a schematic diagram of each module included in the internet-based monitoring system according to the embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the embodiment of the invention provides an internet-based monitoring platform. Wherein the internet-based monitoring platform may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize transmission or interaction of data. For example, electrical connection may be made to each other via one or more communication buses or signal lines. The memory may store at least one software functional module (computer program) that may exist in the form of software or firmware. The processor may be configured to execute an executable computer program stored in the memory, thereby implementing the internet-based monitoring method provided by the embodiment of the present invention (as described later).
Specifically, in some embodiments, the memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In particular, in some embodiments, the internet-based monitoring platform may be a server with data processing capabilities.
With reference to fig. 2, the embodiment of the invention further provides an internet-based monitoring method, which can be applied to the internet-based monitoring platform. The method steps defined by the flow related to the internet-based monitoring method can be realized by the internet-based monitoring platform.
The specific flow shown in fig. 2 will be described in detail.
And step P100, performing risk behavior supervision operation on the target monitoring video acquired by the target internet terminal to determine a corresponding risk behavior combination.
In the embodiment of the invention, the internet-based monitoring platform can perform risk behavior supervision operation on the target monitoring video acquired by the target internet terminal so as to determine the corresponding risk behavior combination. The risk behavior combination includes at least one target monitored object behavior, one of the target monitored object behaviors corresponding to one frame in the target monitored video.
And step P200, extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set.
In the embodiment of the invention, the internet-based monitoring platform can extract the target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set.
And step P300, utilizing a target risk video frame recognition neural network to perform video frame recognition processing on the candidate risk monitoring video frame set so as to obtain a target risk recognition result corresponding to the target monitoring video.
In the embodiment of the invention, the monitoring platform based on the internet can utilize the target risk video frame recognition neural network to perform video frame recognition processing on the candidate risk monitoring video frame set so as to obtain a target risk recognition result corresponding to the target monitoring video.
Based on this, as in the foregoing steps P100, P200 and P300, the risk behavior combination is determined before the video frame identification processing, so that the candidate risk monitoring video frame set can be screened out based on the risk behavior combination and the video frame identification processing is then performed. The basis of the video frame identification processing is thereby more reliable and effective, so the reliability of risk monitoring can be improved to a certain extent, alleviating the problem of low risk monitoring reliability in the prior art.
Specifically, in some embodiments, the step P100 described above may include step S110, step S120, step S130, and step S140.
Step S110, determining a target monitoring object behavior sequence corresponding to a target monitoring video based on the target monitoring video acquired by a target internet terminal, and then carrying out segmented extraction processing on the target monitoring object behavior sequence to form a first object behavior combination cluster corresponding to the target monitoring object behavior sequence.
In the embodiment of the invention, the internet-based monitoring platform can determine the target monitoring object behavior sequence corresponding to the target monitoring video based on the target monitoring video acquired by the target internet terminal, and then perform segmented extraction processing on the target monitoring object behavior sequence to form the first object behavior combination cluster corresponding to the target monitoring object behavior sequence.
Step S120, analyzing and outputting a behavior screening coefficient corresponding to each first object behavior combination included in the first object behavior combination cluster, and screening a corresponding second object behavior combination cluster from the first object behavior combination clusters according to the behavior screening coefficient corresponding to each first object behavior combination.
In the embodiment of the invention, the internet-based monitoring platform can analyze and output the behavior screening coefficient corresponding to each first object behavior combination included in the first object behavior combination cluster, and screen out a corresponding second object behavior combination cluster from the first object behavior combination cluster according to the behavior screening coefficient corresponding to each first object behavior combination. Each second object behavior combination included in the second object behavior combination cluster is a first object behavior combination in the first object behavior combination cluster whose corresponding behavior screening coefficient matches a preconfigured target screening rule (illustratively, the first object behavior combinations whose corresponding behavior screening coefficients are greater than or equal to a configured behavior screening coefficient reference value may be combined to form the second object behavior combination cluster).
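The screening in step S120 can be sketched as a simple threshold filter; the coefficient values and the reference value below are illustrative assumptions, not values given in the patent:

```python
# Hypothetical threshold-based screening: keep the first object behavior
# combinations whose screening coefficient reaches the reference value.

def screen_combinations(combinations, coefficients, reference_value):
    return [c for c, k in zip(combinations, coefficients) if k >= reference_value]

first_cluster = ["combo_a", "combo_b", "combo_c"]
screening_coefficients = [0.9, 0.3, 0.6]     # assumed analysis outputs
second_cluster = screen_combinations(first_cluster, screening_coefficients, 0.5)
```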
Step S130, outputting a corresponding third object behavior combination cluster based on the second object behavior combination cluster.
In the embodiment of the present invention, the internet-based monitoring platform may output a corresponding third object behavior combination cluster based on the second object behavior combination cluster. The third object behavior combinations included in the third object behavior combination cluster belong to low-frequency object behavior combinations (for example, a low-frequency object behavior combination may be one whose number proportion in the behavior combination database is smaller than a proportion reference value, where the proportion reference value may be configured according to actual requirements).
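The low-frequency screening of step S130 can be sketched as a proportion filter over an assumed behavior combination database; the counts and the proportion reference value are illustrative:

```python
# Hypothetical low-frequency screening: keep the combinations whose share of
# the behavior combination database is below the proportion reference value.

def low_frequency_combinations(cluster, database_counts, proportion_reference):
    total = sum(database_counts.values())
    return [c for c in cluster
            if database_counts.get(c, 0) / total < proportion_reference]

database = {"combo_a": 90, "combo_b": 5, "combo_c": 5}  # assumed counts
second_cluster = ["combo_a", "combo_b", "combo_c"]
third_cluster = low_frequency_combinations(second_cluster, database, 0.10)
```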
And step S140, based on the third object behavior combination cluster, performing a comparative analysis of the third object behavior combinations, and performing the risk behavior supervision operation when the output data of the comparative analysis is determined to match the preconfigured target risk behavior rule.
In the embodiment of the invention, the internet-based monitoring platform may perform a comparative analysis of the third object behavior combinations based on the third object behavior combination cluster, and perform the risk behavior supervision operation when the output data of the comparative analysis is determined to match the preconfigured target risk behavior rule.
Based on the foregoing steps, that is, step S110, step S120, step S130 and step S140, before the risk behavior supervision operation is performed, behavior combinations are not only formed but also screened, so the operation basis is more reliable. The reliability of video risk behavior monitoring can thus be improved to a certain extent, alleviating the problem of low reliability in the prior art.
Specifically, in some embodiments, in step S110, the step of determining, based on the target monitoring video collected by the target internet terminal, a target monitoring object behavior sequence corresponding to the target monitoring video, and then performing a segment extraction process on the target monitoring object behavior sequence to form a first object behavior combination cluster corresponding to the target monitoring object behavior sequence may include:
Performing behavior recognition processing on the multiple frames of target monitoring video frames included in the target monitoring video acquired by the target internet terminal, respectively, to obtain multiple target monitoring object behaviors corresponding to the multiple frames of target monitoring video frames, and sorting the multiple target monitoring object behaviors according to the time sequence of the corresponding target monitoring video frames to form a corresponding target monitoring object behavior sequence;
extracting a preconfigured first sequence position number and a preconfigured second sequence position number, the first sequence position number being smaller than the second sequence position number (illustratively, the specific values of the first sequence position number and the second sequence position number are not limited, and may be, for example, 1 and 4, respectively);
performing sliding window processing on the target monitoring object behavior sequence according to the first sequence position number to form a plurality of sliding window monitoring object behavior sequences corresponding to the first sequence position number, wherein the number of sequence positions of the sliding window monitoring object behavior sequences is equal to the first sequence position number;
performing at least one incremental adjustment on the number of first sequence positions to obtain an adjusted number of first sequence positions until the number of first sequence positions after current adjustment is equal to the number of second sequence positions, and performing sliding window processing on the target monitoring object behavior sequence according to the number of first sequence positions after adjustment obtained by performing incremental adjustment each time to form a plurality of sliding window monitoring object behavior sequences corresponding to the number of first sequence positions;
And combining each sliding window monitoring object behavior sequence as a first object behavior to form a first object behavior combined cluster corresponding to the target monitoring object behavior sequence.
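The multi-scale sliding-window extraction of the steps above can be sketched as follows. All function and variable names are illustrative assumptions, and the window bounds 1 and 4 are only the exemplary values mentioned in the text:

```python
def build_combination_cluster(behavior_sequence, min_len=1, max_len=4):
    """Slide windows of every length from min_len to max_len over the
    time-ordered behavior sequence; each window becomes one first object
    behavior combination in the cluster."""
    cluster = []
    for length in range(min_len, max_len + 1):          # incremental adjustment of window size
        for start in range(len(behavior_sequence) - length + 1):
            cluster.append(tuple(behavior_sequence[start:start + length]))
    return cluster
```

For a three-behavior sequence with window sizes 1 and 2, this yields the three single behaviors followed by the two adjacent pairs.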
Specifically, in some embodiments, the step S120, that is, the step of analyzing and outputting the behavior screening coefficient corresponding to each first object behavior combination included in the first object behavior combination cluster, may include:
for any one to-be-processed first object behavior combination of which the number of object behaviors exceeds the number of preconfigured reference object behaviors in the first object behavior combination cluster, analyzing and outputting an internal combination matching coefficient corresponding to the to-be-processed first object behavior combination (illustratively, the specific numerical value of the number of the reference object behaviors is not limited, and can be configured according to actual application requirements);
for any one to-be-processed first object behavior combination of which the number of object behaviors in the first object behavior combination cluster exceeds the number of pre-configured reference object behaviors, analyzing and outputting an external combination matching coefficient corresponding to the to-be-processed first object behavior combination;
analyzing a behavior screening coefficient corresponding to the first object behavior combination to be processed based on an internal combination matching coefficient corresponding to the first object behavior combination to be processed and an external combination matching coefficient corresponding to the first object behavior combination to be processed (for example, weighting summation calculation can be performed on the internal combination matching coefficient corresponding to the first object behavior combination to be processed and the external combination matching coefficient corresponding to the first object behavior combination to be processed to obtain the corresponding behavior screening coefficient);
And for any first object behavior combination to be analyzed in the first object behavior combination cluster whose number of object behaviors does not exceed the reference object behavior number, analyzing and outputting an external combination matching coefficient corresponding to the first object behavior combination to be analyzed, and determining a behavior screening coefficient corresponding to the first object behavior combination to be analyzed according to that external combination matching coefficient, wherein a negative correlation exists between the behavior screening coefficient and the external combination matching coefficient (namely, the larger the external combination matching coefficient, the smaller the behavior screening coefficient).
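A minimal sketch of one possible reading of this screening scheme. The equal weights for the weighted summation, the simple negation used for the negative correlation, and all names are illustrative assumptions, not anything mandated by the text:

```python
def behavior_screening_coefficient(num_behaviors, reference_count,
                                   internal_coef=None, external_coef=0.0,
                                   w_internal=0.5, w_external=0.5):
    """Combinations longer than the reference count: weighted sum of the
    internal and external combination matching coefficients.
    Shorter combinations: a value negatively correlated with the
    external coefficient alone (negation is one such mapping)."""
    if num_behaviors > reference_count:
        return w_internal * internal_coef + w_external * external_coef
    return -external_coef
```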
Specifically, in some embodiments, the step of analyzing and outputting the internal combination matching coefficient corresponding to the to-be-processed first object behavior combination for any one to-be-processed first object behavior combination in which the number of object behaviors in the first object behavior combination cluster exceeds the number of pre-configured reference object behaviors may include:
for any one to-be-processed first object behavior combination with the number of object behaviors exceeding the number of preconfigured reference object behaviors in the first object behavior combination cluster, forming an object behavior pair corresponding to the to-be-processed first object behavior combination, wherein any one of the object behavior pairs comprises a plurality of local first object behavior combinations formed by performing object behavior separation on the to-be-processed first object behavior combination, and each local first object behavior combination comprises one object behavior in the to-be-processed first object behavior combination or comprises a plurality of object behaviors which are continuous in time sequence in the to-be-processed first object behavior combination;
For any one of the object behavior pairs, analyzing and outputting the quantity proportion of each local first object behavior combination in the object behavior database;
based on the number proportion of each local first object behavior combination in the object behavior database, combined with the number proportion of the first object behavior combination to be processed in the object behavior database, analyzing and outputting an internal combination matching coefficient of the object behavior pair (for example, the number proportions of the local first object behavior combinations may be multiplied together, the ratio of the number proportion of the first object behavior combination to be processed to that product may then be calculated, and the logarithm of the ratio may be taken to obtain the internal combination matching coefficient of the object behavior pair; the internal combination matching coefficient may be positively correlated with the result of the logarithm operation, for example, the logarithm may be weighted by the number proportion of the first object behavior combination to be processed in the object behavior database to obtain the internal combination matching coefficient);
And marking the minimum value in the internal combination matching coefficient of each object behavior pair so as to mark the minimum internal combination matching coefficient as the internal combination matching coefficient corresponding to the first object behavior combination to be processed.
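The internal combination matching coefficient described above resembles a pointwise-mutual-information score computed over every contiguous split of the combination, with the minimum kept across splits. A hedged sketch under that reading; the frequency table input and all names are illustrative:

```python
import math
from itertools import combinations

def internal_matching_coefficient(combo, freq):
    """freq maps a behavior tuple to its number proportion in the object
    behavior database. For every way of cutting `combo` (len >= 2) into
    contiguous parts, compute log(freq(combo) / prod(freq(part))) and
    keep the minimum across all cuts."""
    n = len(combo)
    best = None
    for k in range(1, n):                       # number of cut points
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            parts = [combo[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            prod = math.prod(freq[p] for p in parts)
            score = math.log(freq[combo] / prod)
            best = score if best is None else min(best, score)
    return best
```

When a pair is exactly as frequent as independence would predict, the coefficient is zero; rarer-than-independent pairs score negative.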
Specifically, in some embodiments, the step of analyzing and outputting the external combination matching coefficient corresponding to the to-be-processed first object behavior combination for any one to-be-processed first object behavior combination in which the number of object behaviors in the first object behavior combination cluster exceeds the number of pre-configured reference object behaviors may include:
extracting adjacent object behavior clusters corresponding to the first object behavior combination to be processed from the target monitoring object behavior sequence, wherein each adjacent object behavior cluster comprises one adjacent object behavior or a plurality of adjacent object behaviors;
analyzing and outputting the number proportion, in the object behavior database, of each spliced first object behavior combination formed by splicing each adjacent object behavior with the first object behavior combination to be processed respectively;
and analyzing and outputting an external combination matching coefficient corresponding to the first object behavior combination to be processed based on the quantity proportion of each spliced first object behavior combination in the object behavior database.
Specifically, in some embodiments, the step of analyzing and outputting the number proportion, in the object behavior database, of each spliced first object behavior combination formed by splicing each adjacent object behavior with the first object behavior combination to be processed may include:
determining the number proportion, in the object behavior database, of each forward spliced first object behavior combination formed by splicing each preceding adjacent object behavior in the adjacent object behavior cluster with the first object behavior combination to be processed;
and determining the number proportion, in the object behavior database, of each backward spliced first object behavior combination formed by splicing each following adjacent object behavior in the adjacent object behavior cluster with the first object behavior combination to be processed.
Specifically, in some embodiments, the step of analyzing and outputting the external combination matching coefficient corresponding to the to-be-processed first object behavior combination based on the number proportion of each of the spliced first object behavior combinations in the object behavior database may include:
Performing a logarithmic negative correlation calculation on the number proportion of each forward spliced first object behavior combination in the object behavior database (for example, a logarithm operation may be performed first, and a negative correlation value of the result of the logarithm operation may then be taken), so as to output a corresponding first logarithmic negative correlation value; and performing a logarithmic negative correlation calculation on the number proportion of each backward spliced first object behavior combination in the object behavior database so as to output a corresponding second logarithmic negative correlation value;
based on the first log-negative correlation value and the second log-negative correlation value, an external combination matching coefficient corresponding to the first object behavior combination to be processed is analytically output (illustratively, a smaller value of the first log-negative correlation value and the second log-negative correlation value may be used as the external combination matching coefficient corresponding to the first object behavior combination to be processed).
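The external-coefficient steps above can be sketched as follows. This reads the "logarithmic negative correlation calculation" as `-log(p)` and, per the exemplary reading in the text, takes the smaller of the forward and backward values; both choices, and the per-side minimum aggregation, are assumptions:

```python
import math

def external_matching_coefficient(forward_props, backward_props):
    """forward_props / backward_props: database number proportions of the
    combinations obtained by splicing each preceding / following adjacent
    behavior onto the combination to be processed. Returns the smaller of
    the two sides' log-negative correlation values."""
    first = min(-math.log(p) for p in forward_props)    # forward splices
    second = min(-math.log(p) for p in backward_props)  # backward splices
    return min(first, second)
```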
Specifically, in some embodiments, the step S140, that is, the step of performing the comparative analysis of the third object behavior combination based on the third object behavior combination cluster, may include:
Determining a first quantity proportion of a target third object behavior combination in a first object behavior database formed in a first time period, and determining a second quantity proportion of the target third object behavior combination in a second object behavior database formed in a second time period, wherein the second time period does not coincide with the first time period, and the target third object behavior combination is any one third object behavior combination in the third object behavior combination cluster (illustratively, the first time period precedes the second time period and may be adjacent to it);
calculating a quotient of the first quantity proportion and the second quantity proportion;
based on the quotient between the first quantitative proportion and the second quantitative proportion, the output data of the comparison analysis corresponding to the target third object behavior combination is obtained (the quotient can be directly taken as the output data of the corresponding comparison analysis in an exemplary manner).
Specifically, in some embodiments, the step in step S140 of performing the risk behavior supervision operation in a case that the output data of the comparison analysis is determined to match the preconfigured target risk behavior rule may include:
Extracting a pre-configured risk configuration coefficient (the risk configuration coefficient is exemplarily configured according to actual requirements and is not specifically limited herein);
and in a case that the quotient represented by the comparison analysis output data corresponding to the target third object behavior combination exceeds the risk configuration coefficient, obtaining an analysis result matching the preconfigured target risk behavior rule (namely, indicating that the target third object behavior combination belongs to a risk behavior combination), and performing the risk behavior supervision operation on the behavior corresponding to the target third object behavior combination.
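The quotient test of step S140 can be sketched as below; raw counts and totals stand in for the two databases' quantity proportions, and all names are illustrative assumptions:

```python
def is_risk_combination(count_1, total_1, count_2, total_2, risk_coefficient):
    """Quotient of the combination's proportion in the (earlier) first
    time period's database over its proportion in the (later) second
    period's database; the combination matches the target risk behavior
    rule when the quotient exceeds the preconfigured risk coefficient."""
    first_proportion = count_1 / total_1
    second_proportion = count_2 / total_2
    quotient = first_proportion / second_proportion
    return quotient > risk_coefficient
```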
Specifically, in some embodiments, the step P200 of extracting the target surveillance video frame corresponding to the risk behavior combination from the target surveillance video to form the corresponding candidate risk surveillance video frame set may include:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in each risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in each risk behavior combination to form a corresponding adjacent behavior set;
Extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
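One possible sketch of this candidate-frame collection, assuming "adjacent" means the behaviors at index ±1 in the behavior sequence (an illustrative assumption; the text does not fix the adjacency radius):

```python
def build_candidate_frame_set(frames, risk_behavior_indices):
    """frames[i] is the target monitoring video frame of behavior i.
    Collect the frame of every behavior in a risk combination plus the
    frames of its temporally adjacent behaviors, deduplicated and kept
    in time order."""
    selected = set()
    for idx in risk_behavior_indices:
        for j in (idx - 1, idx, idx + 1):   # behavior itself and its neighbors
            if 0 <= j < len(frames):
                selected.add(j)
    return [frames[j] for j in sorted(selected)]
```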
Specifically, in some embodiments, the step P300 of using the target risk video frame recognition neural network to perform video frame recognition processing on the candidate risk monitoring video frame set to obtain a target risk recognition result corresponding to the target monitoring video may include:
utilizing a target risk video frame to identify a neural network (such as an included coding unit), and respectively carrying out coding processing on each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set so as to form a video frame coding characteristic representation corresponding to each frame of candidate risk monitoring video frame;
carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and identifying a neural network (such as a prediction unit which can comprise a softmax function) by using the target risk video frame, and performing risk prediction on the fusion video frame coding feature representation to obtain a target risk identification result corresponding to the target monitoring video.
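The prediction unit mentioned above (which, per the text, may include a softmax function) might look like this minimal sketch; `W` and `b` are placeholder learned parameters, and the linear layer is an assumption rather than anything the text specifies:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class logits
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_risk(fused_feature, W, b):
    """Map the one-dimensional fusion video frame coding feature
    representation to a probability distribution over risk classes."""
    return softmax(W @ fused_feature + b)
```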
Specifically, in some embodiments, the step of performing fusion processing on the video frame coding feature representations corresponding to the candidate risk monitoring video frames for each frame to form corresponding fused video frame coding feature representations may include:
classifying each frame of candidate risk monitoring video frames included in the candidate risk monitoring video frame set according to whether the target monitoring object behaviors corresponding to the candidate risk monitoring video frames of each frame belong to the risk behavior combination or not so as to form a multi-frame first candidate risk monitoring video frame and a multi-frame second candidate risk monitoring video frame, wherein the target monitoring object behaviors corresponding to the first candidate risk monitoring video frame of each frame belong to the risk behavior combination, and the target monitoring object behaviors corresponding to the second candidate risk monitoring video frame of each frame do not belong to the risk behavior combination;
for each frame of the first candidate risk monitoring video frame, adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame to form an adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame;
And carrying out fusion processing on the adjusted video frame coding characteristic representation corresponding to the first candidate risk monitoring video frame of each frame to form a corresponding fusion video frame coding characteristic representation.
Specifically, in some embodiments, the step of adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame for each frame to form the adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame may include:
for each frame of the first candidate risk monitoring video frame, determining, among the second candidate risk monitoring video frames, each adjacent second candidate risk monitoring video frame corresponding to the first candidate risk monitoring video frame, performing dot multiplication processing on the video frame coding feature representation corresponding to the first candidate risk monitoring video frame and the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame respectively to obtain a feature representation dot product between the two, and performing weighting processing on the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame according to the corresponding feature representation dot product to obtain a weighted video frame coding feature representation corresponding to that adjacent second candidate risk monitoring video frame;
And superposing the video frame coding feature representation corresponding to the first candidate risk monitoring video frame and the weighted video frame coding feature representation corresponding to each frame adjacent to the second candidate risk monitoring video frame corresponding to the first candidate risk monitoring video frame so as to form the adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame.
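The dot-multiplication, weighting, and superposition steps above can be sketched with NumPy as an unnormalized attention-like update; all names are illustrative, and the lack of score normalization follows the text literally rather than standard attention practice:

```python
import numpy as np

def adjust_first_frame_features(first_feats, neighbor_feats):
    """Weight each adjacent second-candidate frame's feature vector by
    its dot product with the first candidate frame's feature vector,
    then superpose (sum) the weighted vectors onto the first frame's
    representation."""
    adjusted = first_feats.copy()
    for nb in neighbor_feats:
        weight = float(np.dot(first_feats, nb))   # feature representation dot product
        adjusted = adjusted + weight * nb         # superposition of the weighted neighbor
    return adjusted
```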
Specifically, in some embodiments, the step of performing fusion processing on the adjusted video frame coding feature representations corresponding to the first candidate risk monitoring video frame for each frame to form corresponding fused video frame coding feature representations may include:
sequentially splicing the adjusted video frame coding feature representations corresponding to the first candidate risk monitoring video frame of each frame according to the time sequence of the first candidate risk monitoring video frame to form corresponding spliced video frame coding feature representations, wherein the spliced video frame coding feature representations are multi-dimensional feature representations and comprise a plurality of sub-spliced video frame coding feature representations;
for each sub-stitched video frame coding feature representation, performing dot multiplication processing on the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation respectively to obtain a sub-feature representation dot product between the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation, performing weighting processing on the corresponding other sub-stitched video frame coding feature representations according to the corresponding sub-feature representation dot product to obtain weighted sub-stitched video frame coding feature representations corresponding to the other sub-stitched video frame coding feature representations, and superposing the sub-stitched video frame coding feature representation and the weighted sub-stitched video frame coding feature representations corresponding to each other sub-stitched video frame coding feature representation to form a superposed sub-stitched video frame coding feature representation corresponding to the sub-stitched video frame coding feature representation;
And performing mean value fusion (namely, averaging the feature representations) on the superposed sub-spliced video frame coding feature representations corresponding to the respective sub-spliced video frame coding feature representations to form a corresponding fusion video frame coding feature representation, wherein the fusion video frame coding feature representation is a one-dimensional feature representation.
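The stitching, pairwise weighting, superposition, and mean fusion above can be sketched as a simplified self-attention pass. A real network would use learned projections and normalized scores; excluding each row's own contribution from the weighting is an assumption matching the "other sub-spliced representations" wording:

```python
import numpy as np

def fuse_adjusted_features(adjusted_feats):
    """adjusted_feats: per-frame adjusted feature vectors in time order.
    Stack them into the stitched representation, superpose each row with
    the other rows weighted by pairwise dot products, then mean-fuse the
    rows into a single one-dimensional representation."""
    X = np.stack(adjusted_feats)       # (num_frames, dim) stitched representation
    scores = X @ X.T                   # sub-feature representation dot products
    np.fill_diagonal(scores, 0.0)      # weight only the *other* sub-representations
    attended = X + scores @ X          # superposed sub-spliced representations
    return attended.mean(axis=0)       # mean fusion -> one-dimensional feature
```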
With reference to fig. 3, the embodiment of the invention further provides an internet-based monitoring system, which can be applied to the internet-based monitoring platform. Wherein, the internet-based monitoring system may include:
the risk behavior combination determining module is used for performing risk behavior supervision operation on the target monitoring video acquired by the target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video;
the monitoring video frame extraction module is used for extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video so as to form a corresponding candidate risk monitoring video frame set;
and the video frame identification module is used for utilizing the target risk video frame identification neural network to carry out video frame identification processing on the candidate risk monitoring video frame set so as to obtain a target risk identification result corresponding to the target monitoring video.
Specifically, in some embodiments, the surveillance video frame extraction module is specifically configured to:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in each risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in each risk behavior combination to form a corresponding adjacent behavior set;
extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
Specifically, in some embodiments, the video frame identification module is specifically configured to:
utilizing a target risk video frame identification neural network to respectively encode each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set so as to form a video frame encoding characteristic representation corresponding to each frame of candidate risk monitoring video frame;
Carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and performing risk prediction on the fusion video frame coding feature representation by using the target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video.
In summary, according to the internet-based monitoring method and system provided by the invention, risk behavior supervision operation can be performed on the target monitoring video collected by the target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video; extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set; and carrying out video frame identification processing on the candidate risk monitoring video frame set by utilizing the target risk video frame identification neural network so as to obtain a corresponding target risk identification result. Based on the method, before the video frame identification processing, the determination of the risk behavior combination is performed, so that the candidate risk monitoring video frame set can be screened out based on the determination, and then the video frame identification processing is performed, so that the basis of the video frame identification processing is more reliable and effective, the reliability of risk monitoring can be improved to a certain extent, and the problem of low reliability of risk monitoring in the prior art is solved.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An internet-based monitoring method, comprising:
performing risk behavior supervision operation on a target monitoring video acquired by a target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video;
extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video to form a corresponding candidate risk monitoring video frame set;
and carrying out video frame identification processing on the candidate risk monitoring video frame set by utilizing a target risk video frame identification neural network so as to obtain a target risk identification result corresponding to the target monitoring video.
2. The internet-based monitoring method according to claim 1, wherein the step of performing risk behavior supervision on the target monitoring video collected by the target internet terminal to determine the corresponding risk behavior combination includes:
Determining a target monitoring object behavior sequence corresponding to a target monitoring video based on a target monitoring video acquired by a target internet terminal, and then carrying out segmented extraction processing on the target monitoring object behavior sequence to form a first object behavior combination cluster corresponding to the target monitoring object behavior sequence;
analyzing and outputting a behavior screening coefficient corresponding to each first object behavior combination included in the first object behavior combination cluster, screening a corresponding second object behavior combination cluster from the first object behavior combination cluster according to the behavior screening coefficient corresponding to each first object behavior combination, wherein the second object behavior combination included in the second object behavior combination cluster belongs to a first object behavior combination in which the corresponding behavior screening coefficient in the first object behavior combination cluster is matched with a preset target screening rule;
outputting a corresponding third object behavior combination cluster based on the second object behavior combination cluster, wherein a third object behavior combination included in the third object behavior combination cluster belongs to a low-frequency object behavior combination;
and performing comparison analysis of the third object behavior combinations based on the third object behavior combination cluster, and performing a risk behavior supervision operation to determine a corresponding risk behavior combination in a case that the output data of the comparison analysis is determined to match a preconfigured target risk behavior rule.
3. The internet-based surveillance method of claim 1, wherein the step of extracting the target surveillance video frames corresponding to the risk behavior combination from the target surveillance video to form a corresponding set of candidate risk surveillance video frames comprises:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in each risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in each risk behavior combination to form a corresponding adjacent behavior set;
extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
4. The internet-based monitoring method according to any one of claims 1-3, wherein the step of performing video frame identification processing on the candidate risk monitoring video frame set by using a target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video comprises:
Utilizing a target risk video frame identification neural network to respectively encode each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set so as to form a video frame encoding characteristic representation corresponding to each frame of candidate risk monitoring video frame;
carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and performing risk prediction on the fusion video frame coding feature representation by using the target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video.
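The encode-fuse-predict pipeline of this claim can be sketched with plain NumPy stand-ins. All names here are hypothetical: a linear map plays the per-frame encoder of the identification network, a mean plays the fusion step (which claims 5-7 refine), and a sigmoid over a linear readout plays the risk prediction head.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_frames(frames, w_enc):
    # one coding feature representation per candidate frame
    # (a linear projection standing in for the real encoder)
    return np.stack([w_enc @ f.ravel() for f in frames])

def fuse(encodings):
    # simplest possible fusion: mean over frames
    return encodings.mean(axis=0)

def predict_risk(fused, w_out):
    # scalar risk score in (0, 1)
    logit = float(w_out @ fused)
    return 1.0 / (1.0 + np.exp(-logit))

# toy data: four 8x8 candidate frames, 16-dim encodings
frames = [rng.normal(size=(8, 8)) for _ in range(4)]
w_enc = rng.normal(size=(16, 64))
w_out = rng.normal(size=16)
score = predict_risk(fuse(encode_frames(frames, w_enc)), w_out)
```

The target risk identification result would then be a threshold or argmax over such scores; the claim does not fix the form of the result, so the scalar score is only one possible reading.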
5. The internet-based monitoring method according to claim 4, wherein the step of performing fusion processing on the video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form a corresponding fusion video frame coding feature representation comprises:
classifying each frame of candidate risk monitoring video frames included in the candidate risk monitoring video frame set according to whether the target monitoring object behaviors corresponding to the candidate risk monitoring video frames of each frame belong to the risk behavior combination or not so as to form a multi-frame first candidate risk monitoring video frame and a multi-frame second candidate risk monitoring video frame, wherein the target monitoring object behaviors corresponding to the first candidate risk monitoring video frame of each frame belong to the risk behavior combination, and the target monitoring object behaviors corresponding to the second candidate risk monitoring video frame of each frame do not belong to the risk behavior combination;
for each frame of the first candidate risk monitoring video frame, adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame to form an adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame;
and carrying out fusion processing on the adjusted video frame coding characteristic representation corresponding to the first candidate risk monitoring video frame of each frame to form a corresponding fusion video frame coding characteristic representation.
6. The internet-based monitoring method according to claim 5, wherein the step of, for each frame of the first candidate risk monitoring video frame, adjusting the video frame coding feature representation corresponding to the first candidate risk monitoring video frame according to the video frame coding feature representation corresponding to the second candidate risk monitoring video frame to form an adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame comprises:
for each frame of the first candidate risk monitoring video frame, determining, among the second candidate risk monitoring video frames, each second candidate risk monitoring video frame adjacent to the first candidate risk monitoring video frame, performing dot multiplication processing on the video frame coding feature representation corresponding to the first candidate risk monitoring video frame and the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame respectively to obtain a feature representation dot product between the two, and performing weighting processing on the video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame according to the corresponding feature representation dot product to obtain a weighted video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame;
and superposing the video frame coding feature representation corresponding to the first candidate risk monitoring video frame with the weighted video frame coding feature representation corresponding to each adjacent second candidate risk monitoring video frame, so as to form the adjusted video frame coding feature representation corresponding to the first candidate risk monitoring video frame.
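The adjustment step of claim 6 is, in effect, dot-product attention from a first-candidate frame onto its adjacent second-candidate frames. A minimal NumPy sketch, assuming softmax normalization of the dot products (the claim only says the weighting follows the dot products, so the normalization is a choice made here):

```python
import numpy as np

def adjust_first_frame(first_enc, neighbor_second_encs):
    """Adjust one first-candidate frame encoding using its adjacent
    second-candidate frame encodings: dot products -> weights ->
    weighted neighbor encodings -> superposition onto the original."""
    if len(neighbor_second_encs) == 0:
        return first_enc.copy()
    neighbors = np.stack(neighbor_second_encs)   # (k, d)
    dots = neighbors @ first_enc                 # feature representation dot products, (k,)
    weights = np.exp(dots - dots.max())
    weights /= weights.sum()                     # softmax (assumed normalization)
    weighted = weights[:, None] * neighbors      # weighted neighbor encodings
    return first_enc + weighted.sum(axis=0)      # superposition
```

With a single adjacent neighbor the softmax weight is 1, so the adjusted encoding is simply the first-frame encoding plus that neighbor's encoding; with several neighbors, those whose encodings align more with the first frame contribute more.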
7. The internet-based monitoring method according to claim 5, wherein the step of performing fusion processing on the adjusted video frame coding feature representations corresponding to the first candidate risk monitoring video frame of each frame to form a corresponding fusion video frame coding feature representation comprises:
sequentially stitching the adjusted video frame coding feature representations corresponding to the first candidate risk monitoring video frame of each frame according to the time sequence of the first candidate risk monitoring video frames to form a corresponding stitched video frame coding feature representation, wherein the stitched video frame coding feature representation is a multi-dimensional feature representation comprising a plurality of sub-stitched video frame coding feature representations;
for each sub-stitched video frame coding feature representation, performing dot multiplication processing on the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation respectively to obtain a sub-feature representation dot product between the sub-stitched video frame coding feature representation and each other sub-stitched video frame coding feature representation, performing weighting processing on the corresponding other sub-stitched video frame coding feature representations according to the corresponding sub-feature representation dot product to obtain weighted sub-stitched video frame coding feature representations corresponding to the other sub-stitched video frame coding feature representations, and superposing the sub-stitched video frame coding feature representation and the weighted sub-stitched video frame coding feature representations corresponding to each other sub-stitched video frame coding feature representation to form a superposed sub-stitched video frame coding feature representation corresponding to the sub-stitched video frame coding feature representation;
and carrying out mean value fusion on the superposed sub-stitched video frame coding feature representations corresponding to the sub-stitched video frame coding feature representations to form a corresponding fusion video frame coding feature representation, wherein the fusion video frame coding feature representation is a one-dimensional feature representation.
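Claim 7 reads as self-attention over the stitched sub-representations followed by mean fusion. A minimal NumPy sketch, again assuming softmax-normalized dot-product weights and excluding each sub-representation from its own weighting, since the claim weights only the *other* sub-representations:

```python
import numpy as np

def fuse_adjusted_encodings(adjusted_encs):
    """Stitch the adjusted encodings in time order, let every
    sub-representation attend to the others via dot-product weights,
    superpose, then mean-fuse to a single one-dimensional representation."""
    stitched = np.stack(adjusted_encs)               # (n, d): n sub-representations
    dots = stitched @ stitched.T                     # pairwise sub-feature dot products
    np.fill_diagonal(dots, -np.inf)                  # weight only the *other* rows
    weights = np.exp(dots - dots.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax (assumed normalization)
    superposed = stitched + weights @ stitched       # each row plus its weighted others
    return superposed.mean(axis=0)                   # one-dimensional fused representation
```

This sketch requires at least two sub-representations (with one, there are no "others" to weight); the claimed method implicitly has the same requirement.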
8. An internet-based monitoring system, comprising:
the risk behavior combination determining module is used for performing risk behavior supervision operation on the target monitoring video acquired by the target internet terminal to determine a corresponding risk behavior combination, wherein the risk behavior combination comprises at least one target monitoring object behavior, and one target monitoring object behavior corresponds to one frame in the target monitoring video;
the monitoring video frame extraction module is used for extracting target monitoring video frames corresponding to the risk behavior combination from the target monitoring video so as to form a corresponding candidate risk monitoring video frame set;
and the video frame identification module is used for utilizing the target risk video frame identification neural network to carry out video frame identification processing on the candidate risk monitoring video frame set so as to obtain a target risk identification result corresponding to the target monitoring video.
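The three claimed modules can be wired together as a small class. Everything here is a hypothetical stand-in: the class and parameter names are invented, and the three callables abstract the claimed risk-behavior supervision, frame extraction, and identification operations.

```python
class InternetMonitoringSystem:
    """Minimal sketch of the claimed system: three modules in sequence."""

    def __init__(self, determine_risk_combinations, extract_candidate_frames,
                 identify_risk):
        # risk behavior combination determining module
        self.determine_risk_combinations = determine_risk_combinations
        # monitoring video frame extraction module
        self.extract_candidate_frames = extract_candidate_frames
        # video frame identification module
        self.identify_risk = identify_risk

    def run(self, target_video):
        combinations = self.determine_risk_combinations(target_video)
        candidates = self.extract_candidate_frames(target_video, combinations)
        return self.identify_risk(candidates)
```

The point of the sketch is only the data flow: the risk behavior combinations select the candidate frame set, and only that set (not the full video) is passed to the identification network.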
9. The internet-based monitoring system of claim 8, wherein the monitoring video frame extraction module is specifically configured to:
extracting target monitoring video frames corresponding to each target monitoring object behavior included in the risk behavior combination from the target monitoring video to form candidate risk monitoring video frames;
determining adjacent target monitoring object behaviors corresponding to each target monitoring object behavior included in the risk behavior combination to form a corresponding adjacent behavior set;
extracting target monitoring video frames corresponding to each adjacent target monitoring object behavior included in the adjacent behavior set from the target monitoring video to form candidate risk monitoring video frames;
and constructing and forming a corresponding candidate risk monitoring video frame set based on all the formed candidate risk monitoring video frames.
10. The internet-based monitoring system of claim 8, wherein the video frame identification module is specifically configured to:
utilizing a target risk video frame identification neural network to respectively encode each frame of candidate risk monitoring video frame included in the candidate risk monitoring video frame set, so as to form a video frame coding feature representation corresponding to each frame of candidate risk monitoring video frame;
carrying out fusion processing on video frame coding feature representations corresponding to the candidate risk monitoring video frames of each frame to form corresponding fusion video frame coding feature representations;
and performing risk prediction on the fusion video frame coding feature representation by using the target risk video frame identification neural network to obtain a target risk identification result corresponding to the target monitoring video.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311213694.2A | 2023-09-19 | 2023-09-19 | Internet-based monitoring method and system |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN117173614A | 2023-12-05 |
Family

ID=88944877

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311213694.2A | Internet-based monitoring method and system | 2023-09-19 | 2023-09-19 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN117173614A |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 20231205 |