CN114332672A - Video analysis method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114332672A CN202111350036.9A
- Authority
- CN
- China
- Prior art keywords
- scene
- target
- video
- video data
- video information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The present disclosure provides a video analysis method, apparatus, electronic device and storage medium, the method comprising: determining a plurality of target video information, wherein the target video information comprises: video data frames, video summaries, target images; determining a plurality of scene rules; and respectively analyzing and processing the various target video information according to various scene rules to generate a target video analysis result. Through the method and the device, the video data of the coal collection scene can be efficiently analyzed, the analysis efficiency of the video information is improved, and the reliability of monitoring the coal collection scene is further improved.
Description
Technical Field
The disclosure relates to the technical field of coal collection video monitoring, in particular to a video analysis method, a video analysis device, electronic equipment and a storage medium.
Background
In large coal mine projects, there are more than 200 channels of underground video data carrying a large amount of information. Monitoring personnel are therefore required to maintain extremely high attention and vigilance and to be capable of detecting abnormal conditions, while high video transmission efficiency and video quality are also required.
In the related art, the coal industry lacks efficient intelligent video analysis capability, and the automatic supervision function of video monitoring has not been fully exploited.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the present disclosure aims to provide a video analysis method, a video analysis device, an electronic device, and a storage medium, which can efficiently analyze video data of a coal collection scene, improve the analysis efficiency of video information, and further improve the reliability of monitoring the coal collection scene.
In order to achieve the above object, an embodiment of the first aspect of the present disclosure provides a video analysis method, including: determining a plurality of target video information, wherein the target video information comprises: video data frames, video summaries, target images; determining a plurality of scene rules; and respectively analyzing and processing the various target video information according to various scene rules to generate a target video analysis result.
In the video analysis method provided by the embodiment of the first aspect of the present disclosure, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; multiple scene rules are determined; and the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results. In this way, video data of the coal collection scene can be analyzed efficiently, the analysis efficiency of the video information is improved, and the reliability of monitoring the coal collection scene is further improved.
In order to achieve the above object, an embodiment of a second aspect of the present disclosure provides a video analysis apparatus, including: a first determining module, configured to determine a plurality of target video information, where the target video information includes: video data frames, video summaries, target images; the second determining module is used for determining a plurality of scene rules; and the analysis module is used for respectively analyzing and processing the various target video information according to the various scene rules to generate a target video analysis result.
In the video analysis apparatus according to the embodiment of the second aspect of the present disclosure, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; multiple scene rules are determined; and the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results. In this way, video data of the coal collection scene can be analyzed efficiently, the analysis efficiency of the video information is improved, and the reliability of monitoring the coal collection scene is further improved.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the video analytics method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, storing computer instructions for causing a computer to perform the video analysis method of the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the video analysis method of an embodiment of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a video analysis method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart of a video analysis method according to another embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a video analysis method according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a video analysis apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a video analysis apparatus according to another embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of illustrating the present disclosure and should not be construed as limiting the same. On the contrary, the embodiments of the disclosure include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Fig. 1 is a schematic flowchart of a video analysis method according to an embodiment of the present disclosure.
It should be noted that the execution subject of the video analysis method of this embodiment is a video analysis apparatus. The apparatus may be implemented by software and/or hardware and may be configured in an electronic device, and the electronic device may include, but is not limited to, a terminal, a server, and the like.
As shown in fig. 1, the video analysis method includes:
S101: Determining multiple types of target video information, where the target video information includes: video data frames, video summaries, and target images.
Video data frames that contain the required video semantics and are obtained by parsing the video data may be referred to as key frames; the key frames may also be referred to as a video summary, and the video images within the key frames may be referred to as target images. Accordingly, the video data frames, the video summary, and the target images may be collectively referred to as target video information.
Optionally, in some embodiments, when determining the multiple types of target video information, multiple types of initial video data may be acquired from a coal collection scene; the corresponding initial video data are respectively input into a pre-trained video data processing model to obtain a video processing result output by the video data processing model; and according to the video processing result, a video data frame, a video summary, and a target image are parsed from the initial video data and used as the target video information. Processing the multiple types of initial video data with a pre-trained video data processing model makes it convenient to determine the target video information, and by adjusting the pre-trained video data processing model, the selection requirements of various video formats and target video information can be met, ensuring the convenience and diversity of target video information acquisition.
In the embodiment of the present disclosure, the multiple types of initial video data may be real-time video data captured on demand by monitoring devices in the scene, or previously stored video data, which is not limited herein.
The pre-trained video data processing model may be a processing model for encoding, decoding, image processing, and the like of the acquired video data. For example, video encoding and decoding may use conventional video coding based on a block-partitioned hybrid coding framework, or a high-resolution (for example, 2K or 4K) video codec technology chosen according to the needs of the coal collection scene. Image processing may perform image segmentation according to an algorithm such as the Bayesian total probability formula, determine the change at each pixel position through an optical flow method using the temporal change and correlation of pixel intensity data in the image sequence, or combine the optical flow method with the Bayesian total probability formula, which is not limited herein.
After processing with the pre-trained video data processing model, a video processing result is generated; the initial video is then parsed according to the video processing result to obtain data sets such as video data frames, a video summary, and target images, which are stored as the target video information.
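For illustration only, the following sketch shows one way such a processing step could look in practice: reading one stream of initial video data, selecting key frames by inter-frame difference as a crude video summary, and cropping the largest changed region as a candidate target image. OpenCV 4.x and the thresholds are assumptions made for the example and are not prescribed by this embodiment.

```python
import cv2

def extract_target_video_info(video_path, diff_threshold=30.0):
    """Return (video_data_frames, video_summary, target_images) for one stream."""
    cap = cv2.VideoCapture(video_path)
    frames, summary, targets = [], [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute difference as a crude measure of scene change.
            diff = cv2.absdiff(gray, prev_gray)
            if diff.mean() > diff_threshold:
                summary.append(frame)  # key frame, part of the video summary
                # Largest moving region as a candidate target image.
                _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                if contours:
                    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
                    targets.append(frame[y:y + h, x:x + w])
        prev_gray = gray
    cap.release()
    return frames, summary, targets
```

In practice the frame-difference step would be replaced by whatever pre-trained video data processing model the deployment uses; the structure of the returned target video information is the point of the sketch.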
In other embodiments, the multiple types of target video information may also be determined by extracting key frames and key features, for example through a key frame and key feature extraction algorithm or manual labeling of key features, which is not limited herein.
In the embodiment of the present disclosure, to determine the multiple types of target video information, technologies such as video acquisition and compression, video encoding and decoding, and mass video storage may be used to build a distributed object storage system for mine-scene video, which provides target video information for intelligent video analysis; in addition, a video sample labeling technology may be used to build a video data labeling platform and form a video training sample library, which further provides more target video information.
S102: a plurality of scenario rules are determined.
Video information analysis rules that are set for the scenes in a coal collection task according to the particularity of each scene may be referred to as scene rules.
In the embodiment of the present disclosure, the scene rules may include, for example, a single virtual warning line rule: a virtual detection line is added in the video scene, and if the target center moves across the single virtual warning line, the target is determined to violate the single virtual warning line rule; a virtual rectangle alert zone rule: a virtual rectangular frame is added in the video scene, and if the target center moves into the virtual rectangular alert zone, the target is determined to violate the virtual rectangle alert zone rule; and a virtual polygon alert zone rule: a polygon target abnormality judgment model is established to judge whether the target enters a virtual polygon drawn in the video scene and whether the target behaves abnormally inside the polygon, and so on, which is not limited herein. A minimal sketch of such checks is given below.
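The following minimal sketch illustrates how the three rules above could be checked, assuming the target is represented by its center point (x, y) in pixel coordinates and that the previous and current centers come from consecutive frames. The function names and the use of OpenCV for the polygon test are assumptions made for illustration.

```python
import numpy as np
import cv2

def side_of_line(pt, a, b):
    """Signed side of point pt relative to the directed line a->b."""
    return np.sign((b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0]))

def violates_warning_line(prev_pt, cur_pt, a, b):
    """Single virtual warning line rule: violated if the target center crosses a->b."""
    return side_of_line(prev_pt, a, b) != side_of_line(cur_pt, a, b)

def violates_rect_zone(pt, rect):
    """Virtual rectangle alert zone rule: violated if the center enters rect=(x1,y1,x2,y2)."""
    x1, y1, x2, y2 = rect
    return x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2

def violates_polygon_zone(pt, polygon):
    """Virtual polygon alert zone rule: violated if the center lies inside the polygon."""
    contour = np.asarray(polygon, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.pointPolygonTest(contour, (float(pt[0]), float(pt[1])), False) >= 0
```

Whether an in-polygon target is further judged as abnormal would depend on the abnormality judgment model the deployment actually uses; the sketch only covers the geometric part of the rules.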
S103: and respectively analyzing and processing the various target video information according to various scene rules to generate a target video analysis result.
According to the multiple scene rules set for different scenes in the coal collection task, the targets in the corresponding multiple types of target video information are determined and the target behaviors are judged and analyzed; the generated result may be referred to as a target video analysis result.
In the embodiment of the present disclosure, to generate the target video analysis result, the extracted targets, which correspond to objects in the coal collection scene, may be judged and classified, providing data support for better image understanding, behavior analysis, and the like.
In this embodiment, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; multiple scene rules are determined; and the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results. In this way, video data of the coal collection scene can be analyzed efficiently, the analysis efficiency of the video information is improved, and the reliability of monitoring the coal collection scene is further improved.
Fig. 2 is a schematic flow chart of a video analysis method according to another embodiment of the present disclosure.
As shown in fig. 2, the video analysis method includes:
S201: Determining multiple types of target video information, where the target video information includes: video data frames, video summaries, and target images.
For the description of S201, reference may be made to the foregoing embodiments, which are not described herein again.
S202: and responding to the rule confirmation instruction, and analyzing the rule confirmation instruction to obtain the scene type identification.
Identifiers used to distinguish different coal production scene types may be referred to as scene type identifiers.
In the embodiment of the present disclosure, the multiple scene types may be different service scene types divided according to different service types, or may also be different device scene types divided according to different devices, which is not limited herein.
For example, in a coal mining scene, according to the service type, the scene types can be divided into a coal breaking scene, a coal charging scene, a coal transporting scene, a supporting scene, a goaf processing scene and the like.
S203: and acquiring multiple scene types corresponding to the multiple target video information according to the scene type identification.
In the embodiment of the present disclosure, one piece of target video information may include multiple scene type identifiers; thus, one piece of target video information may correspond to one or more scene types, and one scene type may also correspond to multiple pieces of target video information, which is not limited herein.
S204: and determining a plurality of scene rules to which the plurality of scene types belong according to the scene type identification and the scene identification result, wherein the scene identification result is the identification result of the target video information in the plurality of scene types.
In the embodiment of the present disclosure, different scene types may correspond to the same or different scene rules, and different targets within the same scene type may also correspond to different scene rules. The scene rules may be dedicated rules designed according to actual scene requirements, or preset scene rules, as sketched below.
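As an illustration of S202 to S204, the hypothetical sketch below parses a rule confirmation instruction into scene type identifiers and maps each identifier to the scene rules it belongs to. The instruction format, the scene type names, and the rule names are all assumed for the example.

```python
from typing import Dict, List

# Hypothetical mapping from business scene types in a coal collection task to scene rule names.
SCENE_RULES: Dict[str, List[str]] = {
    "coal_breaking":  ["single_virtual_warning_line"],
    "coal_charging":  ["virtual_rectangle_alert_zone"],
    "coal_transport": ["single_virtual_warning_line", "virtual_rectangle_alert_zone"],
    "support":        ["virtual_polygon_alert_zone"],
    "goaf_handling":  ["virtual_polygon_alert_zone"],
}

def parse_rule_confirmation(instruction: str) -> List[str]:
    """Parse a comma-separated rule confirmation instruction into scene type identifiers."""
    return [token.strip() for token in instruction.split(",") if token.strip()]

def resolve_scene_rules(scene_type_ids: List[str]) -> Dict[str, List[str]]:
    """Map each recognized scene type identifier to the scene rules it belongs to."""
    return {sid: SCENE_RULES[sid] for sid in scene_type_ids if sid in SCENE_RULES}

# Example usage (hypothetical identifiers):
# rules = resolve_scene_rules(parse_rule_confirmation("coal_breaking, support"))
# -> {"coal_breaking": ["single_virtual_warning_line"], "support": ["virtual_polygon_alert_zone"]}
```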
S205: and respectively analyzing and processing the various target video information according to various scene rules to generate a target video analysis result.
For the description of S205, reference may be made to the above embodiments, which are not described herein again.
S206: and configuring corresponding multiple alarm rules respectively for multiple scene types.
In the embodiment of the present disclosure, the multiple alarm rules may be configured in a console using sound alarms, light alarms, or a combination of sound and light, with alarm rules for different scene types distinguished by the flashing frequency and color of the light and by different sound signals; alternatively, a remote device may be used to raise the alarm on a mobile terminal, which is not limited herein.
S207: and when the target video analysis result exceeds the alarm threshold value indicated by the alarm rule, starting the alarm rule corresponding to the target video analysis result.
The preset critical point for starting an alarm may be referred to as an alarm threshold. The alarm threshold may be a single value or a value interval representing a certain degree; for example, a temperature alarm may be triggered when the temperature exceeds a specific value, and a distance alarm may be triggered when the target comes within a certain range of a given point.
In the embodiment of the present disclosure, one alarm rule may correspond to one scene type, and multiple alarm rules may be combined according to actual conditions in the scene, so as to distinguish different analysis results.
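A hedged sketch of S206 and S207 follows: each scene type is configured with an alarm rule carrying a metric, an alarm threshold, and an alarm action, and the rule is started only when the corresponding value in the target video analysis result exceeds the threshold. The scene types, metric names, thresholds, and alarm actions are illustrative assumptions, not values prescribed by the embodiment.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AlarmRule:
    metric: str                     # e.g. a temperature or dust-density reading
    threshold: float                # alarm threshold (preset critical point)
    action: Callable[[str], None]   # sound alarm, light alarm, or remote notification

def sound_alarm(message: str) -> None:
    print(f"[SOUND ALARM] {message}")

def light_alarm(message: str) -> None:
    print(f"[LIGHT ALARM] {message}")

# Hypothetical per-scene-type configuration (S206).
ALARM_RULES: Dict[str, AlarmRule] = {
    "coal_transport": AlarmRule("belt_temperature", 80.0, sound_alarm),
    "coal_breaking":  AlarmRule("dust_density", 10.0, light_alarm),
}

def check_and_start_alarm(scene_type: str, analysis_result: Dict[str, float]) -> None:
    """Start the alarm rule for scene_type if its metric exceeds the threshold (S207)."""
    rule = ALARM_RULES.get(scene_type)
    if rule is None:
        return
    value = analysis_result.get(rule.metric)
    if value is not None and value > rule.threshold:
        rule.action(f"{scene_type}: {rule.metric}={value} exceeds {rule.threshold}")
```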
In this embodiment, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; in response to a rule confirmation instruction, the instruction is parsed to obtain a scene type identifier; multiple scene types corresponding to the multiple types of target video information are obtained according to the scene type identifier; multiple scene rules to which the multiple scene types belong are determined according to the scene type identifier and a scene identification result, where the scene identification result is the identification result of the target video information in the multiple scene types; the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results; multiple corresponding alarm rules are configured for the multiple scene types; and when a target video analysis result exceeds the alarm threshold indicated by an alarm rule, the alarm rule corresponding to that result is started. By determining the scene types and the scene identification result, the multiple scene rules can be determined accurately and the scene types can be subdivided, ensuring the applicability of the scene rules; by presetting different alarm rules for different service scenes, abnormal conditions in the scene can be fed back in time, making the video analysis more intelligent.
Fig. 3 is a schematic flowchart of a video analysis method according to another embodiment of the present disclosure.
As shown in fig. 3, the video analysis method includes:
S301: Determining multiple types of target video information, where the target video information includes: video data frames, video summaries, and target images.
S302: Determining multiple scene rules.
S303: Respectively analyzing and processing the multiple types of target video information according to the multiple scene rules to generate a target video analysis result.
For the description of S301 to S303, reference may be made to the above embodiments, which are not described herein again.
S304: and identifying target characteristics according to the target video analysis result.
In the embodiment of the present disclosure, the target features may be identified by taking salient points of the image as key feature vectors and identifying those vectors, or by using the direction and local structure of a feature as an edge feature describing the target boundary.
For example, feature analysis for the coal-breaking service in a coal mining scene may identify coal features by using the coal color as the salient cue of the image, based on the color difference of coal, or may delineate the edge contour of the coal according to its color and use the contour as the target feature, as sketched below.
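The sketch below illustrates the color-based idea for the coal-breaking example: the dark coal color is used as the salient cue, the frame is thresholded, and the largest contour is taken as the target's edge feature. The intensity threshold and the use of OpenCV are assumptions made for illustration, not parameters from the embodiment.

```python
import cv2
import numpy as np

def coal_edge_contour(frame_bgr, max_intensity=60):
    """Return the edge contour of the darkest (coal-like) region, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Coal is assumed darker than surrounding material, so keep low-intensity pixels.
    mask = cv2.inRange(gray, 0, max_intensity)
    # Morphological opening removes small speckles before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```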
S305: and tracking and identifying the target characteristics to determine the variation trend corresponding to the target characteristics.
After the target features are identified, they are tracked. Depending on which target features are extracted from the tracked target, target tracking can be divided into four main categories: tracking based on target regions, tracking based on target feature points, tracking based on active target contours, and model-based tracking.
In the embodiment of the present disclosure, the target features are tracked and identified. The multi-target tracking capability of a Bayesian total probability model may be used to handle target occlusion or tracking failure caused by large areas of similar-color interference; alternatively, a Bayesian total probability foreground detection algorithm may be used to extract foreground moving targets from the video scene, and the circumscribed rectangle of the extracted foreground mask may be used to automatically initialize the search window, so that automatic tracking with a related algorithm is realized.
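A hedged sketch of this tracking flow is given below: a generic background-subtraction foreground detector stands in for the Bayesian total probability foreground model named above, its bounding rectangle initializes a CamShift search window, and the tracked center per frame forms a trajectory from which the variation trend can be derived. The OpenCV components and parameters are illustrative assumptions, not the method prescribed by the embodiment.

```python
import cv2

def track_foreground_target(video_path):
    """Track one foreground target and return the per-frame center trajectory."""
    cap = cv2.VideoCapture(video_path)
    fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    term_criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    track_window, roi_hist = None, None
    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        if track_window is None:
            # Foreground detection; the bounding box of the largest blob
            # initializes the search window (stand-in for the Bayesian model).
            mask = fgbg.apply(frame)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
                if w * h > 400:  # ignore tiny foreground blobs
                    track_window = (x, y, w, h)
                    roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None,
                                            [16], [0, 180])
                    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        else:
            # CamShift follows the target using the hue back-projection.
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            _, track_window = cv2.CamShift(back_proj, track_window, term_criteria)
            x, y, w, h = track_window
            trajectory.append((x + w // 2, y + h // 2))
    cap.release()
    return trajectory  # the variation trend can be derived from this trajectory
```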
In this embodiment, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; multiple scene rules are determined; the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results; target features are identified according to the target video analysis results; and the target features are tracked and identified to determine the variation trend corresponding to the target features. By identifying target features and tracking and analyzing behavior, the variation trend of the target can be understood and its next behavior can be anticipated, which improves the applicability of the video analysis, enables prediction of target behavior, helps to discover abnormal conditions in the scene in time, and makes the video analysis more intelligent.
Fig. 4 is a schematic structural diagram of a video analysis apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the video analysis apparatus 40 includes:
a first determining module 401, configured to determine a plurality of target video information, where the target video information includes: video data frames, video summaries, target images;
a second determining module 402 for determining a plurality of scene rules;
the analysis module 403 is configured to analyze and process the multiple kinds of target video information according to multiple kinds of scene rules, so as to generate a target video analysis result.
In some embodiments of the present disclosure, as shown in fig. 5, fig. 5 is a schematic structural diagram of a video analysis apparatus according to another embodiment of the present disclosure, and the first determining module 401 is specifically configured to:
acquiring various initial video data from a coal collection scene;
respectively inputting corresponding initial video data into a pre-trained video data processing model to obtain a video processing result output by the video data processing model;
and analyzing the initial video data to obtain a video data frame, a video summary and a target image according to the video processing result, and taking the video data frame, the video summary and the target image as target video information.
In some embodiments of the present disclosure, as shown in fig. 5, the second determining module 402 includes:
the parsing submodule 4021 is configured to respond to the rule confirmation instruction and parse the rule confirmation instruction to obtain a scene type identifier;
the obtaining sub-module 4022 is configured to obtain multiple scene types corresponding to the multiple target video information according to the scene type identifier;
the determining submodule 4023 is configured to determine, according to the scene type identifier and the scene identification result, a plurality of scene rules to which the plurality of scene types belong, where the scene identification result is an identification result of the target video information in the plurality of scene types.
In some embodiments of the present disclosure, as shown in fig. 5, further comprising:
a configuration module 404, configured to, after the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate the target video analysis result, configure multiple corresponding alarm rules for the multiple scene types;
the starting module 405 is configured to start an alarm rule corresponding to the target video analysis result when the target video analysis result exceeds an alarm threshold indicated by the alarm rule.
In some embodiments of the present disclosure, as shown in fig. 5, the apparatus further comprises:
the identification module 406 is used for identifying target characteristics according to the target video analysis result;
and a third determining module 407, configured to perform tracking identification on the target feature to determine a variation trend corresponding to the target feature.
Corresponding to the video analysis method provided in the embodiments of fig. 1 to 3, the present disclosure also provides a video analysis apparatus, and since the video analysis apparatus provided in the embodiments of the present disclosure corresponds to the video analysis method provided in the embodiments of fig. 1 to 3, the embodiments of the video analysis method are also applicable to the video analysis apparatus provided in the embodiments of the present disclosure, and will not be described in detail in the embodiments of the present disclosure.
In this embodiment, multiple types of target video information are determined, where the target video information includes video data frames, video summaries, and target images; multiple scene rules are determined; and the multiple types of target video information are respectively analyzed according to the multiple scene rules to generate target video analysis results. In this way, video data of the coal collection scene can be analyzed efficiently, the analysis efficiency of the video information is improved, and the reliability of monitoring the coal collection scene is further improved.
In order to achieve the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the video analysis method as proposed by the foregoing embodiments of the present disclosure.
In order to implement the above embodiments, the present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the video analysis method provided by the foregoing embodiments of the present disclosure.
In order to implement the foregoing embodiments, the present disclosure also provides a computer program product; when instructions in the computer program product are executed by a processor, the video analysis method provided by the foregoing embodiments of the present disclosure is performed.
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Although not shown in FIG. 6, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
The processing unit 16 executes various functional applications and data processing, such as implementing the video analysis method mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that, in the description of the present disclosure, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.
Claims (10)
1. A video analysis method is applied to a coal collection scene, and the method comprises the following steps:
determining a plurality of target video information, wherein the target video information comprises: video data frames, video summaries, target images;
determining a plurality of scene rules;
and analyzing and processing the various target video information according to the various scene rules to generate a target video analysis result.
2. The method of claim 1, wherein said determining a plurality of target video information comprises:
acquiring various initial video data from the coal collection scene;
inputting the corresponding initial video data into a pre-trained video data processing model respectively to obtain a video processing result output by the video data processing model;
and analyzing the initial video data to obtain the video data frame, the video summary and the target image according to the video processing result, and taking the video data frame, the video summary and the target image as the target video information.
3. The method of claim 1, wherein the determining a plurality of scenario rules comprises:
responding to a rule confirmation instruction, and analyzing the rule confirmation instruction to obtain a scene type identifier;
acquiring multiple scene types corresponding to the multiple target video information according to the scene type identifier;
and determining a plurality of scene rules to which the plurality of scene types belong according to the scene type identifier and a scene identification result, wherein the scene identification result is an identification result of the target video information in the plurality of scene types.
4. The method according to claim 3, wherein after the analyzing the plurality of types of target video information according to the plurality of types of scene rules respectively to generate target video analysis results, the method further comprises:
configuring a plurality of corresponding alarm rules respectively for the plurality of scene types;
and when the target video analysis result exceeds an alarm threshold value indicated by an alarm rule, starting the alarm rule corresponding to the target video analysis result.
5. The method of claim 1, wherein the method further comprises:
identifying target characteristics according to the target video analysis result;
and tracking and identifying the target features to determine a variation trend corresponding to the target features.
6. A video analysis device, applied to a coal collection scene, wherein the device comprises:
a first determining module, configured to determine a plurality of types of target video information, where the target video information includes: video data frames, video summaries, target images;
the second determining module is used for determining a plurality of scene rules;
and the analysis module is used for respectively analyzing and processing the various target video information according to the various scene rules to generate a target video analysis result.
7. The apparatus of claim 6, wherein the first determining module is specifically configured to:
acquiring various initial video data from the coal collection scene;
inputting the corresponding initial video data into a pre-trained video data processing model respectively to obtain a video processing result output by the video data processing model;
and analyzing the initial video data to obtain the video data frame, the video summary and the target image according to the video processing result, and taking the video data frame, the video summary and the target image as the target video information.
8. The apparatus of claim 6, wherein the second determining module comprises:
the analysis submodule is used for responding to a rule confirmation instruction and analyzing the rule confirmation instruction to obtain a scene type identifier;
the obtaining submodule is used for obtaining a plurality of scene types corresponding to the plurality of target video information according to the scene type identifier;
and the determining submodule is used for determining a plurality of scene rules to which the plurality of scene types belong according to the scene type identifier and the scene identification result, wherein the scene identification result is the identification result of the target video information in the plurality of scene types.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111350036.9A CN114332672A (en) | 2021-11-15 | 2021-11-15 | Video analysis method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111350036.9A CN114332672A (en) | 2021-11-15 | 2021-11-15 | Video analysis method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332672A true CN114332672A (en) | 2022-04-12 |
Family
ID=81044854
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111350036.9A Pending CN114332672A (en) | 2021-11-15 | 2021-11-15 | Video analysis method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114332672A (en) |
- 2021-11-15 CN CN202111350036.9A patent/CN114332672A/en active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114900661A (en) * | 2022-05-10 | 2022-08-12 | 上海浦东发展银行股份有限公司 | Monitoring method, device, equipment and storage medium |
CN114979225A (en) * | 2022-05-12 | 2022-08-30 | 北京天玛智控科技股份有限公司 | Coal mine production control method and device based on video analysis |
CN114979225B (en) * | 2022-05-12 | 2024-01-23 | 北京天玛智控科技股份有限公司 | Coal mine production control method and device based on video analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |