CN114466227A - Video analysis method and device, electronic equipment and storage medium - Google Patents
- Publication number
- Publication number: CN114466227A
- Application number: CN202111583369.6A
- Authority
- CN
- China
- Prior art keywords
- decoding function
- video stream
- video
- state
- preset task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
Abstract
The embodiment of the invention provides a video analysis method and device, an electronic device and a storage medium, which are used for ensuring the real-time performance and flexibility of video stream analysis. The method comprises the following steps: determining whether a preset task indication exists; if a preset task indication exists and it indicates adding a video stream, acquiring video data sent by the client corresponding to the video stream address, generating a decoding function based on the video data, setting the state parameter of the decoding function to the playing state, and adding the decoding function to a decoding function list; if a preset task indication exists and it indicates deleting a video stream, acquiring the decoding function corresponding to the video stream, setting its state parameter to the NULL state, and deleting it from the decoding function list; aggregating the data output by each decoding function in the decoding function list to obtain aggregated data; and performing video analysis based on the aggregated data.
Description
Technical Field
The present invention relates to the field of real-time video analysis technologies, and in particular, to a video analysis method and apparatus, an electronic device, and a storage medium.
Background
In recent years, with the gradual popularization of high-definition video monitoring, the video monitoring market has progressed from merely being able to see, to seeing clearly, and is now moving toward understanding and anticipating what is seen. With continued investment by governments and enterprises in smart-city construction and public monitoring infrastructure, more and more video monitoring devices are deployed, generating massive volumes of video data and pushing the security industry toward intelligence and big data.
At present, researchers have carried out extensive work on intelligent monitoring technologies such as motion detection, target tracking, video segmentation and behavior recognition, with fruitful results. Intelligent video analysis has gradually become an emerging research hotspot and development direction in academia and industry, covering applications such as face recognition, vehicle structural recognition, abnormal behavior analysis, passenger flow statistics and video summarization.
However, although the accuracy of intelligent video analysis algorithms keeps improving, problems such as low speed and high cost remain. Parameters such as the video stream address, the algorithm model and the alarm mode must be set before video analysis runs; if the video stream address or the algorithm model needs to be changed midway, the service usually has to be reconfigured and restarted, which is cumbersome and inefficient.
Therefore, how to dynamically add or delete video streams while the service is running, reduce the impact of video stream changes on the whole service, and guarantee the real-time performance and flexibility of video stream analysis is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the invention provides a video analysis method and device, electronic equipment and a storage medium, which are used for dynamically adding and deleting video streams in the service operation process, reducing the influence of video stream change on the whole service and ensuring the real-time performance and flexibility of video stream analysis.
In a first aspect, a video analysis method is provided, the method including:
determining whether a preset task indication exists; the preset task indication is used for indicating that a video stream is added or deleted, and the preset task indication carries a video stream address corresponding to the video stream;
if the preset task instruction exists and the preset task instruction is to increase the video stream, acquiring video data sent by a client corresponding to the video stream address, generating a decoding function based on the video data, setting a state parameter of the decoding function to be in a playing state, and adding the decoding function to a decoding function list;
if the preset task indication exists and the preset task indication is deleting of the video stream, acquiring a decoding function corresponding to the video stream, setting a state parameter of the decoding function to be in a NULL state, and deleting the decoding function from a decoding function list;
aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
performing video analysis based on the aggregated data.
Optionally, after setting the state parameter of the decoding function to a NULL state, the method further includes:
monitoring state information of the decoding function;
if the state of the decoding function is switched to the NULL state, sending indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting the video data.
Optionally, the aggregating the data output by each decoding function in the decoding function list includes:
determining whether the data output by each decoding function is acquired within a first preset time length;
if the data output by the first decoding function is not acquired within the first preset time, aggregating the data output by the second decoding function; wherein the second decoding function is a decoding function in the list of decoding functions other than the first decoding function.
Optionally, before aggregating the data output by the second decoding function, the method further includes:
acquiring data output by the first decoding function for multiple times according to a preset period interval;
and if the acquisition failure times exceed the preset times, deleting the first decoding function from the decoding function list.
Optionally, the obtaining data output by the first decoding function includes:
setting a state parameter of the first decoding function to the NULL state;
after a second preset time length, setting the state parameter of the first decoding function as the playing state; the second preset time length is less than the first preset time length;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
In a second aspect, there is provided a video analysis apparatus, the apparatus comprising:
the processing module is used for determining whether a preset task instruction exists or not; the preset task indication is used for indicating that a video stream is added or deleted, and the preset task indication carries a video stream address corresponding to the video stream;
the processing module is further configured to, when the preset task indication exists and the preset task indication indicates that a video stream is to be added, obtain video data sent by a client corresponding to the video stream address, generate a decoding function based on the video data, set a state parameter of the decoding function to a play state, and add the decoding function to a decoding function list;
the processing module is further configured to, when the preset task indication exists and the preset task indication is a deletion video stream, acquire a decoding function corresponding to the video stream, set a state parameter of the decoding function to a NULL state, and delete the decoding function from a decoding function list;
the processing module is further configured to aggregate data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module is further configured to perform video analysis based on the aggregated data.
Optionally, the apparatus further includes a communication module, and the processing module is further configured to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, controlling a communication module to send indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting the video data.
Optionally, the processing module is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset time length;
if the data output by the first decoding function is not acquired within the first preset time, aggregating the data output by the second decoding function; wherein the second decoding function is a decoding function in the decoding function list except the first decoding function.
Optionally, the processing module is specifically configured to:
acquiring data output by the first decoding function for multiple times according to a preset period interval;
and if the acquisition failure times exceed the preset times, deleting the first decoding function from the decoding function list.
Optionally, the processing module is specifically configured to:
setting a state parameter of the first decoding function to the NULL state;
after a second preset time length, setting the state parameter of the first decoding function as the playing state; the second preset time length is less than the first preset time length;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
In a third aspect, an electronic device is provided, which includes:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the steps comprised in any of the methods of the first aspect according to the obtained program instructions.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the steps included in the method of any one of the first aspects.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the video analysis method described in the various possible implementations described above.
In the embodiment of the application, it is determined whether a preset task indication exists; the preset task indication is used for indicating that a video stream is to be added or deleted, and carries the video stream address corresponding to that video stream. If the preset task indication exists and indicates adding a video stream, video data sent by the client corresponding to the video stream address is acquired, a decoding function is generated based on the video data, the state parameter of the decoding function is set to the playing state, and the decoding function is added to a decoding function list. If the preset task indication exists and indicates deleting a video stream, the decoding function corresponding to the video stream is acquired, its state parameter is set to the NULL state, and it is deleted from the decoding function list. The data output by each decoding function in the decoding function list is aggregated to obtain aggregated data, and video analysis is performed based on the aggregated data.
That is to say, when a video stream needs to be added, a corresponding decoding function is generated (the video data of one video stream corresponds to one decoding function), the generated decoding function is added to the decoding function list, the data output by every function in the list is aggregated, and video analysis is performed on the aggregated result. When a video stream is deleted, only its corresponding decoding function is deleted. Thus, in the process of adding and deleting video streams, the impact of video stream changes on the service as a whole is reduced, and the real-time performance and flexibility of video stream analysis are effectively guaranteed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
Fig. 1 is a flowchart of a video analysis method according to an embodiment of the present application;
fig. 2 is a flowchart for acquiring and parsing rtsp video stream according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a video analysis apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a computer device in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The term "plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document generally indicates that the preceding and following related objects are in an "or" relationship unless otherwise specified.
The following describes a video analysis method provided in an embodiment of the present application with reference to the drawings of the specification. Referring to fig. 1, a flow of a video analysis method in the embodiment of the present application is described as follows:
step 101: determining whether a preset task indication exists;
the preset task indication is used for indicating that a video stream is added or deleted, and the preset task indication carries a video stream address corresponding to the video stream, for example, if the preset task indication adds the video stream, the preset task indication also carries the video stream address corresponding to the video stream to be added. In this embodiment of the present application, when a video stream needs to be added or deleted, an instruction for adding or deleting the video stream may be sent, so that in a process of performing video analysis, task monitoring needs to be performed to determine whether a preset task instruction exists, if the preset task instruction is detected and the preset task instruction is to add the video stream, step 102 is performed, and if the preset task instruction is detected and the preset task instruction is to delete the video stream, step 105 is performed.
Step 102: acquiring video data sent by a client corresponding to a video stream address;
step 103: generating a decoding function based on the video data;
in the embodiment of the present application, after video data is acquired, a corresponding decoding function (for example, a uridecodebin plug-in for decoding the acquired video data and automatically generating a corresponding decoder) is generated according to a coding and decoding format of the video data. In a possible embodiment, after the uridebodebin plug-in is generated, a unique ID may be assigned to the plug-in, a Uniform Resource Identifier (URI) is set as an address of the acquired video stream (i.e., an address of a video stream corresponding to the added video stream), the corresponding video plug-in is generated according to a codec format of the acquired video data and is connected to a decoder decodebin, and the decoder automatically loads subsequent plug-ins such as decapsulation, decoding, and the like according to the acquired video format (e.g., a video file mp4, a real-time video stream rtsp, rtmp, and the like). As shown in FIG. 2, typefind plug-in is used to analyze video stream type, qtdefux plug-in is used to split audio and video, h264parse and h265parse plug-ins are used to parse video, capsfliter plug-in is used to filter format, nvv4l2decoder is used to decode video.
Step 104: setting the state parameter of the decoding function as a playing state, and adding the decoding function to a decoding function list;
in the embodiment of the application, after the uridebodiebin plug-in is generated, the state parameter of the uridebodiebin plug-in is set to be in a PLAYING state (for example, the state parameter of the uridebodiebin plug-in is set to be PLAYING), and the uridebodiebin plug-in is added to a decoding function list (namely, a uridebodiebin plug-in list, wherein the uridebodiebin plug-in list includes the uridebodiebin corresponding to all video streams undergoing video analysis).
Step 105: acquiring a decoding function corresponding to the video stream;
step 106: setting the state parameter of the decoding function to be in a NULL state, and deleting the decoding function from the decoding function list;
in the embodiment of the application, the state parameter of the uridebodiebin plug-in of the ID corresponding to the video stream to be deleted is set to be in a NULL state, and the uridebodiebin plug-in is deleted from the list of the uridebodiebin plug-ins.
In a possible implementation, after the state parameter of the uridecodebin plug-in is set to the NULL state, the state information of the plug-in can be monitored; if the plug-in's state switches to NULL (i.e., the state switch succeeds), indication information instructing the client to stop transmitting video data is sent to the client corresponding to the video stream. For example, when the information that the state change succeeded is received, a STOP instruction is sent to the client (for example, via a pad) corresponding to the video stream, so that the client stops transmission of the video stream and the corresponding pad resource is released.
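The delete path (steps 105 and 106 plus the STOP notification) can be sketched like this. Again this is a simulation: in real GStreamer the NULL switch would be confirmed via a state-change message on the bus, and `send_stop` is a hypothetical callback standing in for the STOP instruction to the client.

```python
def delete_stream(decoder_list: dict, dec_id: int, send_stop) -> None:
    """Set the decoder's state parameter to NULL, confirm the switch,
    tell the client to stop transmitting, then drop the decoder."""
    dec = decoder_list[dec_id]
    dec["state"] = "NULL"
    if dec["state"] == "NULL":   # stands in for a bus state-change message
        send_stop(dec["uri"])    # e.g. a STOP instruction to the client
    del decoder_list[dec_id]     # remove from the decoding-function list


stop_log = []
decoders = {7: {"id": 7, "uri": "rtsp://camera-07/stream", "state": "PLAYING"}}
delete_stream(decoders, 7, stop_log.append)
```

Deleting only removes the one entry, so the outputs of the remaining decoders are unaffected, mirroring the behavior the description attributes to the plug-in list.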
Step 107: aggregating data output by each decoding function in the decoding function list to obtain aggregated data;
in the embodiment of the application, data output by each decoding function in the decoding function list is aggregated, for example, the data output by all uridecodebin plug-ins after decoding is aggregated by the nvstreammux plug-in, where the nvstreammux plug-in can aggregate multiple input channel data to prepare for algorithm batch processing, N paths of videos need N decoders, each path of video corresponds to one decoder, and finally the N paths of branches are combined by the nvstreammux plug-in and then connected with the inference plug-in. In the batch processing process, the attribute of batched (batch) -push (push) -timeout (timeout) of the nvstreammux plug-in is set to be 40ms, and the calculation formula is as follows: the batched-push-timeout is 1/max (fps), where fps is frame per second (frames per second), and max (fps) represents a value for taking the fastest one of all video streams.
In a specific implementation process, after a video stream is added, the newly added video stream and the original video streams are batch-processed together through nvstreammux; during the addition, the original video streams continue to be analyzed and are not affected by the add operation. When a video stream is deleted, the uridecodebin plug-in corresponding to that stream is deleted; during deletion, the video stream to be deleted is disconnected and released, so the data output of the other uridecodebin plug-ins is not affected.
In a possible implementation, when aggregating the data output by each decoding function in the decoding function list, the output of the uridecodebin plug-in for a certain video stream may be unreachable or unreadable. Therefore, during video analysis it may also be determined whether the data output by each decoding function is acquired within a first preset duration (for example, the value of the aforementioned batched-push-timeout); if the data output by a first decoding function is not acquired within the first preset duration, the data output by the second decoding functions is aggregated, where the second decoding functions are the decoding functions in the decoding function list other than the first decoding function.
Alternatively, if the data output by the first decoding function is not acquired within the first preset duration, then before aggregating the data output by the second decoding functions, the data output by the first decoding function may be requested multiple times at a preset period interval; if the number of acquisition failures exceeds a preset number, the first decoding function is deleted from the decoding function list. In this way, for real-time video streams that are prone to network fluctuation, an automatic timed reconnection mechanism is added alongside video stream state monitoring, improving service stability.
Specifically, the state parameter of the first decoding function is first set to the NULL state; after a second preset duration, the state parameter of the first decoding function is set back to the playing state, and once the switch to the playing state is confirmed, an attempt is made to acquire the data output by the first decoding function. If the number of acquisition failures exceeds the preset number, the first decoding function is deleted from the decoding function list.
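The timed reconnection loop can be sketched as follows, again as a simulation rather than real pipeline code: the NULL-then-PLAYING cycle is collapsed into two assignments (the real mechanism waits the second preset duration between them), and `read_output` is a hypothetical callback that returns decoded data or `None`.

```python
def try_reconnect(dec: dict, read_output, max_attempts: int) -> bool:
    """Cycle the decoder NULL -> PLAYING and retry reading its output.

    Returns False once the failure count reaches max_attempts, signalling
    that the decoder should be deleted from the decoding-function list.
    """
    failures = 0
    while failures < max_attempts:
        dec["state"] = "NULL"     # first set the state parameter to NULL
        dec["state"] = "PLAYING"  # after the second preset duration, play again
        if read_output(dec) is not None:
            return True           # output recovered; keep the decoder
        failures += 1
    return False


dec = {"id": 3, "state": "PLAYING"}
dead = try_reconnect(dec, lambda d: None, max_attempts=3)
```

Bounding the retries keeps one unreachable camera from stalling aggregation indefinitely, while still tolerating brief network fluctuation.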
Step 108: video analysis is performed based on the aggregated data.
In this embodiment of the present application, after the aggregated data is obtained, it may be analyzed by an intelligent analysis pipeline. The pipeline generally includes the nvinfer algorithm inference plug-in, the nvtracker target tracking plug-in, the nvvidconv format conversion plug-in, the nvosd result rendering plug-in, the nvmsgconv message conversion plug-in and the nvmsgbroker message transmission plug-in. The function of each plug-in is as follows:
nvinfer performs neural network inference using TensorRT. TensorRT optimizes inference in several ways. Weight precision: parameters may be stored as FP32, FP16 or INT8; lower precision reduces memory footprint and latency, shrinks the model, and greatly speeds up inference. Layer fusion: during inference, the operations of each layer run on the GPU, which launches different Compute Unified Device Architecture (CUDA) kernels for the computation. The kernels themselves are fast, but a great deal of time is wasted launching them and reading and writing each layer's inputs and outputs, creating a memory-bandwidth bottleneck and wasting GPU resources. TensorRT therefore fuses layers horizontally or vertically, greatly reducing the layer count: vertical fusion merges a convolution layer, a bias layer and an activation layer into a single CBR structure occupying only one CUDA kernel, while horizontal fusion merges layers with the same structure but different weights into a single wider layer, also occupying only one CUDA kernel. The fused graph has fewer layers and fewer kernel launches, making the whole model smaller, faster and more efficient. Multi-stream execution: GPUs excel at parallel computation across different threads and blocks, and multi-stream execution hides data-transfer time: the GPU splits a large block of data into smaller chunks, and while the first chunk is being computed, the second chunk is already being transferred, so transfer time is hidden inside computation time.
Dynamic tensor memory: TensorRT assigns device memory to each tensor only for the duration of its use, avoiding repeated memory allocations, reducing memory consumption and improving reuse efficiency. Kernel auto-tuning: TensorRT adjusts the CUDA kernels according to the algorithm, the network model and the GPU platform, ensuring that the current model computes with optimal performance on the specific platform.
nvtracker tracks the targets obtained from nvinfer using a configurable target tracking algorithm; the available trackers include IOU, KLT and NvDCF;
nvosd draws the algorithm processing results onto the original frame;
nvvidconv performs image format conversion;
nvmsgconv and nvmsgbroker are used together to convert the analysis results into a custom format and send them to the cloud server.
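Putting the plug-ins together, the overall N-stream pipeline shape described above can be expressed as a gst-launch-style string. The builder below only assembles the description; it does not run a pipeline, the property values are illustrative, and real DeepStream elements need additional configuration (config files, caps, pad linking) before such a description would launch.

```python
def build_pipeline_description(num_streams: int) -> str:
    """Assemble a gst-launch-style string for the N-stream analysis pipeline:
    one uridecodebin per stream, merged by nvstreammux, then the
    inference/tracking/rendering/messaging chain from the description."""
    sources = " ".join(
        f"uridecodebin name=dec{i} ! mux.sink_{i}" for i in range(num_streams)
    )
    tail = (
        "nvstreammux name=mux batched-push-timeout=40000 ! "
        "nvinfer ! nvtracker ! nvvidconv ! nvosd ! nvmsgconv ! nvmsgbroker"
    )
    return f"{sources} {tail}"


desc = build_pipeline_description(2)
```

Because each stream is its own named branch feeding a distinct `mux.sink_i` pad, adding or removing a stream only touches one branch, which is exactly why the add/delete operations leave the rest of the pipeline running.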
In a specific implementation, compared with a traditional target-detection deployment, the TensorRT-plus-GStreamer deployment greatly improves model analysis efficiency: multiple video streams can be structurally analyzed and their results output simultaneously, and video streams can be added or deleted dynamically while the video analysis service is running. For intelligent analysis services deployed at large scale, this avoids frequent service restarts and makes the system more flexible.
In some other embodiments, if it is detected that no valid video stream exists in the video analysis service, the video analysis pipeline is automatically stopped and exited to save resources and improve performance.
In some other embodiments, after video analysis is performed on the aggregated data, the video analysis results may also be sent to Kafka in real time for real-time presentation on a front-end page.
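A minimal sketch of that publishing step. A real deployment would use an actual Kafka client (e.g. kafka-python's `KafkaProducer`); here an in-memory stand-in with the same `send()` shape keeps the example self-contained, and the topic name and message schema are assumptions, not from this document.

```python
import json

class StubProducer:
    """In-memory stand-in for a Kafka producer (same send() call shape)."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, value))

def publish_result(producer, result, topic="video-analysis"):
    """Serialize one analysis result and push it to the given topic."""
    producer.send(topic, json.dumps(result).encode("utf-8"))
```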
Based on the same inventive concept, an embodiment of the present application provides a video analysis apparatus that can implement the corresponding functions of the video analysis method. The video analysis apparatus may be a hardware structure, a software module, or a combination of a hardware structure and a software module. The apparatus may be implemented by a chip system, which may consist of a chip alone or may include a chip together with other discrete devices. Referring to fig. 3, the video analysis apparatus includes a processing module 301 and a communication module 302. Wherein:
a processing module 301, configured to determine whether a preset task indication exists; the preset task indication is used for indicating that a video stream is to be added or deleted, and carries the video stream address corresponding to that video stream;
the processing module 301 is further configured to, when the preset task indication exists and indicates that a video stream is to be added, obtain video data sent by the client corresponding to the video stream address, generate a decoding function based on the video data, set the state parameter of the decoding function to a playing state, and add the decoding function to a decoding function list;
the processing module 301 is further configured to, when the preset task indication exists and indicates that a video stream is to be deleted, obtain the decoding function corresponding to the video stream, set the state parameter of the decoding function to a NULL state, and delete the decoding function from the decoding function list;
the processing module 301 is further configured to aggregate data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module 301 is further configured to perform video analysis based on the aggregated data.
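The add/delete/aggregate behaviour of the processing module can be sketched in pure Python. This is illustrative only: the class and method names are invented for the sketch, and the real decoding functions would be GStreamer elements rather than plain objects.

```python
from dataclasses import dataclass

PLAYING, NULL = "PLAYING", "NULL"

@dataclass
class Decoder:
    """Stand-in for one decoding function and its state parameter."""
    stream_url: str
    state: str = PLAYING

class DecoderList:
    def __init__(self):
        self._decoders = {}

    def add_stream(self, url):
        """Adding a stream creates a decoder already in the playing state."""
        self._decoders[url] = Decoder(url)

    def delete_stream(self, url):
        """Deleting a stream sets the decoder to NULL and drops it from the list."""
        dec = self._decoders.pop(url)
        dec.state = NULL
        return dec

    def aggregate(self, frames_by_url):
        """Collect one output frame per remaining decoder into a batch."""
        return [frames_by_url[u] for u in self._decoders if u in frames_by_url]
```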
Optionally, the apparatus further includes a communication module 302, and the processing module 301 is further configured to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, controlling the communication module 302 to send indication information to the client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting the video data.
Optionally, the processing module 301 is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset time length;
if the data output by the first decoding function is not acquired within the first preset time, aggregating the data output by the second decoding function; wherein the second decoding function is a decoding function in the decoding function list except the first decoding function.
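The timeout rule above can be sketched as follows, assuming a `fetch` callable (hypothetical, not from this document) that returns a decoder's frame or raises `TimeoutError` when the first preset duration elapses without output.

```python
def aggregate_with_timeout(decoders, fetch, timeout_s=1.0):
    """Aggregate outputs from all decoders that produce a frame in time;
    decoders that time out are skipped and reported for later handling."""
    batch, timed_out = [], []
    for dec in decoders:
        try:
            batch.append(fetch(dec, timeout=timeout_s))
        except TimeoutError:
            timed_out.append(dec)   # candidate for retry or removal
    return batch, timed_out
```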
Optionally, the processing module 301 is specifically configured to:
acquiring data output by the first decoding function for multiple times according to a preset period interval;
and if the acquisition failure times exceed the preset times, deleting the first decoding function from the decoding function list.
Optionally, the processing module 301 is specifically configured to:
setting a state parameter of the first decoding function to the NULL state;
after a second preset time length, setting the state parameter of the first decoding function as the playing state; the second preset time length is less than the first preset time length;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
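The retry and state-reset behaviour just described can be combined into one sketch: the stalled decoder is bounced from NULL back to the playing state, re-polled, and removed from the list once failures exceed the preset count. All names are illustrative assumptions; `fetch` raises `TimeoutError` on failure.

```python
def recover_or_remove(decoder, fetch, decoder_list, max_failures=3):
    """Try to revive a stalled decoder; drop it after repeated failures."""
    failures = 0
    while failures < max_failures:
        decoder.state = "NULL"       # reset the decoding function...
        decoder.state = "PLAYING"    # ...then restart it
        try:
            return fetch(decoder)    # success: keep the decoder in the list
        except TimeoutError:
            failures += 1
    decoder_list.remove(decoder)     # persistent failure: delete the stream
    return None
```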
For all details of the steps in the video analysis method embodiment, reference may be made to the functional description of the corresponding functional modules of the video analysis apparatus above; they are not repeated here.
The division into modules in the embodiments of the present application is schematic and represents only one way of dividing logical functions; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module.
Based on the same inventive concept, an embodiment of the present application provides an electronic device. Referring to fig. 4, the electronic device includes at least one processor 401 and a memory 402 connected to the at least one processor. The specific connection medium between the processor 401 and the memory 402 is not limited in this embodiment; in fig. 4 they are connected by a bus 400, represented by a thick line, and the connection manner between other components is shown only schematically and is not limiting. The bus 400 may be divided into an address bus, a data bus, a control bus, and so on; for ease of illustration it is drawn as a single thick line in fig. 4, but this does not mean that there is only one bus or only one type of bus.
In the embodiment of the present application, the memory 402 stores instructions executable by the at least one processor 401, and the at least one processor 401 may execute the steps included in the video analysis method by executing the instructions stored in the memory 402.
The processor 401 is a control center of the electronic device, and may connect various portions of the whole electronic device by using various interfaces and lines, and perform various functions and process data of the electronic device by operating or executing instructions stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring on the electronic device. Optionally, the processor 401 may include one or more processing units, and the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401. In some embodiments, processor 401 and memory 402 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 401 may be a general-purpose processor, such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the video analysis method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
By programming the processor 401, the code corresponding to the video analysis method described in the foregoing embodiments can be fixed in the chip, so that the chip can execute the steps of the video analysis method at run time. How to program the processor 401 is well known to those skilled in the art and is not described here.
Based on the same inventive concept, embodiments of the present application further provide a computer-readable storage medium storing computer instructions, which, when executed on a computer, cause the computer to perform the steps of the video analysis method as described above.
In some possible embodiments, the aspects of the video analysis method provided in the present application may also be implemented in the form of a program product, which includes program code for causing a detection device to perform the steps in the video analysis method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on an electronic device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A method of video analysis, the method comprising:
determining whether a preset task indication exists; the preset task indication is used for indicating that a video stream is to be added or deleted, and the preset task indication carries a video stream address corresponding to the video stream;
if the preset task indication exists and indicates that a video stream is to be added, obtaining video data sent by a client corresponding to the video stream address, generating a decoding function based on the video data, setting a state parameter of the decoding function to a playing state, and adding the decoding function to a decoding function list;
if the preset task indication exists and indicates that a video stream is to be deleted, obtaining the decoding function corresponding to the video stream, setting the state parameter of the decoding function to a NULL state, and deleting the decoding function from the decoding function list;
aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
performing video analysis based on the aggregated data.
2. The method of claim 1, wherein after the setting the state parameter of the decoding function to a NULL state, further comprising:
monitoring state information of the decoding function;
if the state of the decoding function is switched to the NULL state, sending indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting the video data.
3. The method of claim 1, wherein said aggregating data output by each decoding function in said list of decoding functions comprises:
determining whether the data output by each decoding function is acquired within a first preset time length;
if the data output by the first decoding function is not acquired within the first preset time, aggregating the data output by the second decoding function; wherein the second decoding function is a decoding function in the decoding function list except the first decoding function.
4. The method of claim 3, wherein prior to aggregating the data output by the second decoding function, further comprising:
acquiring data output by the first decoding function for multiple times according to a preset period interval;
and if the acquisition failure times exceed the preset times, deleting the first decoding function from the decoding function list.
5. The method of claim 4, wherein said obtaining data output by said first decoding function comprises:
setting a state parameter of the first decoding function to the NULL state;
after a second preset time length, setting the state parameter of the first decoding function as the playing state; the second preset time length is less than the first preset time length;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
6. A video analysis apparatus, characterized in that the apparatus comprises:
the processing module is used for determining whether a preset task indication exists; the preset task indication is used for indicating that a video stream is to be added or deleted, and the preset task indication carries a video stream address corresponding to the video stream;
the processing module is further configured to, when the preset task indication exists and the preset task indication indicates that a video stream is to be added, acquire video data sent by a client corresponding to the video stream address, generate a decoding function based on the video data, set a state parameter of the decoding function to a playing state, and add the decoding function to a decoding function list;
the processing module is further configured to, when the preset task indication exists and the preset task indication is a deletion video stream, acquire a decoding function corresponding to the video stream, set a state parameter of the decoding function to a NULL state, and delete the decoding function from a decoding function list;
the processing module is further configured to aggregate data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module is further configured to perform video analysis based on the aggregated data.
7. The apparatus of claim 6, wherein the apparatus further comprises a communication module, and wherein the processing module is further configured to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, controlling a communication module to send indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting the video data.
8. The apparatus of claim 6, wherein the processing module is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset time length;
if the data output by the first decoding function is not acquired within the first preset time, aggregating the data output by the second decoding function; wherein the second decoding function is a decoding function in the decoding function list except the first decoding function.
9. An electronic device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps comprised by the method of any one of claims 1 to 5 in accordance with the obtained program instructions.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111583369.6A CN114466227B (en) | 2021-12-22 | 2021-12-22 | Video analysis method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114466227A true CN114466227A (en) | 2022-05-10 |
CN114466227B CN114466227B (en) | 2023-08-04 |
Family
ID=81405855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111583369.6A Active CN114466227B (en) | 2021-12-22 | 2021-12-22 | Video analysis method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114466227B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN114928724B (en) * | 2022-05-31 | 2023-11-24 | 浙江宇视科技有限公司 | Image output control method, system, electronic equipment and storage medium |
CN115131730A (en) * | 2022-06-28 | 2022-09-30 | 苏州大学 | Intelligent video analysis method and system based on edge terminal |
CN115131730B (en) * | 2022-06-28 | 2023-09-12 | 苏州大学 | Intelligent video analysis method and system based on edge terminal |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102238384A (en) * | 2011-04-08 | 2011-11-09 | 金诗科技有限公司 | Multi-channel video decoder |
US20130077680A1 (en) * | 2011-09-23 | 2013-03-28 | Ye-Kui Wang | Decoded picture buffer management |
CN103024388A (en) * | 2012-12-17 | 2013-04-03 | 广东威创视讯科技股份有限公司 | Method and system for decoding multipicture video in real time |
CN106506483A (en) * | 2016-10-24 | 2017-03-15 | 浙江宇视科技有限公司 | Video source group synchronized playback method and device based on ONVIF |
CN107169480A (en) * | 2017-06-28 | 2017-09-15 | 华中科技大学 | A kind of distributed character identification system of live video stream |
CN110572622A (en) * | 2019-09-30 | 2019-12-13 | 威创集团股份有限公司 | Video decoding method and device |
CN110650347A (en) * | 2019-10-24 | 2020-01-03 | 腾讯云计算(北京)有限责任公司 | Multimedia data processing method and device |
CN111436007A (en) * | 2019-01-11 | 2020-07-21 | 深圳市茁壮网络股份有限公司 | Multimedia program playing method and device and set top box |
CN112218140A (en) * | 2020-09-02 | 2021-01-12 | 中国第一汽车股份有限公司 | Video synchronous playing method, device, system and storage medium |
CN113271493A (en) * | 2021-04-06 | 2021-08-17 | 中国电子科技集团公司第十五研究所 | Video stream decoding method and computer-readable storage medium |
JP2021145343A (en) * | 2016-02-16 | 2021-09-24 | フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ | Efficient adaptive streaming |
CN113674188A (en) * | 2021-08-04 | 2021-11-19 | 深圳中兴网信科技有限公司 | Video analysis method and device, electronic equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
闵行; 褚晶辉; 刘子玉; 俞滢: "Design of real-time multi-picture playback software for DTV multi-program transport streams" ("DTV多节目传送流实时多画面播放软件设计"), Video Engineering (电视技术), no. 21 *
Also Published As
Publication number | Publication date |
---|---|
CN114466227B (en) | 2023-08-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||