CN114466227B - Video analysis method and device, electronic equipment and storage medium - Google Patents

Video analysis method and device, electronic equipment and storage medium

Info

Publication number
CN114466227B
CN114466227B (application number CN202111583369.6A)
Authority
CN
China
Prior art keywords
decoding function
video
state
video stream
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111583369.6A
Other languages
Chinese (zh)
Other versions
CN114466227A (en)
Inventor
王雨婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202111583369.6A priority Critical patent/CN114466227B/en
Publication of CN114466227A publication Critical patent/CN114466227A/en
Application granted granted Critical
Publication of CN114466227B publication Critical patent/CN114466227B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics

Abstract

The embodiment of the invention provides a video analysis method, a video analysis device, electronic equipment and a storage medium, which are used for ensuring the real-time performance and flexibility of video stream analysis. The method comprises the following steps: determining whether a preset task instruction exists; if a preset task instruction exists and the preset task instruction is to add video streams, video data sent by a client corresponding to the video stream address is obtained, a decoding function is generated based on the video data, state parameters of the decoding function are set to be play states, and the decoding function is added to a decoding function list; if a preset task instruction exists and the preset task instruction is to delete the video stream, acquiring a decoding function corresponding to the video stream, setting a state parameter of the decoding function to be a NULL state, and deleting the decoding function from the decoding function list; aggregating the data output by each decoding function in the decoding function list to obtain aggregated data; video analysis is performed based on the aggregate data.

Description

Video analysis method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of real-time video analysis technologies, and in particular, to a video analysis method, apparatus, electronic device, and storage medium.
Background
In recent years, with the gradual popularization of high-definition video surveillance, the video surveillance market has already reached the stage of "being able to see" and "seeing clearly", and is now developing toward "understanding what is seen". With the continuous investment of governments and enterprises in smart-city construction and public-security monitoring, more and more video surveillance devices are deployed and massive amounts of video data are generated, which in turn pushes the security industry toward intelligence and big data.
At present, researchers have carried out extensive work on intelligent surveillance technologies such as motion detection, target tracking, video segmentation and behavior recognition, and have achieved notable results. Intelligent video analysis has become an emerging research hotspot and development direction in both academia and industry, covering applications such as face recognition, vehicle structural recognition, abnormal behavior analysis, passenger flow statistics and video summarization.
However, although the accuracy of intelligent video analysis algorithms keeps improving, problems such as low speed and high cost remain. Parameters such as the video stream address, the algorithm model and the alarm mode must be set before a video analysis job is started; if the video stream address or the algorithm model needs to be changed midway, in most cases the configuration has to be modified and the service restarted, which is cumbersome and inefficient.
Therefore, how to dynamically add and delete video streams while the service is running, reduce the impact of such changes on the whole service, and guarantee the real-time performance and flexibility of video stream analysis is a problem that needs to be solved at present.
Disclosure of Invention
The embodiment of the invention provides a video analysis method, a device, electronic equipment and a storage medium, which are used for dynamically adding and deleting video streams in the service operation process, reducing the influence of video stream changes on the whole service and ensuring the real-time performance and flexibility of video stream analysis.
In a first aspect, a video analysis method is provided, the method comprising:
determining whether a preset task instruction exists; the preset task indication is used for indicating to add or delete video streams, and the preset task indication carries video stream addresses corresponding to the video streams;
if the preset task indication exists and the preset task indication is that the video stream is added, video data sent by a client corresponding to the video stream address is obtained, a decoding function is generated based on the video data, state parameters of the decoding function are set to be in a playing state, and the decoding function is added to a decoding function list;
if the preset task indication exists and the preset task indication is that the video stream is deleted, acquiring a decoding function corresponding to the video stream, setting a state parameter of the decoding function to be a NULL state, and deleting the decoding function from a decoding function list;
aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
and performing video analysis based on the aggregate data.
Optionally, after the setting the state parameter of the decoding function to the NULL state, the method further includes:
monitoring state information of the decoding function;
if the state of the decoding function is switched to the NULL state, sending indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting video data.
Optionally, the aggregating the data output by each decoding function in the decoding function list includes:
determining whether the data output by each decoding function is acquired within a first preset duration;
if the data output by the first decoding function is not obtained within the first preset duration, aggregating the data output by the second decoding function; the second decoding function is a decoding function except the first decoding function in the decoding function list.
Optionally, before the aggregating the data output by the second decoding function, the method further includes:
acquiring data output by the first decoding function for a plurality of times according to a preset period interval;
and if the number of acquisition failures exceeds the preset number, deleting the first decoding function from the decoding function list.
Optionally, the acquiring the data output by the first decoding function includes:
setting a state parameter of the first decoding function to the NULL state;
after a second preset duration, setting the state parameter of the first decoding function to be the playing state; the second preset duration is smaller than the first preset duration;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
In a second aspect, there is provided a video analysis apparatus, the apparatus comprising:
the processing module is used for determining whether a preset task instruction exists or not; the preset task indication is used for indicating to add or delete video streams, and the preset task indication carries video stream addresses corresponding to the video streams;
the processing module is further configured to obtain video data sent by a client corresponding to the video stream address when the preset task indication exists and the preset task indication is adding a video stream, generate a decoding function based on the video data, set a state parameter of the decoding function to a play state, and add the decoding function to a decoding function list;
the processing module is further configured to obtain a decoding function corresponding to the video stream when the preset task indication exists and the preset task indication is deleting the video stream, set a state parameter of the decoding function to be a NULL state, and delete the decoding function from a decoding function list;
the processing module is further used for aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module is also used for carrying out video analysis based on the aggregate data.
Optionally, the apparatus further includes a communication module, and the processing module is further configured to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, the control communication module sends indication information to the client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting video data.
Optionally, the processing module is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset duration;
if the data output by the first decoding function is not obtained within the first preset duration, aggregating the data output by the second decoding function; the second decoding function is a decoding function except the first decoding function in the decoding function list.
Optionally, the processing module is specifically configured to:
acquiring data output by the first decoding function for a plurality of times according to a preset period interval;
and if the number of acquisition failures exceeds the preset number, deleting the first decoding function from the decoding function list.
Optionally, the processing module is specifically configured to:
setting a state parameter of the first decoding function to the NULL state;
after a second preset duration, setting the state parameter of the first decoding function to be the playing state; the second preset duration is smaller than the first preset duration;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
In a third aspect, an electronic device is provided, the electronic device comprising:
a memory for storing program instructions;
and a processor, configured to call the program instructions stored in the memory, and execute the steps included in the method according to any one of the first aspect according to the obtained program instructions.
In a fourth aspect, there is provided a computer readable storage medium storing computer executable instructions for causing a computer to perform the steps comprised by the method of any one of the first aspects.
In a fifth aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the video analysis method described in the various possible implementations described above.
In the embodiment of the application, whether a preset task instruction exists or not is determined, wherein the preset task instruction is used for indicating to add or delete video streams, and the preset task instruction carries video stream addresses corresponding to the video streams; if the preset task indication exists and the preset task indication is that the video stream is added, video data sent by a client corresponding to a video stream address is obtained, a decoding function is generated based on the video data, state parameters of the decoding function are set to be in a playing state, and the decoding function is added to a decoding function list; if the preset task indication exists and the preset task indication is that the video stream is deleted, obtaining a decoding function corresponding to the video stream, setting a state parameter of the decoding function to be a NULL state, and deleting the decoding function from a decoding function list; aggregating the data output by each decoding function in the decoding function list to obtain aggregated data; video analysis is performed based on the aggregate data.
That is, when a video stream needs to be added, a corresponding decoding function is generated (i.e., video data corresponding to one video stream corresponds to one decoding function), then the generated decoding function is added to a decoding function list, data output by each function in the decoding function list is aggregated, video analysis is performed based on an aggregation result, and when the video stream is deleted, the corresponding decoding function is directly deleted, so that in the process of adding and deleting the video stream, the influence of video stream change on the whole service is reduced, and the real-time performance and flexibility of video stream analysis are effectively ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present application.
Fig. 1 is a flowchart of a video analysis method according to an embodiment of the present application;
FIG. 2 is a flowchart for acquiring and analyzing an rtsp video stream according to an embodiment of the present application;
fig. 3 is a block diagram of a video analysis device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure. Embodiments and features of embodiments in this application may be combined with each other arbitrarily without conflict. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
The terms first and second in the description and claims of the present application and in the above-described figures are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The term "plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited in this regard.
In addition, the term "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. The character "/" herein generally indicates that the associated object is an "or" relationship unless otherwise specified.
The video analysis method provided by the embodiment of the application is described below with reference to the accompanying drawings. Referring to fig. 1, the flow of the video analysis method in the embodiment of the present application is described as follows:
step 101: determining whether a preset task instruction exists;
the preset task indication is used for indicating to add or delete the video stream, and the preset task indication carries a video stream address corresponding to the video stream, for example, the preset task indication adds the video stream, and the preset task indication also carries a video stream address corresponding to the video stream to be added. In this embodiment of the present application, when a video stream needs to be added or deleted, an instruction for adding or deleting a video stream may be sent, so in the process of performing video analysis, task monitoring is needed to determine whether a preset task instruction exists, if the preset task instruction is detected and the preset task instruction is to add a video stream, step 102 is performed, and if the preset task instruction is detected and the preset task instruction is to delete a video stream, step 105 is performed.
Step 102: acquiring video data sent by a client corresponding to a video stream address;
step 103: generating a decoding function based on the video data;
in the embodiment of the present application, after the video data is acquired, a corresponding decoding function (for example, a uri codebin plug-in is used for decoding the acquired video data and automatically generating a corresponding decoder) is generated according to the codec format of the video data. In one possible implementation, after the URI codebin plugin is generated, a unique ID may be assigned to the plugin, a uniform resource identifier (Uniform Resource Identifier, URI) is set to an acquired video stream address (i.e. a video stream address corresponding to an added video stream), a corresponding video plugin is generated through a codec format of the acquired video data, and is connected to a decoder, and the decoder automatically loads subsequent plugins such as decapsulation and decoding through the acquired video format (e.g. video file mp4, real-time video stream rtsp and rtmp, etc.). As in fig. 2, typed plug-in is used to analyze video stream type, qtdiemux plug-in is used to split audio and video, h264 and h265 burst plug-ins are used to parse video, capsfliter plug-in is used to filter format, nvv l2decoder is used to decode video.
Step 104: setting the state parameters of the decoding functions to be play states, and adding the decoding functions to a decoding function list;
in the embodiment of the present application, after the urecodebin plugin is generated, the state parameter of the urecodebin plugin is set to a play state (for example, the state parameter of the urecodebin plugin is set to play), and the urecodebin plugin is added to a decoding function list (i.e., a urecodebin plugin list, where the urecodebin plugin list includes the urecodebin plugins corresponding to all video streams that are being subjected to video analysis).
Step 105: obtaining a decoding function corresponding to a video stream;
step 106: setting the state parameter of the decoding function to be NULL state, and deleting the decoding function from the decoding function list;
in the embodiment of the application, the state parameter of the uri codebin plugin of the video stream corresponding to the ID to be deleted is set to be in a NULL state, and the uri codebin plugin is deleted from the url codebin plugin list.
In one possible implementation, after the state parameter of the uridecodebin plug-in is set to the NULL state, the state information of the plug-in may be monitored; if the state of the plug-in has switched to the NULL state (i.e. the state switch succeeded), indication information instructing the client to stop transmitting video data is sent to the client corresponding to the video stream. For example, when the information that the state change succeeded is obtained, a STOP indication is sent to the client corresponding to the video stream, so that the client stops transmitting the video stream and the corresponding pad resource is released. A sketch of this deletion flow is given below.
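A corresponding deletion sketch for steps 105-106 and the STOP indication described above might look like this; `notify_client_stop` is a hypothetical helper standing in for whatever signalling channel tells the client to stop sending data, and `decoders`, `streammux` and `pipeline` are the objects from the earlier sketches.

```python
def delete_stream(stream_id: int):
    """Steps 105-106: set the decoder to NULL, wait for the switch, then remove it."""
    decode_bin = decoders.pop(stream_id, None)
    if decode_bin is None:
        return
    ret = decode_bin.set_state(Gst.State.NULL)            # state parameter -> NULL
    if ret == Gst.StateChangeReturn.ASYNC:
        # monitor the state switch; block (bounded) until NULL is actually reached
        decode_bin.get_state(5 * Gst.SECOND)
    # release the muxer sink pad held by this stream and drop the element
    sinkpad = streammux.get_static_pad(f"sink_{stream_id}")
    if sinkpad is not None:
        streammux.release_request_pad(sinkpad)
    pipeline.remove(decode_bin)
    notify_client_stop(stream_id)   # hypothetical: send the STOP indication to the client
```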
Step 107: aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
In this embodiment of the present application, the data output by each decoding function in the decoding function list is aggregated; for example, the decoded output of all uridecodebin plug-ins is aggregated by an nvstreammux plug-in. The nvstreammux plug-in can aggregate data from multiple input channels and prepare it for batch processing by the algorithm: N video paths need N decoders, each video path corresponds to one decoder, and finally the N branches are merged by the nvstreammux plug-in and then connected to the inference plug-in. For batch processing, the batched-push-timeout property of the nvstreammux plug-in is set to 40 ms, calculated as batched-push-timeout = 1/Max(fps), where fps is the frame rate (frames per second) and Max(fps) is the frame rate of the fastest path among all video streams; with a fastest stream of 25 fps this gives 1/25 s = 40 ms. A configuration sketch follows.
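The nvstreammux configuration described above can be sketched as follows, reusing the `pipeline` object from the earlier sketches; the batch size and resolution are illustrative values, and note that the batched-push-timeout property is expressed in microseconds, so 40 ms corresponds to 40000.

```python
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 16)        # maximum number of streams batched together
streammux.set_property("width", 1920)           # resolution the batched frames are scaled to
streammux.set_property("height", 1080)
streammux.set_property("live-source", 1)        # inputs are live rtsp/rtmp streams

# batched-push-timeout = 1 / Max(fps); with a fastest stream of 25 fps this is
# 1/25 s = 40 ms = 40000 microseconds
max_fps = 25
streammux.set_property("batched-push-timeout", int(1_000_000 / max_fps))
pipeline.add(streammux)
```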
In a specific implementation process, after a video stream is added, the newly added video stream and the original video streams are batch-processed together by the nvstreammux plug-in; during the adding process, the original video streams continue to be analysed and are not affected by the add operation. When a video stream is deleted, the uridecodebin plug-in corresponding to that video stream is deleted; during the deletion, the video stream to be deleted is disconnected and released, so the data output of the other uridecodebin plug-ins is not affected.
In a possible implementation, when the data output by each decoding function in the decoding function list is aggregated, the uridecodebin plug-in corresponding to a certain video stream may fail to connect or its output may not be readable. Therefore, during video analysis it may also be determined whether the data output by each decoding function is acquired within a first preset duration (for example, the value of the batched-push-timeout property mentioned above); if the data output by a first decoding function is not acquired within the first preset duration, the data output by the second decoding functions is aggregated, where the second decoding functions are the decoding functions in the decoding function list other than the first decoding function.
Alternatively, if the data output by the first decoding function is not obtained within the first preset duration, then before the data output by the second decoding functions is aggregated, the data output by the first decoding function may be fetched several times at a preset period interval, and if the number of failed attempts exceeds a preset number, the first decoding function is deleted from the decoding function list. In this way, for real-time video streams that are prone to network fluctuation, an automatic timed reconnection mechanism is added while the state of the video stream is monitored, which improves the stability of the service.
Specifically, the state parameter of the first decoding function is set to the NULL state, and after a second preset duration, the state parameter of the first decoding function is set to the play state; and when the state of the first decoding function is determined to be switched to the play state, acquiring data output by the first decoding function, and if the acquisition failure times exceed the preset times, deleting the first decoding function from a decoding function list.
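A sketch of the timed reconnection mechanism described in the last two paragraphs is given below; the retry interval, the second preset duration and the maximum retry count are assumed values, and `decoders` and `delete_stream` come from the earlier sketches.

```python
import time

SECOND_PRESET_DURATION_SEC = 2    # wait between NULL and PLAYING (second preset duration)
RETRY_INTERVAL_SEC = 10           # preset period interval between reconnection attempts
MAX_RETRIES = 5                   # preset number of allowed failures

def try_reconnect(stream_id: int) -> bool:
    """Flip the decoder to NULL, wait, flip it back to PLAYING; give up after MAX_RETRIES."""
    decode_bin = decoders.get(stream_id)
    if decode_bin is None:
        return False
    for _ in range(MAX_RETRIES):
        decode_bin.set_state(Gst.State.NULL)
        time.sleep(SECOND_PRESET_DURATION_SEC)
        decode_bin.set_state(Gst.State.PLAYING)
        ret, state, _ = decode_bin.get_state(5 * Gst.SECOND)
        if ret == Gst.StateChangeReturn.SUCCESS and state == Gst.State.PLAYING:
            return True                       # decoder reached PLAYING again: resume reading
        time.sleep(RETRY_INTERVAL_SEC)
    delete_stream(stream_id)                  # too many failures: remove it from the list
    return False
```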
Step 108: video analysis is performed based on the aggregate data.
In the embodiment of the application, after the aggregated data is acquired, intelligent analysis can be performed on it through an intelligent-analysis pipeline, which generally comprises an nvinfer algorithm inference plug-in, an nvtracker target tracking plug-in, an nvvidconv format conversion plug-in, an nvosd result rendering plug-in, an nvmsgconv message conversion plug-in and an nvmsgbroker message sending plug-in (an assembly sketch follows the plug-in descriptions below). The function of each plug-in is as follows:
nvinfer: performs neural-network inference with TensorRT. TensorRT mainly optimizes the weight precision, which can be FP32, FP16 or INT8; using lower precision reduces memory footprint and latency, makes the model smaller and greatly speeds up inference. It also performs layer fusion: when a model is deployed for inference, the computation of each layer is executed on the GPU, which launches different Compute Unified Device Architecture (CUDA) kernels. Because the kernels themselves run very fast, a large amount of time is wasted on kernel launches and on reading and writing the input and output of every layer, creating a memory-bandwidth bottleneck and wasting GPU resources. TensorRT therefore fuses layers vertically or horizontally, greatly reducing the number of layers: vertical fusion merges convolution, bias and activation layers into a single CBR structure that occupies only one CUDA kernel, while horizontal fusion merges layers that take the same input and perform the same operation with different weights into one wider layer. With fewer layers in the fused graph, fewer CUDA kernels are launched, so the whole model becomes smaller, faster and more efficient. TensorRT also uses multi-stream execution: the GPU is good at parallel computing and, besides different threads and blocks, can use different streams; with multiple streams the data-transfer time can be hidden, because a large block of data is split into smaller chunks so that while one chunk is being computed the next chunk is already being transferred, hiding the transfer time inside the computation time. With Dynamic Tensor Memory, TensorRT allocates device memory for each tensor only for the duration of its use, avoiding repeated allocation, reducing memory footprint and improving reuse efficiency. Finally, with kernel auto-tuning, TensorRT selects the CUDA kernels best suited to the algorithm, the network model and the GPU platform, so that the current model runs with optimal performance on the specific platform. (A minimal TensorRT engine-build sketch is given below.)
nvtracker: tracks the targets obtained from nvinfer through a configurable tracking algorithm, with three options: IOU, KLT and NvDCF;
nvosd: draws bounding boxes on the original image according to the algorithm processing results;
nvvidconv: performs image format conversion;
nvmsgconv and nvmsgbroker: used in combination, they convert the analysis data into a custom format and send it to the cloud server.
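Putting the plug-ins listed above together, the analysis branch of the pipeline can be assembled roughly as follows; the nvinfer configuration file, the Kafka proto-lib path and the connection string are illustrative values, and the exact property set of each DeepStream plug-in depends on the installed version.

```python
def make(factory: str, name: str) -> Gst.Element:
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        raise RuntimeError(f"failed to create {factory}")
    return elem

pgie      = make("nvinfer",        "primary-inference")   # TensorRT inference
tracker   = make("nvtracker",      "tracker")             # IOU / KLT / NvDCF tracking
conv      = make("nvvideoconvert", "converter")           # format conversion ("nvvidconv" above)
osd       = make("nvdsosd",        "osd")                 # draw boxes from the algorithm results
msgconv   = make("nvmsgconv",      "msgconv")             # convert analysis data to a payload
msgbroker = make("nvmsgbroker",    "msgbroker")           # send the payload, e.g. to Kafka

pgie.set_property("config-file-path", "pgie_config.txt")  # illustrative nvinfer config file
msgbroker.set_property("proto-lib", "/opt/nvidia/deepstream/lib/libnvds_kafka_proto.so")
msgbroker.set_property("conn-str", "localhost;9092")      # Kafka broker host;port (assumed)
msgbroker.set_property("topic", "video-analysis")         # topic consumed by the front-end page

for elem in (pgie, tracker, conv, osd, msgconv, msgbroker):
    pipeline.add(elem)
streammux.link(pgie)
pgie.link(tracker)
tracker.link(conv)
conv.link(osd)
osd.link(msgconv)
msgconv.link(msgbroker)
```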
In a specific implementation process, compared with a traditional target-detection deployment, adopting a TensorRT plus GStreamer deployment greatly improves model analysis efficiency: multiple video streams can be fed in simultaneously for structured analysis and the results are output, and video streams can be dynamically added to or deleted from the pipeline while the analysis is running. For intelligent analysis services deployed at large scale, this avoids frequent service restarts and increases the flexibility of the system.
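For the TensorRT points mentioned in the nvinfer description above (lower-precision weights, layer fusion, kernel auto-tuning), a minimal engine-build sketch with the TensorRT Python API (8.x) could look like this; the ONNX path is illustrative and the INT8 path, which additionally needs a calibrator, is omitted.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_fp16_engine(onnx_path: str) -> bytes:
    """Parse an ONNX model and build a serialized TensorRT engine with FP16 enabled,
    letting TensorRT apply layer fusion and kernel auto-tuning for the local GPU."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"ONNX parse error: {parser.get_error(0)}")
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)      # lower-precision weights: FP32 -> FP16
    return builder.build_serialized_network(network, config)

# The serialized engine can then be written to disk and referenced from the
# nvinfer configuration file (e.g. its model-engine-file entry).
```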
In some other embodiments, if it is detected that there is no valid video stream in the video analytics service, the video analytics pipeline is automatically stopped and exited to save resources and improve performance.
In some other embodiments, after video analysis based on the aggregated data, the video analysis results may also be sent in real time to Kafka for real-time presentation on the front-end page.
Based on the same inventive concept, the embodiment of the application provides a video analysis device, which can realize the functions corresponding to the video analysis method. The video analytics device may be a hardware structure, a software module, or a hardware structure plus a software module. The video analysis device can be realized by a chip system, and the chip system can be composed of a chip or can contain the chip and other discrete devices. Referring to fig. 3, the video analysis device includes a processing module 301 and a communication module 302. Wherein:
a processing module 301, configured to determine whether a preset task instruction exists; the preset task indication is used for indicating to add or delete video streams, and the preset task indication carries video stream addresses corresponding to the video streams;
the processing module 301 is further configured to obtain video data sent by a client corresponding to the video stream address when the preset task indication exists and the preset task indication is to add a video stream, generate a decoding function based on the video data, set a state parameter of the decoding function to a play state, and add the decoding function to a decoding function list;
the processing module 301 is further configured to obtain a decoding function corresponding to the video stream when the preset task indication exists and the preset task indication is to delete the video stream, set a state parameter of the decoding function to a NULL state, and delete the decoding function from a decoding function list;
the processing module 301 is further configured to aggregate data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module 301 is further configured to perform video analysis based on the aggregate data.
Optionally, the apparatus further includes a communication module 302, and the processing module 301 is further configured to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, the control communication module 302 sends indication information to the client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting video data.
Optionally, the processing module 301 is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset duration;
if the data output by the first decoding function is not obtained within the first preset duration, aggregating the data output by the second decoding function; the second decoding function is a decoding function except the first decoding function in the decoding function list.
Optionally, the processing module 301 is specifically configured to:
acquiring data output by the first decoding function for a plurality of times according to a preset period interval;
and if the number of acquisition failures exceeds the preset number, deleting the first decoding function from the decoding function list.
Optionally, the processing module 301 is specifically configured to:
setting a state parameter of the first decoding function to the NULL state;
after a second preset duration, setting the state parameter of the first decoding function to be the playing state; the second preset duration is smaller than the first preset duration;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
All relevant contents of each step related to the foregoing embodiments of the video analysis method may be cited in the functional description of the functional module corresponding to the video analysis device in the embodiments of the present application, which is not described herein.
The division of the modules in the embodiments of the present application is schematically only one logic function division, and there may be another division manner in actual implementation, and in addition, each functional module in each embodiment of the present application may be integrated in one processor, or may exist separately and physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules.
Based on the same inventive concept, the embodiment of the application provides an electronic device. Referring to fig. 4, the electronic device includes at least one processor 401 and a memory 402 connected to the at least one processor. The specific connection medium between the processor 401 and the memory 402 is not limited in this embodiment; in fig. 4 the connection between the processor 401 and the memory 402 through a bus 400 is taken as an example, the bus 400 being shown as a thick line in fig. 4, while the connection manner between other components is merely illustrative and not limiting. The bus 400 may be divided into an address bus, a data bus, a control bus, etc.; for ease of illustration it is represented by only one thick line in fig. 4, but this does not mean that there is only one bus or only one type of bus.
In the embodiment of the present application, the memory 402 stores instructions executable by the at least one processor 401, and the at least one processor 401 may perform the steps included in the video analysis method by executing the instructions stored in the memory 402.
The processor 401 is the control center of the electronic device and may use various interfaces and lines to connect the various parts of the entire electronic device. By running or executing the instructions stored in the memory 402 and invoking the data stored in the memory 402, it implements the various functions of the electronic device and processes data, thereby monitoring the electronic device as a whole. Optionally, the processor 401 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, applications and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 401. In some embodiments, the processor 401 and the memory 402 may be implemented on the same chip; in some embodiments, they may also be implemented separately on their own chips.
The processor 401 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, which may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the video analysis method disclosed in connection with the embodiments of the present application may be directly embodied as a hardware processor executing, or may be executed by a combination of hardware and software modules in the processor.
The memory 402, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules. The memory 402 may include at least one type of storage medium, such as flash memory, hard disk, multimedia card, card memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), programmable read-only memory (Programmable Read Only Memory, PROM), read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic memory, magnetic disk, optical disk, and the like. The memory 402 may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 402 in the present embodiment may also be a circuit or any other device capable of implementing a storage function, and is used to store program instructions and/or data.
By programming the processor 401, the code corresponding to the video analysis method described in the foregoing embodiments may be burned into the chip, so that the chip can execute the steps of the video analysis method when running; how to program the processor 401 is a technique well known to those skilled in the art and is not repeated here.
Based on the same inventive concept, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the steps of the video analysis method as described above.
In some possible embodiments, aspects of the video analysis method provided herein may also be implemented in the form of a program product comprising program code for causing a detection device to perform the steps of the video analysis method according to various exemplary embodiments of the present application as described herein above when the program product is run on an electronic device.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method of video analysis, the method comprising:
in the video analysis process, determining whether a preset task instruction exists; the preset task indication is used for indicating to add or delete video streams, and the preset task indication carries video stream addresses corresponding to the video streams;
if the preset task indication exists and the preset task indication is that the video stream is added, video data sent by a client corresponding to the video stream address is obtained, a decoding function is generated based on the video data, state parameters of the decoding function are set to be in a playing state, and the decoding function is added to a decoding function list;
if the preset task indication exists and the preset task indication is that the video stream is deleted, acquiring a decoding function corresponding to the video stream, setting a state parameter of the decoding function to be a NULL state, and deleting the decoding function from a decoding function list;
aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
and performing video analysis based on the aggregate data.
2. The method of claim 1, wherein after setting the state parameter of the decoding function to a NULL state, further comprising:
monitoring state information of the decoding function;
if the state of the decoding function is switched to the NULL state, sending indication information to a client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting video data.
3. The method of claim 1, wherein aggregating the data output by each decoding function in the list of decoding functions comprises:
determining whether the data output by each decoding function is acquired within a first preset duration;
if the data output by the first decoding function is not obtained within the first preset duration, aggregating the data output by the second decoding function; the second decoding function is a decoding function except the first decoding function in the decoding function list.
4. The method of claim 3, wherein prior to aggregating the data output by the second decoding function, further comprising:
acquiring data output by the first decoding function for a plurality of times according to a preset period interval;
and if the number of acquisition failures exceeds the preset number, deleting the first decoding function from the decoding function list.
5. The method of claim 4, wherein the obtaining the data output by the first decoding function comprises:
setting a state parameter of the first decoding function to the NULL state;
after a second preset duration, setting the state parameter of the first decoding function to be the playing state; the second preset duration is smaller than the first preset duration;
and when the state of the first decoding function is determined to be switched to the playing state, acquiring data output by the first decoding function.
6. A video analysis device, the device comprising:
the processing module is used for determining whether a preset task instruction exists in the video analysis process; the preset task indication is used for indicating to add or delete video streams, and the preset task indication carries video stream addresses corresponding to the video streams;
the processing module is further configured to obtain video data sent by a client corresponding to the video stream address when the preset task indication exists and the preset task indication is adding a video stream, generate a decoding function based on the video data, set a state parameter of the decoding function to a play state, and add the decoding function to a decoding function list;
the processing module is further configured to obtain a decoding function corresponding to the video stream when the preset task indication exists and the preset task indication is deleting the video stream, set a state parameter of the decoding function to be a NULL state, and delete the decoding function from a decoding function list;
the processing module is further used for aggregating the data output by each decoding function in the decoding function list to obtain aggregated data;
the processing module is also used for carrying out video analysis based on the aggregate data.
7. The apparatus of claim 6, further comprising a communication module, the processing module further to:
monitoring state information of the decoding function;
when the state of the decoding function is switched to the NULL state, the control communication module sends indication information to the client corresponding to the video stream; the indication information is used for indicating the client to stop transmitting video data.
8. The apparatus of claim 6, wherein the processing module is specifically configured to:
determining whether the data output by each decoding function is acquired within a first preset duration;
if the data output by the first decoding function is not obtained within the first preset duration, aggregating the data output by the second decoding function; the second decoding function is a decoding function except the first decoding function in the decoding function list.
9. An electronic device, comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory and for performing the steps comprised in the method according to any of claims 1-5 in accordance with the obtained program instructions.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-5.
CN202111583369.6A 2021-12-22 2021-12-22 Video analysis method and device, electronic equipment and storage medium Active CN114466227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111583369.6A CN114466227B (en) 2021-12-22 2021-12-22 Video analysis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111583369.6A CN114466227B (en) 2021-12-22 2021-12-22 Video analysis method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114466227A CN114466227A (en) 2022-05-10
CN114466227B true CN114466227B (en) 2023-08-04

Family

ID=81405855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111583369.6A Active CN114466227B (en) 2021-12-22 2021-12-22 Video analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114466227B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114928724B (en) * 2022-05-31 2023-11-24 浙江宇视科技有限公司 Image output control method, system, electronic equipment and storage medium
CN115131730B (en) * 2022-06-28 2023-09-12 苏州大学 Intelligent video analysis method and system based on edge terminal

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238384A (en) * 2011-04-08 2011-11-09 金诗科技有限公司 Multi-channel video decoder
CN103024388A (en) * 2012-12-17 2013-04-03 广东威创视讯科技股份有限公司 Method and system for decoding multipicture video in real time
CN106506483A (en) * 2016-10-24 2017-03-15 浙江宇视科技有限公司 Video source group synchronized playback method and device based on ONVIF
CN107169480A (en) * 2017-06-28 2017-09-15 华中科技大学 A kind of distributed character identification system of live video stream
CN110572622A (en) * 2019-09-30 2019-12-13 威创集团股份有限公司 Video decoding method and device
CN110650347A (en) * 2019-10-24 2020-01-03 腾讯云计算(北京)有限责任公司 Multimedia data processing method and device
CN111436007A (en) * 2019-01-11 2020-07-21 深圳市茁壮网络股份有限公司 Multimedia program playing method and device and set top box
CN112218140A (en) * 2020-09-02 2021-01-12 中国第一汽车股份有限公司 Video synchronous playing method, device, system and storage medium
CN113271493A (en) * 2021-04-06 2021-08-17 中国电子科技集团公司第十五研究所 Video stream decoding method and computer-readable storage medium
JP2021145343A (en) * 2016-02-16 2021-09-24 フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ Efficient adaptive streaming
CN113674188A (en) * 2021-08-04 2021-11-19 深圳中兴网信科技有限公司 Video analysis method and device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9131245B2 (en) * 2011-09-23 2015-09-08 Qualcomm Incorporated Reference picture list construction for video coding

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238384A (en) * 2011-04-08 2011-11-09 金诗科技有限公司 Multi-channel video decoder
CN103024388A (en) * 2012-12-17 2013-04-03 广东威创视讯科技股份有限公司 Method and system for decoding multipicture video in real time
JP2021145343A (en) * 2016-02-16 2021-09-24 フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ Efficient adaptive streaming
CN106506483A (en) * 2016-10-24 2017-03-15 浙江宇视科技有限公司 Video source group synchronized playback method and device based on ONVIF
CN107169480A (en) * 2017-06-28 2017-09-15 华中科技大学 A kind of distributed character identification system of live video stream
CN111436007A (en) * 2019-01-11 2020-07-21 深圳市茁壮网络股份有限公司 Multimedia program playing method and device and set top box
CN110572622A (en) * 2019-09-30 2019-12-13 威创集团股份有限公司 Video decoding method and device
CN110650347A (en) * 2019-10-24 2020-01-03 腾讯云计算(北京)有限责任公司 Multimedia data processing method and device
CN112218140A (en) * 2020-09-02 2021-01-12 中国第一汽车股份有限公司 Video synchronous playing method, device, system and storage medium
CN113271493A (en) * 2021-04-06 2021-08-17 中国电子科技集团公司第十五研究所 Video stream decoding method and computer-readable storage medium
CN113674188A (en) * 2021-08-04 2021-11-19 深圳中兴网信科技有限公司 Video analysis method and device, electronic equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of real-time multi-picture playback software for DTV multi-program transport streams; 闵行; 褚晶辉; 刘子玉; 俞滢; Video Engineering (电视技术), No. 21; full text *

Also Published As

Publication number Publication date
CN114466227A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN114466227B (en) Video analysis method and device, electronic equipment and storage medium
CN111400008B (en) Computing resource scheduling method and device and electronic equipment
CN107577805B (en) Business service system for log big data analysis
CN111625452A (en) Flow playback method and system
US20180322419A1 (en) Model Driven Modular Artificial Intelligence Learning Framework
US11889135B2 (en) Video stream playing control method and apparatus, and storage medium
CN109542642A (en) A kind of method and device of front-end task processing
CN111090502A (en) Streaming data task scheduling method and device
CN113923472B (en) Video content analysis method, device, electronic equipment and storage medium
CN112911390B (en) Video data playing method and terminal equipment
CN114245173B (en) Image compression method, device, terminal equipment and storage medium
CN108880930A (en) A kind of detection method and equipment of network loop
CN115269519A (en) Log detection method and device and electronic equipment
CN113238855A (en) Path detection method and device
CN114490458A (en) Data transmission method, chip, server and storage medium
CN111782479A (en) Log processing method and device, electronic equipment and computer readable storage medium
CN113157475A (en) Log processing method and device, storage medium and electronic equipment
CN113992493A (en) Video processing method, system, device and storage medium
CN108279973B (en) Information statistical method and device and electronic equipment
CN111176860A (en) Method, system, computer storage medium and terminal for realizing trajectory analysis
US11967150B2 (en) Parallel video processing systems
CN109218801B (en) Information processing method, device and storage medium
CN117771657A (en) Cloud game response method, cloud game response device, computer equipment and storage medium
US20230186625A1 (en) Parallel video processing systems
CN114385440A (en) Logical partition utilization rate early warning method and device based on Linux server

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant