CN109669780B - Video analysis method and system - Google Patents

Video analysis method and system

Info

Publication number
CN109669780B
CN109669780B CN201811589989.9A
Authority
CN
China
Prior art keywords
analysis task
video
task
current
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811589989.9A
Other languages
Chinese (zh)
Other versions
CN109669780A (en)
Inventor
谢锦滨
张奕
李传朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jilian Network Technology Co Ltd
Original Assignee
Shanghai Jilian Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jilian Network Technology Co Ltd filed Critical Shanghai Jilian Network Technology Co Ltd
Priority to CN201811589989.9A priority Critical patent/CN109669780B/en
Publication of CN109669780A publication Critical patent/CN109669780A/en
Application granted granted Critical
Publication of CN109669780B publication Critical patent/CN109669780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a video analysis method and system, wherein the method comprises the following steps: when it is detected that a video analysis task is suspended, if a current analysis task is about to exit, detecting whether the exiting current analysis task and the suspended video analysis task are analysis tasks of the same dimension; if so, sending state-keeping information to the current analysis task so that it maintains its current execution state, and executing the suspended video analysis task; if not, notifying the current analysis task to exit and releasing the GPU resources corresponding to the current analysis task. By this method, GPU memory is fully utilized to execute multiple analysis tasks concurrently; executing tasks of the same dimension in succession reduces repeated loading of analysis models, avoids the time wasted by such reloading, and improves video analysis efficiency and GPU utilization.

Description

Video analysis method and system
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video parsing method and system.
Background
Structuring video image information is an important technology: its purpose is to convert video information into a text structure and store it in a database, which facilitates later retrieval and supports a variety of applications. In engineering practice, the video is parsed, shots are segmented, scenes or targets within the shots are detected and tracked, and finally a track stream of the related information is formed. Current AI technology mainly refers to deep learning, and since a single deep learning model can generally analyze only one dimension of information, such as objects or scenes, a single video parsing pass may involve running multiple models. Besides the deep learning models, which require a graphics processing unit (GPU), algorithms such as shot detection and tracking may run on the CPU. In a single-video, single-dimension, single-instance parsing process, the GPU sits idle while these operations execute on the CPU. To improve GPU utilization, models covering multiple dimensions are therefore typically kept in GPU memory and the video is parsed by multiple concurrent tasks. Because the analysis workload is uncertain, a different model may need to be loaded for the next analysis task once the current one completes; since the models are constrained by the size of GPU memory, the model used by the current analysis task may have to be unloaded from GPU memory before the next task's model can be loaded.
Therefore, in existing video parsing methods, a large amount of time is wasted unloading and loading models on the GPU, which reduces video parsing efficiency.
Disclosure of Invention
The present invention provides a video analysis method and system to solve the problems of existing video parsing methods, in which considerable time is wasted and video parsing efficiency is reduced when the GPU unloads and loads models.
The specific technical scheme is as follows:
a method of video parsing, the method comprising:
when it is detected that a video analysis task is suspended, determining whether there is a current analysis task about to exit;
if so, detecting whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension;
if so, sending state-keeping information to the current analysis task so that it maintains its current execution state, and executing the suspended video analysis task;
if not, notifying the current analysis task to exit, and releasing the GPU resources corresponding to the current analysis task.
Optionally, before determining whether there is a current analysis task about to exit, the method further includes:
initializing a GPU memory management server, and acquiring the number of GPUs and the corresponding memory size information;
and acquiring the memory peak value corresponding to each analysis model, and storing the memory peak value together with the number of GPUs and the corresponding memory size information.
Optionally, before determining whether there is a current analysis task about to exit, the method further includes:
receiving a video analysis task, and determining whether there are currently idle GPU resources that satisfy the video analysis task;
if so, invoking the analysis model corresponding to the video analysis task, and entering the analysis state;
if not, suspending the video analysis task and waiting for the current video analysis task to finish.
Optionally, determining whether there are currently idle GPU resources that satisfy the video analysis task includes:
determining whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
A video analysis system, comprising:
a memory management module, configured to determine, when it is detected that a video analysis task is suspended, whether there is a current analysis task about to exit; if so, to detect whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension; if so, to send state-keeping information to the current analysis task so that it maintains its current execution state, and to execute the suspended video analysis task; if not, to notify the current analysis task to exit and release the GPU resources corresponding to the current analysis task;
and a video analysis module, configured to analyze the video data.
Optionally, the memory management module is further configured to initialize the GPU memory management server, and to obtain the number of GPUs and the corresponding memory size information; and to acquire the memory peak value corresponding to each analysis model and store the memory peak value together with the number of GPUs and the corresponding memory size information.
Optionally, the memory management module is further configured to receive a video analysis task and determine whether there are currently idle GPU resources that satisfy the video analysis task; if so, to invoke the analysis model corresponding to the video analysis task and enter the analysis state; if not, to suspend the video analysis task and wait for the current video analysis task to finish.
Optionally, the memory management module is further configured to determine whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
By this method, GPU memory is fully utilized to execute multiple analysis tasks concurrently; executing tasks of the same dimension in succession reduces repeated loading of analysis models, avoids the time wasted by such reloading, and improves video analysis efficiency and GPU utilization.
Drawings
Fig. 1 is a flowchart of a video parsing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a video parsing system according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the embodiments, and the specific technical features in them, merely illustrate rather than limit the technical solutions of the present invention, and that the embodiments and their specific technical features may be combined with one another where no conflict arises.
Fig. 1 is a flowchart of a video parsing method according to an embodiment of the present invention, where the method includes:
S1: when it is detected that a video analysis task is suspended, determining whether a current analysis task is about to exit;
First, the method provided by the invention is applied in a dynamic model-loading management system. The system comprises a GPU memory management module and a video analysis task program, which run as mutually independent processes; the video analysis task program is started by the GPU memory management module. The two modules interact through local IPC, and when the analysis program is about to exit it sends a message to notify the GPU memory management module.
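As an illustration of this two-process layout, the sketch below (in Python, using the standard multiprocessing module as the local IPC mechanism; all class, field, and dimension names are hypothetical assumptions, not taken from the patent) shows a parsing worker that notifies the manager before exiting and waits for the manager's decision:

```python
# Minimal sketch of the manager/worker split described above. The GPU memory
# management module starts the video analysis task program as a separate
# process and the two sides exchange messages over a local Pipe.
import multiprocessing as mp


def parsing_worker(task, conn):
    """Video analysis task program: runs as an independent process and tells
    the GPU memory management module, over local IPC, when it is about to exit."""
    # ... load the analysis model for task["dimension"] and parse the video ...
    conn.send({"event": "about_to_exit", "task_id": task["id"],
               "dimension": task["dimension"]})
    decision = conn.recv()                 # manager replies: keep state or exit
    if decision.get("action") == "keep_state":
        next_task = decision["next_task"]
        # Reuse the model that is already in GPU memory for the suspended task
        # of the same dimension instead of unloading and reloading it.
        # ... parse next_task["video"] ...
        _ = next_task
    # Otherwise simply return; exiting releases the process's GPU resources.


if __name__ == "__main__":
    manager_end, worker_end = mp.Pipe()
    task = {"id": 1, "dimension": "object-detection", "video": "demo.mp4"}
    worker = mp.Process(target=parsing_worker, args=(task, worker_end))
    worker.start()                          # the memory manager starts the program
    notice = manager_end.recv()             # worker announces it is about to exit
    print("manager received:", notice)
    manager_end.send({"action": "exit"})    # decision logic is sketched further below
    worker.join()
```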
When the system starts, the GPU memory management server is initialized and the number of GPUs and their corresponding memory sizes are obtained; that is, the system determines how many GPUs are currently available and how much memory each of them has. The memory peak value corresponding to each analysis model is also acquired, where the memory peak value indicates the resources required by that analysis model. The GPU memory management module stores all of this information.
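A minimal sketch of this initialization step follows (Python, assuming the NVIDIA pynvml bindings are available for querying GPU memory; the model names and peak figures are placeholder assumptions, not values from the patent):

```python
# Sketch of the initialization step: discover the GPUs and their memory sizes,
# and record the peak GPU memory each analysis model needs.
from pynvml import (nvmlInit, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)


def init_memory_manager():
    """Collect GPU count/memory information and the per-model memory peaks."""
    nvmlInit()
    gpus = []
    for index in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(index)
        info = nvmlDeviceGetMemoryInfo(handle)
        gpus.append({"index": index, "total": info.total, "free": info.free})

    # Peak GPU memory (bytes) measured offline for each analysis model;
    # these numbers are illustrative only.
    model_peaks = {
        "object-detection": 3 * 1024**3,
        "scene-classification": 2 * 1024**3,
    }
    return {"gpus": gpus, "model_peaks": model_peaks}
```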
When the GPU memory management module receives a video analysis task, it determines whether there is an idle GPU resource that satisfies the task, that is, whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
If the memory meets the requirement, the analysis model corresponding to the video analysis task is invoked, and the task enters the analysis state.
If the memory does not meet the requirement, the video analysis task is suspended, that is, it is placed back into the waiting queue to wait for a currently running video analysis task to finish, after which it can be matched to the corresponding GPU resource.
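A minimal sketch of this admission check follows (Python; `state` is assumed to be the structure built by the initialization sketch above, and the waiting queue and memory-reservation bookkeeping are simplified illustrations rather than the patent's implementation):

```python
from collections import deque

# A task is admitted only if some GPU has enough free memory for its model's
# peak; otherwise it is suspended in a waiting queue until a task finishes.
waiting_queue = deque()


def try_admit(task, state):
    """Return the chosen GPU index, or None if the task had to be suspended."""
    need = state["model_peaks"][task["dimension"]]
    for gpu in state["gpus"]:
        if gpu["free"] >= need:
            gpu["free"] -= need          # reserve memory for this model
            # ... start the analysis task program for `task` on this GPU ...
            return gpu["index"]
    waiting_queue.append(task)           # suspend: wait for a running task to end
    return None
```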
When a video analysis task is suspended, it is determined whether a current analysis task is about to exit; if not, S2 is executed, and if so, S3 is executed.
S2: continue to keep the video analysis task suspended;
S3: detecting whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension;
The system detects in real time which video analysis task is about to exit. When the current analysis task is about to exit, the system first determines whether the suspended video analysis task is an analysis task of the same dimension as the current analysis task, that is, whether the analysis model required by the suspended video analysis task is the same as the analysis model used by the current analysis task.
If so, S4 is executed; otherwise, S5 is executed.
S4: sending state-keeping information to the current analysis task so that it maintains its current execution state and executes the suspended video analysis task;
S5: notifying the current analysis task to exit and releasing the GPU resources corresponding to the current analysis task.
In either case, the suspended video analysis task is processed next: either by the current analysis program, which keeps its execution state and reuses the already loaded model, or after the current analysis task has exited and its GPU resources have been released.
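The decision made when a task is about to exit could be sketched as follows (Python; the queue, connection object, and bookkeeping structures are hypothetical simplifications of the modules described above, not the patent's actual implementation):

```python
def on_task_about_to_exit(exiting_task, conn, state, waiting_queue):
    """When an analysis task reports it is about to exit, either tell it to keep
    its execution state for a suspended task of the same dimension, or tell it
    to exit and give back the GPU memory it had reserved."""
    for task in list(waiting_queue):
        if task["dimension"] == exiting_task["dimension"]:
            # Same analysis model already loaded: hand over the suspended task
            # and skip the unload/reload cycle entirely.
            waiting_queue.remove(task)
            conn.send({"action": "keep_state", "next_task": task})
            return
    # No suspended task uses this model: let the process exit and return the
    # reserved memory to its GPU.
    conn.send({"action": "exit"})
    gpu = state["gpus"][exiting_task["gpu_index"]]
    gpu["free"] += state["model_peaks"][exiting_task["dimension"]]


if __name__ == "__main__":
    class _PrintConn:                       # stand-in for the real IPC connection
        def send(self, message):
            print("manager ->", message)

    state = {"gpus": [{"index": 0, "free": 1 * 1024**3}],
             "model_peaks": {"object-detection": 3 * 1024**3}}
    suspended = [{"id": 2, "dimension": "object-detection", "video": "next.mp4"}]
    exiting = {"id": 1, "dimension": "object-detection", "gpu_index": 0}
    on_task_about_to_exit(exiting, _PrintConn(), state, suspended)  # keep_state path
```

The keep_state branch is what allows two same-dimension tasks to share one already loaded model, which is the time saving the method aims at.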
By this method, GPU memory is fully utilized to execute multiple analysis tasks concurrently; executing tasks of the same dimension in succession reduces repeated loading of analysis models, avoids the time wasted by such reloading, and improves video analysis efficiency and GPU utilization.
Corresponding to the method provided by the present invention, an embodiment of the present invention further provides a video analysis system. As shown in Fig. 2, which schematically illustrates the structure of the video analysis system in the embodiment of the present invention, the system includes:
the memory management module 201, configured to determine, when it is detected that a video analysis task is suspended, whether there is a current analysis task about to exit; if so, to detect whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension; if so, to send state-keeping information to the current analysis task so that it maintains its current execution state, and to execute the suspended video analysis task; if not, to notify the current analysis task to exit and release the GPU resources corresponding to the current analysis task;
and the video analysis module 202, configured to analyze the video data.
Further, in the embodiment of the present invention, the memory management module 201 is further configured to initialize the GPU memory management server and obtain the number of GPUs and the corresponding memory size information; and to acquire the memory peak value corresponding to each analysis model and store the memory peak value together with the number of GPUs and the corresponding memory size information.
Further, in the embodiment of the present invention, the memory management module 201 is further configured to receive a video analysis task and determine whether there are currently idle GPU resources that satisfy the video analysis task; if so, to invoke the analysis model corresponding to the video analysis task and enter the analysis state; if not, to suspend the video analysis task and wait for the current video analysis task to finish.
Further, in the embodiment of the present invention, the memory management module 201 is further configured to determine whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
While the preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. It is therefore intended that the appended claims be interpreted as covering the preferred embodiments and all such alterations and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. A method for video parsing, the method comprising:
when it is detected that a video analysis task is suspended, determining whether there is a current analysis task about to exit;
if so, detecting whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension;
if so, sending state-keeping information to the current analysis task so that it maintains its current execution state, and executing the suspended video analysis task;
if not, notifying the current analysis task to exit, and releasing the GPU resources corresponding to the current analysis task.
2. The method of claim 1, wherein before determining whether there is a current analysis task about to exit, the method further comprises:
initializing a GPU memory management server, and acquiring the number of GPUs and the corresponding memory size information;
and acquiring the memory peak value corresponding to each analysis model, and storing the memory peak value together with the number of GPUs and the corresponding memory size information.
3. The method of claim 2, wherein before determining whether there is a current analysis task about to exit, the method further comprises:
receiving a video analysis task, and determining whether there are currently idle GPU resources that satisfy the video analysis task;
if so, invoking the analysis model corresponding to the video analysis task, and entering the analysis state;
if not, suspending the video analysis task and waiting for the current video analysis task to finish.
4. The method of claim 3, wherein determining whether there are currently idle GPU resources that satisfy the video analysis task specifically comprises:
determining whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
5. A video parsing system, comprising:
a memory management module, configured to determine, when it is detected that a video analysis task is suspended, whether there is a current analysis task about to exit; if so, to detect whether the current analysis task about to exit and the suspended video analysis task are analysis tasks of the same dimension; if so, to send state-keeping information to the current analysis task so that it maintains its current execution state, and to execute the suspended video analysis task; if not, to notify the current analysis task to exit and release the GPU resources corresponding to the current analysis task;
and a video analysis module, configured to analyze the video data.
6. The system of claim 5, wherein the memory management module is further configured to initialize the GPU memory management server and obtain the number of GPUs and the corresponding memory size information; and to acquire the memory peak value corresponding to each analysis model and store the memory peak value together with the number of GPUs and the corresponding memory size information.
7. The system of claim 5, wherein the memory management module is further configured to receive a video analysis task and determine whether there are currently idle GPU resources that satisfy the video analysis task; if so, to invoke the analysis model corresponding to the video analysis task and enter the analysis state; if not, to suspend the video analysis task and wait for the current video analysis task to finish.
8. The system of claim 5, wherein the memory management module is further configured to determine whether the free memory of a currently idle GPU meets the memory required by the video analysis task.
CN201811589989.9A 2018-12-25 2018-12-25 Video analysis method and system Active CN109669780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811589989.9A CN109669780B (en) 2018-12-25 2018-12-25 Video analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811589989.9A CN109669780B (en) 2018-12-25 2018-12-25 Video analysis method and system

Publications (2)

Publication Number Publication Date
CN109669780A CN109669780A (en) 2019-04-23
CN109669780B true CN109669780B (en) 2020-02-14

Family

ID=66146034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811589989.9A Active CN109669780B (en) 2018-12-25 2018-12-25 Video analysis method and system

Country Status (1)

Country Link
CN (1) CN109669780B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087144A (en) * 2019-05-15 2019-08-02 深圳市商汤科技有限公司 Video file processing method, device, electronic equipment and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902466A (en) * 2014-04-04 2014-07-02 浪潮电子信息产业股份有限公司 Internal memory pool capable of being dynamically adjusted
CN105808356A (en) * 2016-03-11 2016-07-27 广州市久邦数码科技有限公司 Android system-based Bitmap recycling method and system
CN106293885A (en) * 2015-05-20 2017-01-04 联芯科技有限公司 Task creation, hang-up and restoration methods

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6363410B1 (en) * 1994-12-13 2002-03-26 Microsoft Corporation Method and system for threaded resource allocation and reclamation
US6848033B2 (en) * 2001-06-07 2005-01-25 Hewlett-Packard Development Company, L.P. Method of memory management in a multi-threaded environment and program storage device
WO2012008016A1 (en) * 2010-07-13 2012-01-19 富士通株式会社 Multithread processing device, multithread processing system, multithread processing program, and multithread processing method
CN103905783B (en) * 2012-12-25 2017-09-01 杭州海康威视数字技术股份有限公司 The method and apparatus of decoding display is carried out to video flowing
US9678797B2 (en) * 2014-03-10 2017-06-13 Microsoft Technology Licensing, Llc Dynamic resource management for multi-process applications
CN103810048B (en) * 2014-03-11 2017-01-18 国家电网公司 Automatic adjusting method and device for thread number aiming to realizing optimization of resource utilization
CN104375898B (en) * 2014-11-20 2017-12-01 无锡悟莘科技有限公司 A kind of mobile terminal CPU usage optimization method
CN104331325B (en) * 2014-11-25 2017-08-25 深圳市信义科技有限公司 A kind of multi-element intelligent video resource scheduling system analyzed based on resource detection and dispatching method
CN106330878A (en) * 2016-08-18 2017-01-11 乐视控股(北京)有限公司 Method and device for managing video streaming resolution
US10296390B2 (en) * 2016-10-14 2019-05-21 International Business Machines Corporation Feedback mechanism for controlling dispatching work tasks in a multi-tier storage environment
CN108595259B (en) * 2017-03-16 2021-08-20 哈尔滨英赛克信息技术有限公司 Memory pool management method based on global management
CN107608785A (en) * 2017-08-15 2018-01-19 深圳天珑无线科技有限公司 Process management method, mobile terminal and readable storage medium

Also Published As

Publication number Publication date
CN109669780A (en) 2019-04-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant