CN116383438A - Video data aggregation method, system, equipment and medium - Google Patents


Info

Publication number
CN116383438A
Authority
CN
China
Prior art keywords
video data
task
passing
data convergence
characteristic value
Prior art date
Legal status (assumed; not a legal conclusion): Pending
Application number
CN202310007742.6A
Other languages
Chinese (zh)
Inventor
王文博
汤亚咏
姚芳
杜忠华
张登
Current Assignee
Shanghai Sailing Information Technology Co., Ltd.
Original Assignee
Shanghai Sailing Information Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai Sailing Information Technology Co., Ltd.
Priority to CN202310007742.6A
Publication of CN116383438A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5011 Pool
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5021 Priority
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a video data aggregation method, system, device, and medium. The method comprises the following steps: creating a video data convergence task and selecting a structuring algorithm for it; receiving video data in the video data convergence task; extracting passing feature values from the video data through the structuring algorithm; and establishing a holographic file according to the passing feature values. The structuring algorithm completes the extraction of feature values, i.e. the effective data that can actually be used, from the video data; the extracted effective data are labeled and archived according to set rules so that they organically form a data archive. In this way, large amounts of scattered historical and real-time video data are analyzed, effective data are extracted from them, and the effective data are aggregated.

Description

Video data aggregation method, system, equipment and medium
Technical Field
The present invention relates to the field of video data management, and in particular, to a method, system, device, and medium for video data aggregation.
Background
With the rapid development of computer technologies such as cloud computing and big data, more and more social data resources are being collected. Video data, a common data resource in daily life, is typically acquired through various monitoring devices. Raw video data usually only records the specific condition of a certain area over a period of time; to make use of the effective data it contains, the video data scattered across locations must be processed and unified video data assets extracted, so that the video data can be applied comprehensively across industries, departments, and regions.
At present, for the extraction and convergence of video data, most data convergence vendors rely on manual inspection and analysis: various features in the video data are manually framed and labeled, and the labeling results are stored separately to form corresponding feature files. Some vendors instead perform feature labeling with a single algorithm and build feature files from its output. Both approaches extract video data inefficiently; faced with large amounts of scattered historical and real-time video data, they struggle to extract effective data and cannot readily complete its aggregation.
Disclosure of Invention
In order to extract and aggregate effective data from a large amount of scattered historical and real-time video data, the present application provides a video data aggregation method, system, device, and medium.
In a first aspect, the present application provides a video data aggregation method, which adopts the following technical scheme:
a method of video data aggregation, the method comprising the steps of:
creating a video data convergence task and selecting a structuring algorithm for the video data convergence task;
receiving video data in the video data convergence task;
extracting a passing feature value in the video data through the structuring algorithm;
and establishing a holographic file according to the passing feature values.
By adopting this technical scheme, the structuring algorithm completes the extraction of feature values from the video data, pulling out the effective data that can actually be used. The extracted effective data are labeled and archived according to set rules, so that they organically form a data archive. In this way, large amounts of scattered historical and real-time video data are analyzed, effective data are extracted from them, and the effective data are aggregated.
Preferably, creating the video data convergence task and selecting a structuring algorithm for the video data convergence task specifically includes the following steps:
receiving user configuration information, wherein the user configuration information comprises task priority information;
setting task priority of the video data convergence task according to the task priority information;
and completing the creation of the video data convergence task.
By adopting this technical scheme, the priority configuration and the structuring-algorithm configuration of the video data convergence task are set according to the user configuration information. The priority setting improves the efficiency of video convergence tasks and lets the user determine the convergence order according to their own needs; the user-defined structuring-algorithm configuration lets the user extract the required data from video data according to service requirements, which is user-friendly.
Preferably, after receiving the video data in the video data convergence task, the method further comprises the following steps:
and sequentially allocating an idle VA (a divided unit of the computing power resource pool) to the video data convergence task according to the task priority of the video data convergence task, so as to carry out the video data convergence task.
By adopting the technical scheme, the calculation power is allocated to each video convergence task according to the priority of the video convergence task, the video convergence task with high priority is preferentially carried out, and the user requirement is met.
Preferably, when a plurality of video data convergence tasks exist at the same time, the method further includes an idle compute integration process, which specifically includes:
searching the computing power resource pool for all occupied VAs used by the video data convergence tasks in progress;
and consolidating the video data convergence tasks running on two or more occupied VAs, on the standard that at least one VA can thereby be released, and releasing the freed VA as an idle VA for the remaining video data convergence tasks.
By adopting this technical scheme, idle computing power is consolidated, waste of hardware resources is avoided, and the efficiency of effective data extraction is improved.
Preferably, after extracting the passing feature value in the video data by the structuring algorithm, the method further comprises the following steps:
comparing the passing feature value with target feature values in a target feature library;
calculating the feature similarity between the passing feature value and a target feature value;
judging whether the feature similarity is greater than a set threshold;
and if so, adding a label to the passing feature value according to the target feature value.
By adopting this technical scheme, target objects are found in the extracted effective data according to user demand and are labeled, so that the user can more easily locate them in the holographic file.
Preferably, after judging whether the feature similarity is greater than a set threshold, the method further comprises the following steps:
if not, pushing the passing feature value to a superior target feature library for comparison.
By adopting this technical scheme, when the comparison fails in the lower-level target feature library, the comparison range is enlarged, so that the effective data extracted from the video data can be fully utilized.
Preferably, after extracting the passing feature value in the video data by a structuring algorithm, the method further comprises the following steps:
tracking the travel track of the passing feature value in the video data;
and adding the travel track of the passing feature value to the corresponding passing feature value in the holographic file.
By adopting this technical scheme, track tracing of passing feature values in the video data is completed, which facilitates the user's behavior analysis. When correlated video data exist, the user can analyze the behavior characteristics of a passing feature value through its track, so that the video data can be utilized more effectively.
In a second aspect, the present application provides a video data convergence system that adopts the following technical scheme:
a video data convergence system, the system comprising the following modules:
the convergence task creation module is used for creating a video data convergence task and selecting a structuring algorithm for the video data convergence task;
the video data receiving module is used for receiving video data in the video data convergence task;
the passing feature value extraction module is used for extracting passing feature values from the video data through the structuring algorithm;
and the holographic file establishing module is used for establishing the holographic file according to the passing characteristic value.
In a third aspect, the present application provides a computer device, which adopts the following technical scheme: it comprises a memory and a processor, the memory storing a computer program that can be loaded by the processor to perform any of the video data aggregation methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical scheme: it stores a program that can be loaded by a processor to execute any of the video data aggregation methods described above.
In summary, the present application includes at least one of the following beneficial technical effects:
1. the extraction of effective data from video data is completed, and the extracted effective data are organically archived for convenient use, so that the video data can be utilized effectively;
2. user-defined configuration of video data convergence tasks is supported, including task-priority configuration and user-defined structuring-algorithm configuration, which is user-friendly;
3. when a plurality of video data convergence tasks exist, their compute scheduling is completed: compute is allocated according to priority and idle compute is consolidated, improving the utilization of hardware resources and the efficiency of video data convergence.
Drawings
Fig. 1 is a method flowchart of a video data aggregation method according to an embodiment of the present application.
Fig. 2 is a system block diagram of a video data convergence system provided in an embodiment of the application.
Fig. 3 is a schematic structural diagram of a video data aggregation device according to an embodiment of the present application.
Reference numerals: 201. convergence task creation module; 202. video data receiving module; 203. passing feature value extraction module; 204. holographic file building module; 300. electronic device; 301. processor; 302. communication bus; 303. user interface; 304. network interface; 305. memory.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application.
In the description of the embodiments of the present application, words such as "exemplary", "such as", or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described with such words is not necessarily to be construed as preferred or advantageous over other embodiments or designs; rather, these words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, B alone, or both A and B. In addition, unless otherwise indicated, the term "plurality" means two or more; for example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. The terms "comprising", "including", "having", and variations thereof mean "including but not limited to", unless expressly specified otherwise.
The embodiment of the application discloses a video data aggregation method.
Referring to fig. 1, a video data aggregation method includes the steps of:
s10: creating a video data convergence task;
specifically, a user inputs a video data convergence task creation instruction, a video data convergence task is newly built, and in the newly built video data convergence task, the user completes configuration of relevant information of the video data convergence task through user configuration information.
The user configuration information includes task priority information and structuring-algorithm type information. The task priority information sets the execution priority of the video data convergence task; in one embodiment of the present application, task priority has four levels, from level one to level four, where level one is the highest and priority decreases from level one to level four.
The structuring-algorithm type information determines which structuring algorithm the video data convergence task uses to extract passing feature values; different structuring algorithms extract different passing feature values. For example, if the user configures a face-extraction algorithm, the task extracts the face information appearing in the video data and uses it as the passing feature values. In one embodiment of the present application, the structuring-algorithm type information is used to fetch the corresponding structuring-algorithm plug-in from a plug-in library, and the fetched plug-in is configured on the video data convergence task to complete the selection of the structuring algorithm.
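As a rough illustration, the plug-in selection step above can be sketched as a registry lookup. Everything here (the registry, the task fields, the plug-in stand-ins) is a hypothetical sketch, not the patent's implementation:

```python
# Minimal sketch of structuring-algorithm plug-in dispatch, assuming a
# registry keyed by algorithm type. All names here are hypothetical.

PLUGIN_LIBRARY = {
    "face": lambda frame: [],      # stand-in for a face-extraction plug-in
    "vehicle": lambda frame: [],   # stand-in for a vehicle-extraction plug-in
}

def create_convergence_task(name, priority, algorithm_type):
    """Create a video data convergence task: record the user-configured
    priority (1 = highest .. 4 = lowest) and bind the requested plug-in."""
    if algorithm_type not in PLUGIN_LIBRARY:
        raise ValueError(f"unknown structuring algorithm: {algorithm_type}")
    return {"name": name, "priority": priority,
            "extract": PLUGIN_LIBRARY[algorithm_type]}
```

A task configured this way carries its extraction callable with it, mirroring how the fetched plug-in is "configured on" the task.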
S20: receiving video data;
specifically, after the creation of the video data convergence task is completed, video data is imported into the created video data convergence task, wherein the imported video data can be real-time video data acquired by a monitoring device or stored historical video data, and the imported video data is compatible with coding formats such as H.265, H.264, mpeg4, mjpeg, mpeg1, mpeg2, wmv1/2/3, h.263, vp6/vp8 and the like. In one embodiment of the present application, manually uploaded profile data is supported, including pictures and videos, compatible with jpg, jpeg, jpe, dib, bmp, png, asf, vob, mpeg, flvaviwebm, 3gp, ts and mp4 file formats.
S30: extracting a passing characteristic value in video data through a structuring algorithm;
specifically, after the video data convergence task is created and video data is uploaded in the video data convergence task, the configured structuring algorithm is used for extracting the feature value in the video data, the extraction category of the feature value is determined by different configured structuring algorithms, in the embodiment of the application, the configured structuring algorithm is a vehicle picture algorithm, all the vehicle objects appearing in the imported video data are searched, the vehicle objects are marked, the corresponding vehicle object picture is intercepted from the video data, and the extraction of the feature value is completed.
While the structuring algorithm runs, compute-resource management based on the task-priority configuration is supported. Since each video data convergence task has its task priority configured in the preceding steps, compute resources are granted to tasks in order of configured priority, from high to low. For example, given three video data convergence tasks A, B, and C whose configured priorities are level one, level two, and level three respectively, compute resources are allocated to task A first, and only after task A's allocation is completed are resources allocated to task B and then task C, realizing priority-based management of video data convergence tasks. Among running tasks, if the overall compute resources become insufficient because the structuring algorithms' compute demand grows, a high-priority task may preempt the compute resources of a low-priority task; when a low-priority task is stopped because its compute was preempted, its progress is automatically saved and resumed once idle compute resources are available again.
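The priority-ordered allocation just described can be sketched as follows; the task fields and the simple scalar capacity model are illustrative assumptions, not the patent's scheme:

```python
def allocate_compute(tasks, capacity):
    """Grant compute to tasks strictly by configured priority (1 = highest).
    Tasks that do not fit are suspended with their progress kept, to be
    resumed once idle compute frees up, mirroring the preemption behaviour
    described above."""
    running, suspended, free = [], [], capacity
    for task in sorted(tasks, key=lambda t: t["priority"]):
        if task["demand"] <= free:
            running.append(task["name"])
            free -= task["demand"]
        else:
            suspended.append(task["name"])  # progress saved, resumed later
    return running, suspended
```

With tasks A/B/C at priorities 1/2/3 and only enough capacity for two of them, A and B run and C waits, matching the worked example in the text.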
Compute resources are managed as a computing power resource pool: when each video data convergence task runs, it is assigned to a VA (a divided unit of the computing power resource pool), and the task is completed within that VA. In one embodiment of the present application, idle compute resources are integrated according to the standard that, after the video data convergence tasks on two or more VAs are consolidated, at least one VA can be released as idle. Specifically, when task A occupies a first VA and task B occupies a second VA, if placing task A and task B together on the first VA would leave the second VA idle, then both tasks are placed on the first VA and the second VA is released for other video data convergence tasks, achieving idle-compute integration.
It should be noted that the standard for idle-compute integration is that consolidating the video data convergence tasks of several VAs onto one VA must not affect the analysis efficiency of the consolidated tasks; in the example above, when task A and task B are placed on the first VA, their combined compute demand must not exceed the maximum compute the first VA can provide. Once idle compute has been integrated, it can be used by other video data convergence tasks; likewise, the integrated idle compute is granted in order of the tasks' priorities.
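The integration standard for two VAs can be sketched like this; the VA dictionary layout (`capacity`, `tasks`) is an illustrative assumption:

```python
def try_merge(dst_va, src_va):
    """Sketch of the idle-compute integration rule for two VAs: move the
    tasks of src_va onto dst_va only if dst_va's maximum compute covers
    the combined demand (so analysis efficiency is unaffected), then
    release src_va as idle."""
    combined = sum(dst_va["tasks"].values()) + sum(src_va["tasks"].values())
    if combined <= dst_va["capacity"]:
        dst_va["tasks"].update(src_va["tasks"])
        src_va["tasks"].clear()       # src_va is now idle and reusable
        return True
    return False                      # merge would overload dst_va
```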
S40: comparing the passing feature value with the target feature values in the target feature library, and calculating the feature similarity;
specifically, after the extraction of the passing feature values in the video data is completed, the extracted passing feature values are compared with target feature values in a target feature library, and the similarity between the passing feature values and the target feature values is calculated, so that the extracted passing feature values are analyzed, and target objects required by users are found.
In one embodiment of the present application, data in the target feature library is pushed to a static library through an interface, and feature values extracted in real time are automatically compared with the target feature values in the static library. For example, face information in the video data is extracted through a human-body extraction algorithm, and the target feature library is set to the wanted-person information of a certain region; the face information in the video data is then compared with the faces of all wanted persons in the target feature library, and the similarity of every extracted face to every wanted person's face is calculated, to determine whether a wanted person's face appears in the video data of the video data convergence task.
S50: judging whether the feature similarity is larger than a set threshold value or not;
specifically, the feature similarity obtained by comparing the extracted feature value with the target feature value in the target feature library is compared with a preset set threshold value, and whether the feature similarity is larger than the set threshold value or not is judged, wherein the set threshold value can be set to be 90%.
S60: if yes, adding a label for the passing characteristic value according to the target characteristic value;
specifically, if the feature similarity between the passing feature value and the target feature value is greater than a set threshold, marking the passing feature value as hit, adding a label to the passing feature value by using the result of similarity TOP N, and adding information according to the related information of the target feature value compared with the passing feature value. For example, if the feature similarity between a face a extracted from a certain video data in a video data convergence task and a target face in a target feature library is 95% and is greater than a set feature similarity threshold value of 90%, the face a is marked as a hit, and relevant information of the target face is added to a label of the face a.
S61: if not, pushing the passing feature value to the upper-level target feature library for comparison;
specifically, if the feature similarity of the passing feature value with the passing feature value is not greater than the target similarity of the set threshold value in the target feature library, the passing feature value is pushed to the upper-level target feature library to be continuously compared, and similarly if the target feature value with the feature similarity of the passing feature value greater than the set threshold value is in the upper-level target feature library, corresponding labels are marked on the passing feature value according to the feature similarity and the related information of the target feature value compared with the passing feature value.
The target feature library may have multiple levels, and the target feature values at each level may differ. In one embodiment of the present application, the levels are divided according to the categories of the target feature values; in another embodiment, they are divided according to how finely the target feature values are classified.
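The level-by-level escalation can be sketched as follows; the ordered library list and the `similarity` callable are illustrative assumptions:

```python
def match_with_escalation(passing, libraries, similarity, threshold=0.90):
    """libraries: ordered list of target feature libraries, the local
    library first and superior libraries after it. Compare at each level
    and escalate to the next level only when no target at the current
    level clears the threshold."""
    for level, library in enumerate(libraries):
        for target in library:
            if similarity(passing, target) > threshold:
                return level, target     # hit: label from this target
    return None                          # no hit at any level
```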
S70: establishing a holographic file according to the passing characteristic values;
specifically, through the steps, extraction of the passing characteristic values in the video data is completed, corresponding labels are marked on the passing characteristic values according to comparison results with the target characteristic values, all the extracted passing characteristic values are archived, a corresponding holographic file is established, and the holographic file comprises preview pictures of all the passing characteristic values taken from the video data and corresponding label information of all the passing characteristic values.
The user can operate on the established holographic files, for example merging several holographic files or transferring a passing feature value from one file to another. Because each passing feature value carries its labels, the user can also locate specific passing feature value information in a holographic file through the provided retrieval function.
In one embodiment of the present application, each passing feature value in the holographic file also carries its travel track in the video data. The travel track is drawn by the structuring algorithm while it extracts the passing feature value, and represents the route the passing feature value takes through the video area covered by the video data; it is stored as a picture preview under the passing feature value in the holographic file. When a user wishes to view the travel track of a passing feature value in the video-shooting area, it can be viewed under that passing feature value in the holographic file.
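A minimal sketch of the holographic file's per-feature record, including labels and travel track, plus the merge operation the text mentions; field names are assumptions, not the patent's schema:

```python
from dataclasses import dataclass, field

@dataclass
class PassingFeature:
    """One archived passing feature value: the preview picture captured
    from the video, labels from the target-library comparison, and the
    travel track drawn during extraction."""
    preview: str                                  # path of the preview picture
    labels: list = field(default_factory=list)    # info of matched targets
    track: list = field(default_factory=list)     # [(timestamp, x, y), ...]

@dataclass
class HolographicFile:
    features: dict = field(default_factory=dict)  # key -> PassingFeature

    def merge(self, other):
        """Combine two holographic files; transferred features keep
        their labels and tracks."""
        self.features.update(other.features)
```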
The implementation principle of the video data aggregation method in the embodiment of the present application is as follows: video data is uploaded into a created video data convergence task that has been configured with a task priority and a structuring algorithm; the passing feature values in the video data are extracted according to the configured structuring algorithm and compared with the target feature values in a target feature library; labels are attached to the passing feature values according to the comparison results; and finally the extracted passing feature values are archived to generate a holographic file that the user can manage, view, or analyze, completing the extraction of effective data from a large amount of video data.
The embodiment of the application also discloses a video data convergence system.
Referring to fig. 2, a video data convergence system includes the following modules:
the convergence task creation module 201 is configured to create a video data convergence task and select a structuring algorithm for the video data convergence task;
a video data receiving module 202, configured to receive video data in the video data convergence task;
a passing feature value extracting module 203, configured to extract a passing feature value in the video data through the structuring algorithm;
the holographic file creation module 204 is configured to create a holographic file according to the passing feature value.
Referring to fig. 3, a schematic structural diagram of an electronic device 300 is provided in an embodiment of the present application. As shown in fig. 3, the electronic device 300 may include: at least one processor 301, at least one network interface 304, a user interface 303, a memory 305, at least one communication bus 302.
Wherein the communication bus 302 is used to enable connected communication between these components.
The user interface 303 may include a display screen (Display) and a camera (Camera); optionally, the user interface 303 may further include a standard wired interface and a wireless interface.
The network interface 304 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 301 may include one or more processing cores. The processor 301 connects the various parts of the overall server through various interfaces and lines, and performs the various functions of the server and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 305 and by invoking data stored in the memory 305. Optionally, the processor 301 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 301 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be shown on the display screen; and the modem handles wireless communication. It will be appreciated that the modem may instead not be integrated into the processor 301 and may be implemented by a separate chip.
The memory 305 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). Optionally, the memory 305 includes a non-transitory computer-readable storage medium. The memory 305 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 305 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the respective method embodiments above, and so on; the data storage area may store the data referred to in the respective method embodiments above. Optionally, the memory 305 may also be at least one storage device located remotely from the aforementioned processor 301. As shown in fig. 3, the memory 305, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of a video data aggregation method.
It should be noted that when the device provided in the above embodiment implements its functions, the division into the above functional modules is used only as an example; in practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the device embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 301 may be configured to invoke an application program in the memory 305 that stores a video data aggregation method that, when executed by the one or more processors 301, causes the electronic device 300 to perform the method as described in one or more of the embodiments above.
A storage medium readable by the electronic device 300 stores instructions. When the instructions are executed by the one or more processors 301, the electronic device 300 is caused to perform the method as described in one or more of the embodiments above.
It will be clear to those skilled in the art that the solution of the present application may be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, such as field-programmable gate arrays (Field-Programmable Gate Array, FPGA), integrated circuits (Integrated Circuit, IC), and the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in other orders or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection through some service interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory 305. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory 305, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned memory 305 includes various media capable of storing program code, such as a USB flash drive, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable memory 305, which may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
The foregoing is merely an exemplary embodiment of the present disclosure and is not intended to limit its scope; equivalent changes and modifications made in accordance with the teachings of this disclosure fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.

Claims (10)

1. A method of video data aggregation, the method comprising the steps of:
creating a video data convergence task and selecting a structuring algorithm for the video data convergence task;
receiving video data in the video data convergence task;
extracting a passing feature value in the video data through the structuring algorithm;
and establishing a holographic file according to the passing characteristic value.
2. A method for video data aggregation according to claim 1, wherein in creating a video data aggregation task and selecting a structuring algorithm for the video data aggregation task, the method specifically comprises the steps of:
receiving user configuration information, wherein the user configuration information comprises task priority information;
Setting task priority of the video data convergence task according to the task priority information;
and completing the creation of the video data convergence task.
3. The video data aggregation method according to claim 2, further comprising, after receiving video data in the video data aggregation task, the steps of:
and sequentially allocating idle VAs in the computing power resource pool to the video data convergence task according to the task priority of the video data convergence task, so as to carry out the video data convergence task.
4. A video data aggregation method according to claim 3, wherein when there are a plurality of video data aggregation tasks at the same time, the method further comprises an idle computing force integration process, the idle computing force integration process specifically being:
searching all occupied VA occupied by the video data convergence task in progress in the computing power resource pool;
and, taking a video data convergence task running on two or more occupied VAs as the criterion for releasing one of the occupied VAs, performing idle computing power integration, wherein the released VA serves as an idle VA for the remaining video data convergence tasks.
5. A video data aggregation method according to claim 1, characterized in that after extracting the passing feature values in the video data by the structuring algorithm, it further comprises the steps of:
comparing the passing characteristic value with a target characteristic value in a target characteristic library;
calculating the feature similarity between the passing feature value and the target feature value;
judging whether the feature similarity is larger than a set threshold value or not;
if yes, adding a label for the passing characteristic value according to the target characteristic value.
6. The video data aggregation method according to claim 5, further comprising the steps of, after determining whether the feature similarity is greater than a set threshold:
if not, pushing the passing characteristic value to a superior target characteristic library for comparison.
7. The method of video data aggregation according to claim 1, further comprising the steps of, after extracting the passing feature values in the video data by the structuring algorithm:
tracking a travel track of the passing feature value in the video data;
and adding the travel track of the passing characteristic value to the corresponding passing characteristic value in the holographic file.
8. A video data convergence system based on any one of claims 1-7, said system comprising the following modules:
the convergence task creation module (201) is used for creating a video data convergence task and selecting a structuring algorithm for the video data convergence task;
a video data receiving module (202) for receiving video data in the video data convergence task;
a passing feature value extraction module (203) for extracting passing feature values in the video data by the structuring algorithm;
and the holographic file establishing module (204) is used for establishing the holographic file according to the passing characteristic value.
9. A computer device comprising a memory (305) and a processor (301), the memory (305) having stored thereon a computer program capable of being loaded by the processor (301) and executing the method according to any of claims 1 to 7.
10. A computer readable storage medium, characterized in that a computer program is stored which can be loaded by a processor (301) and which performs the method according to any of claims 1 to 7.
CN202310007742.6A 2023-01-04 2023-01-04 Video data aggregation method, system, equipment and medium Pending CN116383438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310007742.6A CN116383438A (en) 2023-01-04 2023-01-04 Video data aggregation method, system, equipment and medium


Publications (1)

Publication Number Publication Date
CN116383438A true CN116383438A (en) 2023-07-04

Family

ID=86971958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310007742.6A Pending CN116383438A (en) 2023-01-04 2023-01-04 Video data aggregation method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN116383438A (en)

Similar Documents

Publication Publication Date Title
CN109783224B (en) Task allocation method and device based on load allocation and terminal equipment
CN110874440B (en) Information pushing method and device, model training method and device, and electronic equipment
CN111506434B (en) Task processing method and device and computer readable storage medium
CN114416352A (en) Computing resource allocation method and device, electronic equipment and storage medium
CN103765421A (en) Content control method, content control apparatus, and program
US20220229809A1 (en) Method and system for flexible, high performance structured data processing
CN104657205A (en) Virtualization-based video content analyzing method and system
CN109697452B (en) Data object processing method, processing device and processing system
CN104572298A (en) Video cloud platform resource dispatching method and device
CN111343416A (en) Distributed image analysis method, system and storage medium
US8913838B2 (en) Visual information processing allocation between a mobile device and a network
CN107870822B (en) Asynchronous task control method and system based on distributed system
US10430312B2 (en) Method and device for determining program performance interference model
CN116383438A (en) Video data aggregation method, system, equipment and medium
CN114300082B (en) Information processing method and device and computer readable storage medium
CN111913743A (en) Data processing method and device
CN111901561B (en) Video data processing method, device and system in monitoring system and storage medium
CN109218801B (en) Information processing method, device and storage medium
KR20160084215A (en) Method for dynamic processing application for cloud streaming service and apparatus for the same
CN115617532B (en) Target tracking processing method, system and related device
CN115373764B (en) Automatic container loading method and device
CN111104528A (en) Picture obtaining method and device and client
CN113094530B (en) Image data retrieval method and device, electronic equipment and storage medium
CN113821349A (en) Load balancing method and device
CN114201306B (en) Multi-dimensional geographic space entity distribution method and system based on load balancing technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination