CN117472599A - Deep synthesis detection system and deep synthesis detection acceleration integrated machine equipment - Google Patents
Deep synthesis detection system and deep synthesis detection acceleration integrated machine equipment
- Publication number
- CN117472599A (application CN202311381322.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- detection
- model
- reasoning
- shared memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/041—Abduction
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to a deep synthesis detection system and a deep synthesis detection acceleration all-in-one machine device, applicable to the field of computer artificial intelligence deep synthesis detection. The technical scheme comprises: a service module for acquiring the data to be detected; a detection unit group provided with a plurality of detection units, each comprising a shared memory and a plurality of detection models; a control distribution module for parsing the data to be detected received from the service module; and a pipeline data flow method comprising a data pipeline, the data pipeline comprising data type conversion, shared memory storage management, a synchronous lock and a batch waiting device. The data type conversion converts the data to be pushed into a specified data type; the shared memory storage management uses a circular queue to manage data storage; the synchronous lock is held exclusively while a process operates on the shared memory and released once the operation completes; the batch waiting device waits, within a certain time, for the data in the shared memory storage space to be assembled into one batch.
Description
Technical Field
The invention relates to a deep synthesis detection system and a deep synthesis detection acceleration all-in-one machine device, and is applicable to the field of computer artificial intelligence deep synthesis detection.
Background
With the continued development and innovation of artificial intelligence technology, deep synthesis content (deepfakes) has attracted wide attention and appears increasingly across a variety of media channels, including social media, news reports, films and television programs. Deep synthesis technology uses machine learning algorithms and computer vision techniques to generate realistic synthesized images, videos and audio, so that one person's face, voice or behavior can be composited onto another person's likeness, achieving a convincingly lifelike effect.
Although deep synthesis technology has broad application prospects in some fields, it is a double-edged sword whose risk of abuse keeps growing. As the technology becomes openly available, an ordinary person can blur the boundary between the real and the virtual with only a small amount of sample data such as images and audio. Deep synthesis techniques are increasingly used to forge false information, commit fraud, carry out personal attacks and violate privacy, all of which can have adverse social effects. Deep synthesis technology also faces challenges in technical safeguards, supervision and legal regulation.
Deep synthesis detection of large volumes of false images and videos is therefore particularly important. Because forged content can take many forms within an image or video, deep synthesis detection models are generally based on methods such as feature extraction, motion tracking, anomaly detection and defect detection, and multiple models for different tasks must be integrated to achieve optimal detection accuracy.
Optimizing multi-model inference involves both accelerating the inference of individual models and optimizing inter-model data flow with multi-process concurrency. Model acceleration includes pruning, quantization and inference-model freezing, using tools such as onnx-runtime, TensorRT and OpenVINO that optimize the inference stage. Inter-model data flow, multi-process concurrency and IO optimization mainly target scenarios where multiple models serve the same task and depend on one another, the input of one model depending on the output of the model before it.
In deep synthesis detection scenarios, the detection task is highly complex: multiple types of models must be integrated, the data dependency between different models is high while the models themselves are loosely coupled, and traditional model-optimization methods based on joint training and multi-task learning are therefore limited. Regarding data dependency: the inference of one model may have to wait for the result of the previous model, and with a large volume of multi-model inference, any data conflict creates idle inference time, stalling the whole task and wasting resources such as compute and bandwidth.
Because a deep synthesis detection task is highly complex, it must integrate multiple models. A single inference task, for example, passes through data IO, multi-model inference, result collection and business-logic processing; model inference can only start after IO completes, and the models also wait on one another. Under such an inference strategy, large-scale inference produces idle waiting time, and the detection task's inference performance drops.
Disclosure of Invention
The technical problem the invention aims to solve is, in view of the above problems, to provide a deep synthesis detection system and a deep synthesis detection acceleration all-in-one machine device.
The technical scheme adopted by the invention is as follows: a deep synthesis detection system, comprising:
the service module is used for acquiring data to be detected;
the detection unit group is provided with a plurality of detection units, each detection unit comprises a shared memory and a plurality of detection models, and the detection units can push data to each detection model by adopting a pipeline data flow method;
the control distribution module is used for parsing the data to be detected received from the service module, sending the parsed data to the detection unit group, and pushing it in a balanced manner according to the accumulation of data in each detection unit;
the pipeline data flow method comprises a data pipeline, and the data pipeline comprises data type conversion, shared memory storage management, a synchronous lock and a batch waiting device;
the data type conversion is used for converting the data to be pushed into a specified data type; the shared memory storage management uses a circular queue to manage data storage; the synchronous lock is held exclusively while a process operates on the shared memory and is released after the operation completes; the batch waiting device is used for waiting, within a certain time, for the data in the shared memory storage space to be assembled into one batch.
The system further comprises:
a process scheduling center for adjusting the number of processes of each detection model based on each detection model's inference time.
Adjusting the number of processes of each detection model based on its inference time comprises the following steps:
determining an inference-time ratio for each detection model from the models' inference times under the same number of processes, the ratio being each model's inference time divided by that of the fastest-inferring model;
adjusting the number of processes of each detection model based on its inference-time ratio, so that the ratio of each model's process count to that of the fastest model equals or approximates its inference-time ratio.
The control distribution module comprises:
the service data analysis sub-module is used for receiving and parsing the data of the service module;
the concurrency control sub-module is used for receiving the data of the service data analysis sub-module, sending the parsed data to the detection unit group, pushing it in a balanced manner according to the accumulation of data in each detection unit, and returning the detection results output by the detection units to the service module;
and the function scheduling sub-module is used for scheduling the matching detection model from the model library for the different service data of a service scenario.
A deep synthesis detection acceleration all-in-one machine device for running the deep synthesis detection system, characterized in that the hardware configuration method of the device comprises:
first, selecting a GPU card type, starting the deep synthesis detection system, starting one detection unit, and running model inference on a batch of service data to obtain the GPU utilization and GPU video memory occupancy during inference;
determining the number n of detection units that a single GPU card can host from the GPU utilization and video memory occupancy, adjusting to start n detection units, and obtaining the CPU utilization, memory occupancy, disk IO read/write rate and network bandwidth utilization during inference;
and determining the CPU type, memory size, and disk type and count from the memory occupancy, CPU utilization, network bandwidth utilization or disk IO read/write rate required per GPU card.
The beneficial effects of the invention are as follows: the invention pushes data with a pipeline data flow method and communicates between multiple processes through shared memory, ensuring efficient data transmission; when multiple processes operate on the shared memory space, a synchronous lock guarantees data safety: the lock is held exclusively while a process operates on the shared memory and released once the operation completes; the batch waiting device batches the data, reducing overall inference time and accelerating model inference.
The invention proposes a pipeline-data-flow method of pushing model data to solve problems such as idle inference, whose main cause is that data is inferred serially across multiple models.
According to the invention, the data to be detected is pushed to each detection unit according to the accumulation of data within it, and the detection units push data to each detection model using the pipeline data flow method, so deep synthesis detection is optimized at the software level and detection performance improves.
In a multi-model inference task, inference speed differs between models because of their structures; within the same model, data preprocessing, file IO and post-processing occupy CPU compute while model inference occupies GPU compute, and each module processes at a different speed. Under pipeline data-flow pushing, every model, and every module within a model, can run multi-process parallel inference; with fixed compute resources, increasing the number of processes for the slower modules raises the amount of data processed per unit time.
In the invention, a synchronous lock is needed to keep data safe under multi-process concurrency. The lock incurs some performance cost but does not conflict with multi-processing: an inference task spends most of its time in inference itself, and the lock is held only around data access; it is acquired when fetching data, released once the data has been fetched, and inference runs after release, so the time processes spend acquiring data through the lock is short relative to the inference time.
The hardware that determines the performance of the deep synthesis detection device consists mainly of its CPU, GPU, memory and hard disk, and system performance is bounded by the lowest-specified component; accordingly, the components of the all-in-one machine device can be balanced against the compute requirements of the detection models so that the deep synthesis detection system reaches optimal performance at a limited device cost.
Drawings
Fig. 1 is a schematic diagram of a data pipeline in an embodiment.
FIG. 2 is a system block diagram of the deep synthesis detection system in an embodiment.
Fig. 3 is a schematic diagram of the composition of the detection unit in the embodiment.
Detailed Description
As shown in fig. 1, this embodiment is a deep synthesis detection system comprising a service module, a detection unit group, a control distribution module and a process scheduling center.
In this embodiment, the service module provides the external data-receiving interface and result-pushing interface of the deep synthesis detection system, used to obtain the data to be detected and to feed back detection results. The module provides local file detection, HTTP call services, gRPC call services, Kafka data push and receive services, and the like.
In this embodiment, the detection unit group includes a plurality of detection units, where the detection units include a shared memory and a plurality of detection models of different types, and the detection units can push data to each detection model by using a pipeline data flow method.
In this embodiment, the pipeline data flow method can connect a plurality of task modules and uses shared memory for inter-process communication to ensure an efficient data transmission rate. Its core component is the data pipeline, which consists of four parts: data type conversion, shared memory storage management, a synchronous lock and a batch waiting device.
Data type conversion: in a deep synthesis detection task, most of the data transferred between model modules consists of image matrices (typically stored as numpy arrays or tensors), plus a small amount of service-related strings, integers and floating-point numbers. Because cross-process shared memory does not support complex data types, these must be type-converted so they can be stored and transferred in the shared memory space. In this method, complex data types are serialized to characters and then converted to bytes.
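As an illustrative sketch (Python assumed; the patent does not specify an implementation, and `to_bytes`/`from_bytes` are hypothetical names), the serialize-then-convert-to-bytes step might look like:

```python
import pickle

import numpy as np

def to_bytes(obj) -> bytes:
    # Serialize any picklable object (numpy arrays, strings,
    # ints, floats) into a flat byte string that a cross-process
    # shared-memory segment can store.
    return pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)

def from_bytes(buf: bytes):
    # Inverse conversion when a consumer process reads a slot.
    return pickle.loads(buf)
```

The round trip preserves both the image matrix and the small service fields, which is the property the shared-memory pipeline relies on.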
Shared memory storage management: because the shared memory available between processes is limited in size, a circular queue is used to store and manage data while processing large numbers of tasks; each sub-item in the queue comprises an occupied flag, the item size and the data content.
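A minimal single-process sketch of that circular queue, assuming Python; the slot layout (occupied flag, item size, data content) follows the text above, while `RingBuffer` and its method names are hypothetical:

```python
class RingBuffer:
    """Fixed-capacity circular queue; each slot carries an
    occupied flag, an item size and the data content."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = [{"occupied": False, "size": 0, "data": b""}
                      for _ in range(capacity)]
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write

    def put(self, payload: bytes) -> bool:
        slot = self.slots[self.tail]
        if slot["occupied"]:
            return False               # queue full; producer retries
        slot.update(occupied=True, size=len(payload), data=payload)
        self.tail = (self.tail + 1) % self.capacity
        return True

    def get(self):
        slot = self.slots[self.head]
        if not slot["occupied"]:
            return None                # queue empty
        payload = slot["data"][:slot["size"]]
        slot.update(occupied=False, size=0, data=b"")
        self.head = (self.head + 1) % self.capacity
        return payload
```

In a real deployment the slots would live in a `multiprocessing.shared_memory` segment rather than a Python list; the wrap-around indexing and the occupied flag are the parts the patent describes.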
Synchronous lock: a multi-process synchronous lock is added to guarantee data safety when multiple processes operate on the shared memory space. The lock is held exclusively while a process operates on the shared memory and released after the operation finishes; that is, a process locks exclusively when fetching data from the shared memory and releases the lock once the data has been fetched. While the lock is held, no other process can operate on the shared memory; they may do so only after the lock is released.
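A hedged sketch of that locking discipline, assuming Python's `multiprocessing.Lock`; the in-process list stands in for the shared memory segment, since only the acquire/release pattern is being illustrated, and `LockedQueue` is a hypothetical name:

```python
from multiprocessing import Lock

class LockedQueue:
    """Wraps a queue with an exclusive lock, as described:
    lock before operating on shared memory, release after."""

    def __init__(self):
        self._items = []
        self._lock = Lock()

    def push(self, item):
        with self._lock:          # other processes block here
            self._items.append(item)
        # lock released; inference elsewhere proceeds unblocked

    def pop(self):
        with self._lock:          # exclusive while reading
            return self._items.pop(0) if self._items else None
```

The critical section covers only the queue operation, matching the point made later in the description that lock hold time is short relative to inference time.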
Batch waiting device: the batch waiting device serves the model's batched inference. It waits, without exceeding a timeout, for data in the shared memory storage space to be gathered into one batch, and the batch size is configurable by parameter. Because the average per-item inference time of current AI inference frameworks decreases as the batch size grows, the batch waiting device accelerates model inference.
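The batch waiting device's behavior (gather a batch unless a timeout expires) can be sketched as follows, assuming Python; `wait_for_batch` and its polling interval are illustrative choices, not from the patent:

```python
import time

def wait_for_batch(queue_get, batch_size, timeout_s):
    """Collect up to `batch_size` items from `queue_get` (a callable
    returning the next item or None), giving up once `timeout_s`
    elapses and returning whatever partial batch has accumulated."""
    batch = []
    deadline = time.monotonic() + timeout_s
    while len(batch) < batch_size and time.monotonic() < deadline:
        item = queue_get()
        if item is None:
            time.sleep(0.001)     # nothing buffered yet; poll again
        else:
            batch.append(item)
    return batch
```

A partial batch on timeout keeps latency bounded while still letting the inference framework amortize per-call overhead across a full batch when data arrives fast enough.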
The control distribution module comprises a concurrency controller, a function scheduling module and a service data analysis module. The service data analysis module receives and parses data from the upper-layer service module and passes the parsed data to the concurrency control module; the concurrency control module distributes the data to a plurality of data-flow pipelines, pushing in a balanced manner according to the accumulation of data in each pipeline, and detection results are returned to the upper-layer service module through the service data analysis module. The function scheduling module mainly performs model scheduling for different service data within a service scenario.
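Balanced pushing by accumulation reduces to routing each item to the least-loaded pipeline; a minimal sketch, assuming Python and a hypothetical `pick_pipeline` helper:

```python
def pick_pipeline(backlogs):
    # Balanced pushing: choose the pipeline (detection unit) whose
    # queue currently holds the least accumulated data. `backlogs`
    # is the current item count per pipeline; ties go to the
    # lowest index.
    return min(range(len(backlogs)), key=lambda i: backlogs[i])
```

The concurrency controller would call this before each push and increment the chosen pipeline's backlog counter.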
In this embodiment, the process scheduling center schedules model processes according to the upper-layer control distribution module and the model inference situation, daemonizes the processes currently executing inference tasks, receives the input of the upper-layer pipeline flow, and distributes the data flow to each detection unit.
In this embodiment, the process scheduling center uses a dynamic scheduling method for the number of multi-model inference processes to adjust the process count of each detection model in a detection unit. The dynamic scheduling performs process scheduling according to algorithm inference time; the processes that support scheduling in the deep synthesis detection system include the multi-model inference processes, the image/video file IO processes, the data preprocessing and post-processing processes, and, in some service scenarios, network request processes that also carry data. The scheduling proceeds as follows.
To allocate the number of processes among the models under the GPU resources, the inference time of each model is measured; the time of the fastest-inferring model is recorded as T1, and the ratio of each other model's inference time Ti to T1 is computed and recorded as Ni. The number of processes for each model, Pi, can then be scheduled according to Ni, so the most time-consuming models receive the most processes. Because the data flow of each model may change while inferring service data, the corresponding process counts are recalculated and adjusted from the processes' data concurrency at a fixed period.
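The Ni = Ti / T1 rule above can be sketched as follows, assuming Python; `schedule_processes` is a hypothetical name and the rounding policy is an illustrative choice:

```python
def schedule_processes(infer_times, base_procs=1):
    """Given each model's measured inference time Ti, take the
    fastest as T1 and give each model roughly Ni = Ti / T1 times
    the baseline process count, so slower models get
    proportionally more parallel workers."""
    t1 = min(infer_times.values())
    return {name: max(1, round(base_procs * ti / t1))
            for name, ti in infer_times.items()}
```

Re-running this at a fixed period with fresh timing measurements gives the dynamic adjustment the text describes.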
To configure the image/video file IO processes and the data preprocessing and post-processing processes under the CPU resources, the number of each CPU-bound process type is first set to 1, a batch of test data is inferred, the time ratio of each process is computed, and the process counts are adjusted by that ratio in the same way as in the GPU resource configuration.
GPU resources, CPU resources, memory and storage are all used while the deep synthesis detection system runs. The system's performance ceiling depends on the lowest-specified component, and a balanced configuration lets every main component reach the system's best performance at the lowest cost; the hardware configuration of the device therefore needs to be tuned for the specific models with the hardware characteristics in mind.
At the software level, the performance of the deep synthesis detection system can be improved by adding detection units as in fig. 2; the composition of a detection unit is shown in fig. 3. Each detection unit has its own GPU compute, CPU compute and memory footprint, so under a limited cost a balanced configuration of compute card, CPU and memory maximizes detection performance. The balanced device configuration proceeds as follows:
firstly, selecting the type of a GPU card, starting a depth synthesis detection system, starting a detection unit, reasoning a batch of business data, and observing the following indexes in the reasoning process: GPU utilization rate and GPU video memory occupancy rate.
Next, determine the number n of detection units that can be started on a single card from the GPU utilization and video memory occupancy, adjust to start n detection units, and keep observing the following indicators: CPU utilization, memory occupancy, disk IO read/write rate and network bandwidth utilization.
Record the memory occupancy and CPU utilization observed per GPU card. Whether to observe the IO read/write rate or the bandwidth utilization is determined by the service data scenario: observe the disk IO read/write rate in a local file processing scenario, and observe the bandwidth occupancy when the data arrives via network requests.
Finally, determine the CPU type, memory size, and disk type and count from the memory occupancy, CPU utilization, and bandwidth utilization or disk IO read/write rate required per GPU compute card.
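As a purely illustrative heuristic (the patent only says n is determined from the observed utilizations, without giving a formula), one might bound n by whichever GPU resource saturates first:

```python
import math

def max_detection_units(gpu_util_pct, vram_pct):
    """Estimate how many detection units one GPU card can host,
    given the utilization (%) and video memory occupancy (%)
    measured with a single unit running. Capped below at 1;
    the formula is an assumption, not from the patent."""
    return max(1, math.floor(min(100 / gpu_util_pct, 100 / vram_pct)))
```

For example, a unit that uses 20% of GPU compute but 30% of video memory is memory-bound, so three units fit on the card.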
Claims (6)
1. A deep synthesis detection system, comprising:
the service module is used for acquiring data to be detected;
the detection unit group is provided with a plurality of detection units, each detection unit comprises a shared memory and a plurality of detection models, and the detection units can push data to each detection model by adopting a pipeline data flow method;
the control distribution module is used for parsing the data to be detected received from the service module, sending the parsed data to the detection unit group, and pushing it in a balanced manner according to the accumulation of data in each detection unit;
the pipeline data flow method comprises a data pipeline, and the data pipeline comprises data type conversion, shared memory storage management, a synchronous lock and a batch waiting device;
the data type conversion is used for converting the data to be pushed into a specified data type; the shared memory storage management uses a circular queue to manage data storage; the synchronous lock is held exclusively while a process operates on the shared memory and is released after the operation completes; the batch waiting device is used for waiting, within a certain time, for the data in the shared memory storage space to be assembled into one batch.
2. The deep synthesis detection system of claim 1, further comprising:
a process scheduling center for adjusting the number of processes of each detection model based on each detection model's inference time.
3. The deep synthesis detection system of claim 2, wherein adjusting the number of processes of each detection model based on its inference time comprises:
determining an inference-time ratio for each detection model from the models' inference times under the same number of processes, the ratio being each model's inference time divided by that of the fastest-inferring model;
and adjusting the number of processes of each detection model based on its inference-time ratio, so that the ratio of each model's process count to that of the fastest model equals or approximates its inference-time ratio.
4. The deep synthesis detection system of claim 1, wherein the control distribution module comprises:
a service data analysis sub-module for receiving and parsing the data of the service module;
a concurrency control sub-module for receiving the data of the service data analysis sub-module, sending the parsed data to the detection unit group, pushing it in a balanced manner according to the accumulation of data in each detection unit, and returning the detection results output by the detection units to the service module;
and a function scheduling sub-module for scheduling the matching detection model from the model library for the different service data of a service scenario.
5. A deep synthesis detection acceleration all-in-one apparatus for running the deep synthesis detection system of any one of claims 1 to 4, characterized in that the hardware configuration method of the apparatus comprises:
first, selecting a GPU card type, starting the deep synthesis detection system and a detection unit, and performing model reasoning on a batch of business data to obtain the GPU utilization rate and GPU video memory occupancy during model reasoning;
determining, from the GPU utilization rate and video memory occupancy, the number n of detection units that can be started on a single GPU card, then starting the n detection units and collecting the CPU utilization rate, memory occupancy, disk I/O read/write rate and network bandwidth utilization during model reasoning;
and determining the CPU type, memory size, and disk type and quantity according to the memory occupancy, CPU utilization rate, network bandwidth utilization and disk I/O read/write rate required per GPU card.
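The sizing step in claim 5 amounts to dividing the card's capacity by the measured per-unit footprint along each resource axis and taking the tighter bound. A minimal sketch under that reading; the headroom factor and all names are assumptions, not values from the patent:

```python
def max_units_per_gpu(gpu_mem_total_gb, unit_mem_gb, gpu_util_per_unit, headroom=0.9):
    """Estimate the number n of detection units one GPU card can host,
    from the measured per-unit video-memory footprint and GPU
    utilization, keeping a safety headroom below full capacity."""
    by_mem = int(gpu_mem_total_gb * headroom // unit_mem_gb)   # memory-bound limit
    by_util = int(headroom // gpu_util_per_unit)               # compute-bound limit
    return max(1, min(by_mem, by_util))                        # tighter bound wins

# e.g. a 24 GB card where one unit needs 4 GB and ~20% GPU utilization
print(max_units_per_gpu(24, 4, 0.20))  # -> 4
```

Here compute, not memory, is the binding constraint (4 units at 20% utilization each, versus 5 that would fit by memory alone); the same measured-then-divide logic extends to CPU, memory, disk and network sizing per GPU card.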
6. A pipeline data streaming method, characterized in that it uses a data pipeline comprising data type conversion, shared memory storage management, a synchronous lock and a batch waiter;
the data type conversion converts the data to be pushed into a specified data type; the shared memory storage management uses a circular queue to manage data storage;
the synchronous lock is acquired exclusively when a process operates on the shared memory and released when the operation completes;
and the batch waiter waits, within a set time, for the data in the shared memory storage space to be assembled into one batch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311381322.0A CN117472599A (en) | 2023-10-24 | 2023-10-24 | Deep synthesis detection system and deep synthesis detection acceleration integrated machine equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117472599A true CN117472599A (en) | 2024-01-30 |
Family
ID=89634021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311381322.0A Pending CN117472599A (en) | 2023-10-24 | 2023-10-24 | Deep synthesis detection system and deep synthesis detection acceleration integrated machine equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117472599A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||