WO2021204227A1 - Processing resource configuration method and apparatus for neural network training and intelligent analysis - Google Patents
Processing resource configuration method and apparatus for neural network training and intelligent analysis
- Publication number: WO2021204227A1 (PCT application PCT/CN2021/086051)
- Authority: WO - WIPO (PCT)
- Prior art keywords: training, neural network, network model, processing resources, updated
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- This application relates to the field of video surveillance analysis, and in particular to a processing resource configuration method and apparatus for neural network training and intelligent analysis.
- a trained neural network can apply what it has learned to tasks in the digital world, for example: recognizing images, spoken words, or blood diseases, or recommending the next pair of shoes someone might buy. A trained neural network is faster and more efficient because it can "infer" conclusions from new data based on its training results. In the field of artificial intelligence, this process is called inference, also known as intelligent analysis.
- conventionally, collected sample data is used on a first hardware device to train the neural network model, and the trained neural network model is then transplanted to a second device where it is actually needed, so that the trained neural network model realizes its application functions through intelligent analysis.
- In other words, the neural network model is trained offline and then deployed online for intelligent analysis, so that training and deployment take place on different hardware devices. This brings inconvenience to the maintenance and upgrade of neural network model applications that perform real-time online intelligent analysis.
- This application provides a processing resource allocation method for neural network training and intelligent analysis, so as to realize the integration of neural network model training and intelligent analysis.
- the specific technical solutions are as follows:
- an embodiment of the present application provides a processing resource configuration method for neural network training and intelligent analysis, and the method includes:
- judging, at any time while the current processing resources are used for intelligent analysis by the neural network model, whether trigger logic is satisfied; when the trigger logic is satisfied, performing resource evaluation on the current processing resources, determining the currently idle processing resources, configuring the processing resources required for training the neural network model to be updated based on the currently idle processing resources, and training the neural network model to be updated.
- the trigger logic includes at least one of the following:
- the current time reaches a set first time for starting training of the neural network model to be updated, where the current time is an absolute time determined based on the system time or a relative time set based on a timer; or
- the amount of currently idle processing resources reaches a set first threshold for starting training of the neural network model to be updated.
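As an illustration only, the two trigger conditions above can be combined into a single predicate. This is a minimal sketch; the names `TriggerConfig`, `first_time`, and `first_threshold` are assumptions made for the example, not terms defined by the application:

```python
from dataclasses import dataclass

@dataclass
class TriggerConfig:
    first_time: float        # set first time for starting training (seconds)
    first_threshold: float   # set idle-resource threshold for starting training

def trigger_satisfied(current_time: float, idle_resources: float,
                      cfg: TriggerConfig) -> bool:
    """Trigger logic: start training when the set first time is reached
    OR the currently idle processing resources reach the first threshold."""
    return current_time >= cfg.first_time or idle_resources >= cfg.first_threshold
```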
- the method further includes,
- while the training of the neural network model to be updated uses the current processing resources, if the trigger logic is no longer satisfied, the neural network model currently in training is stored, the training thread is suspended, and the neural network model is started for intelligent analysis.
- the performing training of the neural network model to be updated based on currently idle processing resources includes:
- while the training of the neural network model uses the current processing resources, if at any time the trigger logic is not satisfied, the method further includes:
- the training parameters include one of the number of concurrent threads, thread start signals and waiting signals, number of iterations, learning rate, or any combination thereof.
- the configuration of processing resources required for training for the neural network model to be updated based on currently idle processing resources includes:
- before the judging whether the training of the neural network model to be updated has been completed, the method further includes:
- judging whether the resource amount of the currently idle processing resources reaches the set second threshold for suspending training of the neural network model to be updated; if so, the neural network model currently in training is stored and the training thread is suspended; otherwise, the current training is continued.
- the judging that the resource amount of the currently idle processing resources reaches the set second threshold for suspending training of the neural network model to be updated further includes:
- if the resource amount of the currently idle processing resources does not reach the second threshold, judging whether the current time has reached the set second time for suspending training of the neural network model to be updated; if so, the neural network model currently in training is stored and the training thread is suspended; otherwise, the current training is continued until the training of the neural network model to be updated is completed.
- the trigger logic is that the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, or the current time reaches the set first time for starting training of the neural network model to be updated.
- the training of the neural network model uses the current processing resources at any time, if the trigger logic is not satisfied, including,
- if the trigger logic is satisfied, the method further includes:
- selectively retaining foreground detection while selectively suspending the intelligent analysis performed by the neural network model, and releasing the processing resources occupied by the suspended intelligent analysis of the neural network model;
- the trigger event includes one of the following events or any combination thereof: no motion foreground is detected within a set first time threshold, no optical flow is detected within a set second time threshold, no target is detected within a set third time threshold, and no target segmentation result is detected within a set fourth time threshold.
- processing resources include one or any combination of system memory, processing resources of a processor, graphics processor GPU memory, bandwidth resources, and the number of threads;
- the first threshold includes one of a threshold of system memory, a threshold of processing resources of a processor, a threshold of GPU memory, a threshold of bandwidth resources, a threshold of the number of threads, or any combination thereof;
- the second threshold includes one of a threshold of system memory, a threshold of processing resources of a processor, a threshold of GPU memory, a threshold of bandwidth resources, a threshold of the number of threads, or any combination thereof;
- the judging whether the resource amount of the currently idle processing resources reaches the first threshold set for starting training of the neural network model to be updated further includes:
- judging, for each resource amount of the idle processing resources, whether that resource amount reaches its set threshold.
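A minimal sketch of the per-resource check described above; the dictionary-based representation of resource amounts and thresholds is an assumption made for illustration:

```python
def all_resources_reach_thresholds(idle: dict, thresholds: dict) -> bool:
    """Judge, for each resource amount of the idle processing resources
    (e.g. system memory, processor resources, GPU memory, bandwidth,
    number of threads), whether it reaches its set threshold."""
    return all(idle.get(name, 0) >= limit for name, limit in thresholds.items())
```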
- configuring adaptive training parameters for the neural network model to be updated includes:
- the judging whether the resource amount of the currently idle processing resources reaches the second threshold set for suspending training of the neural network model to be updated further includes:
- judging, for each resource amount of the idle processing resources, whether that resource amount reaches its set threshold; when any one resource reaches its set threshold, the current training is adjusted according to the currently idle processing resources until all resources reach their respective set thresholds.
- the configuring of processing resources required for training the neural network model to be updated based on currently idle processing resources further includes:
- using a certain ratio of the currently idle processing resources to configure the training parameters.
- an embodiment of the present application provides a device for processing resource configuration for neural network training and intelligent analysis, and the device includes:
- the trigger logic detection module is configured to judge, at any time while the current processing resources are used for intelligent analysis by the neural network model, whether the trigger logic is satisfied;
- the training module is configured to, when the trigger logic is satisfied, perform resource evaluation on the current processing resources, determine the currently idle processing resources, and configure the processing resources required for training the neural network model to be updated based on the currently idle processing resources; and to train the neural network model to be updated based on the processing resources required for training.
- an embodiment of the present application provides an electronic device for processing resource configuration for neural network training and intelligent analysis.
- the device includes a memory and a processor, wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor so that the processor performs the above processing resource configuration method for neural network training and intelligent analysis.
- an embodiment of the present application provides a computer-readable storage medium that stores a computer program.
- when the computer program is executed by a processor, the above processing resource configuration method for neural network training and intelligent analysis is realized.
- an embodiment of the present application provides a computer program product for executing, at runtime, the above processing resource configuration method for neural network training and intelligent analysis.
- intelligent analysis is performed at any time using the current processing resources, and through the trigger logic, the neural network model to be updated is trained based on the currently idle processing resources, which realizes the integration of intelligent analysis and training of the neural network model.
- hardware resources are used to the maximum extent for training, which realizes the flexible configuration of processing resources between intelligent analysis and training and makes self-upgrading applications of the neural network model more convenient.
- FIG. 1 is a schematic flow diagram of a method for configuring processing resources for neural network training and intelligent analysis by using event triggering in combination with video analysis according to an embodiment of the present application.
- FIG. 2 is a schematic flow diagram of a method for starting processing resource allocation for neural network training and intelligent analysis in a time-triggered manner in combination with video analysis according to an embodiment of the application.
- FIG. 3 is a schematic flowchart of a method for starting a processing resource configuration method for neural network training and intelligent analysis in a manner of triggering by idle processing resources in combination with video analysis according to an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a method for starting processing resource configuration for neural network training and intelligent analysis through a combination of event, time, and processing-resource triggering in combination with video analysis according to an embodiment of the application.
- FIG. 5 is a schematic diagram of the framework of the processing resources occupied by the neural network model for intelligent analysis and the processing resources occupied by the training of the neural network model to be updated according to an embodiment of the application.
- FIG. 6 is a schematic structural diagram of a processing resource configuration device for neural network training and intelligent analysis according to an embodiment of the application.
- FIG. 7 is a schematic structural diagram of an electronic device for processing resource configuration for neural network training and intelligent analysis according to an embodiment of the application.
- as processing resources such as computing power, memory resources, and bandwidth resources of processors become increasingly sufficient, more neural-network-based intelligent analysis functions can be realized on a chip, and the chip can also support neural network model training.
- based on the processing resources occupied during intelligent analysis by the neural network model, this application configures processing resources for training the neural network model during the time windows in which the processing resources occupied by intelligent analysis are idle, so as to realize the integration of training and intelligent analysis of the neural network model.
- processing resources usually include, but are not limited to, system memory, processing resources of the processor, GPU (Graphics Processing Unit) video memory, bandwidth resources, one of the number of threads, or any combination thereof, where the processor includes CPU (Central Processing Unit) and/or GPU.
- the embodiment of the present application implements the configuration of processing resources by setting trigger logic.
- the embodiment of the present application provides a processing resource configuration method for neural network training and intelligent analysis, which includes the following steps:
- judging, at any time while the current processing resources are used for intelligent analysis by the neural network model, whether the trigger logic is satisfied; when it is satisfied, determining the currently idle processing resources, configuring the processing resources required for training, and training the neural network model to be updated.
- intelligent analysis is performed at any time using the current processing resources, and through the trigger logic, the neural network model to be updated is trained based on the currently idle processing resources, which realizes the integration of intelligent analysis and training of the neural network model.
- hardware resources are used to the maximum extent for training, which realizes the flexible configuration of processing resources between intelligent analysis and training and makes self-upgrading applications of the neural network model more convenient.
- FIG. 1 is a schematic flowchart of a method for configuring processing resources for neural network training and intelligent analysis by using an event trigger method in combination with video analysis according to an embodiment of the present application.
- Step 101 Determine whether the current foreground motion analysis meets the set trigger event.
- there may be multiple neural network models currently used for intelligent analysis. For example, some neural network models are used for motion detection, some for target detection, some for target tracking, and some for target recognition.
- foreground motion analysis can be performed on the collected images without interruption. If the current foreground motion analysis meets the set trigger event, the intelligent analysis performed by the current neural network model is partially or completely suspended, the processing resources occupied by the suspended intelligent analysis of the neural network model are released, and then step 102 is executed.
- otherwise, the neural network model remains activated for intelligent analysis.
- the trigger event includes one of the following or any combination thereof: no motion foreground is detected within the set first time threshold, no optical flow is detected within the set second time threshold, and at the set third time threshold The target is not detected within the threshold, and the target segmentation result is not detected within the set fourth time threshold.
- if the current foreground motion analysis finds that no motion foreground is detected within the set first time threshold, no optical flow is detected within the set second time threshold, no target is detected within the set third time threshold, or no target segmentation result is detected within the set fourth time threshold, it can be determined that the current foreground motion analysis meets the set trigger event, which triggers the operation of partially or fully suspending the intelligent analysis of the current neural network model and releasing the processing resources occupied by the suspended intelligent analysis.
- the first time threshold, the second time threshold, the third time threshold, and the fourth time threshold may be the same or different; when the thresholds are different, a priority order among the trigger events can be formed.
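The event-based trigger described above could be sketched as a tracker that records the last time each kind of detection occurred and fires once its time threshold elapses with no new detection. The class and method names here are hypothetical illustrations, not terms from the application:

```python
import time

class EventTrigger:
    """Fires for each detection kind (motion foreground, optical flow,
    target, target segmentation) whose time threshold has elapsed
    without a detection. Thresholds are in seconds."""

    def __init__(self, thresholds: dict):
        self.thresholds = thresholds
        # Initially treat "now" as the last time each kind was seen.
        self.last_seen = {k: time.monotonic() for k in thresholds}

    def report(self, kind: str) -> None:
        """Record that a detection of this kind just occurred."""
        self.last_seen[kind] = time.monotonic()

    def fired(self, now: float = None) -> list:
        """Return the kinds whose no-detection interval reached the threshold."""
        now = time.monotonic() if now is None else now
        return [k for k, t in self.thresholds.items()
                if now - self.last_seen[k] >= t]
```

Different thresholds per kind naturally give the priority ordering mentioned above, since a kind with a shorter threshold fires earlier.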
- the foreground detection in the foreground motion analysis, which is highly sensitive to the foreground and occupies few processing resources, can be retained and kept running without interruption.
- the time window in which the processing resources occupied by intelligent analysis are idle does not include the time windows of foreground targets; here, a time window refers to a time period on the time slice occupied by the different analysis modules and is used to configure resources.
- foreground detection refers to detecting, through background modeling analysis, which area of the image contains motion or a target. There may be multiple foreground detection results, and intelligent analysis can be performed by the corresponding neural network based on each foreground detection result.
- Step 102 Trigger the resource evaluation of the current processing resources, so as to configure the processing resources required for the training of the neural network to be updated.
- the processing resources can include one of system memory, processing resources of the processor, GPU memory, bandwidth resources, the number of threads, or any combination thereof.
- Step 103 According to the processing resource evaluation result, automatically configure adaptive training parameters for the neural network model to be updated; the training parameters include the number of concurrent threads, thread start signals and waiting signals, number of iterations, learning rate, and so on.
- the adaptive training parameters are set based on experience; the start and wait signals are triggered based on the foreground information from foreground detection and/or the processing resources. Specifically, when targets are detected in the foreground information and/or the processing resources are less than the set threshold, the wait signal is triggered; otherwise, the start signal is triggered.
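The start/wait rule above reduces to a small decision function. This is a hedged sketch; the function name and parameters are illustrative assumptions:

```python
def training_signal(foreground_has_target: bool, idle_resources: float,
                    resource_threshold: float) -> str:
    """Trigger the wait signal when targets are present in the foreground
    information and/or the processing resources fall below the set
    threshold; otherwise trigger the start signal."""
    if foreground_has_target or idle_resources < resource_threshold:
        return "wait"
    return "start"
```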
- one neural network model to be updated can be selected according to priority for parameter configuration, or a neural network model to be updated can be matched to the currently idle processing resources, for example, by collecting statistics on the processing resources occupied by each neural network model to be updated in historical training and selecting a neural network model to be updated according to the currently idle resources.
- a first ratio of the currently idle processing resources may be configured as the processing resources required for training the neural network model to be updated; for example, 20%-30% of the total currently idle processing resources may be configured as the processing resources needed for training the neural network model to be updated.
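The first-ratio configuration could look like the following sketch, where the 25% default stands in for the 20%-30% range mentioned above; the function name, dictionary representation, and units are assumptions for illustration:

```python
def training_budget(idle: dict, ratio: float = 0.25) -> dict:
    """Configure a first ratio of the currently idle processing
    resources as the resources required for training the neural
    network model to be updated."""
    assert 0.0 < ratio <= 1.0
    return {name: amount * ratio for name, amount in idle.items()}
```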
- Step 104 Load the neural network model to be updated, the training parameters, and the training data, execute the training process, and monitor the current processing resources in real time. Once foreground information is detected, the neural network model currently in training is stored, and it is judged whether the video memory and/or memory resources in the current processing resources are sufficient. If sufficient, the training thread is paused and the neural network model is started for intelligent analysis so as not to miss the target; otherwise, the training thread is terminated (KILL) and the neural network model is started for intelligent analysis.
- judging whether the video memory and/or memory resources in the current processing resources are sufficient can be based on whether the sum of the video memory and/or memory occupied by the neural network model during training and the video memory and/or memory occupied by the neural network model for intelligent analysis is less than or equal to the total video memory and/or memory.
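The sufficiency criterion above is a simple sum-and-compare, sketched here; the function name and the choice of units are illustrative:

```python
def memory_sufficient(train_mem: float, analysis_mem: float,
                      total_mem: float) -> bool:
    """Sufficient when the memory occupied by training plus the memory
    occupied by intelligent analysis fits within the total, so the
    training thread can be paused and later resumed instead of killed."""
    return train_mem + analysis_mem <= total_mem
```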
- the loaded training data can be training data with manually annotated label information, or unlabeled data that is automatically labeled by the model.
- Step 105 Determine whether the training of the neural network model to be updated is completed, if so, save the trained neural network model and deploy the neural network model; otherwise, return to step 101.
- the embodiment of the application uses the foreground detection event as the trigger logic, trains the neural network to be updated on the same hardware without missing targets, maximizes the use of hardware resources for training, and supports self-upgrading applications of the neural network model, realizing the integration of neural network model training and intelligent analysis as well as the flexible allocation of processing resources.
- FIG. 2 is a schematic flow diagram of a method for configuring processing resources for neural network training and intelligent analysis in a time-triggered manner combined with video analysis according to an embodiment of the present application.
- when the neural network model needs to be updated after iterating for a period of time and evaluating the target performance of the model, the following steps are performed:
- Step 201 Determine whether the current time has reached the set first time for starting training of the neural network model to be updated. If so, the foreground detection that is highly sensitive to the foreground is retained, the intelligent analysis performed by some or all neural network models is suspended, and step 202 is executed; otherwise, the current foreground motion analysis and the intelligent analysis of the neural network model are kept, and the flow returns to step 201.
- the first time can be determined based on video analysis data. For example, for a certain monitoring device, the historical video analysis data may show that no foreground is detected during a certain period at night, so the first time can be set within this period.
- the current time may be based on an absolute time determined by the system time, for example, a certain time on a certain day of a certain month; or, based on a relative time set by a timer, for example, a duration set by a timer.
- Step 202 Trigger the resource evaluation of the current processing resources so as to configure the processing resources required for the training of the neural network to be updated. This step is the same as step 102;
- Step 203 Automatically configure adaptive training parameters for the neural network model to be updated according to the processing resource evaluation result; this step is the same as step 103.
- Step 204 Load the neural network model to be updated, the training parameters, and the training data, execute the training process, and monitor the current processing resources in real time. Once foreground information is detected, the neural network model currently in training is stored, and it is judged whether the video memory and/or memory resources in the current processing resources are sufficient. If sufficient, the training thread is suspended and the neural network model is started for intelligent analysis; otherwise, the training thread is terminated and the neural network model is started for intelligent analysis.
- Step 205 After the intelligent analysis is finished, it is determined whether the set second time for suspending training of the neural network model to be updated has been reached; if so, the neural network model currently in training is stored and the training thread is suspended; otherwise, the flow returns to step 202.
- Step 206 It is judged whether the training of the neural network model to be updated is completed; if so, the trained neural network model is saved and deployed; otherwise, the flow returns to step 201.
- when the neural network model to be updated reaches the expected convergence condition, or the number of iterations reaches the specified number, it can be determined that the training of the neural network model to be updated is completed.
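The completion criterion above (convergence or a specified iteration count) might be expressed as follows; the loss-delta formulation of the "expected convergence condition" is an assumption made for illustration:

```python
def training_finished(loss_delta: float, iterations: int,
                      tol: float, max_iters: int) -> bool:
    """Training is complete when the expected convergence condition is
    reached (change in loss no larger than `tol`) or the iteration
    count reaches the specified number."""
    return loss_delta <= tol or iterations >= max_iters
```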
- steps 204 to 205 may also be:
- Step 204' load the neural network model to be updated, training parameters, and training data, execute the training process, and monitor current processing resources in real time.
- Step 205' It is judged whether the set second time for suspending training of the neural network model to be updated has been reached; if so, the neural network model currently in training is stored and the training thread is suspended; otherwise, the current training is continued.
- optionally, the current video can be stored in a peripheral device, and intelligent analysis by the neural network model is then performed based on the stored foreground information. This implementation prevents the training process from being accidentally interrupted by foreground detection, which helps to improve the efficiency of training.
- FIG. 3 is a schematic flow diagram of a method for triggering processing resources for neural network training and intelligent analysis by using idle processing resource triggering in combination with video analysis according to an embodiment of the present application.
- Step 301 Determine whether the amount of currently idle processing resources reaches the first threshold set for starting training of the neural network model to be updated. If yes, perform step 302; otherwise, keep the current foreground motion analysis and the intelligent analysis of the current neural network model, and return to step 301.
- the amount of idle processing resources includes one or any combination of system memory, processing resources of the processor, GPU memory, bandwidth resources, and the number of threads; specifically, thresholds can be set for each amount of idle processing resources, for example,
- the first threshold includes a threshold of system memory, a threshold of processing resources of a processor, a threshold of GPU video memory, a threshold of bandwidth resources, and a threshold of the number of threads.
- the resource amount of the currently idle processing resources is obtained by subtracting the currently occupied processing resources from the total processing resources.
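The subtraction described above, applied per resource type, could be sketched as follows; the dictionary representation of resource amounts is an assumption for illustration:

```python
def idle_resources(total: dict, occupied: dict) -> dict:
    """The resource amount of the currently idle processing resources is
    the total processing resources minus the currently occupied ones,
    computed per resource type (memory, GPU memory, bandwidth, threads)."""
    return {name: total[name] - occupied.get(name, 0.0) for name in total}
```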
- Step 302 according to the current idle processing resources, automatically configure adaptive training parameters for the neural network model to be updated; this step is the same as step 103.
- training parameters can be configured according to the amount of each idle processing resource.
- Step 303 Load the neural network model to be updated, training parameters, and training data, execute the training process, and monitor current processing resources in real time.
- Step 304 Determine whether the resource amount of the currently idle processing resources reaches the second threshold set for suspending training of the neural network model to be updated. If so, store the neural network model currently in training and suspend the training thread; otherwise, return to step 302.
- specifically, it is judged whether each resource amount reaches its corresponding set second threshold. When any one resource reaches its set threshold, the current training and the adaptive training parameters are adjusted according to the currently idle processing resources, for example, by suspending threads whose processing resource usage exceeds a set threshold and reducing the amount of training data. The training is repeatedly adjusted until all resources reach their respective set thresholds, indicating that the currently idle processing resources have been fully utilized; the neural network model currently in training is then stored and the training thread is suspended.
- Step 305 It is judged whether the training of the neural network model to be updated is completed; if so, the trained neural network model is saved and deployed; otherwise, the flow returns to step 301.
- this embodiment uses the currently idle processing resources as the trigger logic to realize real-time monitoring of the currently idle processing resources. Without affecting the intelligent analysis of the current neural network model, the idle processing resources can be utilized dynamically, so that the training of the neural network model to be updated can be completed on the same hardware.
- FIG. 4 is a flow diagram of a method that combines event, time, and processing-resource triggers with video analysis to initiate the processing resource configuration method for neural network training and intelligent analysis in an embodiment of the application.
- Step 401: Determine whether at least one of the following trigger conditions is met: whether the current foreground motion analysis satisfies the set trigger event; whether the current time has reached the set first time for starting training of the neural network model to be updated; and whether the amount of currently idle processing resources has reached the first threshold set for starting training of the neural network model to be updated.
- If yes, retain the foreground detection that is highly sensitive to the foreground, suspend the intelligent analysis performed by some or all neural network models, and proceed to step 402; otherwise, keep the current foreground motion analysis and the intelligent analysis performed by the neural network, and return to step 401.
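The three-way trigger condition of step 401 can be expressed as a single predicate. This is a hedged sketch; the parameter names are placeholders, not terms from the patent:

```python
def training_trigger_met(event_fired, now, first_time, idle_amount, first_threshold):
    """Step 401: any one of the three conditions (event, time, resources)
    is enough to proceed to step 402."""
    return (event_fired                        # foreground motion trigger event
            or now >= first_time               # set first time reached
            or idle_amount >= first_threshold) # idle resources reach first threshold
```

Only when all three conditions fail does the flow stay in analysis and loop back to step 401.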
- Step 402: Trigger resource evaluation of the current processing resources to configure the processing resources required for training the neural network model to be updated. This step is the same as step 102.
- Step 403: Automatically configure adaptive training parameters for the neural network model to be updated according to the processing resource evaluation result. This step is the same as step 103.
- Step 404: Load the neural network model to be updated, the training parameters, and the training data; execute the training process; and monitor the current processing resources in real time. Once foreground information is detected, store the neural network model currently being trained and determine whether the GPU memory and/or system memory resources among the current processing resources are sufficient. If sufficient, pause the training thread and start the neural network model for intelligent analysis; otherwise, terminate the training thread and start the neural network model for intelligent analysis.
- Step 405: Determine whether the amount of currently idle processing resources has reached the second threshold set for suspending training of the neural network model to be updated.
- Step 406: Determine whether the training of the neural network model to be updated is completed. If so, save the trained neural network model and deploy it; otherwise, store the neural network model currently being trained, suspend the training thread, and return to step 401.
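The pause-versus-terminate decision of step 404 can be sketched as below. The function and field names are hypothetical; the memory quantities would in practice come from a GPU/system memory query, which is outside this sketch:

```python
def on_foreground_detected(gpu_free, mem_free, gpu_needed, mem_needed):
    """Step 404: when foreground appears mid-training, the model in training
    is always checkpointed; the training thread is paused if memory suffices
    to keep it resident alongside analysis, and terminated otherwise.
    Analysis restarts in either case."""
    if gpu_free >= gpu_needed and mem_free >= mem_needed:
        thread_action = "pause"        # enough headroom: keep the thread suspended
    else:
        thread_action = "terminate"    # reclaim everything for analysis
    return {"checkpoint": True, "thread": thread_action, "analysis": "start"}
```

The asymmetry matters: a paused thread resumes cheaply once analysis quiets down, while a terminated one must be reloaded from the stored checkpoint.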
- This embodiment controls the training of the neural network model along three dimensions: event, time, and processing resources. It not only ensures the stability and robustness of video analysis, but also maximizes the use of idle processing resources so that they can be used flexibly.
- FIG. 5 is a schematic diagram of the processing resources occupied by the neural network model for intelligent analysis and the processing resources occupied by the neural network model to be updated for training according to an embodiment of the application.
- FIG. 6 is a schematic structural diagram of a processing resource configuration device for neural network model training and intelligent analysis according to an embodiment of the application.
- the device includes:
- a trigger logic detection module, configured to determine, at any time while the current processing resources are used by the intelligent analysis of the neural network model, whether the trigger logic is satisfied;
- a training module, configured to, when the trigger logic is satisfied, perform resource evaluation according to the current processing resources, determine the currently idle processing resources, and configure, based on the currently idle processing resources, the processing resources required for training the neural network model to be updated; and to train the neural network model to be updated based on the processing resources required for training.
- the trigger logic detection module includes at least one of the following modules:
- the foreground motion analysis module is used to detect whether the current foreground motion analysis satisfies the set trigger event, and to output a trigger signal to the training module when it is satisfied;
- the time trigger detection module is used to detect whether the current time reaches the set first time for starting training of the neural network model to be updated, and to output a trigger signal to the training module when it is satisfied;
- the processing resource detection module is used to detect whether the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, and to output a trigger signal to the training module when it is satisfied;
- the training module includes:
- a resource evaluation module, which performs resource evaluation of the current processing resources in response to the trigger signal and determines the currently idle processing resources;
- a training parameter configuration module, which configures adaptive training parameters for the neural network model to be updated according to the resource evaluation result;
- a training execution module, which loads the neural network model to be updated, the training parameters, and the training data, executes the training, and monitors the current processing resources in real time;
- the device also includes:
- a neural network model intelligent analysis module, which performs intelligent analysis based on the neural network model when the trigger logic is not satisfied;
- a model storage and deployment module, which saves the trained neural network model to be updated and deploys the model for intelligent analysis.
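The module wiring of FIG. 6 can be sketched as two small classes. This is an illustrative structure only; the class and method names are assumptions, and each check callable stands in for one of the foreground motion, time trigger, or processing resource submodules:

```python
class TriggerLogicDetector:
    """Aggregate of the optional trigger submodules of FIG. 6."""
    def __init__(self, checks):
        self.checks = checks                      # each check: () -> bool

    def fire(self):
        # Any satisfied submodule outputs a trigger signal to the training module.
        return any(check() for check in self.checks)

class Device:
    """Routes each cycle to the training module or the analysis module."""
    def __init__(self, detector, train, analyze):
        self.detector, self.train, self.analyze = detector, train, analyze

    def tick(self):
        # Trigger satisfied -> training path; otherwise keep analyzing.
        return self.train() if self.detector.fire() else self.analyze()
```

Because the detector takes any list of checks, a device can be built with one, two, or all three trigger submodules, matching the "at least one of the following modules" wording.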
- the trigger logic detection module includes at least one of the following modules: a foreground motion analysis module, a time trigger detection module, and a processing resource detection module;
- the foreground motion analysis module is used to detect whether the current foreground motion analysis satisfies the set trigger event, and to output a trigger signal to the training module when it is satisfied;
- the time trigger detection module is used to detect whether the current time reaches the set first time for starting training of the neural network model to be updated;
- the current time includes an absolute time determined based on the system time, or a relative time set based on timing; when the condition is satisfied, a trigger signal is output to the training module;
- the processing resource detection module is used to detect whether the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, and to output a trigger signal to the training module when it is met;
- the training module is also used to, at any time while the current processing resources are used for training the neural network model to be updated, store the neural network model to be updated in the current training, suspend the training thread, and start the neural network model for intelligent analysis if the trigger logic is not satisfied.
- the training module is specifically used to load the neural network model to be updated, the training parameters, and the training data, perform training, and monitor the current processing resources in real time; when foreground information is detected, determine whether the GPU memory and/or system memory resources among the current processing resources are sufficient; if the GPU memory and/or system memory resources are sufficient, suspend the training thread and start the neural network model for intelligent analysis; if the GPU memory and/or system memory resources are insufficient, terminate the training thread and start the neural network model for intelligent analysis; determine whether the training of the neural network model to be updated is completed; if so, save the trained neural network model and deploy it; otherwise, return to the step of determining, at any time while the current processing resources are used by the intelligent analysis of the neural network model, whether the trigger logic is satisfied. The training parameters include one of the number of concurrent threads, the thread start and waiting signals, the number of iterations, and the learning rate, or any combination thereof.
- the training module is specifically used to configure adaptive training parameters for the neural network model to be updated according to the currently idle processing resources; determine whether the amount of currently idle processing resources reaches the set second threshold for suspending training of the neural network model to be updated; if the amount of currently idle processing resources reaches the second threshold, store the neural network model to be updated in the current training, and suspend the training thread and/or adjust the training parameters; if the amount of currently idle processing resources does not reach the second threshold, continue the current training.
- the training module is specifically used to determine, if the amount of currently idle processing resources does not reach the second threshold, whether the current time has reached the set second time for suspending training of the neural network model to be updated; if the current time reaches the second time, store the neural network model to be updated in the current training and suspend the training thread; if the current time does not reach the second time, continue the current training until the training of the neural network model to be updated is completed.
- the training module is specifically used to, in the case where the trigger logic is that the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, or in the case where the trigger logic is that the current time reaches the set first time for starting training of the neural network model to be updated, determine whether the amount of currently idle processing resources reaches the set second threshold for suspending training of the neural network model to be updated, and whether the current time reaches the set second time for suspending training of the neural network model to be updated; if the amount of currently idle processing resources reaches the second threshold, or the current time reaches the second time, store the neural network model to be updated in the current training and suspend the training thread; if the amount of currently idle processing resources does not reach the second threshold and the current time does not reach the second time, continue the current training until the training of the neural network model to be updated is completed.
- the training module is specifically used to selectively retain foreground detection work and selectively suspend the intelligent analysis performed by the neural network model according to the sensitivity to the foreground and/or the processing resources occupied, and to release the processing resources occupied by the intelligent analysis of the suspended neural network model. The trigger event includes one of the following, or any combination thereof: no motion foreground is detected within a set first time threshold; no optical flow is detected within a set second time threshold; no target is detected within a set third time threshold; and no target segmentation result is detected within a set fourth time threshold.
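The absence-of-activity trigger events above all share one shape: a signal has not been observed for at least its own time window. A hedged sketch (names and units are assumptions; times are in arbitrary consistent units):

```python
def quiet_trigger(last_seen, now, windows):
    """A trigger event fires when any monitored signal (motion foreground,
    optical flow, target, segmentation result) has been absent for at
    least its configured time threshold."""
    return any(now - last_seen[name] >= windows[name] for name in windows)
```

Each signal gets its own threshold (the first through fourth time thresholds of the text), and `any` reflects the "one event or any combination" wording.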
- the processing resources include one of system memory, the processing resources of the processor, GPU memory, bandwidth resources, and the number of threads, or any combination thereof;
- the first threshold includes one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof;
- the second threshold includes one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof;
- the training module is specifically used to determine, according to the respective amounts of the currently idle processing resources, whether each resource amount has reached its corresponding set first threshold; configure the adaptive training parameters of the neural network model to be updated according to the respective amounts of the currently idle processing resources; and determine, according to the respective amounts of the currently idle processing resources, whether each resource amount has reached its corresponding set second threshold; when any resource amount reaches its corresponding set second threshold, the current training and the adaptive training parameters are adjusted according to the currently idle processing resources until all resource amounts reach their corresponding set second thresholds.
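The per-resource adjustment loop can be sketched as follows. This is an illustration under assumptions: resource amounts are abstract integer units, and one "round" shifts a fixed step of each still-slack resource into training:

```python
def saturate_resources(idle, thresholds, step=1):
    """Per-resource variant of step 304: keep shifting each resource that
    still has slack into training, one step per round, until the idle
    amount of every resource is down at its second threshold.
    Mutates `idle` in place and returns the number of adjustment rounds."""
    rounds = 0
    while any(idle[k] > thresholds[k] for k in thresholds):
        for k in thresholds:
            if idle[k] > thresholds[k]:       # this resource still has slack
                idle[k] = max(thresholds[k], idle[k] - step)
        rounds += 1
    return rounds
```

Resources with less slack (e.g. GPU memory) saturate first while the loop keeps adjusting the others, matching "until all resource amounts reach the second threshold set correspondingly".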
- the training module is specifically used to select the neural network model to be updated according to priority, or to match the neural network model to be updated according to the currently idle processing resources, and to configure a certain ratio of the currently idle processing resources as the processing resources required for training the neural network model to be updated.
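The priority-based selection with a fixed resource ratio can be sketched as below. The field names and the 0.8 ratio are assumptions chosen for illustration; the patent only says "a certain ratio":

```python
def pick_and_budget(candidates, idle_amount, ratio=0.8):
    """With several models awaiting update: choose by priority (here,
    lower number = higher priority, an assumed convention) and grant a
    fixed ratio of the currently idle resources as the training budget."""
    chosen = min(candidates, key=lambda m: m["priority"])
    return chosen["name"], idle_amount * ratio
```

Keeping the ratio below 1.0 leaves headroom so that a burst of analysis load does not immediately hit the suspension threshold.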
- In this way, intelligent analysis is performed at any time using the current processing resources, and through the trigger logic, the neural network model to be updated is trained based on the currently idle processing resources, realizing the integration of intelligent analysis and training of the neural network model on the same hardware.
- The hardware resources are used to the maximum extent for training, which realizes flexible configuration of processing resources between intelligent analysis and training and makes self-upgrading of the neural network model more convenient.
- An embodiment of the present application provides an electronic device for processing resource configuration for neural network training and intelligent analysis. As shown in FIG. 7, the device may include a processor 701 and a memory 702; the memory 702 stores instructions executable by the processor 701, and the instructions are executed by the processor 701 so that the processor performs the above processing resource configuration method for neural network training and intelligent analysis.
- the foregoing memory may include RAM (Random Access Memory, random access memory), and may also include NVM (Non-Volatile Memory, non-volatile memory), such as at least one disk storage.
- the memory may also be at least one storage device located remotely from the foregoing processor.
- the above-mentioned processor may be a general-purpose processor, including a CPU or an NP (Network Processor); it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- the memory 702 and the processor 701 may perform data transmission through a wired connection or a wireless connection, and the electronic device may communicate with other devices through a wired communication interface or a wireless communication interface. What is shown in FIG. 7 is only an example of data transmission between the processor 701 and the memory 702 through a bus, and is not a limitation on the specific connection manner.
- An embodiment of the present application also provides a computer-readable storage medium; a computer program is stored in the storage medium, and when the computer program is executed by a processor, the foregoing processing resource configuration method for neural network training and intelligent analysis is implemented.
- the embodiment of the present application also provides a computer program product for executing the above-mentioned processing resource configuration method for neural network training and intelligent analysis at runtime.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from one website, computer, server, or data center to another.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
- the usable medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD (Digital Versatile Disc)), or a semiconductor medium (such as an SSD (Solid State Disk)), etc.
- the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk.
Claims (20)
- 1. A processing resource configuration method for neural network training and intelligent analysis, characterized in that the method comprises: at any time while the current processing resources are used by a neural network model performing intelligent analysis, determining whether trigger logic is satisfied; if the trigger logic is satisfied, performing resource evaluation according to the current processing resources to determine currently idle processing resources; configuring, based on the currently idle processing resources, the processing resources required for training a neural network model to be updated; and training the neural network model to be updated based on the processing resources required for training.
- 2. The method of claim 1, wherein the trigger logic comprises at least one of the following: the current foreground motion analysis satisfies a set trigger event; the current time reaches a set first time for starting training of the neural network model to be updated, the current time comprising an absolute time determined based on the system time, or a relative time set based on timing; and the amount of currently idle processing resources reaches a set first threshold for starting training of the neural network model to be updated; after the step of training the neural network model to be updated based on the processing resources required for training, the method further comprises: at any time while the current processing resources are used for training the neural network model to be updated, if the trigger logic is not satisfied, storing the neural network model to be updated in the current training, suspending the training thread, and starting the neural network model for intelligent analysis.
- 3. The method of claim 2, wherein the step of training the neural network model to be updated based on the processing resources required for training comprises: loading the neural network model to be updated, training parameters, and training data, performing training, and monitoring the current processing resources in real time; before the step of storing the neural network model to be updated in the current training, suspending the training thread, and starting the neural network model for intelligent analysis, the method further comprises: when foreground information is detected, determining whether the graphics processing unit (GPU) memory and/or system memory resources among the current processing resources are sufficient; the step of suspending the training thread and starting the neural network model for intelligent analysis comprises: if the GPU memory and/or system memory resources are sufficient, suspending the training thread and starting the neural network model for intelligent analysis; the method further comprises: if the GPU memory and/or system memory resources are insufficient, terminating the training thread and starting the neural network model for intelligent analysis; determining whether the training of the neural network model to be updated is completed; if so, saving the trained neural network model and deploying the neural network model; otherwise, returning to the step of determining, at any time while the current processing resources are used by the neural network model performing intelligent analysis, whether the trigger logic is satisfied; the training parameters comprise one of the number of concurrent threads, thread start and waiting signals, the number of iterations, and the learning rate, or any combination thereof.
- 4. The method of claim 3, wherein the step of configuring, based on the currently idle processing resources, the processing resources required for training the neural network model to be updated comprises: configuring adaptive training parameters for the neural network model to be updated according to the currently idle processing resources; before the step of determining whether the training of the neural network model to be updated is completed, the method further comprises: determining whether the amount of currently idle processing resources reaches a set second threshold for suspending training of the neural network model to be updated; the step of storing the neural network model to be updated in the current training comprises: if the amount of currently idle processing resources reaches the second threshold, storing the neural network model to be updated in the current training, and suspending the training thread and/or adjusting the training parameters; the method further comprises: if the amount of currently idle processing resources does not reach the second threshold, continuing the current training.
- 5. The method of claim 4, wherein before the step of continuing the current training, the method further comprises: if the amount of currently idle processing resources does not reach the second threshold, determining whether the current time reaches a set second time for suspending training of the neural network model to be updated; the step of continuing the current training comprises: if the current time does not reach the second time, continuing the current training until the training of the neural network model to be updated is completed; the step of storing the neural network model to be updated in the current training comprises: if the current time reaches the second time, storing the neural network model to be updated in the current training and suspending the training thread.
- 6. The method of claim 2, wherein, in the case where the trigger logic is that the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, or in the case where the trigger logic is that the current time reaches the set first time for starting training of the neural network model to be updated, before the step of storing the neural network model to be updated in the current training, suspending the training thread, and starting the neural network model for intelligent analysis, the method further comprises: determining whether the amount of currently idle processing resources reaches a set second threshold for suspending training of the neural network model to be updated, and whether the current time reaches a set second time for suspending training of the neural network model to be updated; the step of storing the neural network model to be updated in the current training and suspending the training thread comprises: if the amount of currently idle processing resources reaches the second threshold, or the current time reaches the second time, storing the neural network model to be updated in the current training and suspending the training thread; the method further comprises: if the amount of currently idle processing resources does not reach the second threshold and the current time does not reach the second time, continuing the current training until the training of the neural network model to be updated is completed.
- 7. The method of claim 2, wherein before the step of performing resource evaluation according to the current processing resources to determine the currently idle processing resources, the method further comprises: selectively retaining foreground detection work and selectively suspending the intelligent analysis performed by the neural network model according to the sensitivity to the foreground and/or the processing resources occupied, and releasing the processing resources occupied by the intelligent analysis of the suspended neural network model; the trigger event comprises one of the following, or any combination thereof: no motion foreground is detected within a set first time threshold; no optical flow is detected within a set second time threshold; no target is detected within a set third time threshold; and no target segmentation result is detected within a set fourth time threshold.
- 8. The method of claim 4, wherein the processing resources comprise one of system memory, the processing resources of the processor, GPU memory, bandwidth resources, and the number of threads, or any combination thereof; the first threshold comprises one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof; the second threshold comprises one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof; the step of determining whether the amount of currently idle processing resources reaches the first threshold for starting training of the neural network model to be updated comprises: determining, according to the respective amounts of the currently idle processing resources, whether each resource amount reaches its corresponding set first threshold; the step of configuring adaptive training parameters for the neural network model to be updated according to the currently idle processing resources comprises: configuring the adaptive training parameters of the neural network model to be updated according to the respective amounts of the currently idle processing resources; the step of determining whether the amount of currently idle processing resources reaches the set second threshold for suspending training of the neural network model to be updated comprises: determining, according to the respective amounts of the currently idle processing resources, whether each resource amount reaches its corresponding set second threshold; and, when any resource amount reaches its corresponding set second threshold, adjusting the current training and the adaptive training parameters according to the currently idle processing resources until all resource amounts reach their corresponding set second thresholds.
- 9. The method of claim 1, wherein there are two or more neural network models to be updated; the step of configuring, based on the currently idle processing resources, the processing resources required for training the neural network model to be updated comprises: selecting the neural network model to be updated according to priority, or matching the neural network model to be updated according to the currently idle processing resources; and configuring a certain ratio of the currently idle processing resources as the processing resources required for training the neural network model to be updated.
- 10. A processing resource configuration apparatus for neural network training and intelligent analysis, characterized in that the apparatus comprises: a trigger logic detection module, configured to determine, at any time while the current processing resources are used by a neural network model performing intelligent analysis, whether trigger logic is satisfied; and a training module, configured to, when the trigger logic is satisfied, perform resource evaluation according to the current processing resources, determine currently idle processing resources, configure, based on the currently idle processing resources, the processing resources required for training a neural network model to be updated, and train the neural network model to be updated based on the processing resources required for training.
- 11. The apparatus of claim 10, wherein the trigger logic detection module comprises at least one of the following modules: a foreground motion analysis module, a time trigger detection module, and a processing resource detection module; the foreground motion analysis module is configured to detect whether the current foreground motion analysis satisfies a set trigger event, and to output a trigger signal to the training module when it is satisfied; the time trigger detection module is configured to detect whether the current time reaches a set first time for starting training of the neural network model to be updated, the current time comprising an absolute time determined based on the system time, or a relative time set based on timing, and to output a trigger signal to the training module when it is satisfied; the processing resource detection module is configured to detect whether the amount of currently idle processing resources reaches a set first threshold for starting training of the neural network model to be updated, and to output a trigger signal to the training module when it is satisfied; the training module is further configured to, at any time while the current processing resources are used for training the neural network model to be updated, if the trigger logic is not satisfied, store the neural network model to be updated in the current training, suspend the training thread, and start the neural network model for intelligent analysis.
- 12. The apparatus of claim 11, wherein the training module is specifically configured to load the neural network model to be updated, the training parameters, and the training data, perform training, and monitor the current processing resources in real time; when foreground information is detected, determine whether the graphics processing unit (GPU) memory and/or system memory resources among the current processing resources are sufficient; if the GPU memory and/or system memory resources are sufficient, suspend the training thread and start the neural network model for intelligent analysis; if the GPU memory and/or system memory resources are insufficient, terminate the training thread and start the neural network model for intelligent analysis; determine whether the training of the neural network model to be updated is completed; if so, save the trained neural network model and deploy the neural network model; otherwise, return to performing the step of determining, at any time while the current processing resources are used by the neural network model performing intelligent analysis, whether the trigger logic is satisfied; the training parameters comprise one of the number of concurrent threads, thread start and waiting signals, the number of iterations, and the learning rate, or any combination thereof.
- 13. The apparatus of claim 12, wherein the training module is specifically configured to configure adaptive training parameters for the neural network model to be updated according to the currently idle processing resources; determine whether the amount of currently idle processing resources reaches a set second threshold for suspending training of the neural network model to be updated; if the amount of currently idle processing resources reaches the second threshold, store the neural network model to be updated in the current training, and suspend the training thread and/or adjust the training parameters; if the amount of currently idle processing resources does not reach the second threshold, continue the current training.
- 14. The apparatus of claim 13, wherein the training module is specifically configured to, if the amount of currently idle processing resources does not reach the second threshold, determine whether the current time reaches a set second time for suspending training of the neural network model to be updated; if the current time reaches the second time, store the neural network model to be updated in the current training and suspend the training thread; if the current time does not reach the second time, continue the current training until the training of the neural network model to be updated is completed.
- 15. The apparatus of claim 11, wherein the training module is specifically configured to, in the case where the trigger logic is that the amount of currently idle processing resources reaches the set first threshold for starting training of the neural network model to be updated, or in the case where the trigger logic is that the current time reaches the set first time for starting training of the neural network model to be updated, determine whether the amount of currently idle processing resources reaches a set second threshold for suspending training of the neural network model to be updated, and whether the current time reaches a set second time for suspending training of the neural network model to be updated; if the amount of currently idle processing resources reaches the second threshold, or the current time reaches the second time, store the neural network model to be updated in the current training and suspend the training thread; if the amount of currently idle processing resources does not reach the second threshold and the current time does not reach the second time, continue the current training until the training of the neural network model to be updated is completed.
- 16. The apparatus of claim 11, wherein the training module is specifically configured to selectively retain foreground detection work and selectively suspend the intelligent analysis performed by the neural network model according to the sensitivity to the foreground and/or the processing resources occupied, and release the processing resources occupied by the intelligent analysis of the suspended neural network model; the trigger event comprises one of the following, or any combination thereof: no motion foreground is detected within a set first time threshold; no optical flow is detected within a set second time threshold; no target is detected within a set third time threshold; and no target segmentation result is detected within a set fourth time threshold.
- 17. The apparatus of claim 13, wherein the processing resources comprise one of system memory, the processing resources of the processor, GPU memory, bandwidth resources, and the number of threads, or any combination thereof; the first threshold comprises one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof; the second threshold comprises one of a threshold of system memory, a threshold of the processing resources of the processor, a threshold of GPU memory, a threshold of bandwidth resources, and a threshold of the number of threads, or any combination thereof; the training module is specifically configured to determine, according to the respective amounts of the currently idle processing resources, whether each resource amount reaches its corresponding set first threshold; configure the adaptive training parameters of the neural network model to be updated according to the respective amounts of the currently idle processing resources; determine, according to the respective amounts of the currently idle processing resources, whether each resource amount reaches its corresponding set second threshold; and, when any resource amount reaches its corresponding set second threshold, adjust the current training and the adaptive training parameters according to the currently idle processing resources until all resource amounts reach their corresponding set second thresholds.
- 18. The apparatus of claim 10, wherein there are two or more neural network models to be updated; the training module is specifically configured to select the neural network model to be updated according to priority, or match the neural network model to be updated according to the currently idle processing resources; and configure a certain ratio of the currently idle processing resources as the processing resources required for training the neural network model to be updated.
- 19. An electronic device for processing resource configuration for neural network training and intelligent analysis, characterized in that the device comprises a memory and a processor, wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor so that the processor performs the processing resource configuration method for neural network training and intelligent analysis according to any one of claims 1 to 9.
- 20. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, and when the computer program is executed by a processor, the processing resource configuration method for neural network training and intelligent analysis according to any one of claims 1 to 9 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010272345.8A CN111752703B (zh) | 2020-04-09 | 2020-04-09 | 用于神经网络训练和智能分析的处理资源配置方法和装置 |
CN202010272345.8 | 2020-04-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021204227A1 true WO2021204227A1 (zh) | 2021-10-14 |
Family
ID=72673185
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/086051 WO2021204227A1 (zh) | 2020-04-09 | 2021-04-09 | 用于神经网络训练和智能分析的处理资源配置方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111752703B (zh) |
WO (1) | WO2021204227A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117593171A (zh) * | 2024-01-15 | 2024-02-23 | 西安甘鑫科技股份有限公司 | 基于fpga的图像采集储存处理方法 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111752703B (zh) * | 2020-04-09 | 2024-03-19 | 杭州海康威视数字技术股份有限公司 | 用于神经网络训练和智能分析的处理资源配置方法和装置 |
CN113626195A (zh) * | 2021-08-10 | 2021-11-09 | 云从科技集团股份有限公司 | 神经网络模型的执行方法、装置、设备和存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190012575A1 (en) * | 2017-07-04 | 2019-01-10 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method, apparatus and system for updating deep learning model |
CN109558940A (zh) * | 2018-11-09 | 2019-04-02 | 深圳市康拓普信息技术有限公司 | 一种深度学习模型训练的管理方法和系统 |
CN110175677A (zh) * | 2019-04-16 | 2019-08-27 | 平安普惠企业管理有限公司 | 自动更新方法、装置、计算机设备及存储介质 |
CN110502340A (zh) * | 2019-08-09 | 2019-11-26 | 广东浪潮大数据研究有限公司 | 一种资源动态调整方法、装置、设备及存储介质 |
CN110705719A (zh) * | 2018-06-21 | 2020-01-17 | 第四范式(北京)技术有限公司 | 执行自动机器学习的方法和装置 |
CN111752703A (zh) * | 2020-04-09 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | 用于神经网络训练和智能分析的处理资源配置方法和装置 |
-
2020
- 2020-04-09 CN CN202010272345.8A patent/CN111752703B/zh active Active
-
2021
- 2021-04-09 WO PCT/CN2021/086051 patent/WO2021204227A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111752703A (zh) | 2020-10-09 |
CN111752703B (zh) | 2024-03-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21784965; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
122 | Ep: pct application non-entry in european phase (Ref document number: 21784965; Country of ref document: EP; Kind code of ref document: A1)
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 190523))