WO2020258508A1 - Model hyperparameter adjustment control method and apparatus, computer device, and storage medium - Google Patents

Model hyperparameter adjustment control method and apparatus, computer device, and storage medium

Info

Publication number
WO2020258508A1
WO2020258508A1 (PCT/CN2019/103660)
Authority
WO
WIPO (PCT)
Prior art keywords
model
information
preset
trained
performance
Prior art date
Application number
PCT/CN2019/103660
Other languages
English (en)
French (fr)
Inventor
陈娴娴
阮晓雯
徐亮
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020258508A1 publication Critical patent/WO2020258508A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the embodiments of the present application relate to the technical field of model training, in particular to a method, device, computer equipment, and storage medium for adjusting and controlling model hyperparameters.
  • With the development of science and technology, people's work and life involve massive amounts of data. In the era of big data, analysis and modeling based on the massive data of various scenarios has become routine work and research in the major artificial intelligence fields. Because of their inherent complexity, all kinds of models have many parameters that cannot be learned from the data; such parameters are called hyperparameters.
  • The inventor realizes that the values of the hyperparameters often directly affect the prediction performance of the resulting model in different scenarios, so the hyperparameters need to be adjusted during modeling to improve model accuracy.
  • However, the diversity of hyperparameters and the uncertainty of the hyperparameter space make hyperparameter tuning highly complex and time-consuming, which reduces tuning efficiency.
  • the embodiments of the present application provide a model hyperparameter adjustment and control method, device, computer equipment, and storage medium that improve the efficiency of parameter adjustment by obtaining an adjustment plan based on a sample and a model.
  • A technical solution adopted by the embodiments of the present application provides a model hyperparameter adjustment control method, including the following steps: obtaining original information of a model to be trained, where the original information includes model information of the model to be trained and sample information of the sample data used to train the model to be trained; calculating the model information and the sample information according to a preset performance algorithm to generate model performance information; and searching a preset tuning database, according to the model performance information, for a target tuning scheme corresponding to the model performance information, so that the model to be trained adjusts its hyperparameters according to the target tuning scheme.
  • An embodiment of the present application also provides a model hyperparameter adjustment control device, including: a first acquisition module, configured to obtain original information of a model to be trained, where the original information includes model information of the model to be trained and sample information of the sample data used to train the model to be trained; a first processing module, configured to calculate the model information and the sample information according to a preset performance algorithm to generate model performance information; and a first execution module, configured to search a preset tuning database, according to the model performance information, for a target tuning scheme corresponding to the model performance information, so that the model to be trained adjusts its hyperparameters according to the target tuning scheme.
  • an embodiment of the present application further provides a computer device including a memory and a processor.
  • The memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform a model hyperparameter adjustment control method including the following steps: obtaining original information of a model to be trained, where the original information includes model information of the model to be trained and sample information of the sample data used to train the model to be trained; calculating the model information and the sample information according to a preset performance algorithm to generate model performance information; and searching a preset tuning database, according to the model performance information, for a target tuning scheme corresponding to the model performance information, so that the model to be trained adjusts its hyperparameters according to the target tuning scheme.
  • embodiments of the present application also provide a non-volatile storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform a model hyperparameter adjustment control method including the following steps: obtaining original information of a model to be trained, where the original information includes model information of the model to be trained and sample information of the sample data used to train the model to be trained; calculating the model information and the sample information according to a preset performance algorithm to generate model performance information; and searching a preset tuning database, according to the model performance information, for a target tuning scheme corresponding to the model performance information, so that the model to be trained adjusts its hyperparameters according to the target tuning scheme.
  • This application obtains the original information of the model to be trained, including the model information of the model to be trained and the sample information of the sample data used to train it; calculates the model information and the sample information according to a performance algorithm to generate the model performance information of the model to be trained; and then searches the tuning database for a target tuning scheme according to that model performance information, so that the model to be trained can adjust its hyperparameters according to the target tuning scheme.
  • Because the tuning scheme is selected automatically from the input sample data and the model information of the model to be trained, the selected scheme is adapted to the model and to the sample data, which effectively reduces the difficulty of tuning and improves tuning efficiency.
  • FIG. 1 is a schematic flowchart of the basic flow of a model hyperparameter adjustment control method according to an embodiment of this application;
  • FIG. 2 is a schematic flowchart of obtaining terminal performance according to an embodiment of this application;
  • FIG. 3 is a schematic flowchart of obtaining machine configuration information according to an embodiment of this application;
  • FIG. 4 is a schematic flowchart of storing a target tuning scheme according to an embodiment of this application;
  • FIG. 5 is a schematic flowchart of associating a tuning scheme with an operator according to an embodiment of this application;
  • FIG. 6 is a schematic flowchart of obtaining the identity information of an operator according to an embodiment of this application;
  • FIG. 7 is a schematic flowchart of storing target storage information with multiple threads according to an embodiment of this application;
  • FIG. 8 is a schematic diagram of the basic structure of a model hyperparameter adjustment control device according to an embodiment of this application;
  • FIG. 9 is a block diagram of the basic structure of a computer device according to an embodiment of this application.
  • FIG. 1 is a schematic diagram of the basic flow of the method for adjusting and controlling model hyperparameters in this embodiment.
  • a model hyperparameter adjustment control method includes the following steps:
  • the model to be trained refers to the model established based on the analysis of massive data of the application scenario.
  • The original information of the model to be trained includes the model information of the model to be trained and the sample information, where the sample information describes the sample data used to train the model.
  • In implementation, the original information of the model to be trained can be entered by a user. Taking the case where the model hyperparameter adjustment control method of this application is applied to a user terminal as an example, the user terminal includes but is not limited to smart phones, smart bracelets, tablets, PC (personal computer) terminals and other intelligent electronic devices; for example, a PC terminal shows the user an input interface for the original information of the model to be trained on its display, and the system obtains the original information by monitoring the user's operations and input.
  • Alternatively, the original information of the model to be trained can be detected by the user terminal: during modeling, the system detects the selected model and the sample data to obtain the model information and the sample information respectively, where the sample data has been input into the system in advance, the model information includes the name of the model to be trained and model attribute information, and the sample information includes the data size of the sample data.
  • S1200 Calculate the model information and the sample information according to a preset performance algorithm to generate model performance information
  • After obtaining the model information of the model to be trained and the sample information of the sample data, the system calculates the model information and the sample information according to a preset performance algorithm to generate model performance information, where the performance algorithm is an algorithm preset in the system for generating model performance information from the training sample data and the model.
  • In implementation, a model level can be computed from the model information and the size of the sample data to generate the model performance information. For example, by data volume the sample data can be divided, from small to large, into first-level samples, second-level samples and third-level samples, where a first-level sample has less than 1 GB (Giga) of data, a second-level sample has between 1 GB and 5 GB, and a third-level sample has more than 5 GB.
  • The system then combines the model information with the sample-data size to generate model performance information carrying a tuning level: when the sample data is a first-level sample and the model is a machine learning model, the generated tuning level is the first level; when the sample data is a second-level sample and the model is a machine learning model, the tuning level is the second level; and when the sample data is a third-level sample and the model is a machine learning model, the tuning level is the third level. In this way model performance information is generated from the model information and the sample information of the sample data.
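  • As a minimal illustration of such a performance algorithm, the Python sketch below buckets the sample volume using the 1 GB and 5 GB cut-offs given above; the record layout and function names are assumptions, not part of the patent.

```python
def sample_level(sample_size_gb: float) -> int:
    """Bucket sample data volume into the three tiers described above."""
    if sample_size_gb < 1.0:
        return 1          # first-level sample: < 1 GB
    if sample_size_gb < 5.0:
        return 2          # second-level sample: 1-5 GB
    return 3              # third-level sample: > 5 GB

def performance_info(model_type: str, sample_size_gb: float) -> dict:
    """Toy 'performance algorithm': combine model info and sample level
    into a tuning level (the structure of this record is an assumption)."""
    return {"model_type": model_type, "tuning_level": sample_level(sample_size_gb)}

print(performance_info("machine_learning", 2.3))
# {'model_type': 'machine_learning', 'tuning_level': 2}
```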
  • the system selects the target tuning plan in the tuning database according to the model performance information, so that the model to be trained adjusts the hyperparameters according to the target tuning plan.
  • In implementation, the tuning database is a preset repository for storing and managing tuning schemes, where a tuning scheme is a preset mechanism for automatic hyperparameter tuning, for example a multifunctional automatic tuning mechanism integrating BayesianOptimization and GridSearch.
  • BayesianOptimization refers to Bayesian optimization. Optimization is essentially a process of seeking extreme values, and many problems in data science come down to finding extrema; taking derivatives is one good way to do so, but gradient-based optimization requires the function form to be known and the function to be convex, conditions that often do not hold in practice, which is where Bayesian optimization comes in. Grid Search (exhaustive or grid search) is a tuning technique that loops over every candidate parameter combination and tries each possibility; the best-performing combination is the final result, much like finding the maximum value in an array.
  • In implementation, the model hyperparameter adjustment control method of this application automatically matches a tuning scheme based on the input sample volume (the data volume of the sample data) and the corresponding model (the model to be trained). For example, when the data volume of the sample data is less than 1 GB and the model to be trained is a machine learning model, the target tuning scheme integrated from GridSearch is selected; when the data volume is greater than 1 GB and the model to be trained is a machine learning model, the target tuning scheme integrated from BayesianOptimization is selected.
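  • A hedged sketch of what these two schemes might look like in practice is given below, pairing scikit-learn's GridSearchCV with the `bayes_opt` package; the patent names the techniques but not these libraries, and the estimator, search space and 1 GB cut-off used here are illustrative assumptions.

```python
# Illustrative pairing of the two tuning schemes named above.
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC
from bayes_opt import BayesianOptimization

def tune(X, y, sample_size_gb: float):
    if sample_size_gb < 1.0:
        # Small data: exhaustive grid search over a fixed candidate grid.
        grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10],
                                               "gamma": [0.01, 0.1, 1]}, cv=3)
        grid.fit(X, y)
        return grid.best_params_
    # Larger data: Bayesian optimization over a continuous search space.
    def objective(C, gamma):
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    opt = BayesianOptimization(f=objective,
                               pbounds={"C": (0.01, 10.0), "gamma": (1e-3, 1.0)},
                               random_state=0)
    opt.maximize(init_points=3, n_iter=15)
    return opt.max["params"]
```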
  • the target tuning scheme can also be selected in combination with machine performance.
  • Machine performance refers to the performance of the device that performs the model training. For example, the tuning scheme can be matched automatically from the sample information of the sample data, the model information and the machine performance (such as the machine configuration or whether a GPU is available). Taking sample data of less than 1 GB as an example, when the model is a machine learning model and a GPU is available, the target tuning scheme integrated from Grid Search is selected; when the model is a machine learning model and no GPU is available, the target tuning scheme integrated from BayesianOptimization is selected. By automatically integrating Bayesian optimization, Grid Search and other tuning approaches as a whole, and automatically selecting the optimal tuning scheme based on the input sample volume, the machine performance and the selected model, a tuning scheme adapted to the model, the data volume and the machine performance is obtained.
  • the performance of the machine can also be calculated according to the configuration of the machine.
  • For example, the device can be detected and analyzed with machine benchmarking software (such as Master Lu, 3DMark, Furmark or SiSoftware Sandra Lite). When the data volume of the sample data is less than 1 GB, the model is a machine learning model and the score obtained with the Master Lu benchmarking software is less than 150,000 points, the target tuning scheme integrated from GridSearch is selected. Automatically selecting a matching tuning scheme according to the size level of the sample data, the model information and the machine performance greatly improves tuning efficiency.
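  • The selection logic sketched below encodes only the worked examples above (sample volume plus GPU availability); the text also describes a benchmark-score variant, and any behaviour beyond these examples is an assumption.

```python
def select_scheme(sample_size_gb: float, is_ml_model: bool, gpu_available: bool) -> str:
    """Encodes the GPU-based examples from the text; the text also gives a
    variant keyed on a benchmark score instead of GPU availability."""
    if not is_ml_model:
        return "unspecified"               # the text only discusses machine learning models
    if sample_size_gb >= 1.0:
        return "bayesian_optimization"     # example: data volume > 1 GB
    return "grid_search" if gpu_available else "bayesian_optimization"

print(select_scheme(0.5, True, gpu_available=True))    # -> "grid_search"
print(select_scheme(0.5, True, gpu_available=False))   # -> "bayesian_optimization"
```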
  • This embodiment obtains the original information of the model to be trained, including the model information of the model to be trained and the sample information of the sample data used to train it; calculates the model information and the sample information according to a performance algorithm to generate the model performance information of the model to be trained; and then searches the tuning database for a target tuning scheme according to that model performance information, so that the model to be trained can adjust its hyperparameters according to the target tuning scheme. Because the tuning scheme is selected automatically from the input sample data and the model information of the model to be trained, the selected scheme is adapted to the model and to the sample data, which effectively reduces the difficulty of tuning and improves tuning efficiency.
  • FIG. 2 is a schematic diagram of a specific process of obtaining terminal performance in an embodiment of the present application.
  • As shown in FIG. 2, before step S1100 the method further includes the following steps:
  • The performance data list is a data table of the hardware configuration information of the local terminal.
  • the automatic configuration of model hyperparameters needs to be completed by the local terminal, and the system needs to deploy the parameter adjustment scheme according to the hardware configuration of the local terminal.
  • Local terminals include but are not limited to smart phones, tablet computers, PC (personal computer, personal computer) terminals and other electronic devices.
  • The operation of the local terminal requires the support of hardware components. By obtaining the performance data list of the local terminal, the system can read the machine configuration information of the local terminal from that list.
  • a database is provided in the local terminal, and the performance data list is stored in the database. The system can obtain the performance data list of the local terminal by accessing the database.
  • The machine configuration information in the performance data list can be entered and saved by the user. For example, the local terminal is equipped with a display through which the system shows the user an input interface for the performance data list; the user enters the machine configuration information of the local terminal in that interface, the system monitors the user's operations, stores the machine configuration information in the performance data list, and then stores the performance data list in the database.
  • S1020 Calculate the machine configuration information according to a preset machine running algorithm to generate machine performance information and add it to the model performance information.
  • After obtaining the performance data list of the local terminal, the system calculates the machine configuration information in the performance data list according to a preset machine benchmark algorithm to generate machine performance information, which characterizes the machine performance of the local terminal.
  • This machine performance information is added to the model performance information, so that the system can automatically select the optimal tuning scheme based on the input sample volume (the sample data of the model to be trained), the machine performance and the model to be trained, and thereby choose a tuning scheme adapted to the model, the data volume and the machine performance.
  • In one embodiment, the machine benchmark algorithm is a tool preset by the system for calculating the machine performance of the local terminal from the terminal's hardware configuration, for example by detecting and analysing the device with benchmarking software; such software includes but is not limited to Master Lu, 3DMark, Furmark and SiSoftware Sandra Lite. By automatically selecting a suitable tuning scheme according to the size level of the sample data and the model information, combined with the machine performance of the local terminal, this embodiment can effectively improve tuning efficiency.
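  • Purely as a stand-in for such benchmarking software, the sketch below derives a rough score from the hardware configuration using `psutil`; the weighting is an illustrative assumption and is not how the named benchmark tools compute their scores.

```python
import psutil

def machine_performance_score(gpu_available: bool) -> int:
    """Toy scoring rule combining core count, RAM and GPU availability."""
    cores = psutil.cpu_count(logical=True) or 1
    ram_gb = psutil.virtual_memory().total / 2**30
    score = int(cores * 10_000 + ram_gb * 2_000)
    if gpu_available:
        score += 50_000
    return score

# Attach the machine score to the (hypothetical) model performance record.
performance_record = {"model_type": "machine_learning", "tuning_level": 2}
performance_record["machine_score"] = machine_performance_score(gpu_available=False)
```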
  • FIG. 3 is a schematic diagram of a basic process of obtaining machine configuration information in an embodiment of the present application.
  • As shown in FIG. 3, before step S1010 the method further includes the following steps:
  • the device number information is the serial number of the local terminal.
  • In implementation, the device number information of the local terminal can be set by the user in order to distinguish local terminals from one another; take a local terminal whose device number is "ZD001" as an example. The device number information is stored in the database of the local terminal, and the system can obtain it by accessing that database.
  • S1002. Generate device detection control information according to the device number information, so as to detect the local terminal according to the device detection control information to obtain the machine configuration information.
  • After obtaining the device number information of the local terminal, the system generates device detection control information from the device number information and detects the local terminal according to that control information to obtain the machine configuration information of the local terminal.
  • In implementation, the device detection control information can control the operation of the local terminal and detect its machine configuration while it is running; for example, the system uses the device detection control information to drive preset detection software (such as Master Lu) to detect the machine configuration of the local terminal, which allows the machine configuration information to be obtained accurately.
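  • A minimal stand-in for this detection step, assuming only the standard library and `psutil` rather than the detection software named above, might look as follows; the returned fields are illustrative.

```python
import platform
import psutil

def detect_machine_config(device_id: str) -> dict:
    """Collect basic hardware configuration keyed by the device number."""
    return {
        "device_id": device_id,          # e.g. "ZD001"
        "os": platform.platform(),
        "cpu": platform.processor(),
        "logical_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(psutil.virtual_memory().total / 2**30, 1),
    }

print(detect_machine_config("ZD001"))
```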
  • FIG. 4 is a schematic diagram of a basic flow chart of a storage target parameter adjustment solution according to an embodiment of the present application.
  • As shown in FIG. 4, after step S1300 the method further includes the following steps:
  • After selecting the target tuning scheme and adjusting the hyperparameters of the model to be trained according to it, the system performs structured processing on the model performance information and the target tuning scheme to generate target storage information, which is stored in a structured way.
  • The so-called structured storage method applies the principle of a tree-shaped file system to a single file, so that a single file can contain "subdirectories" just like a file system, "subdirectories" can contain deeper "subdirectories", each "subdirectory" can contain multiple files, and content that would otherwise require multiple files is saved into one file according to a tree structure and hierarchy.
  • the system constructs the model performance information and the target tuning plan into target storage information in the form of key-value pairs, thereby establishing the relationship between the model performance and the tuning plan.
  • After structuring the model performance information and the target tuning scheme into target storage information, the system stores the target storage information in the tuning database, a repository preset by the system for storing and managing target storage results.
  • By forming a key-value pair from the model performance information and the target tuning scheme and storing it in the tuning database, this embodiment associates the tuning scheme with the model performance, so that the tuning scheme can later be retrieved for any model to be trained that has the same model performance without repeating the calculation, which simplifies the tuning steps and improves the efficiency of model training.
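  • The sketch below illustrates one possible key-value layout for such a tuning database, using SQLite with a JSON-serialized performance record as the key; the schema, including the operator tag column used further below, is an assumption.

```python
import json
import sqlite3

conn = sqlite3.connect("tuning.db")
conn.execute("""CREATE TABLE IF NOT EXISTS tuning_schemes (
                    performance_key TEXT PRIMARY KEY,
                    scheme          TEXT NOT NULL,
                    operator_tag    TEXT)""")

def save_scheme(performance_info: dict, scheme: str, operator_tag: str = "") -> None:
    key = json.dumps(performance_info, sort_keys=True)   # structured record -> key
    conn.execute("INSERT OR REPLACE INTO tuning_schemes VALUES (?, ?, ?)",
                 (key, scheme, operator_tag))
    conn.commit()

def load_scheme(performance_info: dict):
    key = json.dumps(performance_info, sort_keys=True)
    row = conn.execute("SELECT scheme FROM tuning_schemes WHERE performance_key = ?",
                       (key,)).fetchone()
    return row[0] if row else None

save_scheme({"model_type": "machine_learning", "tuning_level": 2},
            "bayesian_optimization", operator_tag="Zhang San")
print(load_scheme({"model_type": "machine_learning", "tuning_level": 2}))
```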
  • FIG. 5 is a schematic diagram of a basic flow chart of associating a parameter adjustment solution and an operator in an embodiment of the present application.
  • step 1500 includes the following steps:
  • Identity information is the identification information of the operator, including but not limited to the operator’s name, job number, department and ID number.
  • In implementation, the operator's identity information can be entered by the operator. For example, when the model hyperparameter adjustment control method of this application is applied to a smart device that includes a display, the system shows the operator an identity information input interface on the display, and the operator enters the corresponding identity information there; for instance, the interface includes a name field, an employee number field and an ID card number field in which the operator enters his or her name, employee number and ID card number respectively, and the system monitors the operator's input to obtain the identity information.
  • the identity information of the operator can also be obtained by voice, for example, the voice information input by the operator is collected.
  • the voice information includes the name information and ID number information of the operator.
  • the system obtains the identity information of the operator through voice recognition technology .
  • Voice recognition technology is a technology that allows machines to convert voice signals into corresponding text or commands through the process of recognition and understanding.
  • After obtaining the operator's identity information, the system sets flag information for the target storage information according to the identity information and then saves the target storage information in the tuning database.
  • In implementation, the flag information is tag information for the target storage information; the tag marks the target (the target storage information) and its classification or content (the operator's identity information), thereby establishing an association between the target storage information and the operator so that the person responsible for the model training can be found later.
  • FIG. 6 is a schematic diagram of a basic process of obtaining the identity information of an operator in an embodiment of the present application.
  • step S1510 includes the following steps:
  • the facial image refers to the facial expression image of the operator.
  • In implementation, taking as an example a local terminal provided with a camera, the system collects the operator's face image through that camera. The face image can also be collected through a user terminal, which includes but is not limited to smart phones, smart bracelets, tablet computers and other electronic devices equipped with cameras; for example, the operator uses the front or rear camera of a smart phone to capture his or her facial expression image and uploads it to the local terminal.
  • the face image of the operator can also be obtained by taking photos or videos.
  • Taking video capture as an example, the system films the operator with the camera to obtain a target video, which can then be processed with video processing software (for example, OpenCV): the target video is split into frames, and images are extracted from it at timed intervals, for instance one target picture every 0.5 seconds, after which one of the extracted pictures is chosen at random as the operator's face image. The method is not limited to this: the sampling rate can be adapted to the specific application scenario.
  • The adjustment principle is that the stronger the system's processing capability and the higher the required tracking accuracy, the shorter the sampling interval, down to the point where it matches the frame rate of the camera; otherwise the interval can be longer, but it must not exceed 1 s.
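  • The timed sampling step might be written with OpenCV roughly as below; the 0.5 s default and the random pick come from the example above, while the function name and file path are assumptions.

```python
import random
import cv2

def sample_face_image(video_path: str, interval_s: float = 0.5):
    """Grab one frame every `interval_s` seconds and pick one at random."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps * interval_s), 1)      # frames between two samples
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return random.choice(frames) if frames else None
```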
  • The face recognition model is a tool preset by the system for recognizing face images. In implementation, an LSTM network (Long Short-Term Memory, a long short-term memory artificial neural network) can be used as the neural network model.
  • the LSTM network uses "gates” to control the discarding or adding of information to achieve the function of forgetting or memory.
  • "Gate” is a structure that allows information to pass through selectively, consisting of a sigmoid (S-shaped growth curve) function and a dot multiplication operation.
  • the output value of the sigmoid function is in the interval [0,1], 0 means completely discarded, and 1 means completely passed.
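  • For reference, the standard gate formulation consistent with this description is, in LaTeX notation (the symbols follow the usual LSTM convention and are not defined in the patent):

```latex
% Forget gate: a sigmoid over the previous hidden state and current input.
f_t = \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right), \qquad
\sigma(z) = \frac{1}{1 + e^{-z}}
% Cell-state update: element-wise (dot) multiplication, where values of f_t
% near 0 discard the corresponding component and values near 1 let it pass;
% i_t is the input gate and \tilde{c}_t the candidate state.
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
```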
  • the neural network model trained to convergence has a classifier capable of recognizing face images, where the face recognition model includes the aforementioned neural network model, and the neural network model includes N+1 classifiers, and N is a positive integer.
  • Specifically, the face image is input into the preset face recognition model to obtain the classification result of the face image from the classifier, where the classification result includes the identity classification corresponding to the face image and the confidence of that identity classification.
  • The confidence of an identity classification means that, after the face recognition model screens and classifies the face image, the image may be assigned to more than one identity classification, and a percentage value is obtained for the face image under each of them. Since the face image ultimately corresponds to a single identity, the confidences of the different identity classifications of the same face image need to be compared; for example, the collected face image of the operator is classified as Zhang San with a confidence of 0.95 and as Li Si with a confidence of 0.63.
  • the confidence level is compared with a preset first threshold, and when the confidence level is greater than the preset first threshold, it is confirmed that the identity information classification result represented by the confidence level is the identity information of the operator.
  • the preset first threshold is generally set to a value between 0.9 and 1.
  • The identity classification whose confidence exceeds the first threshold is selected as the final identity classification result, that is, the identity information represented by that confidence is confirmed. For example, when the preset first threshold is 0.9 and the operator's face image is classified as Zhang San with a confidence of 0.95, then since 0.95 > 0.9 the identity information represented by the face image is Zhang San's personal identity information.
  • the face recognition model recognizes the face image of the operator and then outputs the operator’s personal identity information.
  • By inputting the face image into the preset face recognition model, obtaining the confidence of the identity classification output by the model for that face image, and, when the confidence is greater than the preset first threshold, confirming the identity classification result represented by that confidence as the operator's personal identity information, the accuracy of identifying the identity information of the face image is improved.
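  • The thresholding step amounts to the small check below; the classifier output format (a name-to-confidence mapping) and the helper name are assumptions.

```python
def identify_operator(class_confidences: dict, threshold: float = 0.9):
    """Return the top identity only if its confidence exceeds the first threshold."""
    name, confidence = max(class_confidences.items(), key=lambda kv: kv[1])
    return name if confidence > threshold else None

print(identify_operator({"Zhang San": 0.95, "Li Si": 0.63}))  # -> "Zhang San"
print(identify_operator({"Zhang San": 0.70, "Li Si": 0.63}))  # -> None
```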
  • In implementation, when the face recognition model recognizes the operator's name from the operator's face image, the system can also look up, in a preset user database, the operator's personal information corresponding to that name.
  • the personal information of the operator includes the operator's name, ID number, and job number.
  • FIG. 7 is a schematic diagram of the basic flow of multi-thread storage of target storage information according to an embodiment of the present application.
  • step S1500 includes the following steps:
  • S1530 Establish a storage task for saving the target storage information to a preset parameter adjustment database through a thread
  • a thread is a single sequential control flow in an application.
  • A thread is a relatively independent, schedulable execution unit within a process; it is the basic unit by which the system independently schedules and dispatches the CPU, and the scheduling unit of a running program.
  • Running multiple threads at the same time to complete different tasks in a single program is called multithreading.
  • the task queue refers to a task set that includes multiple operation tasks and asynchronous calls are made between these operation tasks to solve the problem of task blocking.
  • The operation tasks in the task queue are assigned priorities. A priority is a convention: when a time-sharing operating system processes multiple jobs, it is the parameter that determines the order in which each job receives system resources, with higher-priority jobs executed first and lower-priority jobs later.
  • The system traverses the task queue and compares the priority of each operation task with that of the task to be executed, in order to find out whether the queue contains an operation task with a higher priority than the task to be executed.
  • Executing higher-priority operation tasks first keeps the system running smoothly without stalling. For example, when the system has to handle several model-training operations and the storage of target storage information at the same time, it first executes the model-training tasks and then the pending storage task, which improves the efficiency of model training.
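  • A minimal sketch of such priority-ordered execution using Python's standard library is shown below; the priority values and task bodies are illustrative, and lower numbers stand for higher priority.

```python
import queue
import threading

tasks = queue.PriorityQueue()          # entries: (priority, sequence, callable)
tasks.put((1, 0, lambda: print("train model")))                       # higher priority
tasks.put((5, 1, lambda: print("save target storage info to DB")))    # storage task

def worker():
    # Pop tasks in priority order; the storage task runs after higher-priority work.
    while not tasks.empty():
        _, _, task = tasks.get()
        task()
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
t.join()
```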
  • an embodiment of the present application also provides a model hyperparameter adjustment control device.
  • FIG. 8 is a schematic diagram of the basic structure of the model hyperparameter adjustment control device of this embodiment.
  • a model hyperparameter adjustment control device includes: a first acquisition module 2100, a first processing module 2200, and a first execution module 2300, wherein the first acquisition module 2100 is used to acquire the original model to be trained Information, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained; the first processing module 2200 is used to perform processing on the model according to a preset performance algorithm Information and the sample information are calculated to generate model performance information; the first execution module 2300 is configured to search for a target tuning solution corresponding to the model performance information in a preset tuning database according to the model performance information, and The model to be trained adjusts hyperparameters according to the target parameter adjustment scheme.
  • This embodiment obtains the original information of the model to be trained, including the model information of the model to be trained and the sample information of the sample data used to train it; calculates the model information and the sample information according to a performance algorithm to generate the model performance information of the model to be trained; and then searches the tuning database for a target tuning scheme according to that model performance information, so that the model to be trained can adjust its hyperparameters according to the target tuning scheme. Because the tuning scheme is selected automatically from the input sample data and the model information of the model to be trained, the selected scheme is adapted to the model and to the sample data, which effectively reduces the difficulty of tuning and improves tuning efficiency.
  • the model hyperparameter adjustment control device further includes: a second acquisition module and a second execution module, wherein the second acquisition module is used to acquire a preset local terminal performance data list, wherein the performance data list The machine configuration information of the local terminal is included; the second execution module is configured to calculate the machine configuration information according to a preset machine running algorithm to generate machine performance information and add it to the model performance information.
  • the model hyperparameter adjustment control device further includes: a third acquisition module and a third execution module, wherein the third acquisition module is used to acquire the device number information of the local terminal; the third execution module is used to The device number information generates device detection control information to detect the local terminal according to the device detection control information to obtain the machine configuration information.
  • the model hyperparameter adjustment control device further includes: a second processing module and a fourth execution module, wherein the second processing module is used to structurally transform the model performance information and the target tuning solution Generate target storage information; the fourth execution module is used to save the target storage information to a preset tuning database.
  • the model hyperparameter adjustment control device further includes: a first acquisition sub-module and a first execution sub-module, wherein the first acquisition sub-module is used to acquire a model training operator corresponding to the model to be trained The identity information; the first execution sub-module is used to set the flag information of the target storage information according to the identity information and store it in the tuning database, so that the target storage information is associated with the operator.
  • the model hyperparameter adjustment control device further includes: a second acquisition submodule, a first processing submodule, and a third acquisition submodule, wherein the second acquisition submodule is used to acquire the face of the operator Image; the first processing sub-module is used to input the face image into a preset face recognition model, where the face recognition model is a convolutional neural network model trained to convergence.
  • the third obtaining submodule is used to obtain the identity information of the operator output by the face recognition model.
  • the model hyperparameter adjustment control device further includes: a thread sub-module, a detection sub-module, and a second execution sub-module, wherein the thread sub-module is used to save the target storage information to a preset Adjust the storage task in the parameter database; the detection sub-module is used to detect whether there is an operation task with a higher priority than the storage task in the task queue after the storage task; the second execution sub-module is used when the task queue exists When the priority is higher than the operation task of the storage task, the operation task is executed first until the operation task is completed and the storage task is called back to execute the storage task
  • FIG. 9 is a block diagram of the basic structure of the computer device in this embodiment.
  • the computer device includes a processor, a nonvolatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • The database may store sequences of control information. When the computer-readable instructions are executed by the processor, they cause the processor to implement a model hyperparameter adjustment control method.
  • the processor of the computer equipment is used to provide calculation and control capabilities, and supports the operation of the entire computer equipment.
  • a computer readable instruction may be stored in the memory of the computer device, and when the computer readable instruction is executed by the processor, the processor may execute a model hyperparameter adjustment control method.
  • the network interface of the computer device is used to connect and communicate with the terminal.
  • Those skilled in the art can understand that the structure shown in the figure is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the processor is configured to execute the first acquisition module 2100, the first processing module 2200, and the first execution module 2300 in FIG. 8, and the memory stores program codes and various data required to execute the above modules.
  • the network interface is used for data transmission between user terminals or servers.
  • the memory in this embodiment stores the program codes and data required to execute all the sub-modules in the model hyperparameter adjustment control device, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
  • The computer obtains the original information of the model to be trained, including the model information of the model to be trained and the sample information of the sample data used to train it; calculates the model information and the sample information according to a performance algorithm to generate the model performance information of the model to be trained; and then searches the tuning database for a target tuning scheme according to that model performance information, so that the model to be trained can adjust its hyperparameters according to the target tuning scheme. Because the tuning scheme is selected automatically from the input sample data and the model information of the model to be trained, the selected scheme is adapted to the model and to the sample data, which effectively reduces the difficulty of tuning and improves tuning efficiency.
  • the present application also provides a non-volatile storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the model hyperparameter adjustment control method of any one of the foregoing embodiments.
  • Those of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Feedback Control In General (AREA)

Abstract

A model hyperparameter adjustment control method and apparatus, a computer device, and a storage medium. The method includes the following steps: obtaining original information of a model to be trained, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained (S1100); calculating the model information and the sample information according to a preset performance algorithm to generate model performance information (S1200); and searching a preset tuning database, according to the model performance information, for a target tuning scheme corresponding to the model performance information, so that the model to be trained adjusts hyperparameters according to the target tuning scheme (S1300). By obtaining the model information of the model to be trained and the sample information of the sample data, computing model performance information from them, and then selecting a target tuning scheme according to that model performance information so that the model to be trained adjusts its hyperparameters accordingly, the method can effectively reduce the difficulty of adjusting hyperparameters and improve tuning efficiency.

Description

模型超参数调整控制方法、装置、计算机设备及存储介质
本申请要求于2019年6月27日提交中国专利局、申请号为201910569483.X,发明名称为“模型超参数调整控制方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及模型训练技术领域,尤其是一种模型超参数调整控制方法、装置、计算机设备及存储介质。
背景技术
随着科技的发展,人们的工作和生活中涉及到方方面面的海量数据,在大数据时代下,基于各个场景中的海量数据进行分析和建模已经成为各大人工智能领域日常的工作和科研内容,各类模型由于其本身的复杂性会存在许多不能通过对数据进行学习得到的参数,这种参数叫做超参数。
发明人意识到超参数的取值往往会直接影响所建立模型在不同场景下的预测效果,所以在建模过程中需要对超参数进行调整以提高模型的准确性,但是,由于超参数的多样性以及超参空间的不确定性,导致超参数调参的复杂度很高且耗时严重,减低了调参效率。
发明内容
本申请实施例提供一种通过根据样本和模型取调参方案以提高调参效率的模型超参数调整控制方法、装置、计算机设备及存储介质。
为解决上述技术问题,本申请创造的实施例采用的一个技术方案是:提供一种模型超参数调整控制方法,包括下述步骤:获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所述待训练模型的样本数据的样本信息;根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
为解决上述技术问题,本申请实施例还提供一种模型超参数调整控制装置,包括:第一获取模块,用于获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所 述待训练模型的样本数据的样本信息;第一处理模块,用于根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;第一执行模块,用于根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
为解决上述技术问题,本申请实施例还提供一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行一种模型超参数调整控制方法,包括以下步骤:获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所述待训练模型的样本数据的样本信息;根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
为解决上述技术问题,本申请实施例还提供一种存储有计算机可读指令的非易失存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行一种模型超参数调整控制方法,包括以下步骤:获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所述待训练模型的样本数据的样本信息;根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
本申请通过获取待训练模型的原始信息,包括该待训练模型的模型信息以及样本数据的样本信息,样本数据是用于训练该待训练模型的,然后根据性能算法对模型信息和样本数据的样本信息进行计算生成该待训练模型的模型性能信息,再根据该模型性能信息在调参数据库中查找目标调参方案,从而使得该待训练模型根据该目标调参方案进行超参数的调整,通过根据待训练模型的输入样本数据以及模型信息自动化选择调参方案,选取的调参方案与待训练模型和样本数据适配,能有效降低调参难度提高调参效率。
附图说明
图1为本申请实施例模型超参数调整控制方法的基本流程示意图;
图2为本申请实施例获取终端性能的流程示意图;
图3为本申请实施例获取机器配置信息的流程示意图;
图4为本申请实施例存储目标调参方案的流程示意图;
图5为本申请实施例关联调参方案和操作人员的流程示意图;
图6为本申请实施例获取操作人员的身份信息的流程示意图;
图7为本申请实施例多线程存储目标存储信息的流程示意图;
图8为本申请实施例模型超参数调整控制装置基本结构示意图;
图9为本申请实施例计算机设备基本结构框图。
具体实施方式
实施例1
具体请参阅图1,图1为本实施例模型超参数调整控制方法的基本流程示意图。
如图1所示,一种模型超参数调整控制方法,包括下述步骤:
S1100、获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所述待训练模型的样本数据的样本信息;
待训练模型是指根据应用场景的海量数据分析而建立的模型,待训练模型的原始信息包括该待训练模型的模型信息以及样本信息,其中,样本信息为用于训练该待训练模型的样本数据,在实施时,待训练模型的原始信息可以由用户输入,以本申请模型超参数调整控制方法应用于用户终端为例,用户终端包括但不限于智能手机、智能手环、平板、PC(personal computer,个人计算机)终端以及其它智能电子设备,例如在PC终端通过显示器向用户展示待训练模型的原始信息输入界面,系统通过监听用户的操作和输入以获取该待训练模型的原始信息;另一方面,该待训练模型的原始信息还可以由用户终端检测得到,例如在建模的过程中,系统检测选定的模型和样本数据分别得到模型信息和样本信息,其中,样本数据是预设输入至系统中的,该模型信息包括待训练模型的名称以及模型属性信息,样本信息包括样本数据的数据大小信息。
S1200、根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;
在获取待训练模型的模型信息以及样本数据的样本信息之后,系统根据预设的性能算法对该模型信息和样本信息进行计算以生成模型性能信息,其中,性能算法是系统中预设的用于根据训练样本数据和模型计算生成模型性能信息的算法,在实施时,可以根据模型信息和样本数据的大小计算模型等级生成模型性能信息,例如:根据样本数据的大小由高到低分为第一级样本、第二级样本和第三极样本,其中,第一级样本的样本数据的数据量小于1G(Giga,吉咖),第二级样本的样本数据的数据量大于1G且小于5G,第三级样本的样本数据的数据量大于5G,系统根据模型信息和样本数据的大小进行计算生成携带有调参等级的模型性能信息,例如当样本数据为第一级样本 且模型为机器学习模型时,生成的调参等级为第一级调参等级,当样本数据为第二级样本且模型为机器学习模型时,生成的调参等级为第二级调参等级,当样本数据为第三级样本且模型为机器学习模型时,生成的调参等级为第三级调参等级,能根据模型信息和样本数据的样本信息对应生成模型性能信息。
S1300、根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
在计算生成模型性能信息后,系统根据该模型性能信息在调参数据库中选取目标调参方案,使得该待训练模型根据该目标调参方案进行超参数的调整,在实施时,调参数据库是预设的用于存储和管理调参方案的仓库,其中,调参方案是预设的超参数自动化调参机制,例如可以根据BayesianOptimization与GridSearch集成的多功能自动化调参机制,BayesianOptimization是指贝叶斯优化,所谓优化,实际上就是一个求极值的过程,数据科学的很多时候就是求极值的问题。例如求导数就是求极值的好方法,基于梯度的优化的条件是函数形式已知才能求出导数,并且函数要是凸函数才可以,然而实际上很多时候是不满足这两个条件的,所以不能用梯度优化,贝叶斯优化应运而生了,贝叶斯优化常用于解决反演问题,反演问题是指由结果及某些一般原理(或模型)出发去确定表征问题特征的参数或模型参数。Grid Search(穷举搜索、网格搜索)是一种调参手段,在所有候选的参数选择中,通过循环遍历,尝试每一种可能性,表现最好的参数就是最终的结果。其原理就像是在数组里找最大值。(为什么叫网格搜索?以有两个参数的模型为例,参数a有3种可能,参数b有4种可能,把所有可能性列出来,可以表示成一个3*4的网格,循环过程就像是在每个网格里遍历和搜索)。
在实施时,本申请模型超参数调整控制方法基于输入样本量(样本数据的数据量)以及对应模型(待训练模型)进行自动化匹配调参方案,例如:当样本数据的数据量小于1G且待训练模型为机器学习模型则选择Grid Search集成的目标调参方案,当样本数据的数据量大于1G且待训练模型为机器学习模型则选择BayesianOptimization集成的目标调参方案。
在另一个实施例中,还可以结合机器性能选取目标调参方案,机器性能是指执行模型训练的设备的性能,例如根据样本数据的样本信息、模型信息以及机器性能(例如机器的配置、有无可用GPU等)自动化匹配调参方案,以样本数据的数据量小于1G为例,当模型为机器学习模型且有可用GPU则选择Grid Search集成的目标调参方案,当模型为机器学习模型且没有可用GPU则选择 BayesianOptimization集成的目标调参方案,通过将贝叶斯与Grid Search等调参整体进行自动化集成,通过根据输入样本量、机器性能以及所选用的模型自动化进行最优调参方案选择,从而选取与模型、数据量以及机器性能相适配的调参方案。在一个实施例中,机器性能还可以根据机器的配置计算得到,例如通过机器跑分软件(例如鲁大师、3DMark、Furmark或者SiSoftware Sandra Lite)对设备进行检测和分析,例如样本数据的数据量小于1G、模型为机器学习模型且通过鲁大师跑分软件进行检测得到的分数小于15万分时,选择Grid Search集成的目标调参方案,通过根据样本数据的大小级别、模型信息并结合机器性能自动化选择相适配的调参方案,大大提高了调参效率。
本实施例通过获取待训练模型的原始信息,包括该待训练模型的模型信息以及样本数据的样本信息,样本数据是用于训练该待训练模型的,然后根据性能算法对模型信息和样本数据的样本信息进行计算生成该待训练模型的模型性能信息,再根据该模型性能信息在调参数据库中查找目标调参方案,从而使得该待训练模型根据该目标调参方案进行超参数的调整,通过根据待训练模型的输入样本数据以及模型信息自动化选择调参方案,选取的调参方案与待训练模型和样本数据适配,能有效降低调参难度提高调参效率。
在一个可选实施例中,请参阅图2,图2是本申请一个实施例获取终端性能的具体流程示意图。
如图2所示,步骤1100之前,还包括如下述步骤:
S1010、获取预设的本地终端性能数据列表,其中,所述性能数据列表包括所述本地终端的机器配置信息;
获取性能数据列表是本地终端的硬件配置信息数据表,在实施时,对于模型超参数的自动化配置需要由本地终端来完成,则系统需要根据本地终端的硬件配置调配调参方案。本地终端包括但不限于智能手机、平板电脑、PC(personal computer,个人计算机)终端以及其它电子设备,本地终端运行需要硬件配件的支持,系统通过获取本地终端的性能数据列表,即可根据该性能数据列表获取本地终端的机器配置信息。在一个实施例中,本地终端中设置有数据库,该数据库中存储有性能数据列表,系统通过访问该数据库即可获取本地终端的性能数据列表,该性能数据列表中的机器配置信息可以由用户自己输入保存,例如本地终端设置有显示器,系统通过显示器向用户展示性能数据列表输入界面,用户在该输入界面中输入本地终端的机器配置信息,系统监听用户的操作,将该机器配置信息存储至性能数据列表中,然后将性能数据列表存储至数据库中。
S1020、根据预设的机器跑分算法对所述机器配置信息进行计算 生成机器性能信息并添加至所述模型性能信息中。
在获取本地终端的性能数据列表后,系统根据预设的机器跑分算法对性能数据列表中的机器配置信息进行计算生成机器性能信息,机器性能信息表征本地终端的机器性能,并将该机器性能信息添加至模型性能信息中,系统即可根据输入样本量(待训练模型的样本数据)、机器性能以及待训练模型自动化进行最优调参方案选择,从而选取与模型、数据量以及机器性能相适配的调参方案。
在一个实施例中,机器跑分算法是系统预设的用于根据终端硬件配置计算本地终端的机器性能的工具,例如通过机器跑分软件对设备进行检测和分析,跑分软件包括但不限于鲁大师、3DMark、Furmark以及SiSoftware Sandra Lite,本实施例通过根据样本数据的大小级别、模型信息并结合本地终端的机器性能自动化选择相适配的调参方案,能有效提高调参效率。
在另一个可选实施例中,请参阅图3,图3是本申请一个实施例获取机器配置信息的基本流程示意图。
如图3所示,步骤1010之前,还包括如下述步骤:
S1001、获取所述本地终端的设备号信息;
设备号信息是本地终端的序列号,在实施时,本地终端的设备号信息可以由用户进行设置,从而将本地终端区分开来,例如:本地终端的设备号信息为“ZD001”为例,设备号信息存储与本地终端的数据库中,系统通过访问本地终端的数据库即可获取本地终端的设备号信息。
S1002、根据所述设备号信息生成设备检测控制信息,以根据所述设备检测控制信息对所述本地终端进行检测获取所述机器配置信息。
在获取本地终端的设备号信息后,系统根据该设备号信息生成设备检测控制信息,并根据该设备检测控制信息对本地终端进行检测从而获取本地终端的机器配置信息,在实施时,设备检测控制信息能控制本地终端运行,并在本地终端运行过程中检测本地终端的机器配置,例如系统通过设备检测控制信息控制预设的检测软件(例如鲁大师)检测本地终端的机器配置,能准确获取本地终端的机器配置信息。
在一个可选实施例中,请参阅图4,图4是本申请一个实施例存储目标调参方案的基本流程示意图。
如图4所示,步骤1300之后,还包括如下述步骤:
S1400、将所述模型性能信息以及所述目标调参方案进行结构化转换生成目标存储信息;
再选取目标调参方案并根据该目标调参方案调整待训练模型的超参数后,系统将该模型性能信息和目标调参方案进行结构化处理, 从而生成目标存储信息,该目标存储信息采用结构化存储方式,所谓结构化存储方法,实际是把树状文件系统的原理应用到单个的文件中,使得单个文件也能象文件系统一样包含“子目录”,“子目录”还可以包含更深层次的“子目录”,各个“子目录”可以含多个文件,把原来需要多个文件存储的内容按树状结构和层次保存到一个文件中去。系统通过将模型性能信息以及目标调参方案进行结构化转换成键值对形式的目标存储信息,从而建立模型性能与调参方案之间的联系。
S1500、将所述目标存储信息保存至预设的调参数据库中。
系统将模型性能信息和目标调参方案结构化转换成目标存储信息后,将该目标存储信息存储至调参数据库中,该调参数据库是系统预设的用于存储和管理目标存储结果的仓库,本实施例通过将模型性能信息和目标调参方案组成键值对后存储至调参数据库中,从而将调参方案与模型性能关联起来,方便后期对于同样模型性能的待训练模型调用调参方案,不需要重复对具有相同模型性能的待训练模型进行计算,简化模型调参步骤,提高模型训练效率。
在一个可选实施例中,请参阅图5,图5是本申请一个实施例关联调参方案和操作人员的基本流程示意图。
如图5所示,步骤1500包括如下述步骤:
S1510、获取与所述待训练模型相对应的模型训练操作人员的身份信息;
身份信息是操作人员的身份证明信息,包括但不限于操作人员的姓名、工号、部门以及身份证号码等信息,在实施时,操作人员的身份信息可以由操作人员自己操作输入,例如本申请模型超参数调整控制方法应用于智能设备上,该智能设备包括显示器,系统通过该显示器向操作人员展示身份信息输入界面,操作人员在该身份信息输入界面中输入对应的身份信息,例如该身份信息输入界面中包括姓名输入栏、工号输入栏和身份证号码输入栏,操作人员即可在该姓名输入栏、工号输入栏和身份证号码输入栏中分别输入姓名、工号和身份证号码,系统监听操作人员的输入操作以获取操作人员的身份信息。当然,还可以采用语音方式获取操作人员的身份信息,例如采集操作人员输入的语音信息,该语音信息中包括操作人员的姓名信息和身份证号码信息,系统通过语音识别技术获取操作人员的身份信息,语音识别技术就是让机器通过识别和理解过程把语音信号转变为相应的文本或命令的技术。
S1520、根据所述身份信息设置所述目标存储信息的标志信息并存储至所述调参数据库中,以使所述目标存储信息与所述操作人员关联。
在获取操作人员的身份信息后,系统根据该身份信息设置目标存 储信息的标志信息,然后将该目标存储信息保存至调参数据库中,在实施时,标志信息是目标存储信息的标签信息,标签是用来标志产品目标(目标存储信息)和分类或内容(操作人员的身份信息),从而建立目标存储信息与操作人员的关联,方便后期查找到模型训练的负责人。
在一个可选实施例中,请参阅图6,图6是本申请一个实施例获取操作人员的身份信息的基本流程示意图。
如图6所示,步骤S1510包括如下述步骤:
S1511、获取所述操作人员的人脸图像;
人脸图像是指操作人员的人脸表情图像,在实施时,以本申请模型超参数调整控制方法应用于本地终端为例,本地终端设置有摄像头为例,系统通过摄像头采集操作人员的人脸图像,当然,还可以通过用户终端采集操作人员的人脸图像,用户终端包括但不限于智能手机、智能手环、平板电脑以及其它设置有摄像头的电子设备,例如智能手机的前置摄像头或者后置摄像头,操作人员通过智能手机的摄像头采集自己的人脸表情图像,并上传至本地终端。在实施时,还可以通过拍照或者拍视频的方式获取操作人员的人脸图像,以通过拍视频的方式获取操作人员的人脸图像为例,系统通过摄像头对操作人员进行拍摄得到目标视频,系统可以通过视频处理软件(例如OpenCV)对目标视频进行处理,将目标视频拆分为若干帧画面,通过定时采集方式从目标视频中抽取画面图像。例如以0.5秒一张的速度在目标视频中抽取一张目标图片,然后在得到的若干目标图片中再次随机抽取一张目标图片作为操作人员的人脸图像;但是不局限于此,根据具体应用场景的不同,采集画面图像的速度能够进行适应性的调整,调整原则在于,系统处理能力越强且跟踪准确性要求越高则采集时间越短,达到与摄像设备采集图像的频率同步时为止;否则,则采集时间间隔越长,但最长采集时间间隔不得超过1s。当然,也可以直接在目标视频的若干帧画面中随机抽取一张画面作为操作人员的人脸图像。
S1512、将所述人脸图像输入至预设的人脸识别模型中,其中,所述人脸识别模型为训练至收敛的卷积神经网络模型;
人脸识别模型是系统预先设置的用于识别人脸图像的工具,在实施时,可以使用LSTM网络(长短期记忆人工神经网络模型,Long Short-Term Memory)作为神经网络模型。LSTM网络通过“门”(gate)来控制丢弃或者增加信息,从而实现遗忘或记忆的功能。“门”是一种使信息选择性通过的结构,由一个sigmoid(S型生长曲线)函数和一个点乘操作组成。sigmoid函数的输出值在[0,1]区间,0代表完全丢弃,1代表完全通过。训练至收敛的神经网络模型具备了能识别人脸图像的分类器,其中,人脸识别模型包括上述的神经网络模型,该 神经网络模型包括了N+1个分类器,N为正整数。
具体地,通过将人脸图像输入到预设的人脸识别模型中,得到人脸图像在分类器中的分类结果,其中,分类结果包括人脸图像对应的身份信息分类和身份信息分类的置信度(Confidence)。其中,身份信息分类的置信度是指人脸图像经过人脸识别模型进行筛选分类后,人脸图像被归类到一种以上的身份信息分类以及得到人脸图像占该身份信息分类的百分值。由于最终得到人脸图像对应的身份信息为一种,故需要将同一人脸图像的各个身份信息分类的置信度进行比较,例如,系统采集到操作人员的人脸图像,被分类到张三的置信度为0.95,被分类到李四的置信度为0.63。
然后将该置信度与预设的第一阈值进行比对,当所述置信度大于预设的第一阈值时,确认所述置信度所表征的身份信息分类结果为操作人员的身份信息。预设的第一阈值一般设置为0.9到1之间的数值。通过筛选出置信度大于第一阈值的情绪信息作为最终的身份信息分类结果,即确认置信度所表征的身份信息。例如,当预设的第一阈值为0.9时,并且操作人员的人脸图像被分类到张三的置信度为0.95,由于0.95>0.9,所以该人脸图像所表征的身份信息为张三的个人身份信息。
S1513、获取所述人脸识别模型输出的所述操作人员的身份信息。
人脸识别模型对操作人员的人脸图像进行识别后输出操作人员的个人身份信息,通过将人脸图像输入到预设的人脸识别模型中,并获取人脸识别模型输出的人脸图像的身份信息分类的置信度,当置信度大于预设第一阈值时,确认置信度所表征的身份信息分类结果为操作人员的个人身份信息,从而提高了识别人脸图像的身份信息分类准确度。在实施时,当人脸识别模型根据操作人员的人脸图像识别到操作人员的姓名时,还可以根据该姓名在预设的用户数据库中查找与该姓名相对应的操作人员的个人信息,该操作人员的个人信息包括操作人员的姓名、身份证号以及工号等。
在一个可选实施例中,请参阅图7,图7是本申请一个实施例多线程存储目标存储信息的基本流程示意图。
如图7所示,步骤S1500包括如下述步骤:
S1530、通过线程建立将所述目标存储信息保存至预设的调参数据库中的存储任务;
线程是应用程序中一个单一的顺序控制流程。进程内有一个相对独立的、可调度的执行单元,是系统独立调度和分派CPU的基本单位指令运行时的程序的调度单位。在单个程序中同时运行多个线程完成不同的工作,称为多线程。通过建立执行将所述目标存储信息保存至预设的调参数据库中的待执行任务,从而将所述目标存储信息保存 至预设的调参数据库中的操作和其它操作任务和其它应用程序的操作任务异步多线程同时进行。
S1540、检测所述存储任务之后的任务队列中是否存在优先级高于所述存储任务的操作任务;
任务队列是指包括多个操作任务,且这些操作任务之间进行异步调用,从而解决任务阻塞问题的任务集合,任务队列中的操作任务设置有对应的优先级,优先级(priority)是一种约定,是计算机分时操作系统在处理多个作业程序时,决定各个作业程序接受系统资源的优先等级的参数,优先级高的先做,优先级低的后做。系统通过遍历任务队列中的各个操作任务与待执行任务的优先级进行比对,从而查找任务队列中是否存在优先级高于待执行任务的操作任务。
S1550、当所述任务队列存在优先级高于所述存储任务的操作任务时,优先执行所述操作任务至所述操作任务执行完毕后回调执行所述存储任务。
优先执行优先级高于该待执行任务的其它操作任务,能使得系统运行流畅不卡顿,例如在同一时间系统要处理多个训练模型和存储目标存储信息的操作,系统先执行训练模型的任务后再执行该待执行任务,提高模型训练效率。
为解决上述技术问题,本申请实施例还提供一种模型超参数调整控制装置。
具体请参阅图8,图8为本实施例模型超参数调整控制装置基本结构示意图。
如图8所示,一种模型超参数调整控制装置,包括:第一获取模块2100、第一处理模块2200和第一执行模块2300,其中,第一获取模块2100用于获取待训练模型的原始信息,其中,所述原始信息包括所述待训练模型的模型信息以及用于训练所述待训练模型的样本数据的样本信息;第一处理模块2200用于根据预设的性能算法对所述模型信息和所述样本信息进行计算生成模型性能信息;第一执行模块2300用于根据所述模型性能信息在预设的调参数据库中查找与所述模型性能信息相对应的目标调参方案,以使所述待训练模型根据所述目标调参方案调整超参数。
本实施例通过获取待训练模型的原始信息,包括该待训练模型的模型信息以及样本数据的样本信息,样本数据是用于训练该待训练模型的,然后根据性能算法对模型信息和样本数据的样本信息进行计算生成该待训练模型的模型性能信息,再根据该模型性能信息在调参数据库中查找目标调参方案,从而使得该待训练模型根据该目标调参方案进行超参数的调整,通过根据待训练模型的输入样本数据以及模型信息自动化选择调参方案,选取的调参方案与待训练模型和样本数据 适配,能有效降低调参难度提高调参效率。
在一些实施方式中,模型超参数调整控制装置还包括:第二获取模块和第二执行模块,其中,第二获取模块用于获取预设的本地终端性能数据列表,其中,所述性能数据列表包括所述本地终端的机器配置信息;第二执行模块用于根据预设的机器跑分算法对所述机器配置信息进行计算生成机器性能信息并添加至所述模型性能信息中。
在一些实施方式中,模型超参数调整控制装置还包括:第三获取模块和第三执行模块,其中,第三获取模块用于获取所述本地终端的设备号信息;第三执行模块用于根据所述设备号信息生成设备检测控制信息,以根据所述设备检测控制信息对所述本地终端进行检测获取所述机器配置信息。
在一些实施方式中,模型超参数调整控制装置还包括:第二处理模块和第四执行模块,其中,第二处理模块用于将所述模型性能信息以及所述目标调参方案进行结构化转换生成目标存储信息;第四执行模块用于将所述目标存储信息保存至预设的调参数据库中。
在一些实施方式中,模型超参数调整控制装置还包括:第一获取子模块和第一执行子模块,其中,第一获取子模块用于获取与所述待训练模型相对应的模型训练操作人员的身份信息;第一执行子模块用于根据所述身份信息设置所述目标存储信息的标志信息并存储至所述调参数据库中,以使所述目标存储信息与所述操作人员关联。
在一些实施方式中,模型超参数调整控制装置还包括:第二获取子模块、第一处理子模块和第三获取子模块,其中,第二获取子模块用于获取所述操作人员的人脸图像;第一处理子模块用于将所述人脸图像输入至预设的人脸识别模型中,其中,所述人脸识别模型为训练至收敛的卷积神经网络模型。第三获取子模块用于获取所述人脸识别模型输出的所述操作人员的身份信息。
在一些实施方式中,模型超参数调整控制装置还包括:线程子模块、检测子模块和第二执行子模块,其中,线程子模块用于通过线程建立将所述目标存储信息保存至预设的调参数据库中的存储任务;检测子模块用于检测所述存储任务之后的任务队列中是否存在优先级高于所述存储任务的操作任务;第二执行子模块用于当所述任务队列存在优先级高于所述存储任务的操作任务时,优先执行所述操作任务至所述操作任务执行完毕后回调执行所述存储任务
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图9,图9为本实施例计算机设备基本结构框图。
如图9所示,计算机设备的内部结构示意图。如图9所示,该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、存储器和网络接口。其中,该计算机设备的非易失性存储介质存储有操作系统、数据库和计算机可读指令,数据库中可存储有控件信息序列,该计算机可读指令被处理器执行时,可使得处理器实现一种模型超参数调整控制方法。该计算机设备的处理器用于提供计算和控制能力,支撑整个计算机设备的运行。该计算机设备的存储器中可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行一种模型超参数调整控制方法。该计算机设备的网络接口用于与终端连接通信。本领域技术人员可以理解,图中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
本实施方式中处理器用于执行图8中第一获取模块2100、第一处理模块2200和第一执行模块2300,存储器存储有执行上述模块所需的程序代码和各类数据。网络接口用于向用户终端或服务器之间的数据传输。本实施方式中的存储器存储有模型超参数调整控制装置中执行所有子模块所需的程序代码及数据,服务器能够调用服务器的程序代码及数据执行所有子模块的功能。
计算机通过获取待训练模型的原始信息,包括该待训练模型的模型信息以及样本数据的样本信息,样本数据是用于训练该待训练模型的,然后根据性能算法对模型信息和样本数据的样本信息进行计算生成该待训练模型的模型性能信息,再根据该模型性能信息在调参数据库中查找目标调参方案,从而使得该待训练模型根据该目标调参方案进行超参数的调整,通过根据待训练模型的输入样本数据以及模型信息自动化选择调参方案,选取的调参方案与待训练模型和样本数据适配,能有效降低调参难度提高调参效率。
本申请还提供一种存储有计算机可读指令的非易失存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述任一实施例所述模型超参数调整控制方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。

Claims (20)

  1. A model hyperparameter adjustment control method, comprising the following steps:
    obtaining original information of a model to be trained, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained;
    calculating the model information and the sample information according to a preset performance algorithm to generate model performance information;
    searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, so that the model to be trained adjusts hyperparameters according to the target tuning solution.
  2. The model hyperparameter adjustment control method according to claim 1, wherein before the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    obtaining a preset performance data list of a local terminal, wherein the performance data list includes machine configuration information of the local terminal;
    calculating the machine configuration information according to a preset machine benchmark algorithm to generate machine performance information, and adding the machine performance information to the model performance information.
  3. The model hyperparameter adjustment control method according to claim 2, wherein before the step of obtaining a preset performance data list of a local terminal, the method further comprises the following steps:
    obtaining device number information of the local terminal;
    generating device detection control information according to the device number information, so as to detect the local terminal according to the device detection control information and obtain the machine configuration information.
  4. The model hyperparameter adjustment control method according to claim 1, wherein after the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    performing structured conversion on the model performance information and the target tuning solution to generate target storage information;
    saving the target storage information into the preset tuning database.
  5. The model hyperparameter adjustment control method according to claim 4, wherein the step of saving the target storage information into the preset tuning database comprises the following steps:
    obtaining identity information of a model training operator corresponding to the model to be trained;
    setting flag information of the target storage information according to the identity information and storing the target storage information into the tuning database, so that the target storage information is associated with the operator.
  6. The model hyperparameter adjustment control method according to claim 5, wherein the step of obtaining identity information of a model training operator corresponding to the model to be trained comprises the following steps:
    obtaining a face image of the operator;
    inputting the face image into a preset face recognition model, wherein the face recognition model is a convolutional neural network model trained to convergence;
    obtaining the identity information of the operator output by the face recognition model.
  7. The model hyperparameter adjustment control method according to claim 4, wherein the step of saving the target storage information into the preset tuning database comprises the following steps:
    creating, by means of a thread, a storage task for saving the target storage information into the preset tuning database;
    detecting whether an operation task with a priority higher than that of the storage task exists in the task queue following the storage task;
    when an operation task with a priority higher than that of the storage task exists in the task queue, executing that operation task first, and executing the storage task through a callback after the operation task has finished.
  8. A model hyperparameter adjustment control device, comprising:
    a first acquisition module, configured to obtain original information of a model to be trained, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained;
    a first processing module, configured to calculate the model information and the sample information according to a preset performance algorithm to generate model performance information;
    a first execution module, configured to search, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, so that the model to be trained adjusts hyperparameters according to the target tuning solution.
  9. A computer device, comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform a model hyperparameter adjustment control method, the model hyperparameter adjustment control method comprising the following steps:
    obtaining original information of a model to be trained, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained;
    calculating the model information and the sample information according to a preset performance algorithm to generate model performance information;
    searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, so that the model to be trained adjusts hyperparameters according to the target tuning solution.
  10. The computer device according to claim 9, wherein before the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    obtaining a preset performance data list of a local terminal, wherein the performance data list includes machine configuration information of the local terminal;
    calculating the machine configuration information according to a preset machine benchmark algorithm to generate machine performance information, and adding the machine performance information to the model performance information.
  11. The computer device according to claim 10, wherein before the step of obtaining a preset performance data list of a local terminal, the method further comprises the following steps:
    obtaining device number information of the local terminal;
    generating device detection control information according to the device number information, so as to detect the local terminal according to the device detection control information and obtain the machine configuration information.
  12. The computer device according to claim 9, wherein after the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    performing structured conversion on the model performance information and the target tuning solution to generate target storage information;
    saving the target storage information into the preset tuning database.
  13. The computer device according to claim 12, wherein the step of saving the target storage information into the preset tuning database comprises the following steps:
    obtaining identity information of a model training operator corresponding to the model to be trained;
    setting flag information of the target storage information according to the identity information and storing the target storage information into the tuning database, so that the target storage information is associated with the operator.
  14. The computer device according to claim 13, wherein the step of obtaining identity information of a model training operator corresponding to the model to be trained comprises the following steps:
    obtaining a face image of the operator;
    inputting the face image into a preset face recognition model, wherein the face recognition model is a convolutional neural network model trained to convergence;
    obtaining the identity information of the operator output by the face recognition model.
  15. A non-volatile storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform a model hyperparameter adjustment control method, the model hyperparameter adjustment control method comprising the following steps:
    obtaining original information of a model to be trained, wherein the original information includes model information of the model to be trained and sample information of sample data used to train the model to be trained;
    calculating the model information and the sample information according to a preset performance algorithm to generate model performance information;
    searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, so that the model to be trained adjusts hyperparameters according to the target tuning solution.
  16. The non-volatile storage medium according to claim 15, wherein before the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    obtaining a preset performance data list of a local terminal, wherein the performance data list includes machine configuration information of the local terminal;
    calculating the machine configuration information according to a preset machine benchmark algorithm to generate machine performance information, and adding the machine performance information to the model performance information.
  17. The non-volatile storage medium according to claim 16, wherein before the step of obtaining a preset performance data list of a local terminal, the method further comprises the following steps:
    obtaining device number information of the local terminal;
    generating device detection control information according to the device number information, so as to detect the local terminal according to the device detection control information and obtain the machine configuration information.
  18. The non-volatile storage medium according to claim 15, wherein after the searching, according to the model performance information, a preset tuning database for a target tuning solution corresponding to the model performance information, the method further comprises the following steps:
    performing structured conversion on the model performance information and the target tuning solution to generate target storage information;
    saving the target storage information into the preset tuning database.
  19. The non-volatile storage medium according to claim 18, wherein the step of saving the target storage information into the preset tuning database comprises the following steps:
    obtaining identity information of a model training operator corresponding to the model to be trained;
    setting flag information of the target storage information according to the identity information and storing the target storage information into the tuning database, so that the target storage information is associated with the operator.
  20. The non-volatile storage medium according to claim 15, wherein the step of obtaining identity information of a model training operator corresponding to the model to be trained comprises the following steps:
    obtaining a face image of the operator;
    inputting the face image into a preset face recognition model, wherein the face recognition model is a convolutional neural network model trained to convergence;
    obtaining the identity information of the operator output by the face recognition model.
PCT/CN2019/103660 2019-06-27 2019-08-30 Model hyperparameter adjustment control method and apparatus, computer device, and storage medium WO2020258508A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910569483.XA CN110443126A (zh) 2019-06-27 2019-06-27 Model hyperparameter adjustment control method and apparatus, computer device, and storage medium
CN201910569483.X 2019-06-27

Publications (1)

Publication Number Publication Date
WO2020258508A1 true WO2020258508A1 (zh) 2020-12-30

Family

ID=68428408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103660 WO2020258508A1 (zh) 2019-06-27 2019-08-30 Model hyperparameter adjustment control method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN110443126A (zh)
WO (1) WO2020258508A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113555008A (zh) * 2020-04-17 2021-10-26 阿里巴巴集团控股有限公司 Parameter tuning method and apparatus for a model
CN111696517A (zh) * 2020-05-28 2020-09-22 平安科技(深圳)有限公司 Speech synthesis method and apparatus, computer device, and computer-readable storage medium
CN112232294B (zh) * 2020-11-09 2023-10-13 北京爱笔科技有限公司 Hyperparameter optimization, target recognition model training, and target recognition method and apparatus
CN115470910A (zh) * 2022-10-20 2022-12-13 晞德软件(北京)有限公司 Automatic parameter tuning method based on Bayesian optimization and K-center sampling
CN115859693B (zh) * 2023-02-17 2023-06-06 阿里巴巴达摩院(杭州)科技有限公司 Data processing method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11397887B2 (en) * 2017-09-26 2022-07-26 Amazon Technologies, Inc. Dynamic tuning of training parameters for machine learning algorithms
CN108764455A (zh) * 2018-05-17 2018-11-06 南京中兴软件有限责任公司 Parameter tuning method and apparatus, and storage medium
CN109213805A (zh) * 2018-09-07 2019-01-15 东软集团股份有限公司 Method and apparatus for implementing model optimization

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040179719A1 (en) * 2003-03-12 2004-09-16 Eastman Kodak Company Method and system for face detection in digital images
US20090238426A1 (en) * 2008-03-19 2009-09-24 Uti Limited Partnership System and Methods for Identifying an Object within a Complex Environment
CN103440493A (zh) * 2013-02-27 2013-12-11 中国人民解放军空军装备研究院侦察情报装备研究所 Hyperspectral image fuzzy classification method and apparatus based on relevance vector machine
CN109389143A (zh) * 2018-06-19 2019-02-26 北京九章云极科技有限公司 Data analysis and processing system and automatic modeling method
CN109816000A (zh) * 2019-01-09 2019-05-28 浙江工业大学 Novel feature selection and parameter optimization method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990480A (zh) * 2021-03-10 2021-06-18 北京嘀嘀无限科技发展有限公司 Model construction method and apparatus, electronic device, and storage medium
CN113277388A (zh) * 2021-04-02 2021-08-20 东南大学 Data acquisition control method for an electric suspended platform
CN113822322A (zh) * 2021-07-15 2021-12-21 腾讯科技(深圳)有限公司 Image processing model training method and text processing model training method
CN113708403A (zh) * 2021-08-17 2021-11-26 苏州罗约科技有限公司 Converter grid-connection control method, system, server, and storage medium
CN114238269A (zh) * 2021-12-03 2022-03-25 中兴通讯股份有限公司 Database parameter adjustment method and apparatus, electronic device, and storage medium
CN114238269B (zh) * 2021-12-03 2024-01-23 中兴通讯股份有限公司 Database parameter adjustment method and apparatus, electronic device, and storage medium
CN114392126A (zh) * 2022-01-24 2022-04-26 佳木斯大学 Hand coordination training system for children with disabilities
CN114392126B (zh) * 2022-01-24 2023-09-22 佳木斯大学 Hand coordination training system for children with disabilities
CN116701350A (zh) * 2023-05-19 2023-09-05 阿里云计算有限公司 Automatic optimization method, training method, apparatus, and electronic device
CN116701350B (zh) * 2023-05-19 2024-03-29 阿里云计算有限公司 Automatic optimization method, training method, apparatus, and electronic device

Also Published As

Publication number Publication date
CN110443126A (zh) 2019-11-12

Similar Documents

Publication Publication Date Title
WO2020258508A1 (zh) Model hyperparameter adjustment control method and apparatus, computer device, and storage medium
KR101423916B1 (ko) Method and apparatus for recognizing a plurality of faces
CN105100894B (zh) Automatic face annotation method and system
JP2022101603A (ja) Efficient image analysis using environmental sensor data
US9367756B2 (en) Selection of representative images
JP5864783B2 (ja) Method and apparatus for operator-absent image capture
CN104994426B (zh) Program video recognition method and system
US9607224B2 (en) Entity based temporal segmentation of video streams
CN100545856C (zh) Video content analysis system
US7152209B2 (en) User interface for adaptive video fast forward
US8571332B2 (en) Methods, systems, and media for automatically classifying face images
US20140093174A1 (en) Systems and methods for image management
CN111444366B (zh) Image classification method and apparatus, storage medium, and electronic device
WO2021175071A1 (zh) Image processing method and apparatus, storage medium, and electronic device
CN103617432A (zh) Scene recognition method and apparatus
CN106101541A (zh) Terminal, photographing device, and emotion-based photographing method thereof
CN110781422B (zh) Page configuration method and apparatus, computer device, and storage medium
WO2019080908A1 (zh) Image processing method and apparatus for implementing image recognition, and electronic device
CN104182721A (zh) Image processing system and image processing method for improving face recognition rate
KR101647691B1 (ko) Hybrid-based image clustering method and server operating the same
US11768871B2 (en) Systems and methods for contextualizing computer vision generated tags using natural language processing
JP2019160001A (ja) Image processing apparatus, image processing method, and program
CN105791674A (zh) Electronic device and focusing method
Mady et al. Efficient real time attendance system based on face detection case study “MEDIU staff”
CN111476878A (zh) 3D face generation control method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19935288
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19935288
    Country of ref document: EP
    Kind code of ref document: A1