WO2019080501A1 - Electronic device, multi-model sample training method and system, and computer-readable storage medium - Google Patents

Electronic device, multi-model sample training method and system, and computer-readable storage medium

Info

Publication number
WO2019080501A1
Authority
WO
WIPO (PCT)
Prior art keywords
machine learning
training
learning model
trained
sample data
Application number
PCT/CN2018/089427
Other languages
English (en)
French (fr)
Inventor
陈林
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019080501A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Definitions

  • the present application relates to the field of machine learning model training, and in particular to an electronic device, a multi-model sample training method, a system, and a computer readable storage medium.
  • the present application provides an electronic device, a multi-model sample training method, a system, and a computer-readable storage medium, intended to solve the problem that inexperienced users spend too much time repeatedly re-training models, delaying the progress of their work.
  • a first aspect of the present application provides an electronic device including a memory and a processor, the memory storing a multi-model sample training system executable on the processor; when executed by the processor, the multi-model sample training system implements the following steps:
  • A. Receive sample data uploaded by the user, and determine data attributes of the sample data, where the data attributes include a type and a quantity;
  • B. Determine the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
  • C. Train each of the determined machine learning models on the sample data;
  • D. Analyze the training results of the respective machine learning models obtained after training is completed, and display the training results that meet a preset condition on a display interface.
  • the second aspect of the present application provides a multi-model sample training method, the method comprising the steps of: receiving sample data uploaded by a user and determining data attributes of the sample data, the data attributes including a type and a quantity; determining the machine learning model corresponding to those data attributes according to a predetermined mapping relationship between machine learning models and data attributes of sample data; training each of the determined machine learning models on the sample data; and analyzing the training results obtained after training is completed and displaying the training results that meet a preset condition on a display interface.
  • a third aspect of the present application provides a multi-model sample training system, the multi-model sample training system comprising:
  • a first determining module configured to receive sample data uploaded by a user, and determine a data attribute of the sample data, where the data attribute includes a type and a quantity;
  • a second determining module configured to determine the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
  • a first training module configured to train each of the determined machine learning models on the sample data;
  • the analysis module is configured to analyze the training results of each machine learning model obtained after the training is completed, and display the training results that meet the preset conditions on the display interface.
  • a fourth aspect of the present application provides a computer-readable storage medium storing a multi-model sample training system, the multi-model sample training system being executable by at least one processor to cause the at least one processor to perform the following steps:
  • receiving sample data uploaded by a user, and determining data attributes of the sample data, where the data attributes include a type and a quantity;
  • determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
  • training each of the determined machine learning models on the sample data;
  • analyzing the training results of the respective machine learning models obtained after training is completed, and displaying the training results that meet the preset condition on the display interface.
  • in the technical solution of the present application, the mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After receiving the sample data uploaded by the user, the system first analyzes and determines the data attributes of the sample data, then uses the preset mapping relationship to determine the machine learning model (possibly one, possibly several) expected to train well on the current sample data, and trains each determined model on the sample data separately. After training is completed, the training results of the respective machine learning models are displayed on the display interface for the user to view and to select the best machine learning model.
  • in other words, the system automatically determines, from the sample data uploaded by the user, one or more machine learning models that are suitable and expected to train well, trains them on the sample data, and displays the better training results to the user when training finishes, so that the user can accurately pick the best model in a single pass without extra effort. Compared with the prior-art approach in which the user selects models by hand, this solution avoids the situation where an inexperienced user obtains poor training results because of a wrong model choice, and thus solves the problem of users spending too much time repeatedly re-training models and delaying their work.
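The four-step scheme summarized above can be sketched end to end in a few lines. This is a toy illustration under stated assumptions, not the patent's implementation: the mapping keys, the 10,000-sample threshold, and `train_fn` are all hypothetical stand-ins for the system's preset mapping and its actual training machinery.

```python
def multi_model_training(samples, labels, mapping, train_fn, threshold=0.95):
    """Run the four steps: attributes -> model mapping -> training -> filtering."""
    # step 1: determine data attributes (toy rule: nested lists stand in for images)
    data_type = "image" if isinstance(samples[0], list) else "continuous"
    # step 2: look up candidate models for (type, is-large-sample-set)
    candidates = mapping[(data_type, len(samples) > 10_000)]
    # step 3: train every candidate model separately on the same sample data
    results = {name: train_fn(name, samples, labels) for name in candidates}
    # step 4: keep only results meeting the preset condition, for display
    return {name: acc for name, acc in results.items() if acc > threshold}
```

A caller supplies the preset mapping and a training callback, and receives back only the results worth showing to the user.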
  • FIG. 1 is a schematic flow chart of an embodiment of a multi-model sample training method according to the present application.
  • FIG. 2 is a schematic flowchart of a second embodiment of a multi-model sample training method according to the present application.
  • FIG. 3 is a schematic flowchart of a third embodiment of a multi-model sample training method according to the present application.
  • FIG. 4 is a schematic diagram of an operating environment of a preferred embodiment of a multi-model sample training system of the present application.
  • FIG. 5 is a program block diagram of an embodiment of a multi-model sample training system of the present application.
  • FIG. 6 is a block diagram of a program of a second embodiment of a multi-model sample training system of the present application.
  • as shown in FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the multi-model sample training method of the present application.
  • the multi-model sample training method includes:
  • Step S10: receive sample data uploaded by a user, and determine data attributes of the sample data, where the data attributes include a type and a quantity;
  • the system receives the sample data and analyzes its data attributes to determine the type and quantity of the sample data.
  • the types of sample data mainly include image data and data for predicting continuous values (for example, stock market quotes); in addition, image data may itself be data for predicting continuous values (for example, video for face pose correction is time-series image data).
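A minimal sketch of how step S10's attribute determination might look. The function name, the type labels, and the rule that nested lists stand in for image pixel arrays are illustrative assumptions, not the patent's method.

```python
def determine_data_attributes(samples):
    """Return (type, quantity) for a list of uploaded samples.

    Toy classification: a sample shaped like a nested pixel array counts as
    image data; a bare number counts as continuous-value data.
    """
    quantity = len(samples)
    first = samples[0]
    if isinstance(first, (int, float)):
        data_type = "continuous"          # e.g. stock market quotes
    elif isinstance(first, list):
        data_type = "image"               # nested lists stand in for pixel arrays
    else:
        raise ValueError("unrecognized sample type")
    return data_type, quantity
```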
  • Step S20: determine the machine learning model corresponding to the data attributes of the sample data according to the predetermined mapping relationship between machine learning models and data attributes of sample data;
  • the system contains a variety of machine learning models, including traditional machine learning models (random forest, SVM, naive Bayes, knn, gbdt, xgboost, LR, and so on) and deep learning models (lenet, alexnet, vggnet, resnet, inception-v1, inception-resnet, sgd, fast-rcnn, and so on).
  • the mapping relationship between machine learning models and the data attributes of sample data is preset in the system; that is, for each different data attribute of the sample data (i.e. different types and/or different quantities), the system associates a corresponding machine learning model that is suitable and trains well. For example, when the type of the sample data is image data and the quantity is greater than A (for example, 10,000 images), the models corresponding to the sample attributes (type and quantity) of that sample data are determined to be one or more convolutional neural network (CNN) models, the CNN models differing in their number of layers; when the type of the sample data is image data and the quantity does not exceed A (for example, 10,000 images), the corresponding model is determined to be a support vector machine (SVM) model, possibly along with models of other types; and when the type of the sample data is data for predicting continuous values, the corresponding models are determined to be regression models or more models of other types. Therefore, once the type and quantity of the received sample data are determined, the system uses the predetermined mapping relationship between machine learning models and data attributes to determine one or more suitable, well-training machine learning models corresponding to the current sample data.
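The preset mapping just described might be sketched as a simple lookup. The threshold A of 10,000 and the candidate model names are illustrative assumptions taken from the example above, not an exhaustive or authoritative mapping.

```python
A = 10_000  # example threshold for "large" image sample sets, per the text

def select_models(data_type, quantity):
    """Map the sample data's (type, quantity) to candidate model families."""
    if data_type == "image":
        if quantity > A:
            # large image sets map to CNNs that differ in layer count
            return ["cnn_8_layers", "cnn_16_layers", "cnn_32_layers"]
        # smaller image sets map to SVM (possibly other classical models)
        return ["svm"]
    if data_type == "continuous":
        # continuous-value prediction maps to regression models
        return ["linear_regression", "gbdt_regressor"]
    raise ValueError(f"no mapping for data type {data_type!r}")
```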
  • Step S30: train each of the determined machine learning models on the sample data;
  • after determining the machine learning models, the system trains each determined model on the sample data separately; once training of each determined model is complete, the system obtains the training results of the respective models.
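Step S30's per-model training loop might be sketched as follows. `train_fn` is a hypothetical stand-in for whatever call actually fits one model, since the patent does not name a training API.

```python
def train_all(model_names, samples, labels, train_fn):
    """Train each selected model independently on the same sample data."""
    results = {}
    for name in model_names:
        # each model sees the full sample set; results are collected per model
        results[name] = train_fn(name, samples, labels)
    return results
```

With a dummy callback, `train_all(["svm", "lr"], xs, ys, fn)` returns one result entry per model, ready for the filtering of step S40.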
  • Step S40: analyze the training results of the respective machine learning models obtained after training is completed, and display the training results that meet the preset condition on the display interface.
  • in this embodiment, the training result includes the accuracy rate and the loss-function curve of the machine learning model after training is completed. The preset condition is preferably that the accuracy rate is greater than a preset value (for example, 95%), or that the accuracy rate ranks within a preset number of places (for example, the top 3) when accuracy rates are sorted in descending order; of course, in other embodiments, the preset condition may be a different scheme. After obtaining the training results of the respective models, the system filters them according to the preset condition and displays the filtered results on the display interface, so that the user can pick and use the machine learning model with the best training result.
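The two preset conditions mentioned above (accuracy above a threshold, or top-N by descending accuracy) might be sketched as a small filter; the result structure (model name mapped to accuracy) is an assumption for illustration.

```python
def filter_results(results, threshold=None, top_n=None):
    """Select which training results to display.

    results: dict mapping model name -> accuracy rate.
    Exactly one of `threshold` (e.g. 0.95) or `top_n` (e.g. 3) is expected.
    """
    # rank all models by accuracy, best first
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:
        return [(m, a) for m, a in ranked if a > threshold]
    if top_n is not None:
        return ranked[:top_n]
    return ranked
```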
  • in the technical solution of this embodiment, the mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After receiving the sample data uploaded by the user, the system first analyzes and determines the data attributes of the sample data, then uses the preset mapping relationship to determine the machine learning model (possibly one, possibly several) expected to train well on the current sample data, and trains each determined model on the sample data separately. After training is completed, the training results of the respective machine learning models that meet the preset condition are displayed on the display interface for the user to view and to select the best machine learning model. In other words, the system automatically determines, from the uploaded sample data, one or more suitable machine learning models expected to train well, trains them, and displays the better results to the user, so that the user can accurately pick the best model in a single pass without extra effort. Compared with the prior-art approach in which the user selects models by hand, this avoids poor training results caused by an inexperienced user's wrong model choice, and solves the problem of users spending too much time repeatedly re-training models and delaying their work.
  • as shown in FIG. 2, FIG. 2 is a schematic flowchart of the second embodiment of the multi-model sample training method of the present application; in this embodiment, the multi-model sample training method replaces step S30 with:
  • Step S31: display the determined machine learning models on an operation interface for the user to select the machine learning models to be trained;
  • after determining the machine learning models corresponding to the current sample data, the system displays them on an operation interface for the user to select the models to be trained; the user may select only one machine learning model to be trained, or may select several or all of them.
  • Step S32: after receiving the machine learning models to be trained that the user selected on the operation interface, train each selected model on the sample data.
  • after receiving the machine learning models to be trained that the user selected on the operation interface, the system trains each of them on the sample data separately.
  • with the solution of this embodiment, the user can choose which one or more of the machine learning models determined by the system for the current sample data are used to train the sample data.
  • in addition, in other embodiments, a step may be added before step S32 that lets the user add machine learning models on the operation interface, so that professional R&D personnel can add more machine learning models for sample training according to their own plans.
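Letting R&D users register extra models, as just described, is essentially a registry that the selection step can consult. This is a hypothetical sketch; the class and method names are not from the patent.

```python
class ModelRegistry:
    """Preset models plus any models a user adds on the operation interface."""

    def __init__(self, preset):
        self._models = dict(preset)   # name -> model builder/constructor

    def add(self, name, builder):
        """Register an additional model made available for sample training."""
        if name in self._models:
            raise ValueError(f"model {name!r} already registered")
        self._models[name] = builder

    def names(self):
        return sorted(self._models)
```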
  • as shown in FIG. 3, the third embodiment is based on the second embodiment; in this embodiment, step S32 includes:
  • Step S321: after receiving the machine learning models to be trained that the user selected on the operation interface, display a parameter-setting interface for each model to be trained on the operation interface;
  • after receiving the user's selection, the system pops up a parameter-setting interface for each machine learning model to be trained on the operation interface, so that the user can adjust the parameters of each model. Experienced users can tune the parameters on this interface according to their own experience; of course, every machine learning model also has default parameters that usually achieve good training results, so novice users need not adjust the parameters in the parameter-setting interface and can simply use the defaults.
  • Step S322: after the parameter setting of each machine learning model to be trained is completed, train each of those models on the sample data.
  • when the system detects that each machine learning model to be trained has completed parameter setting (for example, the operation interface has a "complete setting" control, and when the user clicks it the system determines that the user has finished setting the parameters), the system trains each model to be trained on the sample data separately.
  • in the solution of this embodiment, a parameter-setting interface for each selected model pops up after the user chooses the models to be trained, so more professional users can tune better parameters based on their own experience, while less professional users can directly use the models' default parameters without any tuning; this better matches the needs of users at different levels of expertise.
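The defaults-with-overrides behavior described here might be sketched as a simple parameter merge. The parameter names and default values are illustrative assumptions; the patent does not specify them.

```python
# illustrative per-model defaults, assumed to train reasonably well as-is
DEFAULT_PARAMS = {
    "svm": {"C": 1.0, "kernel": "rbf"},
    "cnn_16_layers": {"learning_rate": 0.01, "epochs": 10},
}

def resolve_params(model_name, user_overrides=None):
    """Merge user-adjusted parameters over the model's defaults."""
    params = dict(DEFAULT_PARAMS[model_name])   # novice path: defaults untouched
    if user_overrides:
        params.update(user_overrides)           # expert path: selective overrides
    return params
```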
  • the present application also proposes a multi-model sample training system.
  • FIG. 4 is a schematic diagram of an operating environment of a preferred embodiment of the multi-model sample training system 10 of the present application.
  • the multi-model sample training system 10 is installed and operated in the electronic device 1.
  • the electronic device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a server.
  • the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • Figure 4 shows only the electronic device 1 with components 11-13, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk or memory of the electronic device 1.
  • the memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 is used to store application software and various types of data installed in the electronic device 1, such as program code of the multi-model sample training system 10.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12, in some embodiments, may be a central processing unit (CPU), microprocessor, or other data-processing chip for running the program code stored in the memory 11 or processing data, for example executing the multi-model sample training system 10.
  • the display 13 may, in some embodiments, be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
  • the display 13 is used to display the information processed in the electronic device 1 and to display a visualized user interface, such as a business customization interface.
  • the components 11-13 of the electronic device 1 communicate with each other through a system bus.
  • FIG. 5 is a program module diagram of an embodiment of the multi-model sample training system 10 of the present application.
  • the multi-model sample training system 10 may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete the present application.
  • for example, the multi-model sample training system 10 may be divided into a first determination module 101, a second determination module 102, a first training module 103, and an analysis module 104.
  • a module referred to in the present application is a series of computer program instruction segments capable of performing a specific function, and is better suited than a whole program for describing the execution process of the multi-model sample training system 10 in the electronic device 1, wherein:
  • a first determining module 101 configured to receive sample data uploaded by a user, and determine a data attribute of the sample data, where the data attribute includes a type and a quantity;
  • the system receives the sample data and analyzes its data attributes to determine the type and quantity of the sample data.
  • the types of sample data mainly include image data and data for predicting continuous values (for example, stock market quotes); in addition, image data may itself be data for predicting continuous values (for example, video for face pose correction is time-series image data).
  • a second determining module 102 configured to determine a machine learning model corresponding to the data attribute of the sample data according to a mapping relationship between the predetermined machine learning model and data attributes of the sample data;
  • the system contains a variety of machine learning models, including traditional machine learning models (random forest, SVM, naive Bayes, knn, gbdt, xgboost, LR, and so on) and deep learning models (lenet, alexnet, vggnet, resnet, inception-v1, inception-resnet, sgd, fast-rcnn, and so on).
  • the mapping relationship between machine learning models and the data attributes of sample data is preset in the system; that is, for each different data attribute of the sample data (i.e. different types and/or different quantities), the system associates a corresponding machine learning model that is suitable and trains well. For example, when the type of the sample data is image data and the quantity is greater than A (for example, 10,000 images), the models corresponding to the sample attributes (type and quantity) of that sample data are determined to be one or more convolutional neural network (CNN) models, the CNN models differing in their number of layers; when the type of the sample data is image data and the quantity does not exceed A (for example, 10,000 images), the corresponding model is determined to be a support vector machine (SVM) model, possibly along with models of other types; and when the type of the sample data is data for predicting continuous values, the corresponding models are determined to be regression models or more models of other types. Therefore, once the type and quantity of the received sample data are determined, the system uses the predetermined mapping relationship between machine learning models and data attributes to determine one or more suitable, well-training machine learning models corresponding to the current sample data.
  • a first training module 103, configured to train each of the determined machine learning models on the sample data;
  • after determining the machine learning models, the system trains each determined model on the sample data separately; once training of each determined model is complete, the system obtains the training results of the respective models.
  • the analysis module 104 is configured to analyze training results of the respective machine learning models obtained after the training is completed, and display the training results that meet the preset conditions on the display interface.
  • in this embodiment, the training result includes the accuracy rate and the loss-function curve of the machine learning model after training is completed. The preset condition is preferably that the accuracy rate is greater than a preset value (for example, 95%), or that the accuracy rate ranks within a preset number of places (for example, the top 3) when accuracy rates are sorted in descending order; of course, in other embodiments, the preset condition may be a different scheme. After obtaining the training results of the respective models, the system filters them according to the preset condition and displays the filtered results on the display interface, so that the user can pick and use the machine learning model with the best training result.
  • in the technical solution of this embodiment, the mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After receiving the sample data uploaded by the user, the system first analyzes and determines the data attributes of the sample data, then uses the preset mapping relationship to determine the machine learning model (possibly one, possibly several) expected to train well on the current sample data, and trains each determined model on the sample data separately. After training is completed, the training results of the respective machine learning models that meet the preset condition are displayed on the display interface for the user to view and to select the best machine learning model. In other words, the system automatically determines, from the uploaded sample data, one or more suitable machine learning models expected to train well, trains them, and displays the better results to the user, so that the user can accurately pick the best model in a single pass without extra effort. Compared with the prior-art approach in which the user selects models by hand, this avoids poor training results caused by an inexperienced user's wrong model choice, and solves the problem of users spending too much time repeatedly re-training models and delaying their work.
  • the multi-model sample training system of this embodiment replaces the first training module 103 with a second training module 105, and the second training module includes:
  • a display sub-module 1051 configured to display the determined machine learning model on an operation interface, so that the user selects a machine learning model to be trained;
  • after determining the machine learning models corresponding to the current sample data, the system displays them on an operation interface for the user to select the models to be trained; the user may select only one machine learning model to be trained, or may select several or all of them.
  • the training sub-module 1052 is configured to, after receiving the machine learning models to be trained that the user selected on the operation interface, train each selected model on the sample data.
  • after receiving the machine learning models to be trained that the user selected on the operation interface, the system trains each of them on the sample data separately.
  • with the solution of this embodiment, the user can choose which one or more of the machine learning models determined by the system for the current sample data are used to train the sample data.
  • in addition, in other embodiments, a program module for adding machine learning models may be added to let the user add machine learning models on the operation interface, so that professional R&D personnel can add more machine learning models for sample training according to their own plans.
  • the training submodule 1052 includes:
  • a display unit, configured to display on the operation interface, after receiving the machine learning models to be trained that the user selected there, a parameter-setting interface for each model to be trained;
  • after receiving the user's selection, the system pops up a parameter-setting interface for each machine learning model to be trained on the operation interface, so that the user can adjust the parameters of each model. Experienced users can tune the parameters on this interface according to their own experience; of course, every machine learning model also has default parameters that usually achieve good training results, so novice users need not adjust the parameters in the parameter-setting interface and can simply use the defaults.
  • the training unit is configured to train each machine learning model to be trained on the sample data after the parameter setting of each such model is completed.
  • when the system detects that each machine learning model to be trained has completed parameter setting (for example, the operation interface has a "complete setting" control, and when the user clicks it the system determines that the user has finished setting the parameters), the system trains each model to be trained on the sample data separately.
  • in the solution of this embodiment, a parameter-setting interface for each selected model pops up after the user chooses the models to be trained, so more professional users can tune better parameters based on their own experience, while less professional users can directly use the models' default parameters without any tuning; this better matches the needs of users at different levels of expertise.
  • in addition, the present application further provides a computer-readable storage medium storing a multi-model sample training system, the multi-model sample training system being executable by at least one processor to cause the at least one processor to execute the multi-model sample training method of any of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present application discloses an electronic device, a multi-model sample training method, and a computer-readable storage medium. The method includes: receiving sample data uploaded by a user and determining data attributes of the sample data, the data attributes including a type and a quantity; determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data; training each of the determined machine learning models on the sample data; and analyzing the training results of each machine learning model obtained after training is completed, and displaying the training results that meet a preset condition on a display interface. The technical solution of the present application avoids the situation where an inexperienced user obtains poor training results because of a wrong model choice, and solves the problem of users spending too much time repeatedly re-training models, delaying their work.

Description

Electronic device, multi-model sample training method and system, and computer-readable storage medium
Under the Paris Convention, the present application claims priority to the Chinese patent application with application number CN 201711056980.7, filed on October 27, 2017 and entitled "Electronic device, multi-model sample training method and computer-readable storage medium", the entire content of which is incorporated in the present application by reference.
TECHNICAL FIELD
The present application relates to the field of machine learning model training, and in particular to an electronic device, a multi-model sample training method, a system, and a computer-readable storage medium.
BACKGROUND
At present, when training samples with machine learning, practitioners in the industry need to manually select some machine learning models and then train the sample data with the selected models to obtain a classifier. However, this way of selecting models by oneself is quite difficult for beginners or users with a weak foundation: it easily happens that a wrong model choice yields a classifier with poor performance that does not meet requirements, so the model must be re-selected and re-trained, and the repeated training costs too much time and severely delays the user's work.
SUMMARY
The present application provides an electronic device, a multi-model sample training method, a system, and a computer-readable storage medium, intended to solve the problem that inexperienced users spend too much time repeatedly re-training models, delaying the progress of their work.
A first aspect of the present application provides an electronic device including a memory and a processor, the memory storing a multi-model sample training system executable on the processor, and the multi-model sample training system implementing the following steps when executed by the processor:
A. receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including a type and a quantity;
B. determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
C. training each of the determined machine learning models on the sample data;
D. analyzing the training results of the respective machine learning models obtained after training is completed, and displaying the training results that meet a preset condition on a display interface.
A second aspect of the present application provides a multi-model sample training method, the method comprising the steps of:
E. receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including a type and a quantity;
F. determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
G. training each of the determined machine learning models on the sample data;
H. analyzing the training results of the respective machine learning models obtained after training is completed, and displaying the training results that meet the preset condition on a display interface.
A third aspect of the present application provides a multi-model sample training system, the multi-model sample training system comprising:
a first determining module, configured to receive sample data uploaded by a user and determine data attributes of the sample data, the data attributes including a type and a quantity;
a second determining module, configured to determine the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
a first training module, configured to train each of the determined machine learning models on the sample data;
an analysis module, configured to analyze the training results of the respective machine learning models obtained after training is completed and display the training results that meet the preset condition on a display interface.
A fourth aspect of the present application provides a computer-readable storage medium storing a multi-model sample training system executable by at least one processor to cause the at least one processor to perform the following steps:
receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including a type and a quantity;
determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and data attributes of sample data;
training each of the determined machine learning models on the sample data;
analyzing the training results of the respective machine learning models obtained after training is completed, and displaying the training results that meet the preset condition on a display interface.
In the technical solution of the present application, the mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After receiving the sample data uploaded by the user, the system first analyzes and determines the data attributes of the sample data, then uses the preset mapping relationship to determine the machine learning model (possibly one, possibly several) expected to train well on the current sample data, and trains each determined model on the sample data separately. After training is completed, the training results of the respective machine learning models that meet the preset condition are displayed on the display interface for the user to view and to select the best machine learning model. In other words, the system automatically determines, from the sample data uploaded by the user, one or more suitable machine learning models expected to train well, trains them on the sample data, and displays the better training results to the user when training finishes, so that the user can accurately pick the best model in a single pass without extra effort. Compared with the prior-art approach in which the user selects models by hand, this solution avoids the situation where an inexperienced user obtains poor training results because of a wrong model choice, and solves the problem of users spending too much time repeatedly re-training models and delaying their work.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a first embodiment of the multi-model sample training method of the present application;
FIG. 2 is a schematic flowchart of a second embodiment of the multi-model sample training method of the present application;
FIG. 3 is a schematic flowchart of a third embodiment of the multi-model sample training method of the present application;
FIG. 4 is a schematic diagram of the operating environment of a preferred embodiment of the multi-model sample training system of the present application;
FIG. 5 is a program module diagram of a first embodiment of the multi-model sample training system of the present application;
FIG. 6 is a program module diagram of a second embodiment of the multi-model sample training system of the present application.
DETAILED DESCRIPTION
The principles and features of the present application are described below with reference to the accompanying drawings; the examples given are only intended to explain the present application and are not intended to limit its scope.
As shown in FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the multi-model sample training method of the present application.
In this embodiment, the multi-model sample training method includes:
Step S10: receive sample data uploaded by a user, and determine data attributes of the sample data, the data attributes including a type and a quantity.
After the user uploads sample data, the system receives it and analyzes its data attributes to determine the type and quantity of the sample data. The types of sample data mainly include image data and data for predicting continuous values (for example, stock market quotes); in addition, image data may itself be data for predicting continuous values (for example, video for face pose correction is time-series image data).
Step S20: determine the machine learning model corresponding to the data attributes of the sample data according to the predetermined mapping relationship between machine learning models and data attributes of sample data.
The system contains a variety of machine learning models, including traditional machine learning models (random forest, SVM, naive Bayes, knn, gbdt, xgboost, LR, and so on) and deep learning models (lenet, alexnet, vggnet, resnet, inception-v1, inception-resnet, sgd, fast-rcnn, and so on). The mapping relationship between machine learning models and the data attributes of sample data is preset in the system; that is, for each different data attribute of the sample data (different type and/or different quantity), the system associates a corresponding machine learning model that is suitable and trains well. For example, when the type of the sample data is image data and the quantity is greater than A (for example, 10,000 images), the models corresponding to the sample attributes (type and quantity) of that sample data are determined to be one or more convolutional neural network (CNN) models, the CNN models differing in their number of layers; when the type of the sample data is image data and the quantity does not exceed A (for example, 10,000 images), the corresponding model is determined to be a support vector machine (SVM) model, possibly along with models of other types; and when the type of the sample data is data for predicting continuous values, the corresponding models are determined to be regression models or more models of other types. Therefore, once the type and quantity of the received sample data are determined, the system uses the predetermined mapping relationship between machine learning models and data attributes to determine one or more suitable, well-training machine learning models corresponding to the current sample data.
Step S30: train each of the determined machine learning models on the sample data.
After determining the machine learning models, the system trains each determined model on the sample data separately; once training of each determined model is complete, the system obtains the training results of the respective models.
Step S40: analyze the training results of the respective machine learning models obtained after training is completed, and display the training results that meet the preset condition on the display interface.
In this embodiment, the training result includes the accuracy rate and the loss-function curve of the machine learning model after training is completed. The preset condition is preferably that the accuracy rate is greater than a preset value (for example, 95%), or that the accuracy rate ranks within a preset number of places (for example, the top 3) when accuracy rates are sorted in descending order; of course, in other embodiments, the preset condition may be a different scheme. After obtaining the training results of the respective models, the system filters them according to the preset condition and displays the filtered results on the display interface, presenting the better results to the user so that the user can pick and use the machine learning model with the best training result.
In the technical solution of this embodiment, the mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After receiving the sample data uploaded by the user, the system first analyzes and determines the data attributes of the sample data, then uses the preset mapping relationship to determine the machine learning model (possibly one, possibly several) expected to train well on the current sample data, and trains each determined model on the sample data separately. After training is completed, the training results that meet the preset condition are displayed on the display interface for the user to view and to select the best machine learning model. In other words, the system automatically determines, from the uploaded sample data, one or more suitable machine learning models expected to train well, trains them, and displays the better results to the user, so that the user can accurately pick the best model in a single pass without extra effort. Compared with the prior-art approach in which the user selects models by hand, this avoids poor training results caused by an inexperienced user's wrong model choice, and solves the problem of users spending too much time repeatedly re-training models and delaying their work.
As shown in FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the multi-model sample training method of the present application; in this embodiment, the multi-model sample training method replaces step S30 with:
Step S31: display the determined machine learning models on an operation interface for the user to select the machine learning models to be trained.
After determining the machine learning models corresponding to the current sample data, the system displays them on an operation interface for the user to select the models to be trained; the user may select only one machine learning model to be trained, or may select several or all of them.
Step S32: after receiving the machine learning models to be trained that the user selected on the operation interface, train each selected model on the sample data.
After receiving the machine learning models to be trained that the user selected on the operation interface, the system trains each of them on the sample data separately.
With the solution of this embodiment, the user can choose which one or more of the machine learning models determined by the system for the current sample data are used to train the sample data.
In addition, in other embodiments, a step may be added before step S32 that lets the user add machine learning models on the operation interface, so that professional R&D personnel can add more machine learning models for sample training according to their own plans.
As shown in FIG. 3, which is a schematic flowchart of the third embodiment of the multi-model sample training method of the present application, this embodiment is based on the second embodiment. In this embodiment, step S32 includes:
Step S321: after receiving the machine learning model(s) to be trained that the user selects on the operation interface, displaying a parameter setting interface for each machine learning model to be trained on the operation interface;
After receiving the machine learning model(s) to be trained selected by the user, the system pops up a parameter setting interface for each machine learning model to be trained on the operation interface, for the user to adjust the parameters of each model to be trained. Users experienced in model training can adjust the corresponding parameters on the parameter setting interface according to their own experience; of course, each machine learning model has default parameters, and these defaults usually achieve good training results, so users new to model training can skip adjusting the parameters in the parameter setting interface and use the defaults directly.
Step S322: after the parameter setting of each machine learning model to be trained is completed, training each machine learning model to be trained on the sample data.
When the system detects that the parameter setting of each machine learning model to be trained is completed (for example, the operation interface has a "Finish setting" control; when the user clicks it, the system determines that the user has completed the parameter setting), the system trains each machine learning model to be trained on the sample data.
In the solution of this embodiment, after the user selects the machine learning model(s) to be trained, a parameter setting interface pops up for each model to be trained, so that more professional users can tune better parameters based on their own experience, while less professional users can directly use the default parameters of the machine learning models without any parameter tuning; this better meets the needs of users of different levels of expertise.
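The default-parameters-with-optional-overrides behaviour described in this embodiment can be sketched as follows; the parameter names and default values are hypothetical, as the disclosure does not specify them:

```python
# Hypothetical default parameters for each model; the real system's
# defaults and parameter names are not specified in the disclosure.
DEFAULTS = {
    "svm": {"C": 1.0, "kernel": "rbf"},
    "random_forest": {"n_estimators": 100, "max_depth": None},
}

def effective_params(model_name, user_overrides=None):
    """Merge user-adjusted parameters over the model's defaults."""
    params = dict(DEFAULTS[model_name])   # start from the defaults
    params.update(user_overrides or {})   # apply any user adjustments
    return params

# A novice user makes no adjustments: the defaults are used directly.
print(effective_params("svm"))
# An experienced user tunes one parameter: the rest stay at defaults.
print(effective_params("svm", {"C": 10.0}))
```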
In addition, the present application further provides a multi-model sample training system.
Referring to FIG. 4, it is a schematic diagram of the running environment of a preferred embodiment of the multi-model sample training system 10 of the present application.
In this embodiment, the multi-model sample training system 10 is installed and runs in an electronic device 1. The electronic device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a server. The electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13. FIG. 4 shows only the electronic device 1 with components 11-13, but it should be understood that implementing all of the illustrated components is not required, and more or fewer components may be implemented instead.
In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or internal memory of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 is used to store application software installed in the electronic device 1 and various kinds of data, such as the program code of the multi-model sample training system 10. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip, used to run the program code stored in the memory 11 or to process data, for example, to execute the multi-model sample training system 10.
In some embodiments, the display 13 may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 13 is used to display information processed in the electronic device 1 and to display a visual user interface, such as a service customization interface. The components 11-13 of the electronic device 1 communicate with each other through a system bus.
Referring to FIG. 5, it is a program module diagram of an embodiment of the multi-model sample training system 10 of the present application. In this embodiment, the multi-model sample training system 10 may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete the present application. For example, in FIG. 5 the multi-model sample training system 10 may be divided into a first determining module 101, a second determining module 102, a first training module 103, and an analysis module 104. A module referred to in the present application is a series of computer program instruction segments capable of completing a specific function, and is more suitable than a program for describing the execution process of the multi-model sample training system 10 in the electronic device 1, wherein:
The first determining module 101 is configured to receive sample data uploaded by a user and determine the data attributes of the sample data, the data attributes including type and quantity;
After the user uploads sample data, the system receives the sample data and analyzes its data attributes to determine the type and quantity of the sample data. The types of sample data mainly include image data and data for predicting continuous values (for example, stock market quotations); in addition, image data may also be data for predicting continuous values (for example, video for face pose correction is time-series image data).
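Determining the type and quantity of uploaded sample data might look like the following sketch; the file-extension heuristic is purely an illustrative assumption, since the disclosure does not specify how the attributes are detected:

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def data_attributes(file_names):
    """Return (type, quantity) for a batch of uploaded sample files.
    Heuristic: if every file has an image extension, treat the batch as
    image data; otherwise treat it as data for predicting continuous
    values (e.g. stock quotations)."""
    quantity = len(file_names)
    if quantity and all(Path(n).suffix.lower() in IMAGE_EXTENSIONS
                        for n in file_names):
        return "image", quantity
    return "continuous", quantity

print(data_attributes(["a.jpg", "b.png"]))
print(data_attributes(["prices.csv"]))
```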
The second determining module 102 is configured to determine the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and the data attributes of sample data;
The system contains various machine learning models, including traditional machine learning models such as random forest, SVM, naive Bayes, KNN, GBDT, XGBoost, and LR, as well as deep learning models such as LeNet, AlexNet, VGGNet, ResNet, Inception-v1, Inception-ResNet, SGD, and Fast-RCNN. A mapping relationship between machine learning models and the data attributes of sample data is preset in the system; that is, for each different data attribute of sample data (i.e., different type and/or different quantity), the system associates one or more suitable machine learning models with good training performance. For example, when the type of the sample data is image data and the quantity is greater than A (for example, 10,000 images), one or more convolutional neural network (CNN) models (each with a different number of layers) are determined to correspond to the sample attributes (type and quantity) of the sample data; when the type of the sample data is image data and the quantity does not exceed A (for example, 10,000 images), a support vector machine (SVM) model (possibly together with models of several other types) is determined to correspond to those sample attributes; when the type of the sample data is data for predicting continuous values, a regression model (or more models of other types) is determined as the corresponding model. Therefore, once the type and quantity of the received sample data are determined, one or more suitable machine learning models with good training performance for the current sample data are determined according to the preset mapping relationship between machine learning models and the data attributes of sample data.
The first training module 103 is configured to train each determined machine learning model on the sample data;
After determining the machine learning models, the system trains each determined machine learning model on the sample data; once training of each determined machine learning model is complete, the system obtains the training result of each machine learning model.
The analysis module 104 is configured to analyze the training results of the machine learning models obtained after training, and present the training results that meet a preset condition on a display interface.
In this embodiment, the training result includes the accuracy and the loss-function curve of the trained machine learning model. The preset condition is preferably: accuracy greater than a preset value (for example, 95%), or a preset top ranking when the models are sorted by accuracy in descending order (for example, the top 3); of course, in other embodiments the preset condition may be another scheme. After obtaining the training result of each machine learning model, the system filters the training results according to the preset condition and presents the filtered training results on the display interface, so that the better training results are shown to the user, who can then pick the machine learning model with the best training result for use.
In the technical solution of this embodiment, a mapping relationship between machine learning models and the data attributes of sample data is preset in the system. After sample data uploaded by a user is received, the data attributes of the sample data are first analyzed and determined; then, according to the mapping relationship between machine learning models and the data attributes of sample data in the system, the machine learning model or models with better training performance for the current sample data are determined. Each determined machine learning model is trained on the sample data, and after training, those training results of the machine learning models that meet the preset condition are presented on the display interface for the user to review and pick the best machine learning model. In other words, in this solution the system automatically determines, from the sample data uploaded by the user, one or more suitable machine learning models that train well, trains them on the sample data, and presents the better training results to the user after training, so that the user can accurately pick the best model in a single pass without spending extra time. Compared with the prior-art approach in which users choose a model for training the sample data by themselves, this solution prevents inexperienced users from obtaining poor training results due to a wrong model choice, and solves the problem of users spending too much time repeatedly training models to the detriment of work progress.
Referring to FIG. 6, in the multi-model sample training system of this embodiment, the first training module 103 is replaced with a second training module 105, and the second training module includes:
A presenting sub-module 1051, configured to present the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained;
After determining the machine learning models corresponding to the current sample data, the system presents the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained; on the operation interface, the user may select only one machine learning model to be trained, or may select several or all of them.
A training sub-module 1052, configured to train each machine learning model to be trained on the sample data after receiving the machine learning model(s) to be trained that the user selects on the operation interface.
After receiving the machine learning model(s) to be trained that the user selects on the operation interface, the system trains each of these machine learning models on the sample data.
The solution of this embodiment allows the user to choose, from the machine learning models that the system determines for the current sample data, which model or models are trained on the sample data.
In addition, in other embodiments, a program module for adding machine learning models may also be added, so that the user can add machine learning models on the operation interface, allowing professional developers to add more machine learning models for sample training according to their own plans.
Further, in this embodiment, the training sub-module 1052 includes:
A display unit, configured to display a parameter setting interface for each machine learning model to be trained on the operation interface after receiving the machine learning model(s) to be trained that the user selects on the operation interface;
After receiving the machine learning model(s) to be trained selected by the user, the system pops up a parameter setting interface for each machine learning model to be trained on the operation interface, for the user to adjust the parameters of each model to be trained. Users experienced in model training can adjust the corresponding parameters on the parameter setting interface according to their own experience; of course, each machine learning model has default parameters, and these defaults usually achieve good training results, so users new to model training can skip adjusting the parameters in the parameter setting interface and use the defaults directly.
A training unit, configured to train each machine learning model to be trained on the sample data after the parameter setting of each machine learning model to be trained is completed.
When the system detects that the parameter setting of each machine learning model to be trained is completed (for example, the operation interface has a "Finish setting" control; when the user clicks it, the system determines that the user has completed the parameter setting), the system trains each machine learning model to be trained on the sample data.
In the solution of this embodiment, after the user selects the machine learning model(s) to be trained, a parameter setting interface pops up for each model to be trained, so that more professional users can tune better parameters based on their own experience, while less professional users can directly use the default parameters of the machine learning models without any parameter tuning; this better meets the needs of users of different levels of expertise.
Further, the present application also provides a computer readable storage medium storing a multi-model sample training system, the multi-model sample training system being executable by at least one processor to cause the at least one processor to execute the multi-model sample training method of any of the above embodiments.
The above are only preferred embodiments of the present application and are not intended to limit the patent scope of the present application; any equivalent structural transformation made, under the inventive concept of the present application, using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is included within the patent protection scope of the present application.

Claims (20)

  1. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory storing a multi-model sample training system executable on the processor, the multi-model sample training system implementing the following steps when executed by the processor:
    A. receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including type and quantity;
    B. determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and the data attributes of sample data;
    C. training each determined machine learning model on the sample data;
    D. analyzing the training results of the machine learning models obtained after training, and presenting the training results that meet a preset condition on a display interface.
  2. The electronic device according to claim 1, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  3. The electronic device according to claim 1, characterized in that step C is replaced with:
    C1. presenting the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained;
    C2. after receiving the machine learning model(s) to be trained that the user selects on the operation interface, training each machine learning model to be trained on the sample data.
  4. The electronic device according to claim 3, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  5. The electronic device according to claim 3, characterized in that step C2 includes:
    after receiving the machine learning model(s) to be trained that the user selects on the operation interface, displaying a parameter setting interface for each machine learning model to be trained on the operation interface;
    after the parameter setting of each machine learning model to be trained is completed, training each machine learning model to be trained on the sample data.
  6. The electronic device according to claim 5, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  7. A multi-model sample training method, characterized in that the method comprises the steps of:
    E. receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including type and quantity;
    F. determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and the data attributes of sample data;
    G. training each determined machine learning model on the sample data;
    H. analyzing the training results of the machine learning models obtained after training, and presenting the training results that meet a preset condition on a display interface.
  8. The multi-model sample training method according to claim 7, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  9. The multi-model sample training method according to claim 7, characterized in that step G is replaced with:
    G1. presenting the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained;
    G2. after receiving the machine learning model(s) to be trained that the user selects on the operation interface, training each machine learning model to be trained on the sample data.
  10. The multi-model sample training method according to claim 9, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  11. The multi-model sample training method according to claim 9, characterized in that step G2 includes:
    after receiving the machine learning model(s) to be trained that the user selects on the operation interface, displaying a parameter setting interface for each machine learning model to be trained on the operation interface;
    after the parameter setting of each machine learning model to be trained is completed, training each machine learning model to be trained on the sample data.
  12. The multi-model sample training method according to claim 11, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  13. A multi-model sample training system, characterized in that the multi-model sample training system comprises:
    a first determining module, configured to receive sample data uploaded by a user and determine data attributes of the sample data, the data attributes including type and quantity;
    a second determining module, configured to determine the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and the data attributes of sample data;
    a first training module, configured to train each determined machine learning model on the sample data;
    an analysis module, configured to analyze the training results of the machine learning models obtained after training and present the training results that meet a preset condition on a display interface.
  14. The multi-model sample training system according to claim 13, characterized in that the first training module is replaced with a second training module, the second training module comprising:
    a presenting sub-module, configured to present the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained;
    a training sub-module, configured to train each machine learning model to be trained on the sample data after receiving the machine learning model(s) to be trained that the user selects on the operation interface.
  15. A computer readable storage medium, characterized in that the computer readable storage medium stores a multi-model sample training system, the multi-model sample training system being executable by at least one processor to cause the at least one processor to execute the following steps:
    receiving sample data uploaded by a user, and determining data attributes of the sample data, the data attributes including type and quantity;
    determining the machine learning model corresponding to the data attributes of the sample data according to a predetermined mapping relationship between machine learning models and the data attributes of sample data;
    training each determined machine learning model on the sample data;
    analyzing the training results of the machine learning models obtained after training, and presenting the training results that meet a preset condition on a display interface.
  16. The computer readable storage medium according to claim 15, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  17. The computer readable storage medium according to claim 15, characterized in that the step of training each determined machine learning model on the sample data is replaced with:
    presenting the determined machine learning models on an operation interface for the user to select the machine learning model(s) to be trained;
    after receiving the machine learning model(s) to be trained that the user selects on the operation interface, training each machine learning model to be trained on the sample data.
  18. The computer readable storage medium according to claim 17, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
  19. The computer readable storage medium according to claim 17, characterized in that the step of training each machine learning model to be trained on the sample data after receiving the machine learning model(s) to be trained that the user selects on the operation interface includes:
    after receiving the machine learning model(s) to be trained that the user selects on the operation interface, displaying a parameter setting interface for each machine learning model to be trained on the operation interface;
    after the parameter setting of each machine learning model to be trained is completed, training each machine learning model to be trained on the sample data.
  20. The computer readable storage medium according to claim 19, characterized in that the training result includes the accuracy and the loss-function curve of the machine learning model training; the preset condition is: accuracy greater than a preset value, or a preset top ranking when sorted by accuracy in descending order.
PCT/CN2018/089427 2017-10-27 2018-06-01 Electronic device, multi-model sample training method and system, and computer readable storage medium WO2019080501A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711056980.7A CN108021986A (zh) 2017-10-27 2017-10-27 Electronic device, multi-model sample training method and computer readable storage medium
CN201711056980.7 2017-10-27

Publications (1)

Publication Number Publication Date
WO2019080501A1 true WO2019080501A1 (zh) 2019-05-02

Family

ID=62080432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089427 WO2019080501A1 (zh) 2017-10-27 2018-06-01 电子装置、多模型样本训练方法、系统和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108021986A (zh)
WO (1) WO2019080501A1 (zh)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101782976A (zh) * 2010-01-15 2010-07-21 南京邮电大学 一种云计算环境下机器学习自动选择方法
CN107122327A (zh) * 2016-02-25 2017-09-01 阿里巴巴集团控股有限公司 一种利用训练数据训练模型的方法和训练系统
CN107229976A (zh) * 2017-06-08 2017-10-03 郑州云海信息技术有限公司 一种基于spark的分布式机器学习系统
CN108021986A (zh) * 2017-10-27 2018-05-11 平安科技(深圳)有限公司 电子装置、多模型样本训练方法和计算机可读存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719222B (zh) * 2009-11-27 2014-02-12 北京中星微电子有限公司 分类器训练方法和装置以及人脸认证方法和装置
CN102682091A (zh) * 2012-04-25 2012-09-19 腾讯科技(深圳)有限公司 基于云服务的视觉搜索方法和系统
CN107194412A (zh) * 2017-04-20 2017-09-22 百度在线网络技术(北京)有限公司 一种处理数据的方法、装置、设备和计算机存储介质


Also Published As

Publication number Publication date
CN108021986A (zh) 2018-05-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18870806

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18870806

Country of ref document: EP

Kind code of ref document: A1