WO2017092022A1 - Supervised learning optimization method and system in tensor mode (一种张量模式下的有监督学习优化方法及系统) - Google Patents
Supervised learning optimization method and system in tensor mode
- Publication number
- WO2017092022A1 (application PCT/CN2015/096375)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tensor
- projection
- objective function
- rank
- unit
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Definitions
- the invention belongs to the technical field of pattern recognition, and in particular relates to a supervised learning optimization method and system in a tensor mode.
- the prior art still uses vector mode algorithms to process tensor data.
- the original data must be feature extracted (vectorized) in the preprocessing stage.
- the spatial information and intrinsic correlations unique to tensor data are easily destroyed, and the model has too many parameters, which can easily lead to the curse of dimensionality, overfitting, small-sample problems, and the like.
- the embodiments of the present invention provide a supervised learning optimization method and system in a tensor mode to solve the curse-of-dimensionality, overfitting, and small-sample problems that arise when the vector-mode algorithms of the prior art process tensor data.
- the algorithm of the present invention also addresses the limitations of existing tensor-mode algorithms, such as high time complexity and frequent convergence to local minima.
- a supervised learning optimization method in a tensor mode comprising:
- the quadratic programming sub-problems of N vector patterns are transformed into multiple quadratic programming problems in a single tensor mode, and the optimization framework of the objective function of the OPSTM problem is constructed.
- the dual problem of the optimization framework of the objective function is obtained, and the tensor rank-one decomposition is introduced into the calculation of the tensor inner product to obtain the modified dual problem.
- the sequential minimal optimization (SMO) algorithm is used to solve the modified dual problem, and the optimal combination of Lagrange multipliers and the offset scalar b are output;
- the to-be-predicted tensor data is subjected to rank-one decomposition, and is input to the decision function for prediction.
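The solving step above relies on the sequential minimal optimization (SMO) algorithm. As an illustration only — the patent gives no code, and the sketch below solves an ordinary vector-mode SVM dual with a linear kernel rather than the modified tensor-mode dual of the OPSTM problem — a minimal simplified-SMO trainer in pure Python might look like:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def smo_train(X, y, C=1.0, tol=1e-4, max_passes=20, seed=0):
    """Simplified SMO (after Platt): pick a multiplier that violates the
    KKT conditions, pair it with a random second multiplier, and solve the
    two-variable sub-problem analytically."""
    m = len(X)
    alpha, b = [0.0] * m, 0.0
    K = [[dot(X[i], X[j]) for j in range(m)] for i in range(m)]  # linear kernel
    rng = random.Random(seed)

    def f(i):  # current decision value on training point i
        return sum(alpha[k] * y[k] * K[k][i] for k in range(m)) + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(m):
            Ei = f(i) - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.randrange(m - 1)
                if j >= i:
                    j += 1
                Ej = f(j) - y[j]
                ai, aj = alpha[i], alpha[j]
                if y[i] != y[j]:  # box constraints for the chosen pair
                    L, H = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    L, H = max(0.0, ai + aj - C), min(C, ai + aj)
                if L == H:
                    continue
                eta = 2.0 * K[i][j] - K[i][i] - K[j][j]  # second derivative (<= 0)
                if eta >= 0:
                    continue
                alpha[j] = min(H, max(L, aj - y[j] * (Ei - Ej) / eta))
                if abs(alpha[j] - aj) < 1e-7:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                # update the threshold b from whichever multiplier is unbound
                b1 = b - Ei - y[i] * (alpha[i] - ai) * K[i][i] - y[j] * (alpha[j] - aj) * K[i][j]
                b2 = b - Ej - y[i] * (alpha[i] - ai) * K[i][j] - y[j] * (alpha[j] - aj) * K[j][j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2.0)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

def smo_predict(X, y, alpha, b, x):
    s = sum(alpha[k] * y[k] * dot(X[k], x) for k in range(len(X))) + b
    return 1 if s >= 0 else -1

# Tiny linearly separable demo set (illustrative values, not patent data)
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]]
y = [1, 1, -1, -1]
alpha, b = smo_train(X, y)
```

The method described in this publication applies the same pairwise optimization to the modified dual in which tensor inner products are evaluated via rank-one decomposition.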
- the objective function of the quadratic programming problem of the n-th sub-problem becomes:
- w (n) is the n-th order optimal projection vector of the training tensor data set
- n = 1, 2, …, N
- C is a penalty factor
- It is a slack variable
- the coefficient η is used to measure the importance of the intra-class scatter matrix.
- the optimization framework of the objective function of the OPSTM problem is a combination of N vector pattern quadratic programming problems, respectively corresponding to a sub-problem, wherein the quadratic programming problem of the n-th sub-problem is:
- E is the identity matrix
- the projected tensor input data, obtained by projecting the tensor input data X m in the tensor data set along each order
- × i is the i-mode multiplication operator
- b (n) is the offset scalar of the n-th order of the training tensor data set.
- the quadratic programming sub-problems of N vector patterns are transformed into multiple quadratic programming problems in a single tensor mode.
- the optimization framework of the objective function of the constructed OPSTM problem satisfies:
- the tensor rank-one decomposition is introduced into the calculation of the tensor inner product, and the modified dual problem is:
- a supervised learning optimization system in a tensor mode comprising:
- a data receiving unit configured to receive the input training tensor data set
- An intra-class scatter introduction unit is used to introduce an intra-class scatter matrix into the objective function, so that the objective function maximizes the distance between the classes while minimizing the intra-class distance;
- Sub-problem optimization framework building unit for constructing an optimization framework of the objective function of the optimal projection tensor OPSTM sub-problem
- the problem optimization framework building unit is used to transform the quadratic programming sub-problems of N vector patterns into multiple quadratic programming problems in a single tensor mode, and to construct an optimization framework of the objective function of the OPSTM problem;
- the dual problem obtaining unit is configured to obtain the dual problem of the optimization framework of the objective function according to the Lagrange multiplier method, and to introduce tensor rank-one decomposition into the calculation of the tensor inner product to obtain the modified dual problem;
- the dual problem solving unit is used to solve the modified dual problem using the sequential minimal optimization SMO algorithm, and to output the optimal combination of Lagrange multipliers and the offset scalar b;
- a projection tensor calculation unit for calculating a projection tensor W * ;
- a projection tensor decomposition unit for performing rank-one decomposition on the projection tensor W * ;
- a back projection unit configured to back-project the components obtained from the rank-one decomposition of the projection tensor W * ;
- An optimal projection tensor calculation unit is configured to perform a rank-one decomposition inverse operation on the component after the back projection, and obtain an optimal projection tensor W corresponding to the training tensor data set;
- a decision function building unit is used, in the decision function construction stage, to construct the decision function from the rank-one-decomposed optimal projection tensor W together with the offset scalar b;
- a prediction unit configured, in the application prediction stage, to input the tensor data to be predicted, after rank-one decomposition, into the decision function for prediction.
- the intra-class scatter introducing unit introduces the intra-class scatter matrix into the objective function of the STM sub-problem via the coefficient η
- the objective function of the quadratic programming problem of the n-th sub-problem becomes:
- w (n) is the n-th order optimal projection vector of the training tensor data set
- n = 1, 2, …, N
- C is a penalty factor
- It is a slack variable
- the coefficient η is used to measure the importance of the intra-class scatter matrix.
- the optimization framework of the objective function of the OPSTM problem is a combination of N vector-mode quadratic programming problems, each corresponding to a sub-problem, wherein the quadratic programming problem of the n-th sub-problem is:
- E is the identity matrix
- the projected tensor input data, obtained by projecting the tensor input data X m in the tensor data set along each order
- × i is the i-mode multiplication operator
- b (n) is the offset scalar of the n-th order of the training tensor data set.
- the problem optimization framework building unit, based on the foregoing formulas,
- the quadratic programming sub-problems of N vector patterns are transformed into multiple quadratic programming problems in a single tensor mode.
- the optimization framework of the objective function of the constructed OPSTM problem satisfies:
- the dual problem solving unit obtains the dual problem of the optimization framework of the objective function according to the Lagrange multiplier method:
- the dual problem solving unit introduces the tensor rank-one decomposition into the calculation of the tensor inner product, and the modified dual problem is:
- the projection tensor calculation unit calculates the projection tensor W * according to the foregoing formula.
- the quadratic programming problems of the N vector modes are transformed into the multiple quadratic programming problem under the single tensor mode, and the transformed objective function optimization framework is the optimization framework of the objective function of the OPSTM problem.
- the number of model parameters is greatly reduced, which overcomes the curse-of-dimensionality, overfitting, and small-sample problems that traditional vector-mode algorithms face when processing tensor data, while retaining excellent classification performance and efficient processing.
- the algorithm provided by the embodiments of the present invention can process tensor data directly and efficiently in the tensor domain, has optimal classification ability, and offers strong practicability and generalization.
- FIG. 1 is a flow chart showing an implementation of an embodiment of a supervised learning optimization method in a tensor mode of the present invention
- FIG. 2 is a structural block diagram of an embodiment of a supervised learning optimization system in the tensor mode of the present invention.
- the input training tensor data set is received; the intra-class scatter matrix is introduced into the objective function so that the objective function maximizes the inter-class distance while minimizing the intra-class distance; an optimization framework of the objective function of the optimal projection tensor machine (OPSTM) sub-problem is constructed;
- the N vector-mode quadratic programming sub-problems are transformed into a multiple quadratic programming problem under a single tensor mode; according to the Lagrange multiplier method, the dual problem of the optimization framework of the objective function is obtained, and tensor rank-one decomposition is introduced into the calculation of the tensor inner product to obtain the modified dual problem;
- the sequential minimal optimization (SMO) algorithm is used to solve the modified dual problem, outputting the optimal combination of Lagrange multipliers and the offset scalar; the projection tensor W * is calculated and rank-one decomposed; the resulting components are back-projected; the inverse operation of rank-one decomposition is applied to the back-projected components to obtain the optimal projection tensor W corresponding to the training tensor data set;
- in the decision function construction stage, the rank-one-decomposed optimal projection tensor W and the offset scalar together form the decision function; in the application prediction stage, the tensor data to be predicted is rank-one decomposed and input into the decision function for prediction.
- FIG. 1 is a flowchart showing an implementation process of a supervised learning optimization method in a tensor mode according to Embodiment 1 of the present invention, which is described in detail as follows:
- step S101 the input training tensor data set is received.
- the training tensor data set is {X m , y m | m = 1, 2, …, M}, where X m represents the tensor input data and y m ∈ {+1, −1} indicates the label.
- the sample points are stored in the form of second-order tensors (matrices), and all sample points together form the input data set. Similarly, the label set is a column vector in which the position of each label corresponds to the position of its sample point.
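As an illustration only (the values below are placeholders, not data from the patent), the storage convention just described keeps each sample as a matrix paired with its label:

```python
# Each sample point is stored as a second-order tensor (a matrix), keeping
# its row/column structure instead of flattening it into a vector.
# Values are illustrative placeholders, not data from the patent.
training_set = [
    ([[1.0, 2.0], [3.0, 4.0]], +1),   # (tensor input X_m, label y_m)
    ([[4.0, 3.0], [2.0, 1.0]], -1),
]
tensors = [x for x, _ in training_set]
labels = [lab for _, lab in training_set]
```

Each label's position corresponds to the position of its sample, matching the column-vector label set described above.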
- step S102 an intra-class scatter matrix is introduced into the objective function such that the objective function maximizes the inter-class distance while minimizing the intra-class distance.
- the objective function optimization framework of the Support Tensor Machine (STM) problem is a combination of N vector-mode quadratic programming problems, each corresponding to a sub-problem, wherein the quadratic programming problem of the n-th sub-problem is:
- C is a penalty factor
- w (n) plays a Fisher-criterion role in the n-th order of the training tensor data set, maximizing the inter-class spacing.
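The intra-class scatter matrix introduced in this step has the standard Fisher-discriminant form S_w = Σ_c Σ_{x∈c} (x − μ_c)(x − μ_c)^T. As an illustration only (a vector-mode sketch with made-up data, not the patent's tensor-mode formula), it can be computed as:

```python
def intra_class_scatter(samples, labels):
    """Within-class scatter matrix S_w = sum over classes c of
    sum over samples x in c of (x - mu_c)(x - mu_c)^T (vector-mode form)."""
    d = len(samples[0])
    S = [[0.0] * d for _ in range(d)]
    for c in set(labels):
        members = [x for x, lab in zip(samples, labels) if lab == c]
        mu = [sum(col) / len(members) for col in zip(*members)]  # class mean
        for x in members:
            diff = [xi - mi for xi, mi in zip(x, mu)]
            for i in range(d):
                for j in range(d):
                    S[i][j] += diff[i] * diff[j]  # outer product accumulation
    return S

# Illustrative two-class data (not from the patent)
samples = [[0.0, 0.0], [2.0, 0.0], [5.0, 5.0], [5.0, 7.0]]
labels = [1, 1, -1, -1]
S_w = intra_class_scatter(samples, labels)
```

Minimizing a term weighted by this matrix is what lets the objective minimize the intra-class distance while the margin term maximizes the inter-class distance.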
- step S103 an optimization framework of the objective function of the optimal projection tensor OPSTM subproblem is constructed.
- the optimization framework of the objective function of the optimal projection tensor OPSTM problem is a combination of N vector-mode quadratic programming problems, each corresponding to a sub-problem, wherein the quadratic programming problem of the n-th sub-problem is:
- the n-th order projection vector of the training tensor data set, n = 1, 2, …, N; w (n) is the optimal projection vector of the n-th order of the training tensor data set in equation (1-4); and P (n) satisfies the stated relation, where E is the identity matrix.
- step S104 the quadratic programming sub-problems of the N vector patterns are transformed into multiple quadratic programming problems in a single tensor mode, and an optimization framework of the objective function of the OPSTM problem is constructed.
- W * is the projection tensor, and ⟨·,·⟩ is the inner product operator.
- the transformed objective function optimization framework is the objective function optimization framework of OPSTM problem.
- the number of model parameters is greatly reduced, overcoming the curse-of-dimensionality, overfitting, and small-sample problems that arise when vector-mode algorithms process tensor data.
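The parameter reduction can be made concrete with a back-of-the-envelope count (illustrative sizes, not figures from the patent): a vector-mode model on a flattened I1×I2 sample learns one weight per entry, while a rank-one projection tensor W = u ∘ v only needs the entries of its two factor vectors.

```python
# A vector-mode classifier on a flattened I1 x I2 sample must learn
# I1 * I2 weights; a rank-one projection tensor W = u ∘ v needs only
# I1 + I2 parameters (illustrative sizes, not from the patent).
I1, I2 = 100, 100
vector_mode_params = I1 * I2   # weights of w after vectorization
tensor_mode_params = I1 + I2   # entries of u plus entries of v
reduction_factor = vector_mode_params / tensor_mode_params
```

For a 100×100 sample this is a 50-fold reduction, which is why the tensor-mode formulation behaves better in small-sample settings.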
- step S105 according to the Lagrange multiplier method, the dual problem of the objective function optimization framework is obtained, and tensor rank-one decomposition is introduced into the calculation of the tensor inner product to obtain the modified dual problem.
- the dual problem of the optimization framework [(3-1), (3-2), (3-3)] of the objective function of the OPSTM problem is obtained, wherein α m is a Lagrange multiplier.
- the tensor CP (CANDECOMP/PARAFAC) decomposition is introduced into the calculation of the tensor inner product.
- the rank-one decomposition of the tensor data V i , V j is:
- tensor rank-one decomposition is introduced to assist the tensor inner product calculation, further reducing computational complexity and storage cost. Rank-one decomposition yields a more compact and meaningful representation of a tensor object, extracts the structural information and intrinsic correlations of tensor data more effectively, and avoids the time-consuming alternating projection iterations required by existing tensor-mode algorithms.
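The reason rank-one factors help is the identity ⟨V_i, V_j⟩ = Σ_p Σ_q Π_n ⟨v_p^(n), v_q^(n)⟩: the inner product of two tensors equals a sum of products of inner products of their factor vectors, so the full tensors never need to be formed. As an illustration only (a second-order sketch, not the patent's general N-order formula):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def inner_from_rank_one(factors_a, factors_b):
    """<V_i, V_j> from rank-one factors of two second-order tensors:
    if V_i = sum_p a_p ∘ b_p and V_j = sum_q c_q ∘ d_q, then
    <V_i, V_j> = sum_p sum_q (a_p . c_q) * (b_p . d_q)."""
    return sum(dot(a, c) * dot(b, d) for a, b in factors_a for c, d in factors_b)

def reconstruct(factors, rows, cols):
    """Rebuild the full matrix from its rank-one factors (for checking)."""
    T = [[0.0] * cols for _ in range(rows)]
    for a, b in factors:
        for i in range(rows):
            for j in range(cols):
                T[i][j] += a[i] * b[j]
    return T

def full_inner(T1, T2):
    return sum(x * y for r1, r2 in zip(T1, T2) for x, y in zip(r1, r2))

# Illustrative factors (not from the patent)
fa = [([1.0, 2.0], [3.0, 4.0]), ([0.0, 1.0], [1.0, 0.0])]
fb = [([2.0, 0.0], [1.0, 1.0])]
```

The factored evaluation agrees exactly with the entrywise inner product of the reconstructed tensors, while touching only the factor vectors.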
- step S107 the projection tensor W * is calculated.
- step S108 the projection tensor W * is subjected to rank-one decomposition.
- the projection tensor W * is subjected to rank-one decomposition to obtain
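The patent does not specify the decomposition routine; as an illustration only, for a second-order projection tensor the dominant rank-one factor can be found by alternating power iteration (the top singular pair), a common way to realize a rank-one decomposition step:

```python
def rank_one_factor(M, iters=200):
    """Best rank-one approximation of a matrix (second-order tensor) by
    alternating power iteration: M ≈ sigma * u ∘ v (dominant singular pair)."""
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]           # normalize left factor
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]           # normalize right factor
    sigma = sum(u[i] * M[i][j] * v[j] for i in range(rows) for j in range(cols))
    return sigma, u, v

# Demo on a matrix that is exactly rank one: W* = (2,1) ∘ (1,2)
W_star = [[2.0, 4.0], [1.0, 2.0]]
sigma, u, v = rank_one_factor(W_star)
```

For a higher-rank tensor the same step is repeated on the residual to accumulate further rank-one terms (the CP decomposition mentioned above).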
- step S109 the components obtained from the rank-one decomposition of the projection tensor W * are back-projected.
- step S110 the back-projected components are subjected to the inverse operation of rank-one decomposition to obtain the optimal projection tensor W corresponding to the training tensor data set.
- step S111 the decision function construction stage: the optimal projection tensor W is subjected to rank-one decomposition and, together with the offset scalar b, is used to construct the decision function:
- step S112 in the application prediction stage, the to-be-predicted tensor data is subjected to rank-one decomposition, and then input into a decision function to perform prediction.
- the tensor data to be predicted is subjected to rank-one decomposition, and then input into a decision function for prediction.
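Putting the last two steps together, prediction takes the form f(X) = sign(⟨W, X⟩ + b), with the inner product evaluated directly from the rank-one factors of W and of X. The factors and offset below are hypothetical values for illustration, not quantities from the patent:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decide(W_factors, X_factors, b):
    """f(X) = sign(<W, X> + b), with the inner product evaluated directly
    from the rank-one factors of W and of the tensor X to be predicted."""
    inner = sum(dot(a, c) * dot(p, q) for a, p in W_factors for c, q in X_factors)
    return 1 if inner + b >= 0 else -1

# Hypothetical factors: W = e1 ∘ e1 and offset b = -1 (illustrative only)
W_factors = [([1.0, 0.0], [1.0, 0.0])]
b = -1.0
```

Because both operands stay in factored form, prediction never materializes the full projection tensor.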
- the present embodiment has the following advantages: 1) converting the N vector-mode quadratic programming problems into a multiple quadratic programming problem under a single tensor mode, whose transformed objective function optimization framework is that of the OPSTM problem, greatly reduces the number of model parameters and overcomes the curse-of-dimensionality, overfitting, and small-sample problems of traditional vector-mode algorithms on tensor data, while delivering excellent classification performance.
- the algorithm provided by the embodiments of the present invention can process tensor data directly and efficiently in the tensor domain, has optimal classification ability, and offers strong practicability and generalization.
- 2) introducing rank-one decomposition into the tensor inner product calculation further reduces computational complexity and storage cost; rank-one decomposition yields a more compact and meaningful representation of a tensor object, extracts structural information and intrinsic correlations more effectively, and avoids time-consuming alternating projection iterations.
- the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of each process is determined by its function and internal logic and does not constitute any limitation on the implementation of the embodiments of the present invention.
- FIG. 2 is a block diagram showing a specific structure of a supervised learning optimization system in a tensor mode according to Embodiment 2 of the present invention.
- the supervised learning optimization system 2 in the tensor mode includes: a data receiving unit 21, an intra-class scatter introducing unit 22, a sub-problem optimization framework building unit 23, a problem optimization framework building unit 24, a dual problem obtaining unit 25, a dual problem solving unit 26, a projection tensor calculation unit 27, a projection tensor decomposition unit 28, a back projection unit 29, an optimal projection tensor calculation unit 210, a decision function construction unit 211, and a prediction unit 212.
- the data receiving unit 21 is configured to receive the input training tensor data set
- the intra-class scatter introduction unit 22 is configured to introduce an intra-class scatter matrix into the objective function, so that the objective function maximizes the inter-class distance while minimizing the intra-class distance;
- the sub-problem optimization framework building unit 23 is configured to construct an optimization framework of the objective function of the optimal projection tensor OPSTM sub-problem;
- the problem optimization framework construction unit 24 is configured to convert the quadratic programming sub-problems of the N vector patterns into multiple quadratic programming problems in a single tensor mode, and construct an optimization framework of the objective function of the OPSTM problem;
- the dual problem obtaining unit 25 is configured to obtain the dual problem of the optimization framework of the objective function according to the Lagrange multiplier method, and to introduce tensor rank-one decomposition into the calculation of the tensor inner product to obtain the modified dual problem;
- the dual problem solving unit 26 is configured to solve the modified dual problem using the sequential minimal optimization (SMO) algorithm, and to output the optimal combination of Lagrange multipliers and the offset scalar b;
- a projection tensor calculation unit 27 for calculating a projection tensor W * ;
- a projection tensor decomposition unit 28 for performing rank-one decomposition on the projection tensor W * ;
- a back projection unit 29, configured to perform back projection on a component obtained by performing rank-one decomposition on the projection tensor W * ;
- the optimal projection tensor calculation unit 210 is configured to perform a rank-one decomposition inverse operation on the component after the back projection, to obtain an optimal projection tensor W corresponding to the training tensor data set;
- the decision function construction unit 211 is configured, in the decision function construction stage, to construct the decision function from the rank-one-decomposed optimal projection tensor W together with the offset scalar b;
- the prediction unit 212 is configured, in the application prediction stage, to input the tensor data to be predicted, after rank-one decomposition, into the decision function for prediction.
- the intra-class scatter introducing unit 22 introduces the intra-class scatter matrix into the objective function of the STM sub-problem via the coefficient η
- the objective function of the quadratic programming problem of the n-th sub-problem becomes:
- w (n) is the n-th order optimal projection vector of the training tensor data set
- n = 1, 2, …, N
- C is a penalty factor
- It is a slack variable
- the coefficient η is used to measure the importance of the intra-class scatter matrix.
- the optimization framework of the objective function of the OPSTM problem is a combination of N vector-mode quadratic programming problems, each corresponding to a sub-problem, wherein the quadratic programming problem of the n-th sub-problem is:
- E is the identity matrix
- the projected tensor input data, obtained by projecting the tensor input data X m in the tensor data set along each order
- × i is the i-mode multiplication operator
- b (n) is the offset scalar of the n-th order of the training tensor data set.
- the problem optimization framework building unit 24, based on the foregoing formulas,
- the quadratic programming sub-problems of N vector patterns are transformed into multiple quadratic programming problems in a single tensor mode.
- the optimization framework of the objective function of the constructed OPSTM problem satisfies:
- the dual problem solving unit 26 obtains the dual problem of the optimization framework of the objective function according to the Lagrange multiplier method:
- the dual problem solving unit 26 introduces the tensor rank-one decomposition into the calculation of the tensor inner product, and the modified dual problem is:
- the projection tensor calculation unit 27 calculates the projection tensor W * according to the foregoing formula.
- the supervised learning optimization system 2 in the tensor mode provided by the embodiment of the present invention can be applied to the foregoing corresponding method embodiment 1.
- the disclosed systems, devices, and methods may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the unit is only a logical function division.
- there may be other division manners in practical implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the functions, if implemented in the form of software functional units and sold or used as standalone products, may be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Abstract
Description
Claims (12)
- A supervised learning optimization method in a tensor mode, characterized in that the method comprises: receiving an input training tensor data set; introducing an intra-class scatter matrix into the objective function so that the objective function maximizes the inter-class distance while minimizing the intra-class distance; constructing an optimization framework of the objective function of the optimal projection tensor machine OPSTM sub-problem; transforming the N vector-mode quadratic programming sub-problems into a multiple quadratic programming problem under a single tensor mode, and constructing an optimization framework of the objective function of the OPSTM problem; obtaining, according to the Lagrange multiplier method, the dual problem of the optimization framework of the objective function, and introducing tensor rank-one decomposition into the calculation of the tensor inner product to obtain a modified dual problem; solving the modified dual problem using the sequential minimal optimization SMO algorithm, and outputting the optimal combination of Lagrange multipliers and the offset scalar b; calculating the projection tensor W*; performing rank-one decomposition on the projection tensor W*; back-projecting the components obtained from the rank-one decomposition of the projection tensor W*; performing the inverse operation of rank-one decomposition on the back-projected components to obtain the optimal projection tensor W corresponding to the training tensor data set; in the decision function construction stage, constructing the decision function from the rank-one-decomposed optimal projection tensor W together with the offset scalar b; and in the application prediction stage, inputting the tensor data to be predicted, after rank-one decomposition, into the decision function for prediction.
- A supervised learning optimization system in a tensor mode, characterized in that the system comprises: a data receiving unit configured to receive an input training tensor data set; an intra-class scatter introducing unit configured to introduce an intra-class scatter matrix into the objective function so that the objective function maximizes the inter-class distance while minimizing the intra-class distance; a sub-problem optimization framework building unit configured to construct an optimization framework of the objective function of the optimal projection tensor machine OPSTM sub-problem; a problem optimization framework building unit configured to transform the N vector-mode quadratic programming sub-problems into a multiple quadratic programming problem under a single tensor mode and construct an optimization framework of the objective function of the OPSTM problem; a dual problem obtaining unit configured to obtain, according to the Lagrange multiplier method, the dual problem of the optimization framework of the objective function, and to introduce tensor rank-one decomposition into the calculation of the tensor inner product to obtain a modified dual problem; a dual problem solving unit configured to solve the modified dual problem using the sequential minimal optimization SMO algorithm and output the optimal combination of Lagrange multipliers and the offset scalar b; a projection tensor calculation unit configured to calculate the projection tensor W*; a projection tensor decomposition unit configured to perform rank-one decomposition on the projection tensor W*; a back projection unit configured to back-project the components obtained from the rank-one decomposition of the projection tensor W*; an optimal projection tensor calculation unit configured to perform the inverse operation of rank-one decomposition on the back-projected components to obtain the optimal projection tensor W corresponding to the training tensor data set; a decision function construction unit configured, in the decision function construction stage, to construct the decision function from the rank-one-decomposed optimal projection tensor W together with the offset scalar b; and a prediction unit configured, in the application prediction stage, to input the tensor data to be predicted, after rank-one decomposition, into the decision function for prediction.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11201609625WA SG11201609625WA (en) | 2015-12-04 | 2015-12-04 | Optimization method and system for supervised learning under tensor mode |
US15/310,330 US10748080B2 (en) | 2015-12-04 | 2015-12-04 | Method for processing tensor data for pattern recognition and computer device |
PCT/CN2015/096375 WO2017092022A1 (zh) | 2015-12-04 | 2015-12-04 | Supervised learning optimization method and system in tensor mode |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/096375 WO2017092022A1 (zh) | 2015-12-04 | 2015-12-04 | Supervised learning optimization method and system in tensor mode |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017092022A1 (zh) | 2017-06-08 |
Family
ID=58796112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/096375 WO2017092022A1 (zh) | 2015-12-04 | 2015-12-04 | Supervised learning optimization method and system in tensor mode |
Country Status (3)
Country | Link |
---|---|
US (1) | US10748080B2 (zh) |
SG (1) | SG11201609625WA (zh) |
WO (1) | WO2017092022A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107515843A (zh) * | 2017-09-04 | 2017-12-26 | 四川易诚智讯科技有限公司 | Anisotropic data compression method based on tensor approximation |
CN110555054A (zh) * | 2018-06-15 | 2019-12-10 | 泉州信息工程学院 | Data classification method and system based on a fuzzy double-hypersphere classification model |
CN114235411A (zh) * | 2021-12-28 | 2022-03-25 | 频率探索智能科技江苏有限公司 | Method for locating defects on a bearing outer ring |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10915663B1 (en) * | 2019-01-29 | 2021-02-09 | Facebook, Inc. | Systems and methods for protecting data |
US11107100B2 (en) | 2019-08-09 | 2021-08-31 | International Business Machines Corporation | Distributing computational workload according to tensor optimization |
US20210295176A1 (en) * | 2020-03-17 | 2021-09-23 | NEC Laboratories Europe GmbH | Method and system for generating robust solutions to optimization problems using machine learning |
CN111639243B (zh) * | 2020-06-04 | 2021-03-09 | 东北师范大学 | Visual analysis method for progressive multi-dimensional pattern extraction and anomaly detection in spatio-temporal data |
CN112395804B (zh) * | 2020-10-21 | 2022-02-18 | 青岛民航凯亚系统集成有限公司 | Cooling capacity allocation method for an aircraft secondary energy system |
CN114066720B (zh) * | 2021-11-01 | 2024-03-26 | 力度工业智能科技(苏州)有限公司 | Tensor-regression-based three-dimensional surface topography prediction method, device, and readable medium |
CN118035731B (zh) * | 2024-04-11 | 2024-06-25 | 深圳华建电力工程技术有限公司 | Electricity-use safety monitoring and early-warning method and service system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000035394A * | 1998-07-17 | 2000-02-02 | Shimadzu Corp | Scanning probe microscope |
CN103886329A * | 2014-03-21 | 2014-06-25 | 西安电子科技大学 | Polarimetric image classification method based on tensor-decomposition dimensionality reduction |
CN104361318A * | 2014-11-10 | 2015-02-18 | 中国科学院深圳先进技术研究院 | Disease diagnosis assistance system and method based on diffusion tensor imaging |
CN104850913A * | 2015-05-28 | 2015-08-19 | 深圳先进技术研究院 | Air quality PM2.5 prediction method and system |
CN105069485A * | 2015-08-26 | 2015-11-18 | 中国科学院深圳先进技术研究院 | Pattern recognition method based on an extreme learning machine in tensor mode |
CN105654110A * | 2015-12-04 | 2016-06-08 | 深圳先进技术研究院 | Supervised learning optimization method and system in tensor mode |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7805388B2 (en) * | 1998-05-01 | 2010-09-28 | Health Discovery Corporation | Method for feature selection in a support vector machine using feature ranking |
US7970718B2 (en) * | 2001-05-18 | 2011-06-28 | Health Discovery Corporation | Method for feature selection and for evaluating features identified as significant for classifying data |
US7589729B2 (en) * | 2002-05-15 | 2009-09-15 | Mental Images Gmbh | Image synthesis by rank-1 lattices |
US20050177040A1 (en) * | 2004-02-06 | 2005-08-11 | Glenn Fung | System and method for an iterative technique to determine fisher discriminant using heterogenous kernels |
US20070122041A1 (en) * | 2005-11-29 | 2007-05-31 | Baback Moghaddam | Spectral method for sparse linear discriminant analysis |
WO2009022946A1 (en) * | 2007-08-10 | 2009-02-19 | Michael Felsberg | Image reconstruction |
JP5506272B2 (ja) * | 2009-07-31 | 2014-05-28 | 富士フイルム株式会社 | Image processing apparatus and method, data processing apparatus and method, and program |
JP5161845B2 (ja) * | 2009-07-31 | 2013-03-13 | 富士フイルム株式会社 | Image processing apparatus and method, data processing apparatus and method, and program |
US8566268B2 (en) * | 2010-10-08 | 2013-10-22 | International Business Machines Corporation | System and method for composite distance metric leveraging multiple expert judgments |
US20140181171A1 (en) * | 2012-12-24 | 2014-06-26 | Pavel Dourbal | Method and system for fast tensor-vector multiplication |
AU2012258412A1 (en) * | 2012-11-30 | 2014-06-19 | Canon Kabushiki Kaisha | Combining differential images by inverse Riesz transformation |
US9008429B2 (en) * | 2013-02-01 | 2015-04-14 | Xerox Corporation | Label-embedding for text recognition |
US9099083B2 (en) * | 2013-03-13 | 2015-08-04 | Microsoft Technology Licensing, Llc | Kernel deep convex networks and end-to-end learning |
US9405124B2 (en) * | 2013-04-09 | 2016-08-02 | Massachusetts Institute Of Technology | Methods and apparatus for light field projection |
WO2015142923A1 (en) * | 2014-03-17 | 2015-09-24 | Carnegie Mellon University | Methods and systems for disease classification |
US9476730B2 (en) * | 2014-03-18 | 2016-10-25 | Sri International | Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics |
US20160004664A1 (en) * | 2014-07-02 | 2016-01-07 | Xerox Corporation | Binary tensor factorization |
US9754371B2 (en) * | 2014-07-31 | 2017-09-05 | California Institute Of Technology | Multi modality brain mapping system (MBMS) using artificial intelligence and pattern recognition |
EP3195604B1 (en) * | 2014-08-22 | 2023-07-26 | Nova Southeastern University | Data adaptive compression and data encryption using kronecker products |
EP3026588A1 (en) * | 2014-11-25 | 2016-06-01 | Inria Institut National de Recherche en Informatique et en Automatique | interaction parameters for the input set of molecular structures |
US9792492B2 (en) * | 2015-07-07 | 2017-10-17 | Xerox Corporation | Extracting gradient features from neural networks |
- 2015-12-04 SG SG11201609625WA patent/SG11201609625WA/en unknown
- 2015-12-04 WO PCT/CN2015/096375 patent/WO2017092022A1/zh active Application Filing
- 2015-12-04 US US15/310,330 patent/US10748080B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107515843A (zh) * | 2017-09-04 | 2017-12-26 | 四川易诚智讯科技有限公司 | Anisotropic data compression method based on tensor approximation |
CN107515843B (zh) * | 2017-09-04 | 2020-12-15 | 四川易诚智讯科技有限公司 | Anisotropic data compression method based on tensor approximation |
CN110555054A (zh) * | 2018-06-15 | 2019-12-10 | 泉州信息工程学院 | Data classification method and system based on a fuzzy double-hypersphere classification model |
CN110555054B (zh) * | 2018-06-15 | 2023-06-09 | 泉州信息工程学院 | Data classification method and system based on a fuzzy double-hypersphere classification model |
CN114235411A (zh) * | 2021-12-28 | 2022-03-25 | 频率探索智能科技江苏有限公司 | Bearing outer ring defect localization method |
Also Published As
Publication number | Publication date |
---|---|
SG11201609625WA (en) | 2017-07-28 |
US20170344906A1 (en) | 2017-11-30 |
US10748080B2 (en) | 2020-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017092022A1 (zh) | Supervised learning optimization method and system in tensor mode | |
US11501192B2 (en) | Systems and methods for Bayesian optimization using non-linear mapping of input | |
Kolouri et al. | Optimal mass transport: Signal processing and machine-learning applications | |
US20180349158A1 (en) | Bayesian optimization techniques and applications | |
Vannieuwenhoven et al. | A new truncation strategy for the higher-order singular value decomposition | |
US20180247193A1 (en) | Neural network training using compressed inputs | |
Hu et al. | Scalable bayesian non-negative tensor factorization for massive count data | |
CN110781970A (zh) | Classifier generation method, apparatus, device and storage medium | |
Khan et al. | Physics-informed feature-to-feature learning for design-space dimensionality reduction in shape optimisation | |
Jowaheer et al. | A BINAR (1) time-series model with cross-correlated COM–Poisson innovations | |
Wu et al. | Fractional spectral graph wavelets and their applications | |
Wang et al. | An adaptive two-stage dual metamodeling approach for stochastic simulation experiments | |
Niezgoda et al. | Unsupervised learning for efficient texture estimation from limited discrete orientation data | |
Gavval et al. | CUDA-Self-Organizing feature map based visual sentiment analysis of bank customer complaints for Analytical CRM | |
de Miranda Cardoso et al. | Learning bipartite graphs: Heavy tails and multiple components | |
Li et al. | An alternating nonmonotone projected Barzilai–Borwein algorithm of nonnegative factorization of big matrices | |
Meng et al. | An additive global and local Gaussian process model for large data sets | |
WO2016090625A1 (en) | Scalable web data extraction | |
Attigeri et al. | Analysis of feature selection and extraction algorithm for loan data: A big data approach | |
Motai et al. | Cloud colonography: distributed medical testbed over cloud | |
Chang et al. | A hybrid data-driven-physics-constrained Gaussian process regression framework with deep kernel for uncertainty quantification | |
Meng et al. | Parallel edge-based visual assessment of cluster tendency on GPU | |
Kudinov et al. | A hybrid language model based on a recurrent neural network and probabilistic topic modeling | |
JP2020030702A (ja) | Learning apparatus, learning method and learning program | |
Elanbari et al. | Advanced Computation of a Sparse Precision Matrix HADAP: A Hadamard-Dantzig Estimation of a Sparse Precision Matrix |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 15310330; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 11201609625W; Country of ref document: SG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15909536; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/09/2018) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15909536; Country of ref document: EP; Kind code of ref document: A1 |