CN112633514A - Multi-task function-to-function regression method - Google Patents

Multi-task function-to-function regression method

Info

Publication number
CN112633514A
CN112633514A
Authority
CN
China
Prior art keywords
function
regression
regularization
task
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011156334.XA
Other languages
Chinese (zh)
Inventor
谭琦 (Tan Qi)
杨沛 (Yang Pei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
South China Normal University
Original Assignee
South China University of Technology SCUT
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT, South China Normal University filed Critical South China University of Technology SCUT
Priority to CN202011156334.XA priority Critical patent/CN112633514A/en
Publication of CN112633514A publication Critical patent/CN112633514A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis


Abstract

The invention relates to the field of machine learning and discloses a multi-task function-to-function regression method comprising the following steps: S1, construct a function-to-function regression model based on a double basis-function expansion, and establish an objective function for the mapping from the independent variable function to the dependent variable function; S2, construct the objective function of a multi-task function-to-function regression model, which contains the regression coefficient matrices, and improve the performance of each regression task by cooperatively grouping and mining the hidden structure among the tasks; S3, constrain the regression coefficient matrices of step S2 with different sparsity regularization techniques; S4, optimize the final objective function to resolve its non-smoothness and non-independence. The multi-task function-to-function regression model based on structural sparsity has the advantage that the similarity of tasks and the clustering structure of the basis functions can be mined simultaneously and used to improve the performance of the functional regression system.

Description

Multi-task function-to-function regression method
Technical Field
The invention relates to the field of machine learning, in particular to a multi-task function-to-function regression method.
Background
Heterogeneous machine learning studies how to mine the relationships among heterogeneous data across multiple tasks, domains, views, and modalities in order to improve the performance of a single system. Because real-world data come from diverse sources, heterogeneity is a natural property of functional data. The development of heterogeneous machine learning has promoted the rapid development of functional data analysis, in which data are treated as continuous, infinite-dimensional objects rather than discrete finite-dimensional vectors. Heterogeneous machine learning fully exploits data heterogeneity and effectively improves the generalization performance of a machine learning system through knowledge sharing or the complementary use of heterogeneous data.
Feature deconstruction is one of the key challenges of machine learning. In traditional machine learning, researchers have proposed many mature methods for feature analysis and feature deconstruction, such as non-negative matrix factorization, principal component analysis, singular value decomposition, and canonical correlation analysis. However, the field of heterogeneous machine learning still lacks mature and effective feature deconstruction methods. On the other hand, differences in data distribution are a core research problem of heterogeneous learning (e.g., transfer learning, multi-task learning, and lifelong learning) and one of the main obstacles to knowledge sharing and transfer.
Functional data are data defined over one or more continuous domains (e.g., the time domain, spatial domain, spectral domain, or genetic locations). Because the feature space is infinite-dimensional, functional data have greater representational capacity. Data commonly used in machine learning systems (such as time series, images, audio, video, and text) can be represented within this framework. Functional data contain complex and diverse association information of two kinds: one is functionality, the characteristics (such as smoothness, periodicity, and sparsity) of functional data collected from a single data source; the other is heterogeneity, the correlations between functional data collected from different data sources (such as domain heterogeneity and task heterogeneity).
In the real world, many application problems in the field of machine learning can be cast as a mapping between two functions. However, few studies have addressed this setting. It is therefore necessary to provide a multi-task function-to-function regression method that resolves the problems in current research and helps advance artificial intelligence.
Disclosure of Invention
The present invention aims to overcome the above drawbacks of the prior art by providing a multi-task function-to-function regression method that improves the prediction performance of each functional regression task by mining the correlations between different tasks.
To achieve the above object, the present invention provides a multitask function-to-function regression method, comprising the steps of:
S1, constructing a function-to-function regression model based on a double basis-function expansion, and establishing an objective function for the mapping from the independent variable function to the dependent variable function;
S2, constructing the objective function of a multi-task function-to-function regression model, and mining the hidden structure among tasks through a cooperative grouping technique to improve the performance of each regression task, wherein this objective function contains the regression coefficient matrices;
S3, constraining the regression coefficient matrices of step S2 with different sparsity regularization techniques;
and S4, optimizing the final objective function to resolve its non-smoothness and non-independence.
Preferably, the function-to-function regression model in step S1 is as follows:

$$y(t)=\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x(s)\,\varphi_j(s)\,\mathrm{d}s+\varepsilon(t)$$

where x(s) and y(t) are the independent and dependent variable functions, respectively, ε(t) is the error function, W = (w_{jk}) is the regression coefficient matrix, and φ_j and θ_k are the basis functions over the independent and dependent variable domains, respectively.
Preferably, the objective function of the function-to-function regression model in step S1 is as follows:

$$\min_{W}\;\sum_{n=1}^{N}\Big\|y_n(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x_n(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(W)$$

where N is the number of training samples, λ is a trade-off parameter, and Ω(W) is a regularization term.
Preferably, the objective function of the multi-task function-to-function regression model in step S2 is as follows:

$$\min_{\{W_i\}}\;\sum_{i=1}^{T}\Big\|y_i(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w^{(i)}_{jk}\,\theta_k(t)\int_{S} x_i(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(\{W_i\})$$

where T is the number of function regression tasks, x_i(s) and y_i(t) are the independent and dependent variable functions of the i-th regression task, W_i = (w^{(i)}_{jk}) is the regression coefficient matrix of the i-th function regression task, and Ω({W_i}) is a regularization term.
Preferably, the sparsity regularization technique in step S3 is Lasso, L1 regularization, L2,1 regularization, or Schatten-norm regularization.
Preferably, step S3 includes structural sparsity regularization comprising two regularization terms: the first regularizes the task clustering, and the second regularizes the basis-function grouping.
Preferably, the optimization processing in step S4 is as follows: the objective function is transformed into a smooth function, and the coupled tasks are decoupled into independent sub-problems.
Compared with the prior art, the invention has the beneficial effects that:
1. the multi-task function-to-function regression problem studied by this method is a new learning problem over heterogeneous functional data that has not been considered by the prior art;
2. the multi-task function-to-function regression method itself is also novel, because it simultaneously models the correlations between tasks and between basis functions under cooperative grouping and structural sparsity, which the prior art does not provide;
3. the method effectively resolves the non-smoothness and non-independence caused by the structural sparsity of cooperative grouping, so it enjoys excellent properties such as separability, convexity, and global convergence, and can be widely applied in structural-sparsity regularization techniques.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a first schematic diagram of a multitasking function-to-function regression method flow provided by the present invention;
FIG. 2 is a second schematic diagram of a multitasking function-to-function regression method flow provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and 2, the present invention provides a multitask function-to-function regression method, comprising the steps of:
S1, constructing a function-to-function regression model based on a double basis-function expansion, and establishing an objective function for the mapping from the independent variable function to the dependent variable function;
S2, constructing the objective function of a multi-task function-to-function regression model, and mining the hidden structure among tasks through a cooperative grouping technique to improve the performance of each regression task, wherein this objective function contains the regression coefficient matrices;
S3, constraining the regression coefficient matrices of step S2 with different sparsity regularization techniques;
and S4, optimizing the final objective function to resolve its non-smoothness and non-independence.
The important steps in the overall process are further described below.
1. Doubly expanded basis-function system
We first construct a function-to-function regression model based on basis function extensions, whose goal is to create a mapping from the independent variable function (input function) to the dependent variable function (output function).
Let x(s) and y(t) be the independent and dependent variable functions, respectively. They may be defined over different continuous domains, i.e., s ∈ S and t ∈ T. We do not require the input and output functions to belong to the same domain, so as to accommodate different application scenarios. The function-to-function regression model we use is as follows:
$$y(t)=\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x(s)\,\varphi_j(s)\,\mathrm{d}s+\varepsilon(t)$$

where ε(t) is the error function, W = (w_{jk}) is the regression coefficient matrix, and φ_j and θ_k are the basis functions over the independent and dependent variable domains, respectively.
The objective function of the function-to-function regression model minimizes the regression loss for reconstructing the dependent variable function from the independent variable function, plus a regularization term on the regression coefficient matrix:

$$\min_{W}\;\sum_{n=1}^{N}\Big\|y_n(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x_n(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(W)$$

where N is the number of training samples, λ is a trade-off parameter, and Ω(W) is a regularization term.
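The double basis-function expansion above can be sketched numerically. In the following Python sketch, all names (`fourier_basis`, `fit_function_regression`, `predict_function`) and the ridge regularizer are illustrative assumptions, not the patent's own estimator: the integral ∫ x(s)φ_j(s) ds is discretized on a grid and W is obtained by regularized least squares.

```python
import numpy as np

def fourier_basis(grid, n_basis):
    """Evaluate a Fourier basis (constant, sines, cosines) on a grid in [0, 1]."""
    cols = [np.ones_like(grid)]
    k = 1
    while len(cols) < n_basis:
        cols.append(np.sin(2 * np.pi * k * grid))
        cols.append(np.cos(2 * np.pi * k * grid))
        k += 1
    return np.stack(cols[:n_basis], axis=1)      # shape (len(grid), n_basis)

def fit_function_regression(X, Y, s_grid, t_grid, j_basis=5, k_basis=5, lam=1e-2):
    """Estimate W in y(t) = sum_jk w_jk * theta_k(t) * int x(s) phi_j(s) ds + eps(t).

    X: (n, len(s_grid)) sampled input functions; Y: (n, len(t_grid)) sampled outputs.
    Returns (W, Phi, Theta) with W of shape (j_basis, k_basis).
    """
    Phi = fourier_basis(s_grid, j_basis)          # basis phi_j over domain S
    Theta = fourier_basis(t_grid, k_basis)        # basis theta_k over domain T
    ds = s_grid[1] - s_grid[0]
    Z = X @ Phi * ds                              # scores z_j = int x(s) phi_j(s) ds (Riemann sum)
    # Project sampled outputs onto the theta basis: Y ~ C @ Theta.T
    C = Y @ Theta @ np.linalg.inv(Theta.T @ Theta)
    # Ridge-regularized normal equations for the coefficient matrix W
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ C)
    return W, Phi, Theta

def predict_function(Xnew, W, Phi, Theta, s_grid):
    """Predict sampled output functions for new sampled input functions."""
    ds = s_grid[1] - s_grid[0]
    return (Xnew @ Phi * ds) @ W @ Theta.T
```

When both basis systems can represent the true input-output relationship, the reconstruction of the training outputs is near-exact for a small ridge penalty.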
2. Multi-task function-to-function regression
Based on the function-to-function regression model, a multi-task function-to-function regression model is further provided, and the performance of each regression task is improved by mining the hidden structure among tasks through a sparse regularization technology.
Assume that there are T function regression tasks, and let x_i(s) and y_i(t) be the independent and dependent variable functions of the i-th regression task, with W_i the regression coefficient matrix of the i-th task. We propose a multi-task function-to-function regression model based on structural sparsity. Its objective function minimizes the function regression loss of each task plus a structural sparsity regularization term:

$$\min_{\{W_i\}}\;\sum_{i=1}^{T}\Big\|y_i(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w^{(i)}_{jk}\,\theta_k(t)\int_{S} x_i(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(\{W_i\})$$

where Ω({W_i}) is the regularization term.
The purpose of this method is to model both the dependencies between tasks and the dependencies between basis functions, so both aspects must be considered. On the one hand, common basis functions fall into several classes (Fourier functions, spline functions, wavelet functions, functional principal components, and so on), with different types of basis functions suited to functional data of different properties. On the other hand, in many cases we can obtain prior knowledge about groups of tasks.
The cooperative grouping assumption is that the rows of the regression coefficient matrix W_i can be grouped according to the type of basis function, while its columns can be grouped according to a soft clustering of the tasks. Under soft clustering, the cluster groups of the regression tasks may overlap; that is, each task may belong to one or more cluster groups.
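As a minimal numeric sketch of the multi-task objective above, each task i contributes a reconstruction loss on its own coefficient matrix W_i plus a shared penalty Ω({W_i}). The function name and the plain Frobenius penalty are illustrative assumptions; the patent's regularizer is the structural sparsity term described below, not this one.

```python
import numpy as np

def multitask_objective(Zs, Cs, Ws, lam=1e-2):
    """Value of sum_i ||Z_i W_i - C_i||_F^2 + lam * Omega({W_i}).

    Z_i holds the basis scores of task i's input functions and C_i the basis
    coefficients of its output functions. Omega here is a Frobenius penalty
    for illustration only; a structural sparsity term would replace it.
    """
    loss = sum(np.sum((Z @ W - C) ** 2) for Z, C, W in zip(Zs, Cs, Ws))
    reg = sum(np.sum(W ** 2) for W in Ws)
    return loss + lam * reg
```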
3. Task and basis-function sparsity regularization
Depending on the specific application scenario, we use different sparsity regularization techniques to constrain the model parameters, such as Lasso, L1 regularization, L2,1 regularization, and Schatten regularization. In addition, implicit clustering structures exist among the tasks, and the basis functions may form different groups (such as wavelet functions, Fourier functions, and spline functions). We can therefore apply a structural sparsity constraint to the regression coefficient matrices. The advantage of the multi-task function-to-function regression model based on structural sparsity is that it can model the soft clustering of tasks and the clustering of basis functions simultaneously: it encourages similar tasks to choose similar basis functions or sets of basis functions. Besides feature selection, sparsity regularization also enhances the interpretability of the multi-task learning model.
The structural sparsity regularization of the method is as follows:

$$\Omega(\{W_i\})=\sum_{k=1}^{G}\sum_{b=1}^{B}\big\|W_b^{(k)}\big\|$$

where W^{(k)} is the block matrix formed by combining the regression coefficient matrices of all regression tasks in the k-th task cluster, W_b^{(k)} is the sub-matrix of W^{(k)} corresponding to the b-th group of basis functions, and ‖·‖ is a (generalized Schatten) matrix norm. Assume that there are G task clusters and B groups of basis functions in total.
The structural sparsity regularization technique includes two regularizations. The first is the regularization of task clustering, which encourages tasks in the same cluster to select similar basis functions and ensures cluster-specific sparsity by suppressing irrelevant basis functions. The second is the regularization of the basis-function groups, which ensures group sparsity of the basis functions by suppressing irrelevant groups. In this way, we can model both the dependencies between tasks and the dependencies between basis functions.
We use the generalized Schatten paradigm to model the soft clustering of tasks and the grouping of basis functions simultaneously, with the goal of minimizing, for each task, the loss of the regression model from the input function x_i(s) to the output function y_i(t). The generalized Schatten paradigm covers many sparse regularization models, such as the L2,p regularization model and the Schatten-p regularization model. The L2,p regularization model makes tasks in the same cluster select similar basis functions and suppresses irrelevant basis functions; the Schatten-p regularization model lets similar tasks share similar low-rank structures. Both produce weight matrices with sparse structure. Moreover, the generalized Schatten paradigm is a flexible mechanism for characterizing task dependencies: it can model the soft clustering of the regression tasks. For example, the multi-task function-to-function regression method presented here can learn task-dependent and task-independent correlations simultaneously; task-dependent correlations capture the specific attributes of each task, while task-independent correlations capture the attributes common to all tasks.
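The two kinds of penalties named above can be sketched as follows. `l21_norm` is the standard L2,1 norm (sum of the rows' L2 norms), and `structural_sparsity` sums a matrix norm over the task-cluster by basis-group blocks; the Frobenius norm stands in for the patent's generalized Schatten norm here, and both function names are illustrative assumptions.

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum over rows of each row's L2 norm.

    Penalizing it drives whole rows (i.e. whole basis functions) to zero jointly.
    """
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

def structural_sparsity(W_blocks):
    """Sum of block norms over task clusters k and basis-function groups b.

    W_blocks[k][b] is the sub-matrix of the k-th task cluster restricted to
    the b-th basis group. The Frobenius norm (NumPy's default matrix norm) is
    used here in place of a generalized Schatten norm.
    """
    return float(sum(np.linalg.norm(Wkb) for cluster in W_blocks for Wkb in cluster))
```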
4. Optimization processing of the method
Because the structural sparsity of cooperative grouping couples multiple tasks together, the objective is non-smooth and the tasks are not independent, so optimization processing is needed. To solve this, the objective function is first transformed into a smooth function; in this modified function, the tasks can then be decoupled into independent sub-problems. The two processes can be carried out simultaneously. The resulting optimization is convex and convergent, and applies to both the L2,p regularization model and the Schatten-p regularization model.
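One standard way to realize the smoothing-plus-decoupling idea described above is proximal gradient descent: the smooth loss is handled by a gradient step, while the non-smooth group penalty decouples row-wise into independent closed-form soft-thresholding updates. The sketch below is illustrative (it is not the patent's algorithm) and uses the group-lasso proximal operator.

```python
import numpy as np

def prox_group_rows(W, tau):
    """Proximal operator of tau * sum_j ||W[j, :]||_2 (group soft-thresholding).

    The penalty separates over rows, so each row gets an independent
    closed-form update: shrink its norm by tau, or zero the row entirely.
    """
    norms = np.sqrt(np.sum(W ** 2, axis=1, keepdims=True))
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

def prox_gradient_step(W, Z, C, lam, step):
    """One proximal gradient step on 0.5*||ZW - C||_F^2 + lam * row-group penalty."""
    grad = Z.T @ (Z @ W - C)                  # gradient of the smooth least-squares loss
    return prox_group_rows(W - step * grad, step * lam)
```

Iterating `prox_gradient_step` converges for a small enough step size, since the smoothed loss is convex and the proximal map is exact.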
In conclusion, the beneficial effects of the invention are as follows: first, the multi-task function-to-function regression problem studied by this method is a new learning problem over heterogeneous functional data that has not been considered by the prior art; second, the method itself is novel, because it simultaneously models the correlations between tasks and between basis functions under cooperative grouping and structural sparsity, which the prior art does not provide; in addition, the method effectively resolves the non-smoothness and non-independence caused by the structural sparsity of cooperative grouping, so it enjoys excellent properties such as separability, convexity, and global convergence, and can be widely applied in structural-sparsity regularization techniques.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. A multitasking function-to-function regression method, comprising the steps of:
S1, constructing a function-to-function regression model based on a double basis-function expansion, and establishing an objective function for the mapping from the independent variable function to the dependent variable function;
S2, constructing the objective function of a multi-task function-to-function regression model, and mining the hidden structure among tasks through a cooperative grouping technique to improve the performance of each regression task, wherein this objective function contains the regression coefficient matrices;
S3, constraining the regression coefficient matrices of step S2 with different sparsity regularization techniques;
and S4, optimizing the final objective function to resolve its non-smoothness and non-independence.
2. The multi-task function-to-function regression method of claim 1, wherein the function-to-function regression model in step S1 is as follows:

$$y(t)=\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x(s)\,\varphi_j(s)\,\mathrm{d}s+\varepsilon(t)$$

where x(s) and y(t) are the independent and dependent variable functions, respectively, ε(t) is the error function, W = (w_{jk}) is the regression coefficient matrix, and φ_j and θ_k are the basis functions.
3. The multi-task function-to-function regression method of claim 2, wherein the objective function of the function-to-function regression model in step S1 is as follows:

$$\min_{W}\;\sum_{n=1}^{N}\Big\|y_n(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w_{jk}\,\theta_k(t)\int_{S} x_n(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(W)$$

where N is the number of training samples, λ is a trade-off parameter, and Ω(W) is a regularization term.
4. The method of claim 3, wherein the objective function of the multi-task function-to-function regression model in step S2 is as follows:

$$\min_{\{W_i\}}\;\sum_{i=1}^{T}\Big\|y_i(t)-\sum_{j=1}^{J}\sum_{k=1}^{K} w^{(i)}_{jk}\,\theta_k(t)\int_{S} x_i(s)\,\varphi_j(s)\,\mathrm{d}s\Big\|_{2}^{2}+\lambda\,\Omega(\{W_i\})$$

where T is the number of function regression tasks, x_i(s) and y_i(t) are the independent and dependent variable functions of the i-th regression task, W_i = (w^{(i)}_{jk}) is the regression coefficient matrix of the i-th function regression task, and Ω({W_i}) is a regularization term.
5. The multi-task function-to-function regression method of claim 1, wherein the sparsity regularization technique in step S3 is Lasso, L1 regularization, L2,1 regularization, or Schatten-norm regularization.
6. The multi-task function-to-function regression method of claim 1, wherein step S3 includes structural sparsity regularization comprising two regularization terms: the first regularizes the task clustering, and the second regularizes the basis-function grouping.
7. The multi-task function-to-function regression method of claim 1, wherein the optimization processing in step S4 is as follows: the objective function is transformed into a smooth function, and the coupled tasks are decoupled into independent sub-problems.
CN202011156334.XA 2020-10-26 2020-10-26 Multi-task function-to-function regression method Pending CN112633514A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011156334.XA CN112633514A (en) 2020-10-26 2020-10-26 Multi-task function-to-function regression method


Publications (1)

Publication Number Publication Date
CN112633514A 2021-04-09

Family

ID=75303106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011156334.XA Pending CN112633514A (en) 2020-10-26 2020-10-26 Multi-task function-to-function regression method

Country Status (1)

Country Link
CN (1) CN112633514A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114696788A (en) * 2022-04-12 2022-07-01 电子科技大学 Multi-main-lobe interference resistant waveform and filter joint cognitive design method
CN114696788B (en) * 2022-04-12 2023-05-05 电子科技大学 Wave form and filter combined cognition design method for resisting multi-main lobe interference
CN115415851A (en) * 2022-10-08 2022-12-02 清华大学 Cutter health monitoring method based on functional data principal component analysis
CN115415851B (en) * 2022-10-08 2023-09-26 清华大学 Cutter health monitoring method based on functional data principal component analysis

Similar Documents

Publication Publication Date Title
WO2021190127A1 (en) Data processing method and data processing device
US20200026992A1 (en) Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system
Han et al. Signal processing and networking for big data applications
CN112633514A (en) Multi-task function-to-function regression method
CN115543639A (en) Optimization method for distributed execution of deep learning task and distributed system
CN112163601A (en) Image classification method, system, computer device and storage medium
US11567778B2 (en) Neural network operation reordering for parallel execution
CN114356540A (en) Parameter updating method and device, electronic equipment and storage medium
Li et al. Optimizing makespan and resource utilization for multi-DNN training in GPU cluster
CN113792621A (en) Target detection accelerator design method based on FPGA
CN113935489A (en) Variational quantum model TFQ-VQA based on quantum neural network and two-stage optimization method thereof
CN113537365A (en) Multitask learning self-adaptive balancing method based on information entropy dynamic weighting
Dong et al. Lambo: Large language model empowered edge intelligence
CN111753995A (en) Local interpretable method based on gradient lifting tree
Ma et al. Accelerating deep neural network filter pruning with mask-aware convolutional computations on modern CPUs
Singh et al. Exploiting Sparsity in Pruned Neural Networks to Optimize Large Model Training
Du In-machine-learning database: Reimagining deep learning with old-school SQL
WO2021238734A1 (en) Method for training neural network, and related device
CN113836174B (en) Asynchronous SQL (structured query language) connection query optimization method based on reinforcement learning DQN (direct-to-inverse) algorithm
He et al. ECS-SC: Long-tailed classification via data augmentation based on easily confused sample selection and combination
CN114120447A (en) Behavior recognition method and system based on prototype comparison learning and storage medium
Niu et al. A Novel Distributed Duration-Aware LSTM for Large Scale Sequential Data Analysis
Dai et al. Distributed Encoding and Updating for SAZD Coded Distributed Training
Suresh et al. Divisible load scheduling in distributed system with buffer constraints: Genetic algorithm and linear programming approach
CN112036546A (en) Sequence processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination