CN114386523A - Prediction method, device, equipment and medium based on multi-mode extreme learning machine - Google Patents

Prediction method, device, equipment and medium based on multi-mode extreme learning machine Download PDF

Info

Publication number
CN114386523A
CN114386523A CN202210047643.6A CN202210047643A CN114386523A CN 114386523 A CN114386523 A CN 114386523A CN 202210047643 A CN202210047643 A CN 202210047643A CN 114386523 A CN114386523 A CN 114386523A
Authority
CN
China
Prior art keywords
sample
learning machine
extreme learning
constructing
probability distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210047643.6A
Other languages
Chinese (zh)
Inventor
吕文君
李鲲
康宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Advanced Technology University of Science and Technology of China
Original Assignee
Institute of Advanced Technology University of Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Advanced Technology University of Science and Technology of China filed Critical Institute of Advanced Technology University of Science and Technology of China
Priority to CN202210047643.6A priority Critical patent/CN114386523A/en
Publication of CN114386523A publication Critical patent/CN114386523A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Neurology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a prediction method, a prediction device, prediction equipment and a prediction medium based on a multi-mode extreme learning machine, wherein the method comprises the following steps: acquiring training data and neuron weight parameters under probability distribution, and constructing a label vector corresponding to the training data; constructing a plurality of intermediate neurons under each probability distribution according to the weight parameters of the neurons; constructing composite characteristics of the training data under each probability distribution according to each plurality of intermediate neurons; calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics; and obtaining a sample to be predicted, and predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector to obtain a prediction result. The invention can be applied to solve data-driven modeling problems such as image classification, sequence prediction, geophysical, credit evaluation, etc. The method and the device solve the technical problem that the precision of the extreme learning machine model is unstable due to random mapping in the prior art.

Description

Prediction method, device, equipment and medium based on multi-mode extreme learning machine
Technical Field
The application relates to the technical field of big data, in particular to a prediction method, a prediction device, prediction equipment and a prediction medium based on a multi-mode extreme learning machine.
Background
With the continuous development of machine learning, various machine learning methods are presented at present, wherein an extreme learning machine model can be used for solving the problems of classification and regression, but random mapping of the extreme learning machine model is always a problem which is difficult to solve, that is, hidden layers obtained by sampling different probability distributions in a training process are different, so that different prediction performances are generated, and the probability distributions in the training process of the current extreme learning machine model are usually selected by hand depending on experience and have great uncertainty, so that the stability of the precision of the extreme learning machine model can be influenced. The invention can be applied to solve data-driven modeling problems such as image classification, sequence prediction, geophysical, credit evaluation, etc.
In an invention patent with the application number of CN201810811670.X, a computer-aided reference system and a method for fusing a multi-modal mammary gland image are disclosed, wherein extracted GLCM, LBP, HIS texture features and CNN depth features are subjected to serial fusion to construct a texture-depth fusion feature model. The multi-texture features are multi-modal and are suitable for the field of image classification, which needs to extract features in advance. However, in many fields, such as well log interpretation, the input is a feature composed of log values having actual physical meaning, and thus, various features cannot be extracted. The invention patent with the application number of CN202110418142.X discloses a service demand dynamic prediction method and a system based on multi-mode machine learning, which respectively extract the characteristics of text data and image data, share the characteristics and obtain a service expression vector used by a user, wherein the multi-mode is generated by two data forms of text and image. In the related inventions disclosed in application numbers CN202110470704.5 and CN202011191238.9, there are also multi-modal technical features. However, from the published technical literature, multi-modal machine learning is directed to the multi-source heterogeneous data itself. In fact, the uncertainty of feature mapping also brings about multiple modes, and if a wrong mode is selected, the learning precision is reduced, so that multiple modes need to be fused sufficiently to eliminate the instability of precision.
Disclosure of Invention
The application mainly aims to provide a prediction method, a prediction device, prediction equipment and a prediction medium based on a multi-mode extreme learning machine, and aims to solve the technical problem of unstable precision caused by random mapping in the existing extreme learning machine model.
In order to achieve the above object, the present application provides a prediction method based on a multi-modal extreme learning machine, including:
acquiring training data and neuron weight parameters under each probability distribution, and constructing a label vector corresponding to the training data;
constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
constructing composite characteristics of the training data under each probability distribution according to each plurality of intermediate neurons;
calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics;
and obtaining a sample to be predicted, and predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector to obtain a prediction result.
The application also provides a prediction device based on the multi-mode extreme learning machine, which comprises:
the acquisition module is used for acquiring training data and neuron weight parameters under each probability distribution and constructing a label vector corresponding to the training data;
the intermediate neuron constructing module is used for constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
a composite feature construction module, configured to construct a composite feature of the training data under each of the probability distributions according to each of the plurality of intermediate neurons;
the kernel matrix construction module is used for calculating second-order sample characteristics corresponding to the composite characteristics and constructing a kernel matrix corresponding to the second-order sample characteristics;
and the prediction module is used for obtaining a sample to be predicted, predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector, and obtaining a prediction result.
The present application further provides an electronic device, the electronic device including: the system comprises a memory, a processor and a program of the prediction method based on the multi-modal extreme learning machine, wherein the program of the prediction method based on the multi-modal extreme learning machine is stored in the memory and can run on the processor, and when the program of the prediction method based on the multi-modal extreme learning machine is executed by the processor, the steps of the prediction method based on the multi-modal extreme learning machine can be realized.
The present application also provides a computer-readable storage medium having stored thereon a program for implementing the multi-modal extreme learning machine-based prediction method, which when executed by a processor, implements the steps of the multi-modal extreme learning machine-based prediction method as described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the multimodal extreme learning machine based prediction method as described above.
Compared with the technical means that probability distribution is usually selected by hands by depending on experience in the training process of the current extreme learning machine model in the prior art, the method obtains training data and neuron weight parameters under each probability distribution, and constructs a label vector corresponding to the training data; according to the weight parameters of the neurons, constructing a plurality of intermediate neurons under each probability distribution, achieving the purpose of constructing hidden layers under different probability distributions, and further constructing composite characteristics of the training data under each probability distribution according to the plurality of intermediate neurons; the method comprises the steps of calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics, so that the purpose of fusing a plurality of neurons (hidden layers) under different probability distributions into the kernel matrix in a composite characteristic constructing mode is achieved, the kernel matrix is more stable and reliable, an extreme learning machine model constructed jointly based on the kernel matrix and a label vector is more stable and reliable, a sample to be predicted is obtained, and the sample to be predicted is predicted according to the extreme learning machine model constructed jointly based on the kernel matrix and the label vector to obtain a prediction result. The method can achieve the purpose of predicting based on a more stable and reliable extreme learning machine model, overcomes the technical defect that in the prior art, probability distribution is usually selected manually by experience in the training process of the current extreme learning machine model, and has great uncertainty, so that the stability of the precision of the extreme learning machine model can be influenced, and solves the technical problem that the precision of the extreme learning machine model is unstable due to random mapping.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow chart of a first embodiment of a prediction method based on a multi-modal extreme learning machine according to the present application;
fig. 2 is a schematic device structure diagram of a hardware operating environment related to a multi-modal extreme learning machine-based prediction method in an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a prediction method based on a multi-modal extreme learning machine, which is applied to federal participants, and in a first embodiment of the prediction method based on the multi-modal extreme learning machine, referring to fig. 1, the prediction method based on the multi-modal extreme learning machine comprises the following steps:
step S10, acquiring training data and neuron weight parameters under each probability distribution, and constructing a label vector corresponding to the training data;
step S20, constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
step S30, constructing composite characteristics of the training data under each probability distribution according to each plurality of intermediate neurons;
step S40, calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics;
and step S50, obtaining a sample to be predicted, and predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector to obtain a prediction result.
In this embodiment, it should be noted that the training data is used to construct an extreme learning machine model, where the extreme learning machine model is a neural network model, and the neural network model at least includes a neuron, where the neuron is a basic unit constituting a neural network; the neuron weight parameters may be input weight vectors corresponding to neurons and input bias coefficients corresponding to the neurons, the training data at least includes a training sample, one of the training samples corresponds to a sample label, and the label training is constructed based on the sample label corresponding to each of the training samples; the probability distribution is a probability distribution of neuron weight parameters, wherein a plurality of groups of input weight vectors and input bias coefficients exist, the input weight vectors and the input bias coefficients in each group respectively accord with corresponding probability distributions, and the probability distribution can be normal distribution.
Steps S10 to S50 include: acquiring training samples, input weight vectors and input bias coefficients under the probability distributions, and constructing corresponding label vectors according to sample labels corresponding to the training samples; constructing a plurality of intermediate neurons under each probability distribution according to a definition formula of the neurons, input weight vectors and input bias coefficients under each probability distribution, wherein the intermediate neurons form a hidden layer of a neural network; respectively inputting training samples into the plurality of neurons, calculating output characteristics of the training samples under each probability distribution, and fusing the output characteristics into composite characteristics; calculating second-order characteristics corresponding to the composite characteristics to obtain second-order sample characteristics corresponding to the composite characteristics; calculating a kernel matrix of an extreme learning machine model according to the second-order sample characteristics, and constructing the extreme learning machine model according to the kernel matrix and the label vector; and obtaining a sample to be predicted, inputting the sample to be predicted into the extreme learning machine model, outputting a classification label corresponding to the sample to be predicted to predict the sample to be predicted, and taking the classification label as a prediction result.
As an example, the specific implementation process of steps S10 to S50 is as follows:
s1: collecting a training data set, wherein the training data set at least comprises a training sample
Figure BDA0003472906970000061
d is the initial characteristic dimension of the training sample, and the sample label corresponding to the training sample is
Figure BDA0003472906970000062
Figure BDA0003472906970000063
Representing the real number field, and if n training samples are collected, the training data set is
Figure BDA0003472906970000064
xiFor the ith training sample, yiIs xiCorresponding label, label vector is defined as y ═ y1,…,yn]T
The following illustrates the actual meaning of the data set:
taking image classification as an example, the data set has n images, that is, feature vectors of the images can be extracted by using a common feature extraction method such as Scale-invariant feature transform (SIFT) and the like as samples, and if the classification target has c, the number of the samples is xiCorresponding label yiE {1, …, c }, y if one-hot coding is employediVector y writable in c-dimensioniThe corresponding tag vector Y is written in the form of a tag matrix, i.e. Y ═ Y1;…;yn](ii) a If the operation is required to be carried out in a one-hot coding mode, Y can be replaced by Y in the related operation. The task is a classification task.
Taking a well logging explanation as an example, a feature vector is formed by well logging curve values at each depth, that is, a sample is constructed, if a well has n depth points, n samples are obtained, and the corresponding labels are geological information, such as porosity. The task is a regression task.
S2: setting m different probability distributions, namely m different probability distributions, setting the number z of intermediate neurons to be more than 0, and setting the dimensionality r of the reduced sample to be more than 0; here, the m probability distributions may be set to normal distributions having different mean values and variances, or may be set to a plurality of probability distributions such as uniform distribution, bernoulli distribution, binomial distribution, and geometric distribution;
s3: randomly generating z input weight vectors
Figure BDA0003472906970000071
And input offset coefficient
Figure BDA0003472906970000072
Is recorded as w1,…,wzAnd b1,…,bzThen m groups can be obtained according to m different preset probability distributions
{(wi,bi),i=1,...,z}
Thereby defining
Figure BDA0003472906970000073
The ith interneuron excited by the training sample x under the jth probability distribution is recorded as
Figure BDA0003472906970000074
Wherein the content of the first and second substances,
Figure BDA0003472906970000075
and
Figure BDA0003472906970000076
respectively representing an ith input weight vector and an input offset coefficient which are randomly generated under the jth distribution; phi (x) denotes an interneuron;
s4: the composite feature of sample x is calculated as:
Figure BDA0003472906970000077
wherein, the ith column vector of A (x)
Figure BDA0003472906970000078
The second order features of sample x are then computed, i.e.:
Figure BDA0003472906970000079
wherein the content of the first and second substances,
Figure BDA00034729069700000710
‖·‖2represents a 2 norm;
s5: computing a kernel matrix
Figure BDA0003472906970000081
Its ith row and jth column element
Figure BDA0003472906970000082
Figure BDA0003472906970000083
Wherein the content of the first and second substances,
Figure BDA0003472906970000084
which represents the Hadamard product of the two,
Figure BDA0003472906970000085
and
Figure BDA0003472906970000086
the matrix is randomly generated, the elements of the matrix are 1 or-1, and the probability of generating positive and negative 1 is the same;
s6: sample to be predicted
Figure BDA0003472906970000087
Input to:
Figure BDA0003472906970000088
wherein the content of the first and second substances,
Figure BDA0003472906970000089
to represent
Figure BDA00034729069700000810
The M-P generalized inverse of (1), wherein,
Figure BDA00034729069700000811
the ith element of (a) is:
Figure BDA00034729069700000812
Figure BDA00034729069700000813
is that
Figure BDA00034729069700000814
F is the extreme learning machine model.
It should be noted that, the extreme learning machine model may be a machine learning model for performing image classification, that is, an image classification model, and steps S10 to S50 include: acquiring training image data and neuron weight parameters under each probability distribution, and constructing a label vector corresponding to the training image data; constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter; constructing a composite image characteristic of the training image data under each probability distribution according to each plurality of the intermediate neurons; calculating second-order image features corresponding to the composite image features, and constructing a kernel matrix corresponding to the second-order image features; and acquiring an image to be predicted, and carrying out image classification on the image to be predicted according to an image classification model jointly constructed by the kernel matrix and the tag vector to obtain an image classification result. Wherein the training image data at least comprises an image as a training sample. Furthermore, in the embodiment of the application, the hidden layers of the image classification models under different probability distributions are constructed, and a plurality of neurons (hidden layers) under different probability distributions are fused into the kernel matrix of the image classification model in a manner of constructing composite image features, so that the kernel matrix is more stable and reliable, and further, image classification is performed according to the image classification model constructed by the more stable and reliable kernel matrix, and therefore the situation that probability distribution is usually selected by hand depending on experience in the training process of the current extreme learning machine model and has great uncertainty, the stability of the precision of image classification of the extreme learning machine model is influenced can be avoided, and the precision stability of image classification by using the extreme learning machine model can be improved.
Compared with the technical means that probability distribution is usually selected by hands by depending on experience in the training process of the current extreme learning machine model in the prior art, the prediction method based on the multi-mode extreme learning machine obtains training data and neuron weight parameters under each probability distribution, and constructs a label vector corresponding to the training data; according to the weight parameters of the neurons, constructing a plurality of intermediate neurons under each probability distribution, achieving the purpose of constructing hidden layers under different probability distributions, and further constructing composite characteristics of the training data under each probability distribution according to the plurality of intermediate neurons; the method comprises the steps of calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics, so that the purpose of fusing a plurality of neurons (hidden layers) under different probability distributions into the kernel matrix in a composite characteristic constructing mode is achieved, the kernel matrix is more stable and reliable, an extreme learning machine model constructed jointly based on the kernel matrix and a label vector is more stable and reliable, a sample to be predicted is obtained, and the sample to be predicted is predicted according to the extreme learning machine model constructed jointly based on the kernel matrix and the label vector to obtain a prediction result. The method can achieve the purpose of predicting based on a more stable and reliable extreme learning machine model, overcomes the technical defect that in the prior art, probability distribution is usually selected manually by experience in the training process of the current extreme learning machine model, and has great uncertainty, so that the stability of the precision of the extreme learning machine model can be influenced, and solves the technical problem that the precision of the extreme learning machine model is unstable due to random mapping.
The embodiment of the present application further provides a prediction device based on the multi-modal extreme learning machine, where the prediction device based on the multi-modal extreme learning machine includes:
the acquisition module is used for acquiring training data and neuron weight parameters under each probability distribution and constructing a label vector corresponding to the training data;
the intermediate neuron constructing module is used for constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
a composite feature construction module, configured to construct a composite feature of the training data under each of the probability distributions according to each of the plurality of intermediate neurons;
the kernel matrix construction module is used for calculating second-order sample characteristics corresponding to the composite characteristics and constructing a kernel matrix corresponding to the second-order sample characteristics;
and the prediction module is used for obtaining a sample to be predicted, predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector, and obtaining a prediction result.
Optionally, the training data includes at least one training sample, and the neuron constructing module is further configured to:
constructing a plurality of intermediate neurons under each of the probability distributions according to each of the neuron weight parameters by using the following formula:
Figure BDA0003472906970000101
wherein φ (x) is the interneuron, x is the training sample, w is the input weight vector, b is the input bias coefficient,
Figure BDA0003472906970000102
d is the characteristic dimension of the training sample;
recording the ith neuron excited by the training sample x under the jth probability distribution as
Figure BDA0003472906970000103
Wherein the content of the first and second substances,
Figure BDA0003472906970000104
and
Figure BDA0003472906970000105
respectively representing the ith input weight vector generated under the jth probability distribution and the ith input bias coefficient generated under the jth probability distribution.
Optionally, the training data includes at least one training sample, and the composite feature construction module is further configured to:
constructing a composite feature of the training data under each of the probability distributions according to each of the plurality of interneurons using the following formula:
Figure BDA0003472906970000106
wherein A (x) is the composite feature,
Figure BDA0003472906970000107
Figure BDA0003472906970000108
the number of the ith neuron excited under the mth probability distribution of the training sample x is z, and the number of the neuron weight parameters under the probability distribution is z.
Optionally, the training data at least includes a training sample, and the kernel matrix constructing module is further configured to:
calculating the second-order sample characteristic corresponding to the composite characteristic by using the following formula:
Figure BDA0003472906970000111
wherein h (x) is the second order sample feature,
Figure BDA0003472906970000112
(x) is the composite feature, m is the number of probability distributions, and x is the training sample.
Optionally, the training data at least includes a training sample, and the kernel matrix constructing module is further configured to:
and constructing a kernel matrix corresponding to the second-order sample characteristic by using the following formula:
the kernel matrix
Figure BDA0003472906970000113
Row i and column j of
Figure BDA0003472906970000114
Figure BDA0003472906970000115
Wherein the content of the first and second substances,
Figure BDA0003472906970000116
which represents the Hadamard product of the two,
Figure BDA0003472906970000117
and
Figure BDA0003472906970000118
is a randomly generated matrix with elements of 1 or-1 and the probability of generating positive and negative 1 is the same, h (x) is the second-order sample feature, x is the training sample, r < m2And r is the dimensionality of the training sample after dimensionality reduction.
Optionally, the prediction module is further configured to:
predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector by using the following formula to obtain a prediction result:
Figure BDA0003472906970000119
wherein the content of the first and second substances,
Figure BDA00034729069700001110
representing the kernel matrix
Figure BDA00034729069700001111
Is the M-P generalized inverse of (a), y is the tag vector,
Figure BDA00034729069700001112
for the sample to be predicted, f is the extreme learning machine model, wherein,
Figure BDA00034729069700001113
wherein the content of the first and second substances,
Figure BDA00034729069700001114
for the sample to be predicted
Figure BDA00034729069700001115
Corresponding prediction result, xnThe nth sample feature in the sample to be predicted,
Figure BDA00034729069700001116
expressed as the second-order sample characteristic, h (x), corresponding to the sample to be predictedi) Is a second-order feature corresponding to the ith feature in the sample to be predicted,
Figure BDA00034729069700001117
which represents the Hadamard product of the two,
Figure BDA00034729069700001118
and
Figure BDA0003472906970000121
is a randomly generated matrix, the elements of which are 1 or-1, the probability of generating positive and negative 1 is the same, r < m2And r is after the dimension reduction of the training sampleDimension.
Optionally, the training data includes at least one training sample, and the obtaining module is further configured to:
obtaining a sample label corresponding to each training sample;
transposing vectors formed by the sample labels to obtain the label vectors, wherein the formula for obtaining the label vectors through calculation is as follows:
y=[y1,…,yn]T
wherein y is the label vector, y1To ynAre all the sample labels.
The prediction device based on the multi-modal extreme learning machine provided by the invention adopts the prediction method based on the multi-modal extreme learning machine in the embodiment, and solves the technical problem that the precision of an extreme learning machine model is unstable due to random mapping. Compared with the prior art, the beneficial effects of the prediction device based on the multi-modal extreme learning machine provided by the embodiment of the invention are the same as the beneficial effects of the prediction method based on the multi-modal extreme learning machine provided by the embodiment, and other technical features in the prediction device based on the multi-modal extreme learning machine are the same as those disclosed in the embodiment method, and are not repeated herein.
An embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the multi-modal extreme learning machine based prediction method in the first embodiment.
Referring now to FIG. 2, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the electronic device may include a processing apparatus (e.g., a central processing unit, a graphic processor, etc.) that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) or a program loaded from a storage apparatus into a Random Access Memory (RAM). In the RAM, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device, ROM and RAM are trained on each other via the bus. An input/output (I/O) interface is also connected to the bus.
Generally, the following systems may be connected to the I/O interface: input devices including, for example, touch screens, touch pads, keyboards, mice, image sensors, microphones, accelerometers, gyroscopes, and the like; output devices including, for example, Liquid Crystal Displays (LCDs), speakers, vibrators, and the like; storage devices including, for example, magnetic tape, hard disk, etc.; and a communication device. The communication means may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate an electronic device with various systems, it is to be understood that not all illustrated systems are required to be implemented or provided. More or fewer systems may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means, or installed from a storage means, or installed from a ROM. The computer program, when executed by a processing device, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
According to the electronic device provided by the invention, the technical problem of unstable precision of the extreme learning machine model due to random mapping is solved by adopting the prediction method based on the multi-mode extreme learning machine in the embodiment. Compared with the prior art, the beneficial effects of the electronic device provided by the embodiment of the invention are the same as the beneficial effects of the prediction method based on the multi-mode extreme learning machine provided by the embodiment, and other technical features of the electronic device are the same as those disclosed by the embodiment method, which are not repeated herein.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the foregoing description of embodiments, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
The present embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon for performing the method for multi-modal extreme learning based prediction in the first embodiment.
The computer readable storage medium provided by the embodiments of the present invention may be, for example, a USB flash disk, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or device, or any combination thereof. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present embodiment, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer-readable storage medium may be embodied in an electronic device; or may be present alone without being incorporated into the electronic device.
The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring training data and neuron weight parameters under each probability distribution, and constructing a label vector corresponding to the training data; constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter; constructing composite characteristics of the training data under each probability distribution according to each plurality of intermediate neurons; calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics; and obtaining a sample to be predicted, and predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector to obtain a prediction result.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + +, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the names of the modules do not in some cases constitute a limitation of the unit itself.
The computer-readable storage medium provided by the invention stores computer-readable program instructions for executing the multi-modal extreme learning machine-based prediction method, and solves the technical problem that the extreme learning machine model is unstable in precision due to random mapping. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the embodiment of the invention are the same as the beneficial effects of the prediction method based on the multi-modal extreme learning machine provided by the embodiment, and are not repeated herein.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the multimodal extreme learning machine based prediction method as described above.
The computer program product solves the technical problem that the accuracy of the extreme learning machine model is unstable due to random mapping. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the invention are the same as the beneficial effects of the prediction method based on the multi-modal extreme learning machine provided by the embodiment, and are not repeated herein.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A prediction method based on a multi-modal extreme learning machine is characterized by comprising the following steps:
acquiring training data and neuron weight parameters under each probability distribution, and constructing a label vector corresponding to the training data;
constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
constructing composite characteristics of the training data under each probability distribution according to each plurality of intermediate neurons;
calculating second-order sample characteristics corresponding to the composite characteristics, and constructing a kernel matrix corresponding to the second-order sample characteristics;
and obtaining a sample to be predicted, and predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector to obtain a prediction result.
2. The method as claimed in claim 1, wherein the neuron weight parameters comprise input weight vectors and input bias coefficients, the training data comprises at least one training sample, and the neuron weight parameters are used to construct a plurality of intermediate neurons under each probability distribution according to the following formula:
Figure FDA0003472906960000011
wherein φ (x) is the interneuron, x is the training sample, w is the input weight vector, b is the input bias coefficient,
Figure FDA0003472906960000012
d is the characteristic dimension of the training sample;
recording the ith neuron excited by the training sample x under the jth probability distribution as
Figure FDA0003472906960000013
Wherein the content of the first and second substances,
Figure FDA0003472906960000014
and
Figure FDA0003472906960000015
respectively representing the ith input weight vector generated under the jth probability distribution and the ith input bias coefficient generated under the jth probability distribution.
3. The method of claim 1, wherein the training data comprises at least one training sample, and the composite features of the training data under each probability distribution are constructed according to each of the plurality of interneurons by using the following formula:
Figure FDA0003472906960000021
wherein A (x) is the composite feature,
Figure FDA0003472906960000022
Figure FDA0003472906960000023
the number of the ith neuron excited under the mth probability distribution of the training sample x is z, and the number of the neuron weight parameters under the probability distribution is z.
4. The multi-modal extreme learning machine-based prediction method as claimed in claim 1, wherein the training data comprises at least one training sample, and the second-order sample feature corresponding to the composite feature is calculated by using the following formula:
Figure FDA0003472906960000024
wherein h (x) is the second order sample feature,
Figure FDA0003472906960000025
(x) is the composite feature, m is the number of probability distributions, and x is the training sample.
5. The multi-modal extreme learning machine-based prediction method as claimed in claim 1, wherein the training data at least comprises a training sample, and the kernel matrix corresponding to the second-order sample feature is constructed by using the following formula:
the kernel matrix
Figure FDA0003472906960000026
Row i and column j of
Figure FDA0003472906960000027
Figure FDA0003472906960000028
Wherein the content of the first and second substances,
Figure FDA0003472906960000029
which represents the Hadamard product of the two,
Figure FDA00034729069600000210
and
Figure FDA00034729069600000211
is a randomly generated matrix with elements of 1 or-1 and the probability of generating positive and negative 1 is the same, h (x) is the second-order sample feature, x is the training sample, r < m2And r is the dimensionality of the training sample after dimensionality reduction.
6. The multi-modal extreme learning machine-based prediction method as claimed in claim 1, wherein the prediction result is obtained by predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector by using the following formula:
Figure FDA0003472906960000031
wherein the content of the first and second substances,
Figure FDA0003472906960000032
representing the kernel matrix
Figure FDA0003472906960000033
Is the M-P generalized inverse of (a), y is the tag vector,
Figure FDA0003472906960000034
for the sample to be predicted, f is the extreme learning machine model, wherein,
Figure FDA0003472906960000035
wherein the content of the first and second substances,
Figure FDA0003472906960000036
for the sample to be predicted
Figure FDA0003472906960000037
Corresponding prediction result, xnThe nth sample feature in the sample to be predicted,
Figure FDA0003472906960000038
expressed as the second-order sample characteristic, h (x), corresponding to the sample to be predictedi) Is a second-order feature corresponding to the ith feature in the sample to be predicted,
Figure FDA0003472906960000039
which represents the Hadamard product of the two,
Figure FDA00034729069600000310
and
Figure FDA00034729069600000311
is a randomly generated matrix, the elements of which are 1 or-1, the probability of generating positive and negative 1 is the same, r < m2And r is the dimensionality of the training sample after dimensionality reduction.
7. The multi-modal extreme learning machine based prediction method as claimed in claim 1, wherein the training data comprises at least one training sample, and the step of constructing the label vector corresponding to the training data comprises:
obtaining a sample label corresponding to each training sample;
transposing vectors formed by the sample labels to obtain the label vectors, wherein the formula for obtaining the label vectors through calculation is as follows:
Figure FDA00034729069600000312
wherein y is the label vector, y1To ynAre all the sample labels.
8. A prediction device based on a multi-modal extreme learning machine is characterized by comprising:
the acquisition module is used for acquiring training data and neuron weight parameters under each probability distribution and constructing a label vector corresponding to the training data;
the intermediate neuron constructing module is used for constructing a plurality of intermediate neurons under each probability distribution according to each neuron weight parameter;
a composite feature construction module, configured to construct a composite feature of the training data under each of the probability distributions according to each of the plurality of intermediate neurons;
the kernel matrix construction module is used for calculating second-order sample characteristics corresponding to the composite characteristics and constructing a kernel matrix corresponding to the second-order sample characteristics;
and the prediction module is used for obtaining a sample to be predicted, predicting the sample to be predicted according to an extreme learning machine model jointly constructed by the kernel matrix and the label vector, and obtaining a prediction result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the multi-modal extreme learning machine based prediction method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program for implementing the multi-modal extreme learning machine-based prediction method, the program for implementing the multi-modal extreme learning machine-based prediction method being executed by a processor to implement the steps of the multi-modal extreme learning machine-based prediction method according to any one of claims 1 to 7.
CN202210047643.6A 2022-01-17 2022-01-17 Prediction method, device, equipment and medium based on multi-mode extreme learning machine Pending CN114386523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210047643.6A CN114386523A (en) 2022-01-17 2022-01-17 Prediction method, device, equipment and medium based on multi-mode extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210047643.6A CN114386523A (en) 2022-01-17 2022-01-17 Prediction method, device, equipment and medium based on multi-mode extreme learning machine

Publications (1)

Publication Number Publication Date
CN114386523A true CN114386523A (en) 2022-04-22

Family

ID=81200940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210047643.6A Pending CN114386523A (en) 2022-01-17 2022-01-17 Prediction method, device, equipment and medium based on multi-mode extreme learning machine

Country Status (1)

Country Link
CN (1) CN114386523A (en)

Similar Documents

Publication Publication Date Title
EP3398119B1 (en) Generative neural networks for generating images using a hidden canvas
KR102170199B1 (en) Classify input examples using comparison sets
US20230024382A1 (en) Video clip positioning method and apparatus, computer device, and storage medium
CN112115257B (en) Method and device for generating information evaluation model
GB2571825A (en) Semantic class localization digital environment
KR20210092147A (en) Method and apparatus for mining entity focus in text
CN106776673A (en) Multimedia document is summarized
CN112149699B (en) Method and device for generating model and method and device for identifying image
CN112734873B (en) Image attribute editing method, device, equipment and medium for countermeasure generation network
CN111897934A (en) Question and answer pair generation method and device
CN112037223B (en) Image defect detection method and device and electronic equipment
CN115222066A (en) Model training method and device, behavior prediction method and device, and storage medium
CN114357170A (en) Model training method, analysis method, device, equipment and medium
CN110188158B (en) Keyword and topic label generation method, device, medium and electronic equipment
CN114638411A (en) Carbon dioxide concentration prediction method, device, equipment and medium
US20220187486A1 (en) Computer system and data processing method
CN114118526A (en) Enterprise risk prediction method, device, equipment and storage medium
CN112115744A (en) Point cloud data processing method and device, computer storage medium and electronic equipment
WO2021104274A1 (en) Image and text joint representation search method and system, and server and storage medium
CN113742590A (en) Recommendation method and device, storage medium and electronic equipment
CN112307243B (en) Method and apparatus for retrieving images
CN110555861B (en) Optical flow calculation method and device and electronic equipment
CN116151961A (en) Credit risk prediction method, electronic device and readable storage medium
CN114386523A (en) Prediction method, device, equipment and medium based on multi-mode extreme learning machine
CN114511152A (en) Training method and device of prediction model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination