EP4254273A1 - Machine learning program, machine learning apparatus, and machine learning method - Google Patents

Machine learning program, machine learning apparatus, and machine learning method

Info

Publication number
EP4254273A1
Authority
EP
European Patent Office
Prior art keywords
training data
label
distribution
data
label distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP23153237.5A
Other languages
English (en)
French (fr)
Inventor
Takashi Katoh
Kento Uemura
Suguru YASUTOMI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Publication of EP4254273A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning

Definitions

  • The embodiment discussed herein is related to a machine learning program, a machine learning apparatus, and a machine learning method.
  • Transfer training is known in which a machine learning model trained in a certain domain (region) is reused in another domain, so that additional training is realized with a small amount of data.
  • Transductive training is also known, in which training is performed with labeled data and labels are then assigned to unlabeled data.
  • Transductive training is used, for example, in a case where labeled data created in an experimental environment is expanded to a plurality of application targets.
  • the features common to the domains are selected and created by using a distribution of the features as a clue, thereby suppressing the reduction of the accuracy due to features unique to an application source.
  • The distribution of the labels refers to the appearance frequency of the labels on a class-by-class basis, as illustrated by the sketch below.
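  • As a minimal illustration of this definition, the following sketch (Python, not taken from the patent) computes such a class-wise appearance frequency; the function name and the two-class example are illustrative assumptions.

```python
import numpy as np

def label_distribution(labels, num_classes):
    """Per-class appearance frequency of the labels, normalized to sum to 1."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts / counts.sum()

# Example with two classes (0 = company T, 1 = company R).
labels = np.array([0, 0, 0, 1, 0, 1, 0, 0])
print(label_distribution(labels, num_classes=2))  # [0.75 0.25]
```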
  • FIG. 11 is a diagram illustrating a comparison between label distributions of the transfer source domain and label distributions of the transfer target domain.
  • FIG. 11 indicates the label distributions of company T and company R in both the transfer source domain and the transfer target domain for a classification model that presumes a manufacturer name from an image of an automobile.
  • Company T is a Japanese manufacturer, and company R is a French manufacturer.
  • At sign A1 and sign B1 of FIG. 11, both the label distribution of the transfer source domain and the label distribution of the transfer target domain are indicated.
  • Sign A1 denotes an example in which both the transfer source domain and the transfer target domain are Japan.
  • At sign A1, label distributions are formed in which company T exceeds company R in both the transfer source domain and the transfer target domain, and the label distributions in the two domains are substantially the same.
  • sign B1 denotes an example in which the transfer source domain is Japan and the transfer target domain is France.
  • a label distribution is formed in which company T exceeds company R in the transfer source domain.
  • A label distribution is formed in which company R exceeds company T in the transfer target domain. That is, there is a difference in the label distribution between the transfer source domain and the transfer target domain.
  • At sign A2 and sign B2 of FIG. 11, the distributions of the features in the transfer source domain and the transfer target domain are indicated.
  • Sign A2 denotes an example in which both the transfer source domain and the transfer target domain are Japan
  • sign B2 denotes an example in which the transfer source domain is Japan and the transfer target domain is France.
  • When the distributions of labels in the transfer source data and the transfer target data coincide with each other, the distributions of features in the transfer source data and the transfer target data also coincide with each other, as indicated by sign A2, and the transfer succeeds without a reduction of model accuracy in the transfer target.
  • On the other hand, when there is a difference in the label distribution as indicated by sign B2, the machine learning model is biased by the influence of the label distribution of the transfer source, and the estimation accuracy of the machine learning model is reduced.
  • Transductive training is originally premised on the condition that the label distributions of the transfer source and the transfer target are similar to each other. Thus, when transfer to a domain having a different label distribution is attempted, this assumption of the transfer training does not hold.
  • an object of the present disclosure is to improve accuracy of a classification model trained by transductive transfer training.
  • A machine learning program causes at least one computer to execute a process, the process including: estimating a first label distribution, which is a label distribution of unlabeled training data, based on a classification model and an initial value of a label distribution of a transfer target domain, the classification model having been trained by using labeled training data corresponding to a transfer source domain and unlabeled training data corresponding to the transfer target domain; acquiring a second label distribution based on the labeled training data; acquiring a weight of each label included in at least one training data selected from the labeled training data and the unlabeled training data, based on a difference between the first label distribution and the second label distribution; and re-training the classification model by using the labeled training data and the unlabeled training data in which the weight of each label is reflected.
  • the accuracy of the classification model trained by the transductive transfer training may be improved.
  • FIG. 1 is a diagram schematically illustrating a functional configuration of an information processing apparatus 1 as an example of the embodiment.
  • the information processing apparatus 1 realizes transductive training with respect to data classification that uses a machine learning model.
  • Transductive transfer training is transfer training of a classification model (machine learning model) using labeled training data corresponding to a transfer source domain (transfer source data) and unlabeled training data corresponding to a transfer target domain (transfer target data).
  • the information processing apparatus 1 trains the machine learning model with the labeled data in the first domain that is the transfer source (transfer source domain) and labels the unlabeled data in the transfer target domain by using this machine learning model.
  • the information processing apparatus 1 includes an encoder 101, a classifier 102, a first training control unit 103, a second training control unit 104, a third training control unit 105, and a fourth training control unit 106.
  • the encoder 101 and the classifier 102 are included in the machine learning model.
  • the information processing apparatus 1 weights data in the transfer source domain (transfer source data) or data in the transfer target domain (transfer target data) to reduce a difference (label distribution difference) between a label distribution of the transfer source data and a label distribution of the transfer target data. After that, transfer training is performed so that the distribution of features of the transfer source data coincides with the distribution of features of the transfer target data.
  • FIG. 2 is a diagram for explaining the reduction of the label distribution difference between the transfer source domain and the transfer target domain in the information processing apparatus 1 as the example of the embodiment.
  • In FIG. 2, sign A indicates a state in which a label distribution difference exists between the transfer source domain and the transfer target domain, sign B indicates a state in which the label distribution difference is reduced by weighting the data, and sign C indicates the difference in the feature distribution between the transfer source data and the transfer target data.
  • FIG. 2 indicates label distributions of company T and company R in both the transfer source domain and the transfer target domain with respect to a classification model that presumes a manufacturer name from an image of an automobile.
  • company T is a Japanese manufacturer and company R is a French manufacturer.
  • Each of signs A and B indicates an example in which the transfer source domain is Japan and the transfer target domain is France.
  • Company T exceeds company R in the transfer source domain, whereas company R exceeds company T in the transfer target domain.
  • The second training control unit 104, which is to be described later, calculates the degree of influence of each piece of data such that the difference between the label distribution of the transfer source data and the estimated label distribution of the transfer target data is reduced (such that the label distribution of the transfer source data coincides with the estimated label distribution of the transfer target data), and weights the transfer source data accordingly. In other words, by weighting the transfer source data, the degree of influence on the machine learning is adjusted so as to coincide with the label distribution of the transfer target data.
  • the degree of influence is an index indicating the degree of influence exerted, by the difference in label distribution between the transfer source domain and the transfer target domain, on label estimation using machine learning or a trained machine learning model.
  • the label distribution difference between the transfer source domain and the transfer target domain is resolved.
  • the transfer training is performed so that the distribution of the features of the transfer source data coincides with the distribution of the features of the transfer target data.
  • this information processing apparatus 1 estimates the label distribution of the transfer target data based on the following characteristics (1) to (3).
  • The information processing apparatus 1 performs transductive transfer training by simultaneously and repeatedly applying label distribution estimation based on the output of the machine learning model and training that causes the output distributions of the encoder 101 to coincide with each other by using data weighted by the estimated label distribution.
  • the encoder 101 performs feature extraction (calculation of features) on data having been input thereto (input data).
  • the encoder 101 may perform feature extraction by using various known techniques, and specific description of the techniques is omitted.
  • a label (correct answer data) is assigned to the data in the transfer source domain (hereinafter referred to as transfer source data).
  • the transfer source data may also be referred to as transfer source labeled data.
  • a combination of the transfer source data and the label (correct answer data) may be referred to as a transfer source data set.
  • the information processing apparatus 1 trains the machine learning model by using a training data set including a plurality of such transfer source data sets.
  • transfer source data is denoted by sign Xs.
  • the label (transfer source label) assigned to the transfer source data is denoted by sign Ys.
  • the data in the transfer target domain (hereinafter referred to as transfer target data) is denoted by reference sign Xt.
  • the encoder 101 calculates features Zs based on the transfer source data Xs and features Zt based on the transfer target data Xt.
  • the classifier 102 classifies the input data based on the features calculated by the encoder 101.
  • the classifier 102 may classify the input data by using various known techniques, and specific description of the techniques is omitted.
  • a classification result for transfer source data is denoted by sign Ys'.
  • a classification result for transfer target data is denoted by sign Yt'.
  • the first training control unit 103 compares a prior label distribution (prior-dist.) with an estimated label distribution (estimated-dist.) and updates a value of the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the prior label distribution (prior-dist.).
  • the estimated label distribution (estimated-dist.) is an estimated value of a label distribution in the transfer target domain and is a label distribution of unlabeled training data.
  • a predicted value of the label distribution in the transfer target domain is set in advance as the prior label distribution (prior-dist.).
  • A plurality of pieces of data labeled with the correct answer data in the transfer target domain may be prepared, and the distribution of the labels of the plurality of pieces of transfer target labeled data may be used.
  • the number of the pieces of the transfer target labeled data may be small.
  • The prior label distribution is not limited to the above description and may be appropriately changed in implementation. For example, a user may arbitrarily input the prior label distribution. As the prior label distribution (prior-dist.), the labels may be set to a uniform distribution, or the distribution of the labels may be set randomly.
  • a value of the prior label distribution may be used as an initial value of the estimated label distribution (estimated-dist.).
  • the first training control unit 103 updates (trains) the estimated label distribution (estimated-dist.) so as to reduce the distribution difference from the prior label distribution estimated from the transfer target labeled data or assigned by the user.
  • the first training control unit 103 may use Kullback-Leibler divergence (KL-divergence) to compare the prior label distribution (prior-dist.) and the estimated label distribution (estimated-dist.). For example, the first training control unit 103 may update the estimated label distribution (estimated-dist.) so as to reduce the value of the KL-divergence.
  • Update of the estimated label distribution (estimated-dist.) by the first training control unit 103 may be realized by using a known technique. For example, in updating the estimated label distribution (estimated-dist.), the first training control unit 103 may add or subtract a predetermined value to or from a value of a distribution of a specific label in the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the prior label distribution (prior-dist.).
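  • As a hedged sketch of this comparison and update, the snippet below measures the KL-divergence between the two distributions and nudges the estimated label distribution toward the prior label distribution; the direction of the divergence, the step size, and the linear-interpolation update rule are illustrative assumptions rather than a procedure prescribed by the patent.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def step_towards(estimated, target, step=0.1):
    """Move the estimated label distribution slightly toward the target and
    re-normalize, which reduces the divergence between the two."""
    updated = (1.0 - step) * estimated + step * target
    return updated / updated.sum()

prior_dist = np.array([0.3, 0.7])       # prior label distribution (prior-dist.)
estimated_dist = np.array([0.5, 0.5])   # estimated label distribution (estimated-dist.)

before = kl_divergence(prior_dist, estimated_dist)
estimated_dist = step_towards(estimated_dist, prior_dist)
after = kl_divergence(prior_dist, estimated_dist)
assert after < before  # the estimated distribution moved closer to the prior
```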
  • the first training control unit 103 estimates the estimated label distribution (estimated-dist.) that is the label distribution of the unlabeled training data.
  • the second training control unit 104 weights the data of the transfer source.
  • the second training control unit 104 calculates (measures) the label distribution in the transfer source domain (transfer source label distribution).
  • the second training control unit 104 calculates the weight of the data on a label-by-label basis.
  • the weight may be set for the transfer source data or the transfer target data. An example in which the second training control unit 104 performs weighting on the transfer source data is described below.
  • the second training control unit 104 calculates the degree of influence of each piece of data so as to reduce the difference (distribution difference) between the label distribution of the weighted transfer source data and the estimated label distribution of the transfer target.
  • For example, where ps denotes the proportion of the data of a label L in the transfer source data and pt denotes the proportion of the label L in the estimated label distribution of the transfer target, the second training control unit 104 weights the transfer source data such that the degree of influence of the transfer source data of the label L is multiplied by pt/ps.
  • In other words, the second training control unit 104 calculates the weight related to an object label by reflecting, in the degree of influence of the object label, the ratio between the proportion of the data of the object label in the transfer source data (labeled training data) and the proportion of the data for which the object label is estimated in the transfer target data (unlabeled training data).
  • The second training control unit 104 performs a similar process on each label while sequentially changing the object label, thereby setting the degree of influence of each piece of transfer source data so as to obtain coincidence with the estimated label distribution of the transfer target, as sketched below.
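  • The following is a minimal sketch of this per-label weighting (the helper names are hypothetical): ps is measured from the transfer source labels, pt is taken from the estimated label distribution of the transfer target, each label receives the weight pt/ps, and the per-label weights are then expanded to one weight per transfer source sample.

```python
import numpy as np

def per_label_weights(source_labels, estimated_target_dist, num_classes, eps=1e-12):
    """Weight pt / ps for each label so that the weighted transfer source data
    follows the estimated label distribution of the transfer target."""
    counts = np.bincount(source_labels, minlength=num_classes).astype(float)
    ps = counts / counts.sum()                     # transfer source label distribution
    pt = np.asarray(estimated_target_dist, float)  # estimated target label distribution
    return pt / np.clip(ps, eps, None)

def per_sample_weights(source_labels, label_weights):
    """Expand the per-label weights to one weight per transfer source sample."""
    return label_weights[source_labels]

source_labels = np.array([0, 0, 0, 1])        # 75% label 0, 25% label 1 in the source
estimated_dist = np.array([0.4, 0.6])         # estimated label distribution of the target
w = per_label_weights(source_labels, estimated_dist, num_classes=2)
print(w)                                      # [0.533... 2.4]
print(per_sample_weights(source_labels, w))   # [0.533... 0.533... 0.533... 2.4]
```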
  • the second training control unit 104 performs training (machine learning) on the machine learning model (the encoder 101 and the classifier 102) by using the training data set.
  • the machine learning model is configured by using a neural network.
  • The machine learning model performs classification (labeling) of the input data.
  • The neural network may be a hardware circuit, or may be a virtual network implemented by software in which layers virtually built in a computer program are coupled to one another by a processor 11 to be described later (see FIG. 3).
  • the neural network may be referred to as "NN".
  • the second training control unit 104 performs the training on the machine learning model (the encoder 101 and the classifier 102) by using the transfer source data (Xs) weighted as described above and the correct answer data (Ys) corresponding to the transfer source data (Xs). For example, the second training control unit 104 repeatedly performs a process of updating parameters (training parameters) of the neural network of the machine learning model (the encoder 101 and the classifier 102) so as to reduce an error between the classification result (Ys') which is an output of the machine learning model and the correct answer data (Ys).
  • the second training control unit 104 updates the training parameters of the classifier 102 and the encoder 101 so as to reduce the classification error of the weighted transfer source data, thereby realizing the training of the machine learning model.
  • the second training control unit 104 performs a training process by reflecting the updated estimated label distribution (estimated-dist.) as the weight in the training data corresponding to the transfer source domain or the training data corresponding to the transfer target domain.
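  • The following is a minimal, hedged sketch of such a weighted training step in PyTorch; the two-layer architecture, the optimizer, and all dimensions are illustrative assumptions standing in for the encoder 101 and the classifier 102.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # stand-in for the encoder 101
classifier = nn.Linear(32, 2)                            # stand-in for the classifier 102
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def weighted_training_step(xs, ys, sample_weights):
    """One parameter update on the weighted transfer source data (Xs, Ys).
    sample_weights carries the per-label weights pt / ps expanded per sample."""
    zs = encoder(xs)                                       # features Zs
    logits = classifier(zs)                                # classification result Ys'
    per_sample_loss = F.cross_entropy(logits, ys, reduction="none")
    loss = (sample_weights * per_sample_loss).mean()       # weighted classification error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

xs = torch.randn(8, 16)              # dummy transfer source data Xs
ys = torch.randint(0, 2, (8,))       # correct answer data Ys (transfer source labels)
weights = torch.ones(8)              # per-sample weights, e.g. pt / ps per label
weighted_training_step(xs, ys, weights)
```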
  • The third training control unit 105 measures the label distribution of the classification result (Yt') for the transfer target data (output label distribution).
  • the third training control unit 105 compares the output label distribution with the estimated label distribution (estimated-dist.) and updates the value of the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the output label distribution.
  • This information processing apparatus 1 uses this estimated label distribution (estimated-dist.) as the label distribution of the transfer target data. The reason for this is that the output label distribution of the machine learning model in a case where the transfer is successful may be regarded as the label distribution of the transfer target data.
  • the third training control unit 105 updates (trains) the estimated label distribution (estimated-dist.) so as to reduce the distribution difference from the distribution of the output label (output label distribution) of the classifier 102 for the transfer target data.
  • the third training control unit 105 may use KL-divergence to compare the output label distribution and the estimated label distribution (estimated-dist.).
  • the update of the estimated label distribution (estimated-dist.) by the third training control unit 105 may be realized by using a known technique. For example, in updating the estimated label distribution (estimated-dist.), the third training control unit 105 may add or subtract a predetermined value to or from a value of a distribution of a specific label in the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the output label distribution. The third training control unit 105 updates the estimated label distribution (estimated-dist.) so as to reduce the value of the KL-divergence.
  • the third training control unit 105 updates the estimated label distribution (estimated-dist.) so as to reduce the difference between the estimated label distribution (estimated-dist.) and the label distribution (output distribution of the model: Yt') obtained from an estimation result by the machine learning model (classification model).
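  • One plausible way to obtain this output label distribution, sketched below under the assumption that the classifier outputs class probabilities for the transfer target data, is to average those probabilities class by class; the estimated label distribution (estimated-dist.) is then nudged toward the result with the same kind of divergence-reducing step as sketched above for the prior label distribution.

```python
import numpy as np

def output_label_distribution(yt_probabilities):
    """Output label distribution: class-wise mean of the classifier's predicted
    probabilities (Yt') over the transfer target data Xt."""
    return np.asarray(yt_probabilities, float).mean(axis=0)

# Yt': per-sample class probabilities predicted for three transfer target samples.
yt_prime = np.array([[0.9, 0.1],
                     [0.2, 0.8],
                     [0.3, 0.7]])
print(output_label_distribution(yt_prime))  # [0.4667 0.5333]
```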
  • the fourth training control unit 106 trains (performs the machine learning on) the encoder 101 so as to reduce the difference (distribution difference) between the distribution of the features of the transfer source data (Zs) weighted as described above and the distribution of the features of the transfer target data (Zt).
  • The fourth training control unit 106 may use, for example, maximum mean discrepancy (MMD) to compare the distribution of the features of the weighted transfer source data with the distribution of the features of the transfer target data.
  • the second training control unit 104 sets the weight of the transfer source data on a label-by-label basis so as to reduce the distribution difference between the transfer source label distribution and the estimated label distribution (estimated-dist.). Measurement of the difference (distribution difference) between the distribution of the features Zs of the weighted transfer source data and the distribution of the features Zt of the transfer target data by using MMD as described above may be referred to as a weighted-MMD.
  • the fourth training control unit 106 updates (trains) the parameter (training parameter) of the encoder 101 so as to reduce the difference (distribution difference) between the distribution of the features Zs of the weighted transfer source data and the distribution of the features Zt of the transfer target data.
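  • The snippet below is a hedged sketch of such a weighted-MMD estimate; the Gaussian (RBF) kernel and its bandwidth are illustrative assumptions, since the patent text does not specify the kernel. During training, the parameters of the encoder 101 would be updated so as to reduce this value.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of feature vectors."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def weighted_mmd2(zs, zt, ws, gamma=1.0):
    """Squared MMD between the weighted source features Zs and the target
    features Zt; ws are the per-sample weights of the transfer source data."""
    ws = np.asarray(ws, float)
    ws = ws / ws.sum()
    wt = np.full(len(zt), 1.0 / len(zt))
    k_ss = rbf_kernel(zs, zs, gamma)
    k_tt = rbf_kernel(zt, zt, gamma)
    k_st = rbf_kernel(zs, zt, gamma)
    return float(ws @ k_ss @ ws + wt @ k_tt @ wt - 2.0 * ws @ k_st @ wt)

zs = np.random.randn(10, 4)         # features Zs of the weighted transfer source data
zt = np.random.randn(12, 4) + 0.5   # features Zt of the transfer target data
ws = np.ones(10)                    # per-sample weights (e.g. pt / ps per label)
print(weighted_mmd2(zs, zt, ws))
```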
  • FIG. 3 is a diagram exemplifying a hardware configuration of the information processing apparatus 1 as an example of the embodiment.
  • the information processing apparatus 1 includes, for example, the processor 11, a memory 12, a storage device 13, a graphic processing device 14, an input interface 15, an optical drive device 16, a device coupling interface 17, and a network interface 18 as elements. These elements 11 to 18 are mutually communicably configured via a bus 19.
  • the processor (processing unit) 11 controls the entirety of the information processing apparatus 1.
  • the processor 11 may be a multiprocessor.
  • the processor 11 may be any one of, for example, a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), and a graphics processing unit (GPU).
  • the processor 11 may be a combination of two or more types of the elements out of the CPU, the MPU, the DSP, the ASIC, the PLD, the FPGA, and the GPU.
  • the processor 11 executes control programs (the machine learning program, the data processing program, and an operating system (OS) program) for the information processing apparatus 1 to function as the encoder 101, the classifier 102, the first training control unit 103, the second training control unit 104, the third training control unit 105, and the fourth training control unit 106 exemplified in FIG. 1 .
  • the information processing apparatus 1 executes the programs (the machine learning program and the OS program) recorded in, for example, a non-transitory computer-readable recording medium to realize the functions as the encoder 101, the classifier 102, the first training control unit 103, the second training control unit 104, the third training control unit 105, and the fourth training control unit 106.
  • the information processing apparatus 1 executes the programs (the data processing program, the OS program) recorded in, for example, a computer-readable non-transitory recording medium to realize the functions of the encoder 101 and the classifier 102.
  • Programs in which the content of processing to be executed by the information processing apparatus 1 is described may be recorded in various recording media.
  • the programs to be executed by the information processing apparatus 1 may be stored in the storage device 13.
  • the processor 11 loads at least part of the programs in the storage device 13 onto the memory 12 and executes the loaded program.
  • the programs to be executed by the information processing apparatus 1 may be recorded in a non-transitory portable-type recording medium such as an optical disc 16a, a memory device 17a, and a memory card 17c.
  • the programs stored in the portable-type recording medium become executable after being installed in the storage device 13 under the control from the processor 11.
  • the processor 11 may execute the programs by reading the programs directly from the portable-type recording medium.
  • the memory 12 is a storage memory including a read-only memory (ROM) and a random-access memory (RAM).
  • the RAM of the memory 12 is used as a main storage device of the information processing apparatus 1.
  • the programs to be executed by the processor 11 are at least partially stored in the RAM temporarily.
  • Various types of data desired for the processing by the processor 11 are stored in the memory 12.
  • the storage device 13 is a storage device such as a hard disk drive (HDD), a solid-state drive (SSD), or a storage class memory (SCM) and stores various types of data.
  • the storage device 13 is used as an auxiliary storage device of the information processing apparatus 1.
  • the OS program, the control programs, and various types of data are stored in the storage device 13.
  • the control programs include the machine learning program and the data processing program.
  • a semiconductor storage device such as an SCM or a flash memory may be used as the auxiliary storage device.
  • A redundant array of inexpensive disks (RAID) may be configured with a plurality of the storage devices 13.
  • the storage device 13 may store various types of data generated when the encoder 101, the classifier 102, the first training control unit 103, the second training control unit 104, the third training control unit 105, and the fourth training control unit 106 described above execute processes.
  • a monitor 14a is coupled to the graphic processing device 14.
  • the graphic processing device 14 displays an image in a screen of the monitor 14a in accordance with an instruction from the processor 11.
  • Examples of the monitor 14a include a display device using a cathode ray tube (CRT), a liquid crystal display device, and so forth.
  • a keyboard 15a and a mouse 15b are coupled to the input interface 15.
  • the input interface 15 transmits signals received from the keyboard 15a and the mouse 15b to the processor 11.
  • the mouse 15b is an example of a pointing device, and a different pointing device may be used. Examples of the different pointing device include a touch panel, a tablet, a touch pad, a track ball, and so forth.
  • the optical drive device 16 reads data recorded in the optical disc 16a by using laser light or the like.
  • the optical disc 16a is a portable-type non-transitory recording medium in which data is recorded in such a way that the data is readable by using light reflection.
  • Examples of the optical disc 16a include a Digital Versatile Disc (DVD), a DVD-RAM, a compact disc ROM (CD-ROM), a CD-recordable (R)/rewritable (RW), and the like.
  • the device coupling interface 17 is a communication interface for coupling peripheral devices to the information processing apparatus 1.
  • the memory device 17a or a memory reader/writer 17b may be coupled to the device coupling interface 17.
  • the memory device 17a is a non-transitory recording medium such as a Universal Serial Bus (USB) memory that has a function of communicating with the device coupling interface 17.
  • the memory reader/writer 17b writes data to the memory card 17c or reads data from the memory card 17c.
  • the memory card 17c is a card-type non-transitory recording medium.
  • the network interface 18 is coupled to a network.
  • the network interface 18 exchanges data via the network.
  • Other information processing apparatuses, communication devices, and so forth may be coupled to the network.
  • the processor 11 executes the machine learning program to realize the functions as the encoder 101, the classifier 102, the first training control unit 103, the second training control unit 104, the third training control unit 105, and the fourth training control unit 106.
  • These units including the encoder 101, the classifier 102, the first training control unit 103, the second training control unit 104, the third training control unit 105, and the fourth training control unit 106 function in a training phase.
  • the processor 11 executes the data processing program to realize the functions as the encoder 101 and the classifier 102. These units including the encoder 101 and the classifier 102 function in an inference phase.
  • Processing executed by the information processing apparatus 1 is described below along steps S1 to S10 of FIG. 4 with reference to FIGs. 5 to 8.
  • FIGs. 5 to 8 are diagrams for explaining the processing executed by the information processing apparatus 1.
  • In step S1, for example, the second training control unit 104 initializes the encoder 101 and the classifier 102 included in the machine learning model.
  • In step S2, the predicted value of the label distribution in the transfer target domain is set in advance as the prior label distribution (prior-dist.) (see sign A1 of FIG. 5).
  • the user may set the distribution of the labels of the plurality of pieces of transfer target labeled data as the prior label distribution (prior-dist.).
  • In step S3, the first training control unit 103 initializes the estimated label distribution (estimated-dist.) by using the value of the prior label distribution (prior-dist.) (see sign A2 of FIG. 5).
  • In step S4, based on the labels of the transfer source data (transfer source labels), the second training control unit 104 calculates (measures) the label distribution in the transfer source domain (transfer source label distribution) (see sign A3 of FIG. 6).
  • In step S5, based on the transfer source label distribution and the estimated label distribution (estimated-dist.), the second training control unit 104 calculates the weight of the data on a label-by-label basis (see sign A4 of FIG. 6).
  • In step S6, the second training control unit 104 performs the training on the machine learning model (the encoder 101 and the classifier 102) by using the weighted transfer source data and the correct answer data corresponding to the transfer source data (see signs A5 and A6 of FIG. 6). In this training, the second training control unit 104 performs the process of updating the training parameters of the neural network of the machine learning model (the encoder 101 and the classifier 102) so as to reduce the error between the classification result, which is the output of the machine learning model, and the correct answer data.
  • In step S7, the fourth training control unit 106 updates (trains) the parameter (training parameter) of the encoder 101 so as to reduce the difference (distribution difference) between the distribution of the features of the weighted transfer source data and the distribution of the features of the transfer target data (see sign A7 of FIG. 7).
  • In step S8, the third training control unit 105 measures the label distribution of the classification result for the transfer target data (output label distribution).
  • In step S9, the value of the estimated label distribution (estimated-dist.) is updated.
  • the first training control unit 103 compares the prior label distribution (prior-dist.) with the estimated label distribution (estimated-dist.) and updates the value of the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the prior label distribution (prior-dist.), (see sign A8 of FIG. 8 ).
  • the third training control unit 105 compares the output label distribution with the estimated label distribution (estimated-dist.) and updates the value of the estimated label distribution (estimated-dist.) such that the estimated label distribution (estimated-dist.) approaches the output label distribution (see sign A9 of FIG. 8 ).
  • In step S10, whether a training end condition is satisfied is checked.
  • the training end condition may be determined to be satisfied in a case where the number of times of training performed by using the training data reaches a predetermined number of epochs or in a case where accuracy of the machine learning model reaches a predetermined threshold.
  • In a case where the training end condition is not satisfied (NO route in step S10), the processing returns to step S5. In a case where the training end condition is satisfied (YES route in step S10), the processing ends. A sketch that ties steps S1 to S10 together follows.
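  • The following is a self-contained, hedged sketch (PyTorch) of the training loop of steps S1 to S10; the architecture, the equal weighting of the two losses, the step size of the distribution updates, and the RBF kernel used for the weighted MMD are illustrative assumptions, not values prescribed by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of feature vectors."""
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

def weighted_mmd2(zs, zt, ws):
    """Squared MMD between weighted source features and target features."""
    ws = ws / ws.sum()
    wt = torch.full((zt.shape[0],), 1.0 / zt.shape[0])
    return ws @ rbf(zs, zs) @ ws + wt @ rbf(zt, zt) @ wt - 2.0 * ws @ rbf(zs, zt) @ wt

def step_towards(estimated, target, step=0.1):
    """Nudge the estimated label distribution toward a target distribution."""
    updated = (1.0 - step) * estimated + step * target
    return updated / updated.sum()

def train(xs, ys, xt, prior_dist, num_classes=2, epochs=50):
    encoder = nn.Sequential(nn.Linear(xs.shape[1], 32), nn.ReLU())          # S1
    classifier = nn.Linear(32, num_classes)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
    estimated_dist = prior_dist.clone()                                      # S2, S3
    source_dist = torch.bincount(ys, minlength=num_classes).float()
    source_dist = source_dist / source_dist.sum()                            # S4
    for _ in range(epochs):                                                  # S10: epoch count
        label_w = estimated_dist / source_dist.clamp_min(1e-12)              # S5: pt / ps
        sample_w = label_w[ys]
        # S6: weighted training of the encoder and the classifier on the source data.
        loss_cls = (sample_w * F.cross_entropy(classifier(encoder(xs)), ys,
                                               reduction="none")).mean()
        # S7: align the feature distributions with a weighted MMD.
        loss_mmd = weighted_mmd2(encoder(xs), encoder(xt), sample_w)
        opt.zero_grad()
        (loss_cls + loss_mmd).backward()
        opt.step()
        with torch.no_grad():
            # S8: output label distribution of the classifier for the target data.
            output_dist = F.softmax(classifier(encoder(xt)), dim=1).mean(dim=0)
            # S9: update the estimated label distribution toward both references.
            estimated_dist = step_towards(estimated_dist, prior_dist)
            estimated_dist = step_towards(estimated_dist, output_dist)
    return encoder, classifier, estimated_dist

# Toy usage with random stand-in data for the two domains.
xs, ys = torch.randn(40, 8), torch.randint(0, 2, (40,))
xt = torch.randn(30, 8) + 0.3
train(xs, ys, xt, prior_dist=torch.tensor([0.4, 0.6]))
```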
  • In the inference phase, the transfer target data Xt is input to the encoder 101.
  • the encoder 101 calculates the features Zt based on the transfer target data Xt.
  • the classifier 102 classifies the input data based on the features calculated by the encoder 101 and outputs the classification result Yt'.
  • the second training control unit 104 sets the degree of influence of the transfer source data so as to reduce the difference between the label distribution of the transfer source data and the estimated label distribution of the transfer target data and weights the transfer source data. Accordingly, the difference in distribution of features between the transfer source data and the transfer target data reduces, and, in the inference phase, the accuracy of classification of the transfer target data is improved.
  • FIG. 10 is a diagram illustrating a comparison between feature distributions before and after the training in the information processing apparatus 1 as the example of the embodiment.
  • sign A indicates a feature distribution before the training and sign B indicates a feature distribution after the training.
  • Before the training, the distributions of the features of class A and the features of class B in the transfer source domain are different from the distributions of the features of class A and the features of class B in the transfer target domain.
  • In this state, the transfer target data is not necessarily correctly classified into the classes by a class boundary set based on the transfer source data.
  • the degree of influence of each piece of data is calculated such that the difference between the label distribution of the transfer source data and the estimated label distribution of the transfer target data reduces (or, such that the label distribution of the transfer source data and the estimated label distribution of the transfer target data coincide with each other) and the weights are assigned to the transfer source data.
  • the machine learning model (the encoder 101 and the classifier 102) is trained by using the transfer source data weighted as described above.
  • The encoder 101 is trained (undergoes the machine learning) so as to reduce the difference (distribution difference) between the distribution of the features of the weighted transfer source data and the distribution of the features of the transfer target data.
  • As a result, the difference between the distributions of class A and class B in the transfer source domain and the distributions of class A and class B in the transfer target domain is reduced, and the transfer target data may be correctly classified into the classes by the class boundary set based on the transfer source data. In other words, the classification accuracy in the inference phase is improved. Since the weight is assigned to the transfer source data, the loss (MMD) is reduced.
  • the transductive transfer training may be performed, and, in addition, the classification accuracy of the transductive transfer training may be improved.
  • the third training control unit 105 updates (estimates) the estimated label distribution by using the output of the machine learning model including the encoder 101 and the classifier 102.
  • The second training control unit 104 performs weighting on the transfer source data based on this estimated label distribution. Accordingly, the difference between the label distribution in the transfer source data and the label distribution in the transfer target data (label distribution difference) may be reduced.
  • The fourth training control unit 106 simultaneously and repeatedly applies the training that causes the output distributions of the encoder 101 to coincide with each other by using the weighted transfer source data, and thereby performs the transfer training. Accordingly, the distributions of the features in the transfer source domain and the features in the transfer target domain may be made to coincide with each other, and a reduction of accuracy due to the transductive transfer training may be suppressed.
  • The information processing apparatus 1 is provided with, as a parameter, the estimated label distribution that indicates the estimated distribution of the labels of the transfer target data in real time, and uses this parameter to train the encoder 101 and the classifier 102.
  • The first training control unit 103 updates (trains) the estimated label distribution (estimated-dist.) so as to reduce the difference from the prior label distribution estimated from the transfer target labeled data or assigned by the user. Accordingly, the estimated label distribution of the transfer target data may be determined even in the transductive transfer training.
  • The third training control unit 105 updates (trains) the estimated label distribution (estimated-dist.) so as to reduce the distribution difference from the distribution of the output labels (output label distribution) of the classifier 102 for the transfer target data. Accordingly, the estimated label distribution (estimated-dist.) may be maintained in the latest state, and a reduction of the accuracy due to the transductive transfer training may be suppressed by reflecting it in the weights assigned to the transfer source data or in the training parameters of the encoder 101.
  • the fourth training control unit 106 updates (trains) the parameter (training parameter) of the encoder 101 so as to reduce the difference (distribution difference) between the distribution of the features Zs of the weighted transfer source data and the distribution of the features Zt of the transfer target data. Accordingly, reduction of the accuracy in the transductive transfer training caused by the difference (distribution difference) between the distribution of the features Zs of the transfer source data and the distribution of the features Zt of the transfer target data may be suppressed.
  • The second training control unit 104 updates the parameters of the classifier 102 and the encoder 101 so as to reduce the classification error of the weighted transfer source data, thereby realizing the training of the machine learning model. Accordingly, the machine learning model (the classifier 102 and the encoder 101) is trained by using the transfer source data that has been weighted so as to reduce the difference between the label distribution of the transfer source data and the estimated label distribution of the transfer target data. As a result, reduction of the accuracy in the transductive transfer training caused by the difference (label distribution difference) between the label distribution of the transfer source data and the label distribution of the transfer target data may be suppressed.
  • The embodiment is not limited thereto. For example, another measure such as the Pearson distance may be used instead of the KL-divergence, and the comparison technique may be appropriately changed in implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
EP23153237.5A 2022-03-28 2023-01-25 Machine learning program, machine learning apparatus, and machine learning method Pending EP4254273A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2022051602A JP2023144562A (ja) 2022-03-28 Machine learning program, data processing program, information processing apparatus, machine learning method, and data processing method

Publications (1)

Publication Number Publication Date
EP4254273A1 (de) 2023-10-04

Family

ID=85076206

Family Applications (1)

Application Number Title Priority Date Filing Date
EP23153237.5A Pending EP4254273A1 (de) 2022-03-28 2023-01-25 Maschinenlernprogramm, maschinenlernvorrichtung und maschinenlernverfahren

Country Status (3)

Country Link
US (1) US20230306306A1 (de)
EP (1) EP4254273A1 (de)
JP (1) JP2023144562A (de)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009237923A (ja) 2008-03-27 2009-10-15 Nec Corp Learning method and system
JP2009543254A (ja) 2006-07-12 2009-12-03 Kofax Inc. Method and system for transductive data classification, and data classification method using machine learning techniques
US20150339591A1 (en) 2014-05-23 2015-11-26 Washington State University, Office of Commercialization Collegial Activity Learning Between Heterogeneous Sensors
JP2016143094A (ja) 2015-01-29 2016-08-08 Panasonic Ip Management Co Ltd Transfer learning device, transfer learning system, transfer learning method, and program
US20210019629A1 (en) * 2019-07-17 2021-01-21 Naver Corporation Latent code for unsupervised domain adaptation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009543254A (ja) 2006-07-12 2009-12-03 Kofax Inc. Method and system for transductive data classification, and data classification method using machine learning techniques
JP2009237923A (ja) 2008-03-27 2009-10-15 Nec Corp Learning method and system
US20150339591A1 (en) 2014-05-23 2015-11-26 Washington State University, Office of Commercialization Collegial Activity Learning Between Heterogeneous Sensors
JP2016143094A (ja) 2015-01-29 2016-08-08 Panasonic Ip Management Co Ltd Transfer learning device, transfer learning system, transfer learning method, and program
US20210019629A1 (en) * 2019-07-17 2021-01-21 Naver Corporation Latent code for unsupervised domain adaptation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ABOLFAZL FARAHANI ET AL: "A Brief Review of Domain Adaptation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 October 2020 (2020-10-07), XP081781561 *

Also Published As

Publication number Publication date
US20230306306A1 (en) 2023-09-28
JP2023144562A (ja) 2023-10-11

Similar Documents

Publication Publication Date Title
JP4757116B2 (ja) Parameter learning method and apparatus therefor, pattern identification method and apparatus therefor, and program
US20200321118A1 (en) Method for domain adaptation based on adversarial learning and apparatus thereof
JP7040104B2 (ja) Learning program, learning method, and learning apparatus
US11501203B2 (en) Learning data selection method, learning data selection device, and computer-readable recording medium
US20200042903A1 (en) Multi-layered machine learning system to support ensemble learning
US20190279039A1 (en) Learning method, learning apparatus, and computer-readable recording medium
US20210383214A1 (en) Method for learning of deep learning model and computing device for executing the method
US8750604B2 (en) Image recognition information attaching apparatus, image recognition information attaching method, and non-transitory computer readable medium
US20210089823A1 (en) Information processing device, information processing method, and non-transitory computer-readable storage medium
US20210192392A1 (en) Learning method, storage medium storing learning program, and information processing device
Norris Machine Learning with the Raspberry Pi
JP2019204214A (ja) Learning device, learning method, program, and estimation device
EP4254273A1 (de) Machine learning program, machine learning apparatus, and machine learning method
US20230107006A1 (en) Disentangled out-of-distribution (ood) calibration and data detection
US20230186118A1 (en) Computer-readable recording medium storing accuracy estimation program, device, and method
US20220392107A1 (en) Image processing apparatus, image processing method, image capturing apparatus, and non-transitory computer-readable storage medium
EP4141746A1 (de) Machine learning program, machine learning method, and machine learning apparatus
US11854528B2 (en) Method and system for detecting unsupported utterances in natural language understanding
US20220237512A1 (en) Storage medium, information processing method, and information processing apparatus
US11113569B2 (en) Information processing device, information processing method, and computer program product
CN113722675A (zh) Training method for a multi-modal trajectory prediction model
US20230154151A1 (en) Image processing apparatus, control method thereof, and storage medium
US20230368072A1 (en) Computer-readable recording medium storing machine learning program, machine learning method, and information processing device
US20240177061A1 (en) Label accuracy improvement device, label accuracy improvement method, and storage medium
EP4350585A1 (de) Machine learning program, machine learning method, and machine learning apparatus

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231123

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR