TWI755774B - Loss function optimization system, method and the computer-readable record medium - Google Patents


Info

Publication number
TWI755774B
Authority
TW
Taiwan
Prior art keywords
class label
updated
loss function
classification
true
Prior art date
Application number
TW109121654A
Other languages
Chinese (zh)
Other versions
TW202201290A (en)
Inventor
郭立言
劉謹瑋
朱俊翰
王彥翔
Original Assignee
萬里雲互聯網路有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 萬里雲互聯網路有限公司
Priority to TW109121654A
Publication of TW202201290A
Application granted
Publication of TWI755774B

Abstract

A loss function optimization method and a computer-readable recording medium comprise: separately inputting a classification model's predicted class labels for a first true class label and a second true class label; and setting, in the original Focal Loss function, an updated hyperparameter used by the first true class label, where this updated hyperparameter is larger than the updated second hyperparameter used by the second true class label, so that an updated classification loss function is obtained. The updated classification loss function provided by this invention helps increase the precision of binary and multi-class classification models. This invention further provides a loss function optimization system.

Description

Loss function optimization system, optimization method and computer-readable recording medium thereof

The present invention relates to machine learning technology, and more particularly to a loss function optimization system, optimization method and computer-readable recording medium that can perform machine learning tasks and optimize the Focal Loss function used for training a machine learning model.

With the rapid development of classification models that apply machine learning technology, more and more applications rely on the results of image classification or event classification. During model training, however, binary or multi-class training samples are often imbalanced, which causes the classification model to learn inaccurately and thereby reduces the accuracy of the classification task. The current practice mostly addresses imbalanced training samples by setting an appropriate loss function, such as the cross-entropy loss function (Cross Entropy) or the weight-adaptive cross-entropy loss function (Focal Loss); in other words, the quality of a model is determined to a large extent by its loss function. The Focal Loss function FL can be expressed as follows:

FL(qt) = −αt · (1 − qt)^γ · log(qt)

qt = ŷ if y = 1, and qt = 1 − ŷ otherwise

Following the above, if the classification task is binary classification, then y is the true class label, with two possible instance labels, y = [1,0] (positive-sample label) or y = [0,1] (negative-sample label); ŷ is the predicted class label; and q can be regarded as the probability of a correct prediction. For example, assuming y = [1,0] (interpreting this true sample label as "crime"), ŷ1 = 0.9 and ŷ2 = 0.1, then q1 = 0.9 and q2 = 0.1. The balance parameter α avoids the problem of the uneven ratio between positive and negative samples affecting the model's detection accuracy, and the hyperparameter γ here acts as a modulating factor.
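As a minimal sketch of the conventional Focal Loss described above (the function name is ours; γ = 2 is the commonly preset value mentioned in this description, while α = 0.25 is an assumed default, not a value fixed by this document):

```python
import math

def focal_loss(y, y_hat, alpha=0.25, gamma=2.0, eps=1e-12):
    """Conventional Focal Loss for a binary label.

    y     : true label, 1 (positive, e.g. "crime") or 0 (negative)
    y_hat : predicted probability of the positive class
    alpha : balance parameter for the positive/negative sample ratio
    gamma : modulating-factor hyperparameter (commonly preset to 2)
    """
    # q is the probability the model assigns to the true class
    q = y * y_hat + (1 - y) * (1 - y_hat)
    alpha_t = alpha if y == 1 else 1 - alpha
    # (1 - q)**gamma down-weights well-classified samples
    return -alpha_t * (1 - q) ** gamma * math.log(q + eps)

# An easy positive sample (q = 0.9) contributes far less loss
# than a hard one (q = 0.1):
print(focal_loss(1, 0.9))
print(focal_loss(1, 0.1))
```

The modulating factor is what lets training focus on hard-to-classify samples: at q = 0.9 it shrinks the cross-entropy term by a factor of 0.01, while at q = 0.1 it keeps 81% of it.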

Compared with a classification model using the Cross Entropy loss function, a classification model using the Focal Loss function can focus more on hard-to-classify samples during training. However, regardless of the classification task, the hyperparameter γ of the Focal Loss function is usually a fixed value (for example, commonly preset to γ = 2), so its performance in terms of loss still has certain limitations. Accordingly, it remains an open problem to propose a technique that optimizes the existing Focal Loss function so as to raise the loss for the classification the user cares most about (in a binary task, for example, the user may care most about the model's predictions for the true class label 1), minimize wrong predictions as far as possible, train a better-performing classification model with the optimized Focal Loss function, and at the same time resolve the extreme imbalance that arises in the loss computation when the training samples are highly disparate.

To achieve the above purpose, based on years of research, development and practical experience in machine learning technology, the inventor proposes a loss function optimization system comprising a processor and a memory communicatively or electrically coupled to the processor. The processor accesses and executes at least one instruction stored in the memory to: construct a Focal Loss function as an original classification loss function; input a classification model's (binary or multi-class) first predicted class label for a first true class label; input the classification model's second predicted class label for a second true class label; and, in the original classification loss function, set an updated hyperparameter used by the first true class label, where the updated hyperparameter is larger than an updated second hyperparameter used by the second true class label, to obtain an updated classification loss function. The updated classification loss function is used to improve the accuracy of the predicted class labels generated by the classification model (for example, it can raise the loss when a binary classification model's predicted class label falls under the false-negative indicator). In addition, the processor of the present invention may also input the classification model's third predicted class label for a third true class label, so that, in the original classification loss function, the updated hyperparameter used by the first true class label is set larger than both the updated second hyperparameter used by the second true class label and an updated third hyperparameter used by the third true class label, to obtain the updated classification loss function. Furthermore, the updated classification loss function can be used to train classification models for tasks such as image classification, audio classification, abnormal transaction classification and fraud detection. The inventor also proposes a loss function optimization method that executes the aforementioned instructions, and a computer-readable recording medium thereof.

In order to enable the examiners to clearly understand the purpose, technical features and post-implementation effects of the present invention, the following description is provided in conjunction with the drawings; please refer to them.

Please refer to Fig. 1, a system block diagram of the present invention. The loss function optimization system 100 of this embodiment includes a processor 101 and a memory 102 communicatively or electrically coupled to the processor 101. The processor 101 accesses and executes at least one instruction I stored in the memory 102 to: construct a Focal Loss function, defined as an original classification loss function; input at least one first predicted class label of a classification model M for a first true class label (the instance may be fetched randomly); then input at least one second predicted class label of the classification model M for a second true class label (likewise fetchable at random); and then, in the original classification loss function, set an updated hyperparameter used by the first true class label, which may be larger than an updated second hyperparameter used by the second true class label, to obtain an updated classification loss function (Parameterized Focal Loss), a variant of the Focal Loss function.

Please refer to Fig. 2, a flow chart of a method according to an embodiment of the present invention. The present invention proposes a loss function optimization method S, including:

(1) (Step S10) Construct a Focal Loss function, defined as an original classification loss function; the original classification loss function referred to in this embodiment may, for example, be the conventional FL(qt) = −αt · (1 − qt)^γ · log(qt).

(2) (Step S20) Input at least one first predicted class label of the classification model for a first true class label. It should be understood that the true class labels and predicted class labels of the present invention are all represented as vectors; for example, if the target is a binary classification task, the first true class label can be expressed as y = [1,0], where the 1 can be denoted y1, and the first predicted class label can be expressed as, for example, ŷ = [ŷ1, ŷ2]. In addition, before Step S20 is executed, a classifier may first be initialized and a new iteration started, so as to begin inputting the predicted class labels generated by the classification model.

(3) (Step S30) Input at least one second predicted class label of the classification model for a second true class label. As above, the second true class label is likewise expressed as a vector, y = [0,1], where the 1 can be denoted y2, and the second predicted class label can likewise be expressed as a vector.

(4) (Step S40) In the original classification loss function, set the updated hyperparameter used by the first true class label, where the updated hyperparameter is larger than the hyperparameter used by the second true class label (the updated second hyperparameter), to obtain an updated classification loss function; in other words, the updated classification loss function is a variant of the Focal Loss function.
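The iteration structure around Steps S20 to S40 (initialize a classifier, start a new iteration, fetch instances randomly, accumulate the loss) might be arranged as follows; the function names and the plugged-in loss are hypothetical placeholders for illustration, not APIs defined by this document:

```python
import random

def run_iteration(classifier, instances, loss_fn):
    """One training iteration: fetch instances randomly (S20/S30),
    score them, and accumulate the classification loss (S40)."""
    random.shuffle(instances)            # fetch instance randomly
    total = 0.0
    for y, x in instances:
        y_hat = classifier(x)            # predicted class label
        total += loss_fn(y, y_hat)       # updated classification loss
    return total / len(instances)

# Toy usage with a constant classifier and a squared-error stand-in loss:
# (1 - 0.8)^2 = 0.04 and (0 - 0.8)^2 = 0.64, so the average is about 0.34
instances = [(1, "a"), (0, "b")]
avg = run_iteration(lambda x: 0.8, instances, lambda y, q: (y - q) ** 2)
print(avg)
```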

Continuing from the above, and still referring to Fig. 1 and Fig. 2, as an example, in one embodiment of the present invention the classification model M may be a binary classification model. In that case, taking a binary confusion matrix as the binary-case indicator, the first true class label may be a positive (True) class label, for example y = [1,0], taken here to represent "crime", and the second true class label may be a negative (False) class label, for example y = [0,1], taken here to represent "no crime". The updated classification loss function is used to raise the loss when the binary classification model's predicted class label falls under the false-negative indicator (False Negative, FN), for example the loss for the case where the ground truth is "crime" but the binary classification model's predicted class label is judged as "no crime". More specifically, the updated classification loss function of this embodiment can be defined as follows:

PFL(y, ŷ) = −α · β · y · (1 − ŷ)^γ₊ · log(ŷ) − (1 − α) · (1 − y) · ŷ^γ₋ · log(1 − ŷ)

Continuing from the above, the parameters of the aforementioned updated classification loss function are defined as follows: γ₋ is the updated negative-sample hyperparameter used for the negative class label (for example, preset to 2); γ₊ is the updated positive-sample hyperparameter used for the positive class label; yŷ is the product of the positive class label (i.e. y) and the first predicted class label; (1 − y)(1 − ŷ) is the product of the negative class label (i.e. 1 − y) and the second predicted class label (i.e. 1 − ŷ); ŷ denotes the predicted class labels generated by the binary classification model for the positive and negative class labels respectively; and α and β are both balance parameters. The function of the balance parameter α was already mentioned for the prior-art Focal Loss function, while the balance parameter β of this embodiment compensates for the deviation produced by assigning a larger hyperparameter to the positive class label (positive instance); that is, it compensates for the uneven weighting caused by the different hyperparameters γ.
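A hedged sketch of the parameterized variant as we read the parameter definitions above; the exact arrangement (γ₊ applied to positive instances, γ₋ to negative ones, with β multiplying the positive term) is an assumption drawn from the stated roles of each symbol, not the patent's verbatim formula:

```python
import math

def parameterized_focal_loss(y, y_hat, alpha=0.25, beta=1.0,
                             gamma_pos=4.0, gamma_neg=2.0, eps=1e-12):
    """Focal Loss variant with per-class modulating hyperparameters.

    y         : true label, 1 (positive, e.g. "crime") or 0 (negative)
    y_hat     : predicted probability of the positive class
    gamma_pos : updated hyperparameter for the positive true class label
    gamma_neg : updated hyperparameter for the negative true class label
                (preset to 2, as in the conventional Focal Loss)
    beta      : balance parameter compensating for gamma_pos > gamma_neg
    """
    # Positive-instance term: the larger gamma_pos plus beta compensation
    pos = -alpha * beta * y * (1 - y_hat) ** gamma_pos * math.log(y_hat + eps)
    # Negative-instance term: keeps the conventional gamma_neg
    neg = -(1 - alpha) * (1 - y) * y_hat ** gamma_neg * math.log(1 - y_hat + eps)
    return pos + neg
```

With gamma_pos raised above gamma_neg and β tuned to offset the resulting weight imbalance, hard positive samples (potential false negatives, the case the user cares most about) dominate the accumulated loss during training.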

Please refer to Fig. 3, a schematic diagram of evaluating model performance with the updated classification loss function according to the present invention. As illustrated by the classification model performance evaluation diagram G, after the present invention is implemented, for the classification the user cares most about (for example, interpretable as y = [1,0], the first true class label representing "crime"), the hyperparameter applied when the computed first predicted class label approaches the second true class label (a prediction falling under the false-negative indicator, though this is only an example) can be raised, for example, from 2 to 4 relative to the conventional Focal Loss function, increasing the corresponding loss value. As for the other classification the user cares less about (for example, interpretable as y = [0,1], the second true class label representing "no crime"), if its corresponding hyperparameter is not adjusted (for example, keeping the hyperparameter value at 2), the loss value computed when the predicted class label approaches the second true class label behaves similarly to the conventional Focal Loss function. In other words, by the technique of weighting different classes to adjust the hyperparameters, the present invention can raise the model's classification accuracy as far as possible during training.

Please refer to Fig. 4, a flow chart of a method according to another embodiment of the present invention, together with Fig. 1 and Fig. 2. As an example, in one embodiment, take the classification task of the classification model M to be an image classification task, with M being a multi-class classification model (for example, the task goal is to correctly classify 3 categories of cats). The processor 101 may also access and execute at least one instruction I so that, after Step S30 is completed, it inputs at least one third predicted class label of the classification model for a third true class label (for example, interpretable as y = [0,0,1], representing "Cat3 (the third kind of cat)") (Step S35; the instance may be fetched randomly). The processor 101 can then, in the original classification loss function, set the updated hyperparameter used by the first true class label (for example, interpretable as y = [1,0,0], representing "Cat1 (the first kind of cat)") to be larger than the hyperparameters used by the second true class label (for example, interpretable as y = [0,1,0], representing "Cat2 (the second kind of cat)") and the third true class label (that is, larger than the updated second hyperparameter and larger than an updated third hyperparameter), to obtain the updated classification loss function (Step S40'). More specifically, the updated classification loss function can be defined as a per-class sum of focal terms:

PFL(y, ŷ) = −Σ (n = 1 … L) αn · βn · yn · (1 − ŷn)^γn · log(ŷn)

where L is the number of class labels, γn is the updated hyperparameter of the nth class, αn is a balance weight based on the class sample count Nn, and βn is a compensation term derived from γmin.

Continuing from the above, the parameters of the aforementioned updated classification loss function are defined as follows: L is the number of class labels of the multi-class classification model (for example, with 3 categories in this embodiment, L = 3); Nn is the number of samples of the nth class label (for example, N1 is the sample count of the first class label and N2 that of the second); γn is the updated hyperparameter used by each true class label in the updated classification loss function; and γmin is the minimum over the updated hyperparameters used by the true class labels (although if the first true class label is the classification the user cares most about, γ1 will usually not be the minimum). In this embodiment γmin therefore serves as a compensation function, compensating for the deviation produced by assigning a larger hyperparameter to a positive instance; that is, for the uneven weighting caused by the different hyperparameters γ. After the present invention is implemented, the classification the user cares most about (for example, interpretable as y = [1,0,0], the first true class label representing "Cat1 (the first kind of cat)", with a corresponding first predicted class label) can be given a larger hyperparameter (as the updated hyperparameter) in the Focal Loss variant, raising the loss it incurs for wrongly judged predicted class labels and thereby improving its classification accuracy as far as possible during model training.
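A sketch of the multi-class case under the same caveat: the per-class sum, the per-class γn, and the γmin-based compensation are our reading of the parameter definitions above, and the compensation's functional form (γn / γmin) is assumed for illustration only:

```python
import math

def multiclass_parameterized_focal_loss(y, y_hat, gammas, alphas=None, eps=1e-12):
    """Multi-class Focal Loss variant with one updated hyperparameter per class.

    y      : one-hot true class label, e.g. [1, 0, 0] for "Cat1"
    y_hat  : predicted class probabilities
    gammas : updated hyperparameter per class; the class the user cares
             about most gets the largest value
    alphas : optional per-class balance weights (e.g. based on the
             class sample counts N_n)
    """
    if alphas is None:
        alphas = [1.0] * len(gammas)
    gamma_min = min(gammas)
    loss = 0.0
    for y_n, q_n, g_n, a_n in zip(y, y_hat, gammas, alphas):
        # Assumed compensation: offset the extra down-weighting a larger
        # gamma would otherwise impose on the favored class
        compensation = g_n / gamma_min
        loss += -a_n * compensation * y_n * (1 - q_n) ** g_n * math.log(q_n + eps)
    return loss
```

Because the label is one-hot, only the true class contributes a term; with gammas = [4, 2, 2] a misclassified "Cat1" sample receives a compensated, sharply focused loss while "Cat2" and "Cat3" behave as in the conventional Focal Loss.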

Please refer to Fig. 1. It should be understood that, in order for the processor 101 to load the classification model M for the aforementioned classification tasks, the classification model M may be defined as being information-connected to a classification training database (not shown) stored in the memory 102, so that the classification model M is generated based on multiple classification training data sets (Classifier Training Data) corresponding to different object features in the classification training database, multiple labeled samples (object features of which part has been labeled in advance), multiple incompletely labeled samples, and multiple test data sets.

Still referring to Fig. 1, in one embodiment of the present invention the loss function optimization method S may further include: judging the effectiveness of the classification model M during training (that is, evaluating how good the model is) according to the ROC curve (Receiver Operating Characteristic curve), and/or the AUC (Area Under Curve), and/or the recall indicator (Recall, i.e., of all positive samples, what proportion can be predicted as positive), and/or the precision indicator (Precision, i.e., of all samples predicted positive, how many are truly positive), as a basis for deciding whether to further update the updated hyperparameter corresponding to each class label; after the update is completed, Step S40 can be executed again.
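The recall and precision indicators described above come directly from confusion-matrix counts; a minimal pure-Python sketch (the sample labels are made up for illustration; in practice one would use, e.g., sklearn.metrics.precision_score, recall_score and roc_auc_score):

```python
def precision_recall(y_true, y_pred):
    """Precision and recall from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)  # of all predicted positives, how many are truly positive
    recall = tp / (tp + fn)     # of all true positives, how many were found
    return precision, recall

# Hypothetical model outputs for 8 samples:
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

A drop in recall here flags exactly the false-negative cases the updated hyperparameter targets, so these indicators are a natural trigger for re-tuning the per-class hyperparameters before re-running Step S40.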

Please refer to Fig. 1 and Fig. 2. In one embodiment, the present invention further provides a non-transitory computer-readable recording medium associated with at least one instruction I defining the aforementioned loss function optimization method S; the relevant description of each step has been detailed in the embodiments shown in Fig. 2 and Fig. 4 and is not repeated here.

Still referring to Fig. 1 and Fig. 2, in one embodiment the present invention further provides a computer program product; after a computer system loads the multiple instructions I of the computer program product, it can at least complete the aforementioned Focal Loss function optimization method S. The relevant description of the steps has been detailed in the embodiments shown in Fig. 2 and Fig. 4 and is not repeated here.

Please refer to Fig. 1 and Fig. 2. As an example, the processor 101 of the present invention provides functions such as logic operations, temporary storage of operation results, and storage of the positions of data operation instructions. It may include, but is not limited to, a single processor or an integration of multiple microprocessors, such as a central processing unit (CPU), a microprocessor unit (MPU), a microcontroller (MCU), an application processor (AP), an embedded processor, a graphics processing unit (GPU), or an application-specific integrated circuit (ASIC), without limitation. The processor 101 can thereby access at least one instruction I from the memory 102 and execute the aforementioned Focal Loss function optimization method S according to that instruction.

Please refer to Fig. 1. As an example, the memory 102 of the present invention may be flash memory, a hard disk drive (HDD), a solid-state drive (SSD), dynamic random-access memory (DRAM) or static random-access memory (SRAM). When serving as a non-transitory computer-readable medium, the memory 102 can store at least one instruction I associated with the aforementioned Focal Loss function optimization method S, which the processor 101 can access and execute.

Please refer to Fig. 1. As an example, the updated classification loss function of the present invention can be used to train a classification model M for one of, or a combination of, an image classification task, an audio classification task, an abnormal transaction classification task and a fraud detection task. For example, if the classification task of the classification model M is an image classification task (Image Recognition), the model may be LeNet, AlexNet, VGGnet, NIN, GoogLeNet, MobileNet, SqueezeNet, ResNet, SiameseNet, NASNet, RNN, RetinaNet or another neural-network-based training model. If the classification task is an audio classification task, the model may for example be the YouTube-8M model (available at https://github.com/google/youtube-8m#overview-of-) based on the YouTube-8M Dataset (available at https://research.google.com/youtube8m/). If the classification task is an abnormal transaction classification task, it may be applied to anti-money-laundering (Anti-Money Laundering, AML) applications, for example those provided by IBM Watson (https://www.ibm.com/us-en/marketplace/financial-crimes-insight-alert-triage). If the classification task is a fraud detection task (Fraud Detection), the model may for example be the known Amazon Fraud Detector (https://aws.amazon.com/tw/fraud-detector/) or IBM's Watson Studio (https://www.ibm.com/tw-zh/analytics/fraud-prediction). The above are only examples and are not limiting.

In summary, after the present invention is implemented, the event classification the user cares most about (for example, true class label y = [1,0], whose training samples may be far fewer in number than those of true class label y = [0,1]) is assigned a larger hyperparameter in the original Focal Loss function, thereby yielding a Focal Loss variant. During training, this makes the loss larger when the predicted class label for the favored classification falls under the false-negative indicator (FN) or is otherwise misjudged, so as to minimize the current model's wrong predictions as far as possible and to train a better-performing classification model with the optimized Focal Loss function. In other words, the inventor mainly proposes a machine learning method that applies class weighting to the Focal Loss function.

However, the foregoing are merely preferred embodiments of the present invention and are not intended to limit its scope; any equivalent changes and modifications made by those skilled in the art without departing from the spirit and scope of the present invention shall fall within the patent scope of the present invention.

In summary, the present invention satisfies the patentability requirements of industrial applicability, novelty, and inventive step; the applicant therefore files this application for an invention patent with the Office in accordance with the Patent Act.

100  Loss function optimization system
101  Processor
102  Memory
I  Instruction
M  Classification model
G  Schematic diagram of classification-model performance evaluation
S  Loss function optimization method
S10  Construct a Focal Loss function, defined as the original classification loss function
S20  Input at least one first predicted class label of the classification model for the first true class label
S30  Input at least one second predicted class label of the classification model for the second true class label
S40  In the original classification loss function, set the updated hyperparameter used by the first true class label to be larger than the hyperparameter used by the second true class label, to obtain an updated classification loss function
S35  Input at least one third predicted class label of the classification model for the third true class label
S40'  In the original classification loss function, set the updated hyperparameter used by the first true class label to be larger than the hyperparameters used by the second and third true class labels, to obtain the updated classification loss function

Fig. 1 is a system block diagram of the present invention.
Fig. 2 is a flowchart of a method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of evaluating model performance with the updated classification loss function according to the present invention.
Fig. 4 is a flowchart of a method according to another embodiment of the present invention.

S: Loss function optimization method

S10: Construct a Focal Loss function, defined as the original classification loss function

S20: Input at least one first predicted class label of the classification model for the first true class label

S30: Input at least one second predicted class label of the classification model for the second true class label

S40: In the original classification loss function, set the updated hyperparameter used by the first true class label to be larger than the hyperparameter used by the second true class label, to obtain an updated classification loss function
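Steps S10–S40 can be outlined as a short driver routine. This is a hypothetical sketch: `focal_loss_term`, `updated_classification_loss`, and the concrete hyperparameter values are illustrative, with comments mapping back to the reference numerals.

```python
import math

def focal_loss_term(q, gamma, eps=1e-7):
    # S10: original Focal Loss term  -(1 - q)^gamma * log(q)
    q = min(max(q, eps), 1.0 - eps)
    return -((1.0 - q) ** gamma) * math.log(q)

def updated_classification_loss(pred_first, pred_second,
                                gamma_first=3.0, gamma_second=1.0):
    # S40: the updated hyperparameter for the first (more important) true
    # class label is set larger than the one for the second true class label.
    assert gamma_first > gamma_second
    # S20 / S30: predicted probabilities the model assigns to the true class,
    # for samples of the first and second true class labels respectively.
    loss_first = sum(focal_loss_term(q, gamma_first) for q in pred_first)
    loss_second = sum(focal_loss_term(q, gamma_second) for q in pred_second)
    return loss_first + loss_second

total = updated_classification_loss(pred_first=[0.4, 0.6],
                                    pred_second=[0.8, 0.9])
```

The returned scalar would then drive an ordinary gradient-based training step of the classification model.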

Claims (5)

A loss function optimization method, executed by a processor for training a classification model, comprising: constructing a Focal Loss function, the Focal Loss function being defined as an original classification loss function; inputting a first predicted class label of the classification model for a first true class label; inputting a second predicted class label of the classification model for a second true class label; and, in the original classification loss function, setting an updated hyperparameter used by the first true class label, the updated hyperparameter being greater than an updated second hyperparameter used by the second true class label, to obtain an updated classification loss function; wherein the classification model is a binary classification model, the first true class label is a positive class label, the second true class label is a negative class label, and the updated classification loss function is used to increase the loss when the first predicted class label of the binary classification model is a false negative; and the updated classification loss function is defined as:
[Equation images: Figure 109121654-A0305-02-0016-18, Figure 109121654-A0305-02-0016-19, Figure 109121654-A0305-02-0016-15, Figure 109121654-A0305-02-0016-16]

where γ⁻ is an updated negative-sample hyperparameter used for the negative class label, γ⁺ is an updated positive-sample hyperparameter used for the positive class label, q⁺ is the product of the positive class label and the first predicted class label, and q⁻ is the product of the negative class label and the second predicted class label.
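Assuming each of the two terms in the (image-rendered) equations follows the standard focal pattern −(1 − q)^γ log(q), the claim's binary formulation might be sketched as below. Since the equation images are not reproduced in this text, the exact form, the function name, and the default hyperparameter values are assumptions, not the claimed definition.

```python
import math

def binary_asymmetric_focal_loss(y_pos, y_neg, p_pos, p_neg,
                                 gamma_pos=3.0, gamma_neg=1.0, eps=1e-7):
    """Two-term focal loss with a separate exponent per class.

    q_plus  = y_pos * p_pos   (positive label times its predicted probability)
    q_minus = y_neg * p_neg   (negative label times its predicted probability)
    Only the term whose true label equals 1 contributes.
    """
    q_plus = y_pos * p_pos
    q_minus = y_neg * p_neg
    loss = 0.0
    if y_pos:
        loss += -((1.0 - q_plus) ** gamma_pos) * math.log(max(q_plus, eps))
    if y_neg:
        loss += -((1.0 - q_minus) ** gamma_neg) * math.log(max(q_minus, eps))
    return loss

# A confidently correct positive costs far less than a badly missed one:
good = binary_asymmetric_focal_loss(1, 0, 0.9, 0.1)
bad = binary_asymmetric_focal_loss(1, 0, 0.2, 0.8)
```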
The loss function optimization method of claim 1, further comprising: inputting a third predicted class label of the classification model for a third true class label, and, in the original classification loss function, setting the updated hyperparameter used by the first true class label to be greater than the updated second hyperparameter used by the second true class label and an updated third hyperparameter used by the third true class label, to obtain the updated classification loss function, wherein the classification model is a multi-class classification model, and the updated classification loss function is defined as:
[Equation images: Figure 109121654-A0305-02-0017-5 (whose visible text fragment reads q_{l,n})^{γ_n} log(q_{l,n})), Figure 109121654-A0305-02-0017-1, Figure 109121654-A0305-02-0017-6, Figure 109121654-A0305-02-0017-7, Figure 109121654-A0305-02-0017-2]

where L is the number of class labels of the multi-class classification model, N_l is the number of samples of the l-th class label, the hyperparameter shown in Figure 109121654-A0305-02-0017-3 is the updated hyperparameter used for each true class label in the updated classification loss function, and the quantity shown in Figure 109121654-A0305-02-0017-4 is the minimum taken over the updated hyperparameters used for the true class labels.
The loss function optimization method of claim 1, wherein the updated classification loss function is used to train the classification model for one of an image classification task, an audio classification task, an abnormal transaction classification task, and a fraud detection task, or a combination thereof.
A loss function optimization system for training a classification model, comprising: a memory storing at least one instruction; and a processor in communication with the memory, wherein the processor is configured to access and execute the at least one instruction to: construct a Focal Loss function, the Focal Loss function being defined as an original classification loss function; input a first predicted class label of the classification model for a first true class label; input a second predicted class label of the classification model for a second true class label; and, in the original classification loss function, set an updated hyperparameter used by the first true class label, the updated hyperparameter being greater than an updated second hyperparameter used by the second true class label, to obtain an updated classification loss function; wherein the classification model is a binary classification model, the first true class label is a positive class label, the second true class label is a negative class label, and the updated classification loss function is used to increase the loss when the first predicted class label of the binary classification model is a false negative; and the updated classification loss function is defined as:
[Equation images: Figure 109121654-A0305-02-0018-20, Figure 109121654-A0305-02-0018-21, Figure 109121654-A0305-02-0018-8, Figure 109121654-A0305-02-0018-9]

where γ⁻ is an updated negative-sample hyperparameter used for the negative class label, γ⁺ is an updated positive-sample hyperparameter used for the positive class label, q⁺ is the product of the positive class label and the first predicted class label, and q⁻ is the product of the negative class label and the second predicted class label.
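In operation, the system claimed here reduces to a processor running a training loop that evaluates the updated loss and adjusts the model parameters. The toy sketch below trains a one-feature logistic model with finite-difference gradients; every name and constant is illustrative and not taken from the patent.

```python
import math

def focal_term(q, gamma, eps=1e-7):
    q = min(max(q, eps), 1.0 - eps)
    return -((1.0 - q) ** gamma) * math.log(q)

def batch_loss(w, data, gamma_pos=3.0, gamma_neg=1.0):
    # data: list of (x, y) pairs with y=1 positive, y=0 negative
    total = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-w * x))      # predicted P(positive)
        q = p if y == 1 else 1.0 - p            # probability of the true class
        total += focal_term(q, gamma_pos if y == 1 else gamma_neg)
    return total / len(data)

def train(data, steps=300, lr=0.1, h=1e-5):
    w = 0.0
    for _ in range(steps):
        # finite-difference gradient of the updated loss w.r.t. the weight
        grad = (batch_loss(w + h, data) - batch_loss(w - h, data)) / (2 * h)
        w -= lr * grad
    return w

data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
w = train(data)  # ends positive: the loss pushes the model to separate classes
```

In a real system the loss would of course be minimized by backpropagation inside a deep-learning framework rather than by finite differences.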
The loss function optimization system of claim 4, wherein the processor is also configured to access and execute the at least one instruction to: input a third predicted class label of the classification model for a third true class label, and, in the original classification loss function, set the updated hyperparameter used by the first true class label to be greater than the updated second hyperparameter used by the second true class label and an updated third hyperparameter used by the third true class label, to obtain the updated classification loss function, wherein the classification model is a multi-class classification model, and the updated classification loss function is defined as:
[Equation images: Figure 109121654-A0305-02-0018-10, Figure 109121654-A0305-02-0018-11, Figure 109121654-A0305-02-0018-17]

where L is the number of class labels of the multi-class classification model, N_l is the number of samples of the l-th class label, the hyperparameter shown in Figure 109121654-A0305-02-0018-12 is the updated hyperparameter used for each true class label in the updated classification loss function, and the quantity shown in Figure 109121654-A0305-02-0018-13 is the minimum taken over the updated hyperparameters used for the true class labels.
TW109121654A 2020-06-24 2020-06-24 Loss function optimization system, method and the computer-readable record medium TWI755774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109121654A TWI755774B (en) 2020-06-24 2020-06-24 Loss function optimization system, method and the computer-readable record medium


Publications (2)

Publication Number Publication Date
TW202201290A TW202201290A (en) 2022-01-01
TWI755774B true TWI755774B (en) 2022-02-21

Family

ID=80787977


Country Status (1)

Country Link
TW (1) TWI755774B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM576281U (en) * 2018-11-27 2019-04-01 洽吧智能股份有限公司 Automatic text labeling system
US20200025931A1 (en) * 2018-03-14 2020-01-23 Uber Technologies, Inc. Three-Dimensional Object Detection
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
US20200175384A1 (en) * 2018-11-30 2020-06-04 Samsung Electronics Co., Ltd. System and method for incremental learning
WO2020120238A1 (en) * 2018-12-12 2020-06-18 Koninklijke Philips N.V. System and method for providing stroke lesion segmentation using conditional generative adversarial networks
CN111325386A (en) * 2020-02-11 2020-06-23 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for predicting running state of vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
US20200025931A1 (en) * 2018-03-14 2020-01-23 Uber Technologies, Inc. Three-Dimensional Object Detection
US20200025935A1 (en) * 2018-03-14 2020-01-23 Uber Technologies, Inc. Three-Dimensional Object Detection
TWM576281U (en) * 2018-11-27 2019-04-01 洽吧智能股份有限公司 Automatic text labeling system
US20200175384A1 (en) * 2018-11-30 2020-06-04 Samsung Electronics Co., Ltd. System and method for incremental learning
WO2020120238A1 (en) * 2018-12-12 2020-06-18 Koninklijke Philips N.V. System and method for providing stroke lesion segmentation using conditional generative adversarial networks
CN111325386A (en) * 2020-02-11 2020-06-23 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for predicting running state of vehicle

