CN110634027A - First-order user refined loss prediction method based on transfer learning - Google Patents

First-order user refined loss prediction method based on transfer learning

Info

Publication number
CN110634027A
CN110634027A (application CN201910881387.9A)
Authority
CN
China
Prior art keywords
data
dim
smp
domain
similar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910881387.9A
Other languages
Chinese (zh)
Inventor
钱虹
徐佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuangluo (Shanghai) Data Technology Co., Ltd.
Original Assignee
Chuangluo (Shanghai) Data Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuangluo (Shanghai) Data Technology Co., Ltd.
Priority to CN201910881387.9A
Publication of CN110634027A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q30/0202 Market predictions or forecasting for commercial activities

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a first-order user refined loss prediction method based on transfer learning, which comprises the following steps: (1) data acquisition; (2) data organization; (3) transfer learning. The method uses a Lookalike algorithm to focus on the user group in the source domain that is a similar expansion of the target domain, takes the target domain together with its similar expanded user group as training samples, and trains a C5 decision tree under a model-based transfer learning scheme. Comparing the prediction accuracy of the resulting model with the accuracy obtained by modelling without any transfer learning shows that the prediction effect is improved to a certain extent.

Description

First-order user refined loss prediction method based on transfer learning
Technical Field
The invention relates to a user refined loss prediction method, in particular to a user refined loss prediction method based on transfer learning.
Background
Under increasingly competitive conditions, retaining an old customer is far more valuable than attracting a new one, so effectively predicting and retaining users who are about to churn is of great importance to the survival and development of an enterprise. While loss prediction for multi-order users is largely a solved problem, effectively predicting the loss of first-order users remains very important. Because a first-purchase user has too few behaviour records, traditional prediction methods suffer from low accuracy.
Disclosure of Invention
The invention aims to provide a first-order user refined loss prediction method based on transfer learning so as to improve the prediction effect.
The invention adopts the following technical scheme:
a first-order user refined loss prediction method based on transfer learning is characterized by comprising the following steps:
(1) data acquisition
Obtain order records and take users who placed orders within a preset time period as observation objects; new users who arrived within the most recent life cycle are filtered out of the observation objects, and all historical order behaviours of the remaining observation objects are used as initial training data.
Among the observation objects, a lost (churned) user is defined as one with no order for more than 90 days, labelled 1 (lost); otherwise the user is labelled 0 (retained).
(2) data organization
Taking the user's daily consumption as the minimum consumption granularity, compute data indices in the following format: for each user ID, one row per consumption day n (n = 1 and n > 1), with columns: consumption on day n; correlation indices; simple derived index 1; time-series derived index 2 (the time-series derived index is not defined for n = 1).
(3) Transfer learning
Define the set of all order records of users with more than one order as the source domain, and the data set of users with exactly one order as the target domain. The migration of source-domain knowledge to the target domain is completed step by step as follows:
3-1: The dimension set Dim = Dim_S ∩ Dim_T, where Dim_S is the computational dimension set of the source-domain data and Dim_T is the computational dimension set of the target-domain data;
3-2: If negative-transfer removal is required, go directly to step 3-3; otherwise the sample set is Smp = Data_S ∪ Data_T, where Data_S is the source-domain data and Data_T is the target-domain data;
3-3: Process the target sample set to remove negative transfer: Smp_Tr = Data_T ∪ Data_T_Similar,
where Data_T_Similar is the subset of the source-domain data that is similar to the target-domain data Data_T; then set Smp = Smp_Tr;
3-4: Split the sample set Smp into a training set Train_Smp (80%) and a test set Test_Smp (20%);
3-5: Map Dim into a low-dimensional subspace via PCA, i.e. Dim => DimP, where DimP is the set of factors obtained by mapping Dim into the low-dimensional subspace;
3-6: Model DimP => Dim', where Dim' is the set of factors important with respect to the target Y;
3-7: Model training: on the training set Train_Smp, with Dim' as input and Train_Y as output, train a C5 decision tree model to obtain Train_Y = F(Dim'); then evaluate the obtained model on Test_Smp;
3-8: Use the model trained in step 3-7 to predict Data_T: F(Dim') => DataT_Y, where DataT_Y is the output obtained by mapping Data_T through the function F(Dim').
Further, in the transfer-learning-based first-order user refined loss prediction method of the invention, step 3-3 specifically comprises the following steps:
data of target domain Data set (i.e. record ordered for the first time)TAs seed Data, expansion is carried out on the basis of seed Data, i.e. the record Data most similar to the seed Data is screened out from the source domainS_SimilarObtaining Smp _ Tr ═ DataT∪DataS_SimilarThe Smp _ Tr is the data set after the negative migration processing,
the migration sample consists of two parts:
Smp_Tr=DataT∪DataS_Similar
wherein: dataTIs a target domain data set;
DataS_Similarfor data sets in a source domain similar to a target data source
3-3-1: for each target domain DataTThe seed data of (2) are respectively calculated according to the following formulas (1) and (2), and the similarity between the seed data and each member in the source domain data is calculated for each target domain member;
pearson similarity
Figure BDA0002205974960000041
Sim=1-R (1)
Distance
Figure BDA0002205974960000042
The similarity distance is calculated by calculating the obtained PCA mapping factor in the data set of the combination of the source domain and the target domain by using the formula,
wherein:
the PCA factor in the source domain is: fs ═ Fs1,Fs2,......Fsnn is the number of factors of the PCA mapping factor, FS: a set of PCA factors in the source domain comprising n factors Fs1, Fs 2. FT ═ Ft1,Ft2,......Ftnn is the number of factors for the PCA mapping factor FT: a set of PCA factors in the target domain, comprising n factors Ft1, Ft 2.. Ftn,
3-3-2: r, D, respectively taking Sim 0.2and D1 as values, and obtaining a target domain data record ID set, wherein Sim is a similarity measure calculated by formula (1);
3-3-3: removing the duplication of the target data record ID set obtained in the Step2, and adding the record ID in the source domain;
3-3-4: the record ID obtained at Step3 is deduplicated again, and the record set corresponding to the record ID set obtained later is DataT∪DataT_SimilarI.e. to form a processed extended data set Smp _ Tr.
Advantageous effects of the invention
In the transfer-learning-based refined loss prediction method for first-purchase users, the Lookalike algorithm focuses on the user group in the source domain that is a similar expansion of the target domain; the target domain and its similar expanded user group are used together as training samples, and a C5 decision tree is trained under a model-based transfer learning scheme. Comparing the prediction accuracy of the resulting model with the accuracy obtained by modelling without any transfer learning shows that the prediction effect is improved to a certain extent.
Drawings
FIG. 1 is a schematic diagram of the partitioning of a data source.
Detailed Description
The technical scheme of the invention is described in detail by combining the specific examples as follows:
1. model training method based on model transfer learning
(1) Data acquisition
For ease of understanding, the method is described here using all historical air-ticket order records provided by an international air-ticket search platform G as an example.
Referring to FIG. 1, users who exhibited ordering behaviour through the platform during 2015.12.01-2018.3.11 are taken as observation objects, and historical order data during 2015.12.01-2018.10.11 are used. New users who arrived within the most recent life cycle are filtered out of the observation objects, and all historical order behaviours of the remaining observation objects are used as initial training data.
In this embodiment, the period from 2018.03.11 to 2018.10.11 is reserved so that each user passes through a complete life cycle N. In other embodiments, the complete life cycle should be determined separately according to the business life cycle of the product concerned; for air-ticket orders, the user life cycle N is taken as 180 days.
On this basis, a lost user is defined as one with no order for more than 90 days (1: lost); otherwise (0: retained).
The behaviour of new users within this period is not used as training samples.
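For illustration only, the following minimal pandas sketch shows how the observation objects and churn labels described above could be derived; it is not part of the patent disclosure, and the file name, column names (user_id, order_date) and the exact way the "new user" filter is applied are assumptions.

```python
import pandas as pd

# Assumed order schema: one row per order with columns user_id, order_date.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

OBS_START, OBS_END = pd.Timestamp("2015-12-01"), pd.Timestamp("2018-03-11")
SNAPSHOT = pd.Timestamp("2018-10-11")   # end of the reserved full life cycle
LIFECYCLE_N = 180                        # life cycle N for air-ticket orders (days)
CHURN_DAYS = 90                          # "no order for more than 90 days"

first_order = orders.groupby("user_id")["order_date"].min()

# Observation objects: users who ordered in the observation window, with
# brand-new users (first order inside the most recent life cycle N) filtered out.
observed = first_order[(first_order >= OBS_START) & (first_order <= OBS_END) &
                       (first_order <= SNAPSHOT - pd.Timedelta(days=LIFECYCLE_N))].index

# Churn label: 1 (lost) if no order for more than 90 days before the snapshot, else 0 (retained).
last_order = orders[orders["user_id"].isin(observed)].groupby("user_id")["order_date"].max()
labels = ((SNAPSHOT - last_order).dt.days > CHURN_DAYS).astype(int).rename("lost")
```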
(2) Data organization
Taking the user's daily consumption as the minimum consumption granularity, compute data indices in the following format:
Table 1: Data indices
For each user ID, one row per consumption day n (n = 1 and n > 1), with columns: consumption on day n; correlation indices; simple derived index 1; time-series derived index 2 (the time-series derived index is not defined for n = 1).
The specific indexes are summarized and detailed in table 2:
table 2: specific data index
(The specific indices of Table 2 are given as images in the original publication.)
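As a minimal illustration of the data-organization step, the sketch below aggregates orders to daily consumption per user and derives two example indices. The column names and the particular derived indices are assumptions and do not reproduce the patent's Table 2.

```python
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])   # assumed schema: user_id, order_date, amount

# Minimum consumption granularity: one row per user per consumption day n.
daily = (orders.assign(day=orders["order_date"].dt.normalize())
               .groupby(["user_id", "day"], as_index=False)["amount"].sum()
               .rename(columns={"amount": "consumption"}))
daily["n"] = daily.groupby("user_id").cumcount() + 1              # n-th consumption day

# Example simple derived index: cumulative consumption up to day n.
daily["cum_consumption"] = daily.groupby("user_id")["consumption"].cumsum()

# Example time-series derived index (undefined for n = 1): days since the previous consumption day.
daily["days_since_prev"] = daily.groupby("user_id")["day"].diff().dt.days
```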
(3) Transfer learning
Define the set of all order records of users with more than one order as the source domain, and the data set of users with exactly one order as the target domain. The migration of source-domain knowledge to the target domain is completed step by step as follows:
Step 1: The dimension set Dim = Dim_S ∩ Dim_T, where Dim_S is the computational dimension set of the source-domain data and Dim_T is the computational dimension set of the target-domain data;
Step 2: If negative-transfer removal is required, go directly to Step 3; otherwise the sample set is Smp = Data_S ∪ Data_T, where Data_S is the source-domain data and Data_T is the target-domain data;
Step 3: Process the target sample set to remove negative transfer: Smp_Tr = Data_T ∪ Data_T_Similar,
where Data_T_Similar is the subset of the source-domain data similar to the target-domain data Data_T.
The detailed computation is given in part 2 below, which describes the Lookalike-based mitigation of negative transfer and the construction of Smp_Tr; then set Smp = Smp_Tr;
Step 4: Split the sample set Smp into a training set Train_Smp (80%) and a test set Test_Smp (20%);
Step 5: Map Dim into a low-dimensional subspace via PCA, i.e. Dim => DimP, where DimP is the set of factors obtained by mapping Dim into the low-dimensional subspace;
Step 6: Model DimP => Dim', where Dim' is the set of factors important with respect to the target Y;
Step 7: Model training: on the training set Train_Smp, with Dim' as input and Train_Y as output, train a C5 decision tree model to obtain Train_Y = F(Dim'); then evaluate the obtained model on Test_Smp;
Step 8: Use the model trained in Step 7 to predict Data_T: F(Dim') => DataT_Y, where DataT_Y is the output obtained by mapping Data_T through the function F(Dim').
2. Lookalike-based negative-transfer mitigation method
Take the target-domain data set Data_T (i.e. the first-order records) as seed data and expand on this basis, i.e. screen out from the source domain the records Data_S_Similar most similar to the seed data, obtaining Smp_Tr = Data_T ∪ Data_S_Similar. Smp_Tr is the data set after negative-transfer removal.
The migration sample consists of two parts:
Smp_Tr = Data_T ∪ Data_S_Similar
where Data_T is the target-domain data set and Data_S_Similar is the subset of the source domain similar to the target data set.
Step 1: For each seed record of the target domain Data_T, compute its similarity to every member of the source-domain data according to the following formulas (1) and (2):
Pearson similarity, where R is the Pearson correlation coefficient between the PCA factor vectors FS and FT,
R = Σ_i (Fs_i − mean(FS)) (Ft_i − mean(FT)) / sqrt( Σ_i (Fs_i − mean(FS))² · Σ_i (Ft_i − mean(FT))² ),
Sim = 1 − R    (1)
Distance: D is the distance between the PCA factor vectors FS and FT    (2)
(the distance formula is given only as an image in the original publication).
The similarity and distance are computed on the PCA mapping factors obtained from the combined source-domain and target-domain data set, where:
the PCA factors in the source domain are FS = {Fs1, Fs2, ..., Fsn} and those in the target domain are FT = {Ft1, Ft2, ..., Ftn}, n being the number of PCA mapping factors.
Step 2: With thresholds Sim = 0.2 and D = 1 for the two measures, obtain a set of target-domain data record IDs;
Step 3: Deduplicate the target-domain record ID set obtained in Step 2 and add the matched record IDs from the source domain;
Step 4: Deduplicate the record ID set obtained in Step 3 again; the record set corresponding to the resulting ID set is Data_T ∪ Data_T_Similar, i.e. the processed expanded data set Smp_Tr.
3. Prediction for new first-purchase users: using the model constructed in parts 1 and 2, a newly arrived first-purchase user is scored to predict whether the user will churn, yielding the loss prediction conclusion.
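A short usage sketch of part 3, continuing from the pipeline sketch in part 1 (reusing its pca, selector and model objects) under the same assumptions; X_new is a synthetic stand-in for the feature rows of new first-purchase users over the shared dimensions Dim.

```python
import numpy as np

# Stand-in feature rows for newly arrived first-purchase users.
X_new = np.random.default_rng(1).normal(size=(5, 40))

# Reuse the fitted pipeline from the part-1 sketch: PCA -> factor selection -> decision tree.
churn_pred = model.predict(selector.transform(pca.transform(X_new)))
print(churn_pred)   # 1: predicted to churn (lost), 0: predicted to be retained
```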
Table 3: accuracy of each prediction method
(The accuracy figures are given as an image in the original publication.)
The present invention is not limited to the above description or what is shown in the drawings, and modifications and changes may be made within the scope of the present invention without departing from the gist of the present invention.

Claims (2)

1. A first-order user refined loss prediction method based on transfer learning is characterized by comprising the following steps:
(1) data acquisition
Obtain order records and take users who placed orders within a preset time period as observation objects; new users who arrived within the most recent life cycle are filtered out of the observation objects, and all historical order behaviours of the remaining observation objects are used as initial training data.
Among the observation objects, a lost (churned) user is defined as one with no order for more than 90 days, labelled 1 (lost); otherwise the user is labelled 0 (retained).
(2) data organization
Taking the user's daily consumption as the minimum consumption granularity, compute data indices in the following format: for each user ID, one row per consumption day n (n = 1 and n > 1), with columns: consumption on day n; correlation indices; simple derived index 1; time-series derived index 2.
(3) Transfer learning
Define the set of all order records of users with more than one order as the source domain, and the data set of users with exactly one order as the target domain. The migration of source-domain knowledge to the target domain is completed step by step as follows:
3-1: The dimension set Dim = Dim_S ∩ Dim_T, where Dim_S is the computational dimension set of the source-domain data and Dim_T is the computational dimension set of the target-domain data;
3-2: If negative-transfer removal is required, go directly to step 3-3; otherwise the sample set is Smp = Data_S ∪ Data_T, where Data_S is the source-domain data and Data_T is the target-domain data;
3-3: Process the target sample set to remove negative transfer: Smp_Tr = Data_T ∪ Data_T_Similar,
where Data_T_Similar is the subset of the source-domain data similar to the target-domain data Data_T; then set Smp = Smp_Tr;
3-4: Split the sample set Smp into a training set Train_Smp (80%) and a test set Test_Smp (20%);
3-5: Map Dim into a low-dimensional subspace via PCA, i.e. Dim => DimP, where DimP is the set of factors obtained by mapping Dim into the low-dimensional subspace;
3-6: Model DimP => Dim', where Dim' is the set of factors important with respect to the target Y;
3-7: Model training: on the training set Train_Smp, with Dim' as input and Train_Y as output, train a C5 decision tree model to obtain Train_Y = F(Dim'); then evaluate the obtained model on Test_Smp;
3-8: Use the model trained in step 3-7 to predict Data_T: F(Dim') => DataT_Y, where DataT_Y is the output obtained by mapping Data_T through the function F(Dim').
2. The transfer-learning-based first-order user refined loss prediction method of claim 1, wherein step 3-3 specifically comprises the following steps:
data of target domain Data set (i.e. record ordered for the first time)TAs seed Data, expansion is carried out on the basis of seed Data, i.e. the record Data most similar to the seed Data is screened out from the source domainS_SimilarObtaining Smp _ Tr ═ DataT∪DataS_SimilarThe Smp _ Tr is the data set after the negative migration processing,
the migration sample consists of two parts:
Smp_Tr=DataT∪DataS_Similar
wherein: dataTIs a target domain data set;
DataS_Similarfor data sets in a source domain similar to a target data source
3-3-1: for each target domain DataTThe seed data of (2) are respectively calculated according to the following formulas (1) and (2), and the similarity between the seed data and each member in the source domain data is calculated for each target domain member;
pearson similarity
Sim=1–R (1)
Distance
Figure FDA0002205974950000032
The similarity distance is calculated by calculating the obtained PCA mapping factor in the data set of the combination of the source domain and the target domain by using the formula,
wherein:
the PCA factor in the source domain is: FS ═ Fs1,Fs2,……,Fsnn is the number of factors of the PCA mapping factor, FS: the set of PCA factors in the source domain, comprising n factors Fs1, Fs2, … … Fsn,
the PCA factor in the target domain is: FT ═ Ft1,Ft2,......Ftnn is the number of factors for the PCA mapping factor FT: the set of PCA factors in the target domain, comprising n factors Ft1, Ft2, … … Ftn,
3-3-2: r, D, respectively taking Sim (0.2 and D (1) as values, and obtaining a target domain data record ID set, wherein Sim is a similarity measure calculated by formula (1);
3-3-3: removing the duplication of the target data record ID set obtained in the Step2, and adding the record ID in the source domain;
3-3-4: the record ID obtained at Step3 is deduplicated again, and the record set corresponding to the record ID set obtained later is DataT∪DataT_SimilarI.e. to form a processed extended data set Smp _ Tr.
CN201910881387.9A, filed 2019-09-18 (priority 2019-09-18): First-order user refined loss prediction method based on transfer learning; status: Withdrawn; published as CN110634027A (en)

Priority Applications (1)

Application number: CN201910881387.9A (publication CN110634027A (en)); priority date: 2019-09-18; filing date: 2019-09-18; title: First-order user refined loss prediction method based on transfer learning

Applications Claiming Priority (1)

Application number: CN201910881387.9A (publication CN110634027A (en)); priority date: 2019-09-18; filing date: 2019-09-18; title: First-order user refined loss prediction method based on transfer learning

Publications (1)

Publication number: CN110634027A; publication date: 2019-12-31

Family

ID=68971201

Family Applications (1)

Application number: CN201910881387.9A; title: First-order user refined loss prediction method based on transfer learning; priority date: 2019-09-18; filing date: 2019-09-18; status: Withdrawn; publication: CN110634027A (en)

Country Status (1)

Country Link
CN (1) CN110634027A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325006A (en) * 2020-03-17 2020-06-23 北京百度网讯科技有限公司 Information interaction method and device, electronic equipment and storage medium
CN111325006B (en) * 2020-03-17 2023-05-05 北京百度网讯科技有限公司 Information interaction method and device, electronic equipment and storage medium
CN112150201A (en) * 2020-09-23 2020-12-29 创络(上海)数据科技有限公司 Application of KNN-based time sequence migration learning in sales prediction
CN113421122A (en) * 2021-06-25 2021-09-21 创络(上海)数据科技有限公司 First-purchase user refined loss prediction method under improved transfer learning framework
CN113591943A (en) * 2021-07-13 2021-11-02 北京淇瑀信息科技有限公司 Method and device for quickly authenticating user of newly added channel and electronic equipment
CN114022202A (en) * 2021-11-03 2022-02-08 中南大学 User loss prediction method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN110634027A (en) First-order user refined loss prediction method based on transfer learning
CN108596362B (en) Power load curve form clustering method based on adaptive piecewise aggregation approximation
Li et al. What a nasty day: Exploring mood-weather relationship from twitter
CN109002492B (en) Performance point prediction method based on LightGBM
Joenssen et al. Hot deck methods for imputing missing data: the effects of limiting donor usage
KR20150036117A (en) Query expansion
CN103123653A (en) Search engine retrieving ordering method based on Bayesian classification learning
CN105164675A (en) Incrementally updating statistics
CN109359135B (en) Time sequence similarity searching method based on segment weight
CN104281635A (en) Method for predicting basic attributes of mobile user based on privacy feedback
CN105893380A (en) Improved text classification characteristic selection method
WO2017071474A1 (en) Method and device for processing language data items and method and device for analyzing language data items
Núñez et al. Resolving regional frequency analysis of precipitation at large and complex scales using a bottom-up approach: The Latin America and the Caribbean Drought Atlas
CN103761286A (en) Method for retrieving service resources on basis of user interest
CN111625578B (en) Feature extraction method suitable for time series data in cultural science and technology fusion field
CN106874286B (en) Method and device for screening user characteristics
Shang et al. Forecasting Australian subnational age-specific mortality rates
CN106156875B (en) Method and device for predicting predicted object
CN111507528A (en) Stock long-term trend prediction method based on CNN-L STM
CN113222255A (en) Method and device for contract performance quantification and short-term default prediction
CN114880490A (en) Knowledge graph completion method based on graph attention network
Mulaudzi et al. Improving the performance of multivariate forecasting models through feature engineering: A South African unemployment rate forecasting case study
CN108121772B (en) Method for measuring friend influence of social network user based on tool variable method
Sarlo et al. Lumpy and intermittent retail demand forecasts with score-driven models
CN101710392B (en) Important information acquiring method based on variable boundary support vector machine

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 20191231)