WO2017129033A1 - Question recommendation method and device - Google Patents
Question recommendation method and device
- Publication number
- WO2017129033A1 (PCT/CN2017/071704)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- probability
- numerical
- acquired
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0499—Feedforward networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Definitions
- The present application relates to the field of communication technologies, and in particular to a question recommendation method; the present application also relates to a question recommendation device.
- The self-service customer service system needs greater processing capacity to meet customer service demands.
- A self-service system automatically handles user questions.
- The growth in the amount of data to be processed in the self-service system means that existing methods can no longer process the full volume of data.
- The computational efficiency of existing algorithms drops as the number of questions grows. Moreover, most features are sparse, while the prior art is suited to dense features. Thus, as the number of question features in the system increases, the accuracy of predicting user questions decreases.
- The prior art relies on a single model with limited effectiveness; with the continuing explosion of information, current machine learning models can no longer meet the demand.
- The technical problem to be solved is how to improve the accuracy of recommending questions to users by computing over historical questions, so that user questions are resolved at the self-service customer service node, reducing the number of users escalated to manual customer service and lowering its cost.
- The present invention provides a question recommendation method for improving the accuracy of recommending questions to users.
- The method includes the following steps:
- processing the question features so that the processed question features fall within a preset numerical interval;
- the second probability of each question is obtained from the processed question features and the first probability; the first probability is obtained from the question features.
- The question features comprise numerical features and textual features; the numerical features are continuous and the textual features are discrete.
- Obtaining the questions specifically includes:
- setting the value of any question not acquired within the feature acquisition period to null.
- Acquiring the question features corresponding to the questions includes:
- if a question feature is numerical, taking the mean of the acquired feature values corresponding to the question as the question feature;
- if the question feature is a text-type question feature,
- taking the most frequent value of the feature corresponding to the question as the question feature;
- otherwise, taking the acquired question feature as the question feature.
- Processing the question features specifically includes:
- if a question feature is a numerical question feature, normalizing it;
- if a question feature is a text-type question feature,
- vectorizing it,
- the question feature after vectorization being a numerical question feature.
- The second probability is obtained by performing a deep neural network (DNN) calculation on the processed question features and the first probability.
- Correspondingly, the present application also proposes a question recommendation device, the device comprising:
- an obtaining module, which acquires questions and the corresponding question features during a sample collection period;
- a processing module, which processes the question features so that the processed question features fall within a specified numerical interval;
- a determining module, which determines the recommended questions based on each question, its second probability, and a specified recommendation threshold;
- the second probability of each question is obtained from the processed question features and the first probability; the first probability is obtained from the question features.
- The question features comprise numerical features and textual features; the numerical features are continuous and the textual features are discrete.
- The obtaining module is specifically configured to:
- set the value of any question not acquired to null.
- The obtaining module is further specifically configured to:
- if a question feature is numerical, take the mean of the acquired feature values corresponding to the question as the question feature;
- if the question feature is a text-type question feature,
- take the most frequent value of the feature corresponding to the question as the question feature;
- otherwise, take the acquired question feature as the question feature.
- The processing module is specifically configured to:
- normalize a question feature if it is a numerical question feature;
- if the question feature is a text-type question feature,
- vectorize it, the question feature after vectorization being a numerical question feature.
- The second probability is obtained by performing a deep neural network (DNN) calculation on the processed question features and the first probability.
- FIG. 1 is a schematic flowchart of the question recommendation method according to the present application;
- FIG. 2 is a schematic diagram of the DNN model proposed by a specific embodiment of the present application;
- FIG. 3 is a schematic structural diagram of the question recommendation device according to the present application.
- The present invention proposes a question recommendation method, applied to a question recommendation system, which combines a machine learning model with a deep neural network (DNN) model for model training.
- The system can recommend the required questions to users based on historical records and handles both sparse and dense question features well, which improves the accuracy of recommending questions to users.
- FIG. 1, a schematic flowchart of the question recommendation method proposed by the present application, includes the following steps:
- The question recommendation system usually includes a collection layer, a processing layer, a storage layer, and an output layer.
- The collection layer is responsible for collecting the questions and question features sent by other devices.
- The processing layer uses the collected questions and question features for model training.
- The storage layer is responsible for data storage and stores the users' history records.
- The output layer outputs the questions and question features.
- The question recommendation system in the present application can be implemented on a server, preferably a distributed server, and the application can use a single server or a cluster of multiple servers.
- The question features include numerical features and textual features. A numerical feature is continuous; for example, a numerical feature may be the number of times an application has been used, where the value 9 indicates nine uses. A textual feature is discrete.
- For example, a textual feature is the invoice status, with values corresponding to uninvoiced and invoiced.
- A feature acquisition period is set, and the questions are acquired during the feature acquisition period. If a question is not acquired within the feature acquisition period, its value is set to null; if every question is acquired within the period, the acquired questions are used as the questions.
- Question features are acquired within the feature acquisition period. If a question feature is not acquired within the period and the feature is numerical, the mean of the acquired feature values corresponding to that question is used as the question feature; if the missing feature is a text-type question feature, the most frequent value of the feature corresponding to that question is used as the question feature; if no question feature is missing within the feature acquisition period, the acquired question features are used directly (see the sketch below).
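A minimal sketch of the imputation rule just described, assuming the collected questions sit in a pandas DataFrame with separate lists of numerical and text-type feature columns; the function name and column split are illustrative, not taken from the patent:

```python
import pandas as pd

def impute_question_features(df: pd.DataFrame, numeric_cols, text_cols) -> pd.DataFrame:
    """Fill question-feature values missing within one acquisition period.

    Numerical features: use the mean of the values that were acquired.
    Text-type features: use the most frequent (mode) acquired value.
    Features acquired in full are left unchanged.
    """
    df = df.copy()
    for col in numeric_cols:
        df[col] = df[col].fillna(df[col].mean())
    for col in text_cols:
        mode = df[col].mode()
        if not mode.empty:
            df[col] = df[col].fillna(mode.iloc[0])
    return df
```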
- The recommendation system then filters the question features to remove some of them, for example features that take the same value for every user, features that tend to fall outside the feature acquisition period, and features unrelated to the business operation.
- The selected features are then used to establish the subsequent classification model.
- After obtaining the questions and the corresponding question features, the question recommendation system processes the question features. If a question feature is a numerical question feature, it is normalized so that the processed feature falls within a specified numerical interval; if a question feature is a text-type question feature, it is vectorized so that the processed feature becomes a numerical feature within the specified numerical interval.
- The percentile binning algorithm can be used for normalization, so that all question features fall within the specified numerical interval after processing.
- The original values are grouped into 100 bins, and the bins are then encoded, for example as 0.01, 0.02, ..., 1.00.
- The processed numerical question features thus lie in the range 0 to 1; a sketch of this binning step follows.
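A minimal sketch of the percentile-binning normalization, assuming plain NumPy arrays; the 100-bin codes 0.01, 0.02, ..., 1.00 follow the example above, and the function name is illustrative:

```python
import numpy as np

def percentile_bin(values: np.ndarray, n_bins: int = 100) -> np.ndarray:
    """Map raw numerical feature values into n_bins percentile bins encoded as
    1/n_bins, 2/n_bins, ..., 1.0 (0.01 ... 1.00 for 100 bins)."""
    # Bin edges at the 1st, 2nd, ..., 99th percentiles of the observed values.
    edges = np.percentile(values, np.linspace(0, 100, n_bins + 1)[1:-1])
    # searchsorted assigns each value a bin index in [0, n_bins - 1].
    bin_idx = np.searchsorted(edges, values, side="right")
    return (bin_idx + 1) / n_bins

encoded = percentile_bin(np.array([3.0, 10.0, 25.0, 1000.0]))
# Every encoded value lies in (0, 1], i.e. within the specified interval.
```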
- One-hot encoding can be used to process text-type features; the frequency of each feature value can be computed so that the one-hot encoding is ordered by frequency.
- For example, the text-type feature is the invoice status, corresponding to uninvoiced and invoiced; after vectorization, the numerical values 0 and 1 are obtained, which lie in the numerical range 0 to 1. A sketch of this vectorization step follows.
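A sketch of the vectorization step for text-type features, assuming a frequency-ordered one-hot scheme as suggested above; the helper name and the example values are illustrative:

```python
from collections import Counter
import numpy as np

def one_hot_by_frequency(values):
    """One-hot encode a text-type feature, ordering the columns by value
    frequency (most frequent first), so every encoded component is 0 or 1."""
    order = [v for v, _ in Counter(values).most_common()]
    index = {v: i for i, v in enumerate(order)}
    encoded = np.zeros((len(values), len(order)))
    for row, v in enumerate(values):
        encoded[row, index[v]] = 1.0
    return encoded, order

vectors, categories = one_hot_by_frequency(["invoiced", "uninvoiced", "invoiced"])
# categories == ["invoiced", "uninvoiced"]; every entry of vectors is 0 or 1.
```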
- After the question features are processed, they fall within the specified numerical interval and can participate in the subsequent calculations. It should be noted that the present application only requires the question features to lie within the specified numerical interval; the percentile binning algorithm and vectorization method described above are merely examples of a preferred embodiment, and the scope of protection of the present application is not limited to them. On this basis, other methods may be chosen for the calculation, so that the present application applies to more fields, and such improvements all fall within the protection scope of the present invention.
- After obtaining the questions and the corresponding question features, the present application also performs a simple classification model calculation on the question features; a decision tree algorithm can be used to obtain the first probability.
- As the original and derived variables of the data set grow in number, the information value (IV) becomes very important in practical data applications.
- The information value IV indicates how much "information" each variable carries about the target variable, making feature selection simple and fast.
- Feature importance is always quantified before selection, and how it is quantified is the main difference between the various methods.
- For information gain, importance is measured by how much information a feature brings to the classification system: the more information it brings, the more important the feature. For a given feature, the information gain (IG) is the difference between the amount of information in the system with the feature and without it; that difference is the amount of information the feature brings to the system.
- Both the information value IV and the information gain IG can represent the weight corresponding to a question feature;
- the weight is the information value IV and/or the information gain IG. A sketch of computing the information gain follows.
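The information gain can be computed as the entropy of the labels minus the conditional entropy given the feature; the sketch below is a generic implementation for discrete features and labels, not code taken from the patent:

```python
from collections import Counter
import numpy as np

def entropy(labels) -> float:
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature_values, labels) -> float:
    """IG = H(labels) - H(labels | feature); used here as the feature weight."""
    total = entropy(labels)
    n = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == value]
        conditional += len(subset) / n * entropy(subset)
    return total - conditional
```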
- The important features are selected according to these weights, and the classification model is then built from the important features. The questions are analyzed with this classification model to obtain the first probability: the probability obtained by passing each question's features through the decision tree is taken as the first probability, as in the sketch below.
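A minimal sketch of obtaining the first probability with a decision tree, using scikit-learn; the tree depth and the assumption that the important features have already been selected into the feature matrix are illustrative choices, not requirements of the patent:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def first_probability(X_train: np.ndarray, y_train: np.ndarray,
                      X_all: np.ndarray) -> np.ndarray:
    """Fit a decision tree on the selected (important) question features and
    return, for each question, the predicted probability of every class.
    These per-question probabilities serve as the first probability."""
    tree = DecisionTreeClassifier(max_depth=8, random_state=0)
    tree.fit(X_train, y_train)
    return tree.predict_proba(X_all)  # shape: (n_questions, n_classes)
```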
- The deep neural network (DNN) in the question recommendation system includes input nodes and compute nodes.
- The DNN calculation includes the following steps: (1) the input nodes acquire the processed question features and the first probability; (2) the compute nodes pass the processed question features and the first probability through fully connected layers, the ReLU activation function, and the multi-class softmax loss function to obtain the second probability.
- The processed question features and the first probability are received by the input layer.
- Before DNN training, the decision tree can be used to pre-classify the data, and the first probability can be used to control the weights of the network nodes in the DNN.
- The questions and their second probabilities are produced by the middle layer, that is, the calculation layer: the calculation layer passes the processed question features and the first probability through the fully connected layers, the ReLU activation function, and the multi-class softmax loss function, and obtains the second probability corresponding to each question feature.
- With ReLU, the output of some neurons in the network is 0, which creates sparsity in the network, reduces the interdependence of parameters, and alleviates over-fitting.
- Reducing the amount of computation at the compute nodes helps improve the efficiency with which the system recommends questions.
- DNN training can use a GPU to accelerate the matrix calculations and further increase the computation speed.
- The output layer outputs each question and its corresponding second probability. A sketch of this DNN stage follows.
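A sketch of this DNN stage, written with PyTorch as an assumed framework: the processed question features are concatenated with the first probability, passed through fully connected layers with ReLU, and trained with a softmax-based cross-entropy loss; the layer sizes are illustrative only:

```python
import torch
import torch.nn as nn

class QuestionDNN(nn.Module):
    """Fully connected network mapping processed features plus the first
    probability to per-question logits; softmax gives the second probability."""

    def __init__(self, n_features: int, n_questions: int, hidden: int = 128):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features + n_questions, hidden),
            nn.ReLU(),   # ReLU zeroes part of the activations, giving sparsity
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_questions),
        )

    def forward(self, features: torch.Tensor, first_prob: torch.Tensor) -> torch.Tensor:
        # Concatenate the processed question features with the first probability.
        return self.layers(torch.cat([features, first_prob], dim=1))

# Training sketch: the multi-class softmax loss corresponds to nn.CrossEntropyLoss
# applied to the raw logits, and the model can be moved to a GPU to accelerate
# the matrix calculations:
#   model = QuestionDNN(n_features=50, n_questions=200).to("cuda")
#   loss = nn.CrossEntropyLoss()(model(features, first_prob), target_ids)
#   second_probability = torch.softmax(model(features, first_prob), dim=1)
```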
- In the present application, the second probability is obtained from the first probability and the processed numerical question features.
- The calculation method proposed in the present application is a DNN calculation, but the scope of protection of the present application is not limited to it.
- Beyond the examples presented in the preferred embodiments, other methods may be selected to perform the calculation, so that the present application applies to more fields, and such improvements fall within the scope of the present invention.
- The question recommendation system determines the recommended questions based on each question, its second probability, and the specified recommendation threshold. The question features whose second probabilities meet the threshold are selected, and the questions corresponding to those features are taken as the recommended questions. For example, if the features of six questions meet the threshold, the system recommends those six questions (see the sketch below).
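A sketch of the final selection step: keep every question whose second probability meets the specified recommendation threshold; the threshold value and data layout are assumptions for illustration:

```python
def recommend_questions(second_prob: dict, threshold: float = 0.8) -> list:
    """Return the questions whose second probability meets the recommendation
    threshold, most probable first."""
    selected = [(q, p) for q, p in second_prob.items() if p >= threshold]
    return [q for q, _ in sorted(selected, key=lambda item: item[1], reverse=True)]

# Example: if six questions clear the threshold, those six are recommended.
recommended = recommend_questions(
    {"How do I get an invoice?": 0.93, "How do I reset my password?": 0.41},
    threshold=0.8,
)
```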
- The invention computes over the questions and question features in each user's history record to determine the questions to recommend in advance, so that the corresponding result can be retrieved directly when the user accesses the question recommendation system. Through the question recommendation system in this application, the user can directly obtain questions that are highly relevant.
- The present application also provides a question recommendation device.
- The device includes:
- an obtaining module 310, which acquires questions and the corresponding question features during the sample collection period;
- a processing module 320, which processes the question features so that the processed question features fall within a specified numerical interval;
- a determining module 330, which determines the recommended questions based on each question, its second probability, and a specified recommendation threshold;
- the second probability of each question is obtained from the processed question features and the first probability; the first probability is obtained from the question features.
- The question features comprise numerical features and textual features; the numerical features are continuous and the textual features are discrete.
- The obtaining module is specifically configured to:
- set the value of any question not acquired to null.
- The obtaining module is further specifically configured to:
- if the question feature is a text-type question feature,
- take the most frequent value of the feature corresponding to the question as the question feature;
- otherwise, take the acquired question feature as the question feature.
- The processing module is specifically configured to:
- normalize a question feature if it is a numerical question feature;
- if the question feature is a text-type question feature,
- vectorize it, the question feature after vectorization being a numerical question feature.
- The second probability is obtained by performing a deep neural network (DNN) calculation on the processed question features and the first probability.
- The present application can be implemented by hardware, or by software plus a necessary general-purpose hardware platform.
- The technical solution of the present application may be embodied as a software product stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and including a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various implementation scenarios of the present application.
- The modules in the apparatus of an implementation scenario may be distributed within the apparatus as described for that scenario, or may be located, with corresponding changes, in one or more devices different from that implementation scenario.
- The modules of the above implementation scenarios may be combined into one module, or may be further split into multiple sub-modules.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP17743648.2A EP3410310A4 (en) | 2016-01-29 | 2017-01-19 | Question recommendation method and device |
| JP2018538883A JP7007279B2 (ja) | 2016-01-29 | 2017-01-19 | 質問を推薦する方法及び装置 |
| US16/046,800 US20180330226A1 (en) | 2016-01-29 | 2018-07-26 | Question recommendation method and device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610065638.2A CN107025228B (zh) | 2016-01-29 | 2016-01-29 | 一种问题推荐方法及设备 |
| CN201610065638.2 | 2016-01-29 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/046,800 Continuation US20180330226A1 (en) | 2016-01-29 | 2018-07-26 | Question recommendation method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017129033A1 true WO2017129033A1 (zh) | 2017-08-03 |
Family
ID=59397449
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/071704 Ceased WO2017129033A1 (zh) | 2016-01-29 | 2017-01-19 | 一种问题推荐方法及设备 |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US20180330226A1 (en) |
| EP (1) | EP3410310A4 (en) |
| JP (1) | JP7007279B2 (ja) |
| CN (1) | CN107025228B (zh) |
| TW (1) | TWI772287B (zh) |
| WO (1) | WO2017129033A1 (zh) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108363745B (zh) | 2018-01-26 | 2020-06-30 | 阿里巴巴集团控股有限公司 | 机器人客服转人工客服的方法和装置 |
| CN108764273B (zh) * | 2018-04-09 | 2023-12-05 | 中国平安人寿保险股份有限公司 | 一种数据处理的方法、装置、终端设备及存储介质 |
| US11586417B2 (en) | 2018-09-28 | 2023-02-21 | Qualcomm Incorporated | Exploiting activation sparsity in deep neural networks |
| CN111353093B (zh) * | 2018-12-24 | 2023-05-23 | 北京嘀嘀无限科技发展有限公司 | 问题推荐方法、装置、服务器及可读存储介质 |
| CN109711982A (zh) * | 2019-01-04 | 2019-05-03 | 深圳壹账通智能科技有限公司 | 面核提问方法、装置、计算机设备和可读存储介质 |
| CN110263133B (zh) * | 2019-05-07 | 2023-11-24 | 平安科技(深圳)有限公司 | 基于知识图谱的问答方法、电子装置、设备及存储介质 |
| CN112819019B (zh) * | 2019-11-15 | 2023-06-20 | 财团法人资讯工业策进会 | 分类模型生成装置及其分类模型生成方法 |
| CN112528010B (zh) * | 2020-12-15 | 2022-09-02 | 建信金融科技有限责任公司 | 知识推荐方法、装置、计算机设备及可读存储介质 |
| CN116955623A (zh) * | 2023-07-31 | 2023-10-27 | 苏州云上看科技有限公司 | 相关问题推荐方法、设备和存储介质 |
| CN118827415A (zh) * | 2023-12-06 | 2024-10-22 | 中移物联网有限公司 | 一种物联网模组的射频性能预测方法、装置、介质及设备 |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09128401A (ja) * | 1995-10-27 | 1997-05-16 | Sharp Corp | 動画像検索装置及びビデオ・オン・デマンド装置 |
| US7409335B1 (en) * | 2001-06-29 | 2008-08-05 | Microsoft Corporation | Inferring informational goals and preferred level of detail of answers based on application being employed by the user |
| US6922680B2 (en) * | 2002-03-19 | 2005-07-26 | Koninklijke Philips Electronics N.V. | Method and apparatus for recommending an item of interest using a radial basis function to fuse a plurality of recommendation scores |
| US20050089878A1 (en) * | 2003-02-14 | 2005-04-28 | Debe Derek A. | Method for determining functional sites in a protein |
| US20100235343A1 (en) * | 2009-03-13 | 2010-09-16 | Microsoft Corporation | Predicting Interestingness of Questions in Community Question Answering |
| US9916538B2 (en) * | 2012-09-15 | 2018-03-13 | Z Advanced Computing, Inc. | Method and system for feature detection |
| US9249287B2 (en) * | 2012-02-24 | 2016-02-02 | Nec Corporation | Document evaluation apparatus, document evaluation method, and computer-readable recording medium using missing patterns |
| US8938438B2 (en) * | 2012-10-11 | 2015-01-20 | Go Daddy Operating Company, LLC | Optimizing search engine ranking by recommending content including frequently searched questions |
| CN104462156B (zh) * | 2013-09-25 | 2018-12-28 | 阿里巴巴集团控股有限公司 | 一种基于用户行为的特征提取、个性化推荐方法和系统 |
| US9911088B2 (en) * | 2014-05-01 | 2018-03-06 | Microsoft Technology Licensing, Llc | Optimizing task recommendations in context-aware mobile crowdsourcing |
| US10943181B2 (en) * | 2015-06-26 | 2021-03-09 | Microsoft Technology Licensing, Llc | Just in time classifier training |
| CN105279288B (zh) * | 2015-12-04 | 2018-08-24 | 深圳大学 | 一种基于深度神经网络的在线内容推荐方法 |
| US10861106B1 (en) * | 2016-01-14 | 2020-12-08 | Intuit Inc. | Computer generated user interfaces, computerized systems and methods and articles of manufacture for personalizing standardized deduction or itemized deduction flow determinations |
- 2016
  - 2016-01-29 CN CN201610065638.2A patent/CN107025228B/zh active Active
- 2017
  - 2017-01-19 EP EP17743648.2A patent/EP3410310A4/en not_active Withdrawn
  - 2017-01-19 WO PCT/CN2017/071704 patent/WO2017129033A1/zh not_active Ceased
  - 2017-01-19 JP JP2018538883A patent/JP7007279B2/ja active Active
  - 2017-01-24 TW TW106102678A patent/TWI772287B/zh not_active IP Right Cessation
- 2018
  - 2018-07-26 US US16/046,800 patent/US20180330226A1/en active Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101986298A (zh) * | 2010-10-28 | 2011-03-16 | 浙江大学 | 用于在线论坛的信息实时推荐方法 |
| CN104572734A (zh) * | 2013-10-23 | 2015-04-29 | 腾讯科技(深圳)有限公司 | 问题推荐方法、装置及系统 |
| CN105095477A (zh) * | 2015-08-12 | 2015-11-25 | 华南理工大学 | 一种基于多指标评分的推荐算法 |
| CN105243389A (zh) * | 2015-09-28 | 2016-01-13 | 北京橙鑫数据科技有限公司 | 公司名称的行业分类标签的确定方法和装置 |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3410310A4 * |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3410310A4 (en) | 2019-01-02 |
| TW201800987A (zh) | 2018-01-01 |
| CN107025228B (zh) | 2021-01-26 |
| TWI772287B (zh) | 2022-08-01 |
| EP3410310A1 (en) | 2018-12-05 |
| JP2019511764A (ja) | 2019-04-25 |
| CN107025228A (zh) | 2017-08-08 |
| US20180330226A1 (en) | 2018-11-15 |
| JP7007279B2 (ja) | 2022-01-24 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17743648; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2018538883; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2017743648; Country of ref document: EP |
| | ENP | Entry into the national phase | Ref document number: 2017743648; Country of ref document: EP; Effective date: 20180829 |