WO2020150955A1 - Data classification method and apparatus, and device and storage medium - Google Patents
- Publication number
- WO2020150955A1 (PCT/CN2019/072932)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- value attribute
- continuous value
- data
- continuous
- attribute
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present invention relates to the field of data processing technology, in particular to a data classification method, device, equipment and storage medium.
- the operating data mostly have mixed-value attributes, that is, both continuous value attributes and discrete value attributes.
- a common classification method is to convert the discrete value attributes into continuous form and then classify the resulting continuous value attributes.
- however, an attribute value produced by a one-hot encoding operation is still discrete in the sense of its numerical distribution, so one-hot encoding does not fundamentally make the discrete value attributes continuous.
- the present invention provides a data classification method, device, equipment and storage medium to solve the problem that the existing classification method, which relies on a one-hot encoding operation, does not truly make discrete value attributes continuous.
- the present invention provides a data classification method, which includes: performing continuous encoding processing on a discrete value attribute to obtain a second continuous value attribute, wherein the data includes the discrete value attribute and a first continuous value attribute; training the second continuous value attribute with a neural network and taking the data of a selected hidden layer as a third continuous value attribute, wherein the neural network includes a number of hidden layers; merging the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute; and classifying the fourth continuous value attribute to obtain the classified data.
- the discrete value attributes are first continuously encoded, and the neural network is then trained on the second continuous value attributes, thereby thoroughly transforming the discrete value attributes into continuous value attributes that carry order information and take real values.
- before the data of the selected hidden layer is taken as the third continuous value attribute, the method further includes: constructing an objective function, where the objective function is the sum of the error value of the third continuous value attribute and the substitution entropy; and training the neural network with the second continuous value attribute until the value of the objective function reaches its minimum.
- using the sum of the error value of the third continuous value attribute and the substitution entropy as the objective function for training the neural network on the second continuous value attribute ensures not only the minimum error between the theoretical output and the actual output, but also the minimum uncertainty of the converted data set.
- constructing the objective function specifically includes: subtracting the third continuous value attribute from its theoretical value to obtain an error value; dividing the third continuous value attribute into data sets to obtain first sub-data sets, wherein the first data set includes a plurality of first sub-data sets; obtaining the substitution entropy of each first sub-data set; and superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- dividing the third continuous value attribute into data sets and computing the substitution entropy of each first sub-data set before combining them into the substitution entropy of the first data set reduces the computational complexity.
- obtaining the substitution entropy of the first sub-data set specifically includes:
- the substitution entropy of the first sub-data set is obtained according to the first formula: En[Y_q] = −(1/N) Σ_{n=1}^{N} log[ (1/(N·b_q)) Σ_{m=1}^{N} κ((y_{nq} − y_{mq})/b_q) ], where Y_q represents the first sub-data, En[·] represents the substitution entropy, N indicates the number of samples of the data, b_q represents the window width of the kernel density estimation method, and y_{nq} and y_{mq} respectively represent the nth and mth elements in the first sub-data.
- performing superposition processing on the substitution entropy of a plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute specifically includes:
- the substitution entropy of the third continuous value attribute is obtained according to the second formula: U[Y] = Σ_{q=1}^{s} En[Y_q], where s is the number of nodes in the hidden layer and Y = [Y_1, …, Y_s] is the third continuous value attribute.
- the data classification device is introduced below; its implementation principle and technical effect are similar to those of the above method and are not repeated here.
- the present invention provides a data classification device, including: an obtaining module for performing continuous encoding processing on a discrete value attribute to obtain a second continuous value attribute, wherein the data includes the discrete value attribute and a first continuous value attribute; a training module for training the second continuous value attribute with a neural network and taking the data of a selected hidden layer as a third continuous value attribute, wherein the neural network includes a number of hidden layers; the obtaining module is further used to merge the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute; and the obtaining module is also used to classify the fourth continuous value attribute to obtain the classified data.
- the device further includes: a construction module for constructing an objective function, where the objective function is the sum of the error value of the third continuous value attribute and the substitution entropy; and a training module for training the neural network with the second continuous value attribute until the value of the objective function reaches its minimum.
- the construction module specifically includes: a subtraction module for subtracting the third continuous value attribute from its theoretical value to obtain an error value; a division module for dividing the third continuous value attribute into data sets to obtain first sub-data sets, where the first data set includes a plurality of first sub-data sets; an obtaining module for obtaining the substitution entropy of the first sub-data set; and a superposition module for superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- the building module specifically includes:
- the substitution entropy of the first sub-data set is obtained according to the first formula: En[Y_q] = −(1/N) Σ_{n=1}^{N} log[ (1/(N·b_q)) Σ_{m=1}^{N} κ((y_{nq} − y_{mq})/b_q) ], where Y_q represents the first sub-data, En[·] represents the substitution entropy, N indicates the number of samples of the data, b_q represents the window width of the kernel density estimation method, and y_{nq} and y_{mq} respectively represent the nth and mth elements in the first sub-data.
- the building module specifically includes:
- the substitution entropy of the third continuous value attribute is obtained according to the second formula: U[Y] = Σ_{q=1}^{s} En[Y_q], where s is the number of nodes in the hidden layer and Y = [Y_1, …, Y_s] is the third continuous value attribute.
- the present invention provides an electronic device comprising at least one processor and a memory, wherein the memory stores computer-executable instructions; the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the data classification method of the first aspect and its optional implementations.
- the present invention provides a computer-readable storage medium that stores computer-executable instructions; when a processor executes the computer-executable instructions, the data classification method of the first aspect and its optional implementations is implemented.
- the present invention provides a data classification method, device, equipment and storage medium.
- a discrete value attribute is continuously encoded to obtain a second continuous value attribute; a neural network is trained on the second continuous value attribute, and the data of a selected hidden layer is used as a third continuous value attribute, thereby completely transforming the discrete value attribute into a continuous value attribute that carries order information and takes real values.
- classification is then performed to obtain the classified data, so that the classification accuracy is higher than that of the prior art, which classifies mixed-value attribute data using only one-hot encoding.
- Fig. 1 is a flowchart of a data classification method according to an exemplary embodiment of the present invention
- Fig. 2 is a flowchart of a data classification method according to an exemplary embodiment of the present invention
- Fig. 3 is a schematic diagram showing the structure of a data classification device according to an exemplary embodiment of the present invention.
- Fig. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present invention.
- the present invention provides a data classification method, device, equipment and storage medium to solve the problem that the existing classification method, which relies on a one-hot encoding operation, does not truly make discrete value attributes continuous.
- Fig. 1 is a flowchart of a data classification method according to an exemplary embodiment of the present invention. As shown in Figure 1, the data classification method provided in this embodiment includes:
- the data includes discrete value attributes and first continuous value attributes. The discrete value attributes are continuously encoded to obtain the second continuous value attributes, which preliminarily converts the discrete value attributes into continuous value attributes.
- one-hot encoding can be used to convert the discrete value attribute into the second continuous value attribute.
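As one concrete illustration of this encoding step (the attribute values below are hypothetical and not taken from the patent), a discrete value attribute can be one-hot encoded as follows:

```python
import numpy as np

def one_hot_encode(column):
    """One-hot encode a 1-D list of discrete attribute values.

    Returns the encoded 0/1 matrix and the ordered list of categories,
    so each discrete value becomes a row of indicator columns.
    """
    categories = sorted(set(column))
    index = {value: i for i, value in enumerate(categories)}
    encoded = np.zeros((len(column), len(categories)))
    for row, value in enumerate(column):
        encoded[row, index[value]] = 1.0
    return encoded, categories

# Hypothetical discrete value attribute (e.g. an equipment status code).
status = ["idle", "run", "fault", "run"]
matrix, cats = one_hot_encode(status)
print(cats)    # ['fault', 'idle', 'run']
print(matrix)
```

Each sample then carries one indicator column per category; this is the second continuous value attribute that the neural network is trained on.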
- the data set is divided into continuous value attributes and discrete value attributes; separate quantities respectively denote the number of continuous value attributes, the number of discrete value attributes, and the number of samples the data set contains; each discrete value attribute takes one of a finite number of values, and each sample belongs to one of a finite number of categories of the data set.
- the neural network includes a number of hidden layers.
- the second continuous value attribute is input into the neural network for training, and the data of the selected hidden layer is output as the third continuous value attribute.
- an Encoding Neural Network (ENN) is constructed, which takes the one-hot encoded data set shown in Table 3 as input.
- the input of ENN is expressed by formula (2).
- the number of input layer nodes of the ENN and the number of output layer nodes of the ENN are each given by a corresponding formula.
- each hidden layer node applies the Sigmoid function to its input; the f-th hidden layer contains a given number of nodes and is expressed by formula (5).
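A minimal sketch of extracting one hidden layer's Sigmoid activations as the continuous re-encoding; the layer sizes and random weights below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_layer_features(x, weight_list, bias_list, layer_index):
    """Forward-propagate through the hidden layers and return the
    activations of the chosen hidden layer (0-based layer_index)."""
    a = x
    for f in range(layer_index + 1):
        a = sigmoid(a @ weight_list[f] + bias_list[f])
    return a

rng = np.random.default_rng(0)
x = rng.random((4, 6))                  # 4 samples of one-hot encoded input
weights = [rng.standard_normal((6, 5)), # hypothetical layer sizes 6 -> 5 -> 3
           rng.standard_normal((5, 3))]
biases = [np.zeros(5), np.zeros(3)]

# Data of the selected hidden layer = third continuous value attribute.
third_attr = hidden_layer_features(x, weights, biases, layer_index=1)
print(third_attr.shape)  # (4, 3): real-valued features per sample
```

Because the Sigmoid output is strictly between 0 and 1, every extracted feature is an ordered real value rather than a discrete indicator.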
- Table 4 The third continuous value attribute data set
- S103 Perform merging processing on the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute.
- the first continuous value attribute and the third continuous value attribute are combined to obtain a fourth continuous value attribute, where the fourth continuous value attribute includes the first continuous value attribute and the third continuous value attribute.
- the third continuous value attribute is expressed as:
- S104 Perform classification processing on the fourth continuous value attribute to obtain classified data.
- any classification method for continuous value attribute data, such as a support vector machine or a neural network, can be used to process the real-valued attribute data set.
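A small end-to-end sketch of S103 and S104 on synthetic data; the nearest-centroid rule below merely stands in for the support vector machine or neural network named above, and every numeric value is hypothetical:

```python
import numpy as np

# S103: the fourth continuous value attribute is the concatenation of the
# first continuous value attributes (2 columns) and the third continuous
# value attributes (3 columns).
first_attr = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
third_attr = np.array([[0.3, 0.4, 0.2], [0.7, 0.6, 0.8],
                       [0.2, 0.3, 0.1], [0.8, 0.7, 0.9]])
fourth_attr = np.hstack([first_attr, third_attr])

# S104: any continuous-attribute classifier applies; a nearest-centroid
# rule is used here purely as a stand-in.
labels = np.array([0, 1, 0, 1])
centroids = np.array([fourth_attr[labels == c].mean(axis=0) for c in (0, 1)])
predicted = np.array([np.argmin(((s - centroids) ** 2).sum(axis=1))
                      for s in fourth_attr])
print(fourth_attr.shape)  # (4, 5)
print(predicted)          # [0 1 0 1]
```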
- the discrete value attribute is continuously encoded to obtain the second continuous value attribute; the neural network is trained on the second continuous value attribute, and the data of the selected hidden layer is used as the third continuous value attribute, thereby completely transforming the discrete value attribute into a continuous value attribute that carries order information and takes real values.
- classification is then performed to obtain the classified data, so that the classification accuracy is higher than that of the prior art, which classifies mixed-value attribute data using only one-hot encoding.
- Fig. 2 is a flowchart of a data classification method according to an exemplary embodiment of the present invention. As shown in Figure 2, the data classification method provided in this embodiment includes:
- S201 Perform continuous encoding processing on the discrete value attribute to obtain a second continuous value attribute.
- S202 Construct an objective function, and use the second continuous value attribute to train the neural network until the value of the objective function is the minimum value.
- the objective function is the sum of the error value of the third continuous value attribute and the substituted entropy.
- E[·] is the error of the third continuous value attribute data set output by the ENN.
- U[·] is the uncertainty of the data of the selected hidden layer.
- the error value can be obtained by subtracting the third continuous value attribute from its theoretical value.
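A minimal numeric sketch of the objective described above; the squared-error form of the error term is an assumption, since the source gives the error only as a subtraction:

```python
import numpy as np

def objective(actual_third_attr, theoretical_third_attr, uncertainty):
    """Objective = error between the theoretical and actual third
    continuous value attribute, plus the substitution-entropy term.
    Mean squared error is an assumed concrete choice of error value."""
    actual = np.asarray(actual_third_attr, dtype=float)
    theoretical = np.asarray(theoretical_third_attr, dtype=float)
    error = np.mean((theoretical - actual) ** 2)
    return error + uncertainty

print(objective([1.0, 2.0], [1.0, 2.0], 0.5))  # 0.5: zero error + entropy term
```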
- S301 Perform data set division on the third continuous value attribute to obtain a first sub-data set.
- the first data set includes a plurality of first sub-data sets.
- the third continuous value attribute data set is expressed as:
- the first sub-data set is expressed as:
- substitution entropy calculation method of the first sub-data set is as follows:
- En[·] denotes the substitution entropy corresponding to the data set, in which the probability density function of the data set, estimated by the kernel density estimation method, is substituted.
- b q represents the window width parameter of the kernel density estimation method
- b_q is a function of the number of samples.
- S302 Perform superposition processing on the substitution entropy of the multiple first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- the substitution entropy U[·] of the third continuous value attribute is calculated as U[Y] = Σ_{q=1}^{s} En[Y_q], that is, by superimposing the substitution entropies of the s first sub-data sets.
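The plug-in (substitution) entropy computation described above can be sketched as follows; the Gaussian kernel and the Silverman-style choice of window width as a function of the sample count are illustrative assumptions, not formulas quoted from the patent:

```python
import numpy as np

def substitution_entropy(y, bandwidth=None):
    """Plug-in entropy estimate of 1-D samples y: estimate the density
    with a Gaussian kernel, then substitute it into -E[log p]."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if bandwidth is None:
        # Window width as a function of the sample count (Silverman-style).
        bandwidth = 1.06 * y.std() * n ** (-1 / 5)
    diffs = (y[:, None] - y[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    density = kernel.sum(axis=1) / (n * bandwidth)
    return -np.log(density).mean()

def hidden_layer_uncertainty(hidden):
    """U[.]: superimpose the substitution entropies of each hidden node's
    output (each column plays the role of one first sub-data set)."""
    return sum(substitution_entropy(hidden[:, q])
               for q in range(hidden.shape[1]))

rng = np.random.default_rng(1)
hidden = rng.random((50, 3))  # 50 samples, 3 hidden nodes
print(hidden_layer_uncertainty(hidden))
```

Splitting the attribute column-by-column keeps each entropy estimate one-dimensional, which is what makes the divide-and-superimpose step cheap.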
- S204 Perform merging processing on the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute.
- S205 Perform classification processing on the fourth continuous value attribute to obtain classified data.
- Fig. 3 is a schematic diagram showing the structure of a data classification device according to an exemplary embodiment of the present invention.
- this embodiment provides a data classification device, including: an obtaining module 101, configured to perform continuous encoding processing on a discrete value attribute to obtain a second continuous value attribute, where the data includes the discrete value attribute and a first continuous value attribute; a training module 102, configured to train the second continuous value attribute with a neural network and take the data of a selected hidden layer as a third continuous value attribute, where the neural network includes a number of hidden layers; the obtaining module 101 is also used to merge the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute; and the obtaining module 101 is also used to classify the fourth continuous value attribute to obtain classified data.
- the device further includes: a construction module 103 for constructing an objective function, where the objective function is the sum of the error value of the third continuous value attribute and the substitution entropy; and a training module 104 for training the neural network with the second continuous value attribute until the value of the objective function reaches its minimum.
- the construction module 103 specifically includes: a subtraction module for subtracting the third continuous value attribute from its theoretical value to obtain an error value; a division module for dividing the third continuous value attribute into data sets to obtain first sub-data sets, where the first data set includes a plurality of first sub-data sets; an obtaining module for obtaining the substitution entropy of the first sub-data set; and a superposition module for superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- the building module 103 specifically includes:
- the substitution entropy of the first sub-data set is obtained according to the first formula: En[Y_q] = −(1/N) Σ_{n=1}^{N} log[ (1/(N·b_q)) Σ_{m=1}^{N} κ((y_{nq} − y_{mq})/b_q) ], where Y_q represents the first sub-data, En[·] represents the substitution entropy, N indicates the number of samples of the data, b_q represents the window width of the kernel density estimation method, and y_{nq} and y_{mq} respectively represent the nth and mth elements in the first sub-data.
- the building module 103 specifically includes:
- the substitution entropy of the third continuous value attribute is obtained according to the second formula: U[Y] = Σ_{q=1}^{s} En[Y_q], where s is the number of nodes in the hidden layer and Y = [Y_1, …, Y_s] is the third continuous value attribute.
- the data classification device provided by this application can be used to implement the above data classification method, and its content and effects can be referred to the method part, which will not be repeated in this application.
- Fig. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present invention.
- the electronic device 200 of this embodiment includes a processor 201 and a memory 202, where:
- the memory 202 is used to store computer execution instructions
- the processor 201 is configured to execute the computer-executable instructions stored in the memory to implement the steps of the data classification method in the foregoing embodiments; for details, refer to the related description in the foregoing method embodiments.
- the memory 202 may be independent or integrated with the processor 201.
- the electronic device 200 further includes a bus 203 for connecting the memory 202 and the processor 201.
- the embodiment of the present invention also provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the data classification method as described above is implemented.
Abstract
Description
Claims (10)
- A data classification method, characterized by comprising: performing continuous encoding processing on a discrete value attribute to obtain a second continuous value attribute, wherein the data includes the discrete value attribute and a first continuous value attribute; training the second continuous value attribute with a neural network and taking the data of a selected hidden layer as a third continuous value attribute, wherein the neural network includes a number of hidden layers; merging the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute; and classifying the fourth continuous value attribute to obtain classified data.
- The method according to claim 1, characterized in that before training the second continuous value attribute with the neural network and taking the data of the selected hidden layer as the third continuous value attribute, the method further comprises: constructing an objective function, wherein the objective function is the sum of the error value of the third continuous value attribute and the substitution entropy; and training the neural network with the second continuous value attribute until the value of the objective function reaches its minimum.
- The method according to claim 2, characterized in that constructing the objective function specifically comprises: subtracting the third continuous value attribute from its theoretical value to obtain the error value; dividing the third continuous value attribute into data sets to obtain first sub-data sets, wherein the first data set includes a plurality of first sub-data sets; obtaining the substitution entropy of each first sub-data set; and superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- The method according to claim 3, characterized in that obtaining the substitution entropy of the first sub-data set specifically comprises: obtaining the substitution entropy of the first sub-data set according to a first formula, En[Y_q] = −(1/N) Σ_{n=1}^{N} log[ (1/(N·b_q)) Σ_{m=1}^{N} κ((y_{nq} − y_{mq})/b_q) ], where Y_q represents the first sub-data, En[·] represents the substitution entropy, N indicates the number of samples of the data, b_q represents the window width of the kernel density estimation method, and y_{nq} and y_{mq} respectively represent the nth and mth elements in the first sub-data.
- The method according to claim 3, characterized in that superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute specifically comprises: obtaining the substitution entropy of the third continuous value attribute according to a second formula, U[Y] = Σ_{q=1}^{s} En[Y_q], where s is the number of nodes in the hidden layer and Y = [Y_1, …, Y_s] is the third continuous value attribute.
- A data classification device, characterized by comprising: an obtaining module for performing continuous encoding processing on a discrete value attribute to obtain a second continuous value attribute, wherein the data includes the discrete value attribute and a first continuous value attribute; a training module for training the second continuous value attribute with a neural network and taking the data of a selected hidden layer as a third continuous value attribute, wherein the neural network includes a number of hidden layers; the obtaining module being further configured to merge the first continuous value attribute and the third continuous value attribute to obtain a fourth continuous value attribute; and the obtaining module being further configured to classify the fourth continuous value attribute to obtain classified data.
- The device according to claim 6, characterized in that the device further comprises: a construction module for constructing an objective function, wherein the objective function is the sum of the error value of the third continuous value attribute and the substitution entropy; and a training module for training the neural network with the second continuous value attribute until the value of the objective function reaches its minimum.
- The device according to claim 7, characterized in that the construction module specifically comprises: a subtraction module for subtracting the third continuous value attribute from its theoretical value to obtain the error value; a division module for dividing the third continuous value attribute into data sets to obtain first sub-data sets, wherein the first data set includes a plurality of first sub-data sets; an obtaining module for obtaining the substitution entropy of the first sub-data set; and a superposition module for superimposing the substitution entropies of the plurality of first sub-data sets to obtain the substitution entropy of the third continuous value attribute.
- An electronic device, characterized by comprising at least one processor and a memory, wherein the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the data classification method according to any one of claims 1 to 5.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the data classification method according to any one of claims 1 to 5 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/072932 WO2020150955A1 (en) | 2019-01-24 | 2019-01-24 | Data classification method and apparatus, and device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/072932 WO2020150955A1 (en) | 2019-01-24 | 2019-01-24 | Data classification method and apparatus, and device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020150955A1 true WO2020150955A1 (en) | 2020-07-30 |
Family
ID=71736027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/072932 WO2020150955A1 (en) | 2019-01-24 | 2019-01-24 | Data classification method and apparatus, and device and storage medium |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020150955A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0877132A (en) * | 1994-08-31 | 1996-03-22 | Victor Co Of Japan Ltd | Learning method for cross coupling type neural network |
CN105786860A (en) * | 2014-12-23 | 2016-07-20 | 华为技术有限公司 | Data processing method and device in data modeling |
CN108362510A (en) * | 2017-11-30 | 2018-08-03 | 中国航空综合技术研究所 | A kind of engineering goods method of fault pattern recognition based on evidence neural network model |
CN108628868A (en) * | 2017-03-16 | 2018-10-09 | 北京京东尚科信息技术有限公司 | File classification method and device |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0877132A (en) * | 1994-08-31 | 1996-03-22 | Victor Co Of Japan Ltd | Learning method for cross coupling type neural network |
CN105786860A (en) * | 2014-12-23 | 2016-07-20 | 华为技术有限公司 | Data processing method and device in data modeling |
CN108628868A (en) * | 2017-03-16 | 2018-10-09 | 北京京东尚科信息技术有限公司 | File classification method and device |
CN108362510A (en) * | 2017-11-30 | 2018-08-03 | 中国航空综合技术研究所 | A kind of engineering goods method of fault pattern recognition based on evidence neural network model |
Non-Patent Citations (1)
Title |
---|
SUN, JINGUANG ET AL.: "DBN Classification Algorithm for Numerical Attribute", COMPUTER ENGINEERING AND APPLICATIONS, vol. 50, no. 2, 15 January 2014 (2014-01-15), pages 112 - 114, ISSN: 1002-8331 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19911376; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 15.09.2021) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19911376; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.04.2022) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19911376; Country of ref document: EP; Kind code of ref document: A1 |