WO2021240832A1 - Processing device, processing method, and processing program - Google Patents

Processing device, processing method, and processing program

Info

Publication number
WO2021240832A1
WO2021240832A1 (PCT/JP2020/032016, JP2020032016W)
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation
data
similarity
relation
evaluations
Prior art date
Application number
PCT/JP2020/032016
Other languages
English (en)
Japanese (ja)
Inventor
Shingo Omata (小俣 真吾)
Original Assignee
Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to JP2022527476A priority Critical patent/JP7477791B2/ja
Publication of WO2021240832A1 publication Critical patent/WO2021240832A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management

Definitions

  • the present invention relates to a processing apparatus, a processing method and a processing program.
  • Based on such evaluations, work to improve the service is performed on the evaluation target, such as a person, an organization, or an existing system related to the evaluation.
  • The work of analyzing evaluations and associating each evaluation with its evaluation target is generally performed manually.
  • There is a method of expressing the development model of an AI (Artificial Intelligence) service system in ArchiMate (see Non-Patent Document 1). There is also a proposal to visualize the current situation and utilize it for selecting a system reconstruction method or for supporting improvements (see Non-Patent Document 2).
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide a technique capable of easily associating an evaluation with an evaluation target.
  • In one aspect, the processing device includes an acquisition unit for acquiring an evaluation for an evaluation target, a similarity calculation unit for calculating the similarity between the evaluation and each relation of entities in the evaluation target, and an output unit that outputs, among the relations, a relation having a high degree of similarity to the evaluation.
  • In one aspect, the processing method includes a step in which a computer acquires an evaluation for an evaluation target, a step in which the computer calculates the similarity between the evaluation and each relation of entities in the evaluation target, and a step in which the computer outputs a relation having a high degree of similarity to the evaluation.
  • One aspect of the present invention is a processing program that causes a computer to function as the processing device.
  • FIG. 1 is a diagram illustrating a functional block of a processing device according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a data structure of evaluation data and an example of the data.
  • FIG. 3 is a diagram illustrating a data structure of model data and an example of the data.
  • FIG. 4 is a diagram illustrating an example of the name of the entity used in the model data.
  • FIG. 5 is a diagram illustrating an example of a model configuration of an organization to be evaluated.
  • FIG. 6 is a diagram illustrating a data structure of similarity data and an example of the data.
  • FIG. 7 is a flowchart illustrating the acquisition process by the acquisition unit.
  • FIG. 8 is a flowchart illustrating the similarity calculation process by the similarity calculation unit.
  • FIG. 9 is a diagram illustrating an example of an output result by the output unit.
  • FIG. 10 is a diagram illustrating a functional block of the output unit according to the first modification.
  • FIG. 11 is a diagram illustrating an example of a data structure of EA data.
  • FIG. 12 is a diagram illustrating a functional block of the output unit according to the second modification.
  • FIG. 13 is a diagram illustrating a learning unit according to the second modification.
  • FIG. 14 is a diagram illustrating a hardware configuration of a computer used in a processing device.
  • For an evaluation regarding the evaluation target organization 3, the processing apparatus 1 identifies, by computer processing, the entities that correspond to the evaluation among the entities constituting the evaluation target organization 3.
  • An evaluation is a unit of data in which a user expresses something meaningful about the evaluation target organization, or about the products or services the organization provides, in language such as a sentence or a term.
  • For example, an evaluation is a single posted comment by a user, a single word-of-mouth post, or the like.
  • the evaluation data set may be customer complaint information, needs information, etc. managed by CRM (Customer Relationship Management) held by the evaluation target organization 3.
  • the processing device 1 includes evaluation data 11, model data 12, similarity data 13, acquisition unit 21, definition unit 22, similarity calculation unit 23, and output unit 24.
  • the evaluation data 11, the model data 12, and the similarity data 13 are data stored in the memory 902 or the storage 903.
  • The acquisition unit 21, the definition unit 22, the similarity calculation unit 23, and the output unit 24 are functional units implemented in the processing device 1 by program execution on a CPU 901 or a GPU (Graphics Processing Unit) (not shown).
  • the similarity calculation unit 23 may be executed by the GPU at the time of processing a neural network such as Word2vec or Doc2vec.
  • Evaluation data 11 is a set of evaluations. When an emotion score is calculated for each evaluation, the evaluation data 11 may associate each evaluation with its emotion score, for example as shown in FIG. 2.
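As a concrete illustration, the evaluation data 11 of FIG. 2 can be thought of as a list of records, each pairing an evaluation with its emotion score. The field names and sample values below are illustrative assumptions, not the actual contents of FIG. 2:

```python
# A minimal sketch of evaluation data 11: each record pairs an evaluation
# text with its emotion score (field names are assumed for illustration).
evaluation_data = [
    {"evaluation_id": 1,
     "text": "The support desk never answered my complaint.",
     "emotion_score": -0.8},
    {"evaluation_id": 2,
     "text": "Billing errors keep happening every month.",
     "emotion_score": -0.7},
]

# Emotion scores at or below a threshold mark negative evaluations.
negative = [e for e in evaluation_data if e["emotion_score"] <= -0.7]
```
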
  • the model data 12 is data that defines the relation of the entity related to the evaluation target organization 3. As shown in FIG. 3, the model data 12 associates the identifiers of the entities that define each relation. The identifier of an entity is associated with the name of the entity, as shown in FIG.
  • The operation model of the evaluation target organization 3 is defined in a modeling language called ArchiMate.
  • the evaluation target organization 3 is divided into a plurality of entities.
  • the organization to be evaluated is divided into a plurality of layers, and one or more entities are associated with each layer.
  • the operation model of the evaluation target organization 3 is defined by the relation associated with the entities selected from each hierarchy.
  • The evaluation target organization 3 is divided into four layers: an organization layer, an active structure layer, a behavior layer, and a passive structure layer. One or more entities are associated with each layer, and relations associate entities selected from each layer.
  • the model data 12 shown in FIG. 3 is formed corresponding to the model of the evaluation target organization 3 shown in FIG.
  • the model data 12 models the evaluation target organization 3 by the relation of each entity constituting the evaluation target organization 3.
  • the relation defined by the model data 12 is defined by a combination of the relation ID, the organization ID, the active structure ID, the behavior ID, and the passive structure ID.
  • the relation ID identifies the relation.
  • the organization ID, the active structure ID, the behavior ID, and the passive structure ID are IDs of entities selected from the organization layer, the active structure layer, the behavior layer, and the passive structure layer associated with each other in the relation. In the embodiment of the present invention, the IDs of the entities are numbered so as to be identifiable in each layer.
  • The organization ID "100,000", the active structure ID "200,000", the behavior ID "200,000", and the passive structure ID "200,000" are associated with the relation ID "2" shown in FIG. 3.
  • The organization ID "100,000" is the "customer support center", as shown in FIG. 4(a), which associates the organization ID with the organization name.
  • The active structure ID "200,000" is the "sales representative", as shown in FIG. 4(b), which associates the active structure ID with the active structure name.
  • The behavior ID "200,000" is "complaint handling", as shown in FIG. 4(c), which associates the behavior ID with the behavior name.
  • The passive structure ID "200,000" is "complaint information", as shown in FIG. 4(d), which associates the passive structure ID with the passive structure name.
  • the relation ID "2" corresponds to the "customer support center", “sales representative", “complaint handling” and “complaint information” in the evaluation target organization 3.
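The relation lookup just described can be sketched as follows. The dictionary layout is an assumption for illustration; only the ID and name values follow the relation ID "2" example above:

```python
# Sketch of the model data 12 and the name tables: a relation associates
# one entity ID per layer, and each ID resolves to an entity name.
model_data = {
    2: {"organization": "100,000", "active_structure": "200,000",
        "behavior": "200,000", "passive_structure": "200,000"},
}
names = {
    "organization": {"100,000": "customer support center"},
    "active_structure": {"200,000": "sales representative"},
    "behavior": {"200,000": "complaint handling"},
    "passive_structure": {"200,000": "complaint information"},
}

def entity_names(relation_id):
    """Resolve a relation ID to the names of its four entities."""
    rel = model_data[relation_id]
    return [names[layer][eid] for layer, eid in rel.items()]
```
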
  • The similarity data 13 is data that associates each evaluation with a relation of the evaluation target organization 3.
  • the similarity is calculated by the similarity calculation unit 23, which will be described later.
  • The similarity data 13 shown in FIG. 6 associates, for each evaluation, the relation ID having the highest similarity together with the value of that similarity.
  • the acquisition unit 21 acquires the evaluation for the evaluation target.
  • the acquisition unit 21 acquires an evaluation regarding the evaluation target organization 3 from the evaluation providing device 2.
  • the evaluation providing device 2 is, for example, a posting site such as a microblog, an SNS (Social Networking Service) site, a word-of-mouth site, an in-house CRM system, or the like.
  • the acquisition unit 21 may acquire a plurality of evaluations.
  • The acquisition unit 21 may acquire the evaluation from the site or the system in a PULL manner, or may acquire it in a PUSH manner.
  • the acquisition unit 21 may acquire an evaluation having a negative emotional score among the evaluations for the evaluation target.
  • the acquisition unit 21 may calculate an emotion score for each evaluation for the evaluation target and acquire a negative evaluation in which the emotion score is equal to or less than a predetermined value.
  • The higher the emotion score, the more positive the impression of the evaluation target organization 3; the lower the score, the more negative the impression.
  • the acquisition unit 21 calculates the emotion score of each evaluation by using, for example, the Google Natural Language API, and filters the evaluations having an emotion score of ⁇ 0.7 or less.
  • the acquisition unit 21 may acquire the evaluation corresponding to the name of the entity among the evaluations for the evaluation target.
  • the acquisition unit 21 may acquire an evaluation matching the search key by using, for example, the organization name, the active structure name, the behavior name, and the passive structure name of each table shown in FIG. 4 as the search key.
  • the search key may be the name of an entity corresponding to any structure among the organization layer, the active structure layer, the behavior layer, and the passive structure layer.
  • the search key may be the name of an entity that satisfies a predetermined condition.
  • the search key may be the name selected by the worker among the names of each entity.
  • the acquisition process by the acquisition unit 21 will be described with reference to FIG. 7.
  • the process shown in FIG. 7 is an example and is not limited to this.
  • In step S101, the acquisition unit 21 collects evaluations that match the search word.
  • In step S102, the acquisition unit 21 calculates an emotion score for each evaluation acquired in step S101.
  • In step S103, the acquisition unit 21 filters the evaluations whose emotion score calculated in step S102 is equal to or less than the threshold value.
  • In step S104, the acquisition unit 21 stores the evaluations filtered in step S103 in the evaluation data 11.
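Steps S101 to S104 can be sketched as a small pipeline. `collect_evaluations` and `emotion_score` are hypothetical stand-ins for the posting-site search and the sentiment API (the text mentions the Google Natural Language API); the sample posts, the scores, and the wiring are illustrative assumptions:

```python
THRESHOLD = -0.7  # the filtering threshold given in the text

def collect_evaluations(search_word):
    # S101: placeholder for a PULL-type query against a posting site or CRM.
    posts = [
        "complaint handling at the customer support center is too slow",
        "the sales representative was very helpful",
    ]
    return [p for p in posts if search_word in p]

def emotion_score(text):
    # S102: placeholder scorer; a real system would call a sentiment API.
    return -0.9 if "complaint" in text else 0.5

def acquire(search_word):
    evaluations = collect_evaluations(search_word)            # S101
    scored = [(e, emotion_score(e)) for e in evaluations]     # S102
    filtered = [s for s in scored if s[1] <= THRESHOLD]       # S103
    return filtered  # S104: in the device, stored in evaluation data 11
```
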
  • the definition unit 22 generates model data 12 for the evaluation target organization 3. For example, when an operation model is created by ArchiMate, the definition unit 22 acquires the output result and generates model data 12.
  • The similarity calculation unit 23 calculates the similarity between the evaluation and each relation of the entities of the operation model of the evaluation target.
  • The similarity between a relation and an evaluation is the similarity between the composite vector of the entity names of the relation and the composite vector of the words included in the evaluation.
  • the acquisition unit 21 acquires a plurality of evaluations
  • the similarity calculation unit 23 calculates the similarity between the evaluation and each relation for each of the plurality of evaluations.
  • The composite vector is calculated, for example, with Python's gensim library, using a neural-network model such as Word2Vec.
  • The composite vector of a relation is calculated from the entity names of the relation.
  • The composite vector of an evaluation is calculated by processing the words included in the evaluation with the gensim library and Word2Vec. Words that appear too frequently across evaluations may be excluded.
  • The similarity between the composite vector of the relation and the composite vector of the evaluation can be calculated by an arbitrary method.
  • For example, it may be the cosine similarity, it may be calculated by Doc2Vec, or a plurality of these calculation methods may be combined.
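The composite-vector cosine similarity described above can be sketched without gensim by using a toy embedding table. The word vectors below are illustrative assumptions, not trained Word2Vec output:

```python
from math import sqrt

# Toy 3-dimensional word vectors standing in for Word2Vec output.
word_vec = {
    "customer":  [1.0, 0.0, 0.0],
    "support":   [0.0, 1.0, 0.0],
    "complaint": [0.0, 0.0, 1.0],
    "slow":      [0.1, 0.1, 0.8],
}

def composite_vector(words):
    """Sum the vectors of the known words (the 'composite vector')."""
    total = [0.0, 0.0, 0.0]
    for w in words:
        if w in word_vec:
            total = [a + b for a, b in zip(total, word_vec[w])]
    return total

def cos_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Similarity between a relation's entity-name words and an evaluation's words.
relation_vec = composite_vector(["customer", "support", "complaint"])
evaluation_vec = composite_vector(["complaint", "slow"])
similarity = cos_similarity(relation_vec, evaluation_vec)
```
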
  • the similarity calculation unit 23 stores the similarity of each relation and each evaluation in the similarity data 13. Alternatively, as shown in FIG. 6, the similarity calculation unit 23 associates the evaluation with the ID of the relation having the highest similarity for the evaluation and stores it in the similarity data 13.
  • the similarity calculation process by the similarity calculation unit 23 will be described with reference to FIG.
  • the process shown in FIG. 8 is an example and is not limited to this.
  • In step S201, the similarity calculation unit 23 converts each evaluation of the evaluation data 11 into a vector.
  • In step S202, the similarity calculation unit 23 converts each relation of the model data 12 into a vector.
  • In step S203, the similarity calculation unit 23 calculates the similarity between each evaluation vector obtained in step S201 and each relation vector obtained in step S202.
  • In step S204, the similarity calculation unit 23 associates each evaluation with the relation ID having the highest similarity, and stores it in the similarity data 13.
  • The output unit 24 outputs, among the relations, a relation having a high degree of similarity to the evaluation. For example, the output unit 24 outputs, for each evaluation, the relation ID having the highest degree of similarity to the evaluation. The entities identified by that relation ID are regarded as corresponding to the evaluation.
  • The output unit 24 may output, for each relation, the number of evaluations whose similarity is equal to or greater than a predetermined value. For example, as shown in FIG. 9, for each relation, the number of evaluations for which that relation has the highest degree of similarity is output. Since a relation with a large number of evaluations has many issues to address, negative evaluations can be reduced effectively by improving the relations with the largest numbers of evaluations first.
  • the output unit 24 outputs an index corresponding to the emotion score of the evaluation whose similarity is equal to or higher than a predetermined value for each relation. For example, the output unit 24 calculates and outputs an index having a positive correlation with the emotion score in the evaluation in which the relation is determined to have the highest degree of similarity for each relation.
  • The index is, for example, the average of the emotion scores of the evaluations, or the most negative of those emotion scores. Negative evaluations can be reduced effectively by improving the relations associated with evaluations that express strongly negative emotions.
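The per-relation indices just described (evaluation count, average emotion score, most negative score) can be sketched as a small aggregation. The records below are illustrative assumptions:

```python
# Sketch: each record assigns an evaluation's emotion score to the relation
# judged to have the highest similarity (as in similarity data 13, Fig. 6).
similarity_data = [
    {"relation_id": 2, "emotion_score": -0.9},
    {"relation_id": 2, "emotion_score": -0.7},
    {"relation_id": 5, "emotion_score": -0.8},
]

def relation_indices(records):
    """Aggregate count, average score, and most negative score per relation."""
    by_relation = {}
    for r in records:
        by_relation.setdefault(r["relation_id"], []).append(r["emotion_score"])
    return {
        rid: {"count": len(scores),
              "average": sum(scores) / len(scores),
              "most_negative": min(scores)}
        for rid, scores in by_relation.items()
    }
```
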
  • the processing device 1 can associate the evaluation with the evaluation target by computer processing, the cost for improving the evaluation can be significantly reduced. Further, since the relationship between the evaluation and the relationship in the evaluation target organization 3 can be indexed as the degree of similarity, the evaluation target organization 3 can efficiently work on the evaluation improvement.
  • the output unit 24a refers to the enterprise architecture data (EA data) 111 that has accumulated improvement results, acquires data related to the relation having a high degree of similarity to the evaluation from the EA data 111, and outputs the data.
  • The data output here can serve as an improvement measure for the relation having a high degree of similarity to the evaluation.
  • the output unit 24a includes an EA data 111, a recommendation data 112, a specific unit 121, a relation data 122, and a recommendation unit 123.
  • EA data 111 is enterprise architecture data.
  • the EA data 111 includes, for example, as shown in FIG. 11, the EA identifier, As-Is, To-Be, and Union-EA items.
  • An identifier that specifies the relationship between As-Is, To-Be, and Union-EA is set in the item of EA identifier.
  • In the As-Is item, the data of the As-Is model of an enterprise architecture model that has been improved by another company is stored.
  • In the To-Be item, the data of the To-Be model of an enterprise architecture model that has been improved by another company is stored.
  • In the Union-EA item, the data of the integrated model of an enterprise architecture model that has been improved by another company is stored.
  • the integrated model is a model in which the As-Is model, the To-Be model, and the Transition are linked. In the first modification, the case where the EA data 111 is generated in advance will be described.
  • the recommendation data 112 is data that specifies the improvement measures output by the recommendation unit 123.
  • the recommendation data 112 may be, for example, the EA identifier of the EA data 111, or may be the data of each item specified from the EA identifier.
  • the specific unit 121 outputs the relations having a high degree of similarity to the evaluation among the relations of the entities in the evaluation target as the relation data 122.
  • The relation data 122 may be the relation ID having the highest similarity to the evaluation, as long as the ID can be converted into the entity names of the relation, or it may be the entity names converted from that relation ID.
  • the recommendation unit 123 refers to the EA data 111 and specifies the EA identifier related to the relation data 122.
  • the recommendation unit 123 searches the EA data 111 using the name of each entity specified from the relation data 122 as a search key, and identifies an EA identifier similar to the search key.
  • the recommendation unit 123 outputs the specified EA identifier or the data of each item specified from the EA identifier to the recommendation data 112.
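One possible sketch of this search: score each EA record by how many entity names from the relation data 122 appear in its As-Is text, and return the best-matching EA identifier. The records and the substring-matching rule are illustrative assumptions; the actual matching method is not specified here:

```python
# Sketch of how the recommendation unit 123 might search the EA data 111
# (Fig. 11) with entity names as search keys.
ea_data = [
    {"ea_id": "EA-1",
     "as_is": "customer support center complaint handling by phone",
     "to_be": "complaint handling via chatbot"},
    {"ea_id": "EA-2",
     "as_is": "warehouse inventory tracking on paper",
     "to_be": "barcode-based inventory system"},
]

def find_similar_ea(entity_names):
    """Return the EA identifier whose As-Is text matches the most keys."""
    def score(record):
        return sum(1 for name in entity_names if name in record["as_is"])
    best = max(ea_data, key=score)
    return best["ea_id"] if score(best) > 0 else None
```
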
  • In this way, the output unit 24a outputs, from past improvement measures by other companies and the like, the enterprise architecture model related to the relation that currently requires improvement, as a proposed improvement measure.
  • As a result, improvement measures that solved the same kind of problem can be referred to.
  • the output unit 24a can output a relation highly related to the evaluation and can find a solution for the relation at an early stage.
  • The output unit 24b trains the prediction model data 113 with the As-Is model data of the EA data (enterprise architecture data) 111 as the encoder input and the To-Be model data of the EA data 111 as the decoder input and the correct answer data. It then refers to the prediction model data 113, sets the relation data having a high degree of similarity to the evaluation as the input data, and outputs To-Be model data.
  • The data of the To-Be model can be regarded as a future form of the relation having a high degree of similarity to the evaluation, and can serve as an improvement measure.
  • the output unit 24b includes an EA data 111, a recommendation data 112, a prediction model data 113, a specific unit 121, a recommendation unit 123a, and a learning unit 124.
  • the EA data 111, the recommendation data 112, the specific unit 121, and the relation data 122 are as in the first modification.
  • the learning unit 124 learns by inputting the EA data 111 and outputs the prediction model data 113.
  • The learning unit 124 constructs a Seq2Seq (sequence-to-sequence) model with teacher forcing, using the EA data 111 of architectures whose business has already been improved.
  • the data of the As-Is model of the EA data 111 is set as the input of the encoder unit.
  • the data of the To-Be model of the EA data 111 is set as the input of the decoder unit and the correct answer data.
  • the data of the As-Is model and the data of the To-Be model of the EA data 111 are described in, for example, XML (Extensible Markup Language) format.
  • The learning unit 124 tunes the optimization algorithm and other settings, and generates the prediction model data 113 from a model whose error has sufficiently converged.
  • the learning unit 124 learns a prediction model that predicts the To-Be model from the As-Is model of the EA data.
  • The learning unit 124 trains the model using, for example, Python's Keras framework; the loss value obtained during training serves as the error. When the loss value is sufficiently close to 0, the error is judged to have converged.
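Teacher forcing, as used here, feeds the decoder the ground-truth To-Be sequence shifted by one step during training. A minimal sketch of that data preparation, with illustrative tokens and start/end markers (the actual EA data is XML, not shown here):

```python
# Teacher-forcing data preparation: the decoder input is the target sequence
# prefixed with a start marker, and the training target is the same sequence
# shifted one step and terminated with an end marker.
START, END = "<start>", "<end>"

def teacher_forcing_pair(to_be_tokens):
    """Build (decoder input, decoder target) for one To-Be token sequence."""
    decoder_input = [START] + to_be_tokens
    decoder_target = to_be_tokens + [END]
    return decoder_input, decoder_target

# Example: a To-Be model fragment serialized as a token list.
dec_in, dec_target = teacher_forcing_pair(["complaint", "handling", "chatbot"])
```

At each training step the decoder thus sees the correct previous token rather than its own prediction, which is what stabilizes Seq2Seq training.
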
  • The recommendation unit 123a inputs the relation of the relation data 122, specifically the entity names of the relation having a high degree of similarity to the evaluation, into the prediction model data 113 as an As-Is model.
  • the recommendation unit 123a acquires and outputs a To-Be model as an improvement measure in the relation specified by the specific unit 121.
  • The output unit 24b refers to the prediction model data 113, which has learned past improvement measures by other companies and the like, and outputs the To-Be model related to the relation that currently requires improvement as a proposed improvement measure. As a result, the output unit 24b can output a relation highly related to the evaluation, and a solution for that relation can be found at an early stage.
  • For the processing device 1 of the present embodiment described above, a general-purpose computer system is used that includes, for example, a CPU (Central Processing Unit, processor) 901, a memory 902, a storage 903 (HDD: Hard Disk Drive or SSD: Solid State Drive), a communication device 904, an input device 905, and an output device 906.
  • each function of the processing device 1 is realized by executing the processing program loaded on the memory 902 by the CPU 901.
  • a GPU may be used in combination with the CPU 901.
  • the processing device 1 may be mounted on one computer or may be mounted on a plurality of computers. Further, the processing device 1 may be a virtual machine mounted on a computer.
  • The processing program of the processing device 1 can be stored in a computer-readable recording medium such as an HDD, SSD, USB (Universal Serial Bus) memory, CD (Compact Disc), or DVD (Digital Versatile Disc), and can also be delivered via a network.
  • the present invention is not limited to the above embodiment, and many modifications can be made within the scope of the gist thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Processing device 1 comprising: an acquisition unit 21 for acquiring an evaluation of an evaluation target; a similarity calculation unit 23 for calculating a similarity between the evaluation and the respective relations of entities in the evaluation target; and an output unit 24 for outputting, among the respective relations, a relation having a high similarity to the evaluation.
PCT/JP2020/032016 2020-05-27 2020-08-25 Processing device, processing method, and processing program WO2021240832A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022527476A JP7477791B2 (ja) 2020-05-27 2020-08-25 Processing device, processing method, and processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JPPCT/JP2020/020928 2020-05-27
PCT/JP2020/020928 WO2021240686A1 (fr) 2020-05-27 2020-05-27 Processing device, processing method, and processing program

Publications (1)

Publication Number Publication Date
WO2021240832A1 true WO2021240832A1 (fr) 2021-12-02

Family

ID=78723105

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2020/020928 WO2021240686A1 (fr) 2020-05-27 2020-05-27 Processing device, processing method, and processing program
PCT/JP2020/032016 WO2021240832A1 (fr) 2020-05-27 2020-08-25 Processing device, processing method, and processing program

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/020928 WO2021240686A1 (fr) 2020-05-27 2020-05-27 Processing device, processing method, and processing program

Country Status (2)

Country Link
JP (1) JP7477791B2 (fr)
WO (2) WO2021240686A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001256251A * 2000-03-08 2001-09-21 Nec Software Chugoku Ltd Automatic document information evaluation device and automatic document information evaluation system
JP2004046588A * 2002-07-12 2004-02-12 Katsuhiko Inoue Claim information processing system
JP2007025823A * 2005-07-12 2007-02-01 Fujitsu Ltd Simulation program and simulation method
JP2008287328A * 2007-05-15 2008-11-27 Ntt Data Corp Evaluation device, evaluation method, and computer program
JP2011233164A * 2011-07-21 2011-11-17 Mitsubishi Electric Corp Text association system and text association program
WO2016132558A1 * 2015-02-20 2016-08-25 Ubic, Inc. Information processing program, method, and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7168334B2 (ja) 2018-03-20 2022-11-09 Yahoo Japan Corporation Information processing device, information processing method, and program


Also Published As

Publication number Publication date
JPWO2021240832A1 (fr) 2021-12-02
JP7477791B2 (ja) 2024-05-02
WO2021240686A1 (fr) 2021-12-02

Similar Documents

Publication Publication Date Title
US10430469B2 (en) Enhanced document input parsing
CN103443787B (zh) System for identifying text relationships
Kagdi et al. Assigning change requests to software developers
US9536003B2 (en) Method and system for hybrid information query
AU2018264012B1 (en) Identification of domain information for use in machine learning models
JP4464975B2 (ja) Computer device, computer program, and method for calculating the importance of an electronic document on a computer network, based on critiques of the electronic document made in other electronic documents related to it
US20090077531A1 (en) Systems and Methods to Generate a Software Framework Based on Semantic Modeling and Business Rules
US20150088593A1 (en) System and method for categorization of social media conversation for response management
Wylot et al. Tripleprov: Efficient processing of lineage queries in a native rdf store
US20190026436A1 (en) Automated system and method for improving healthcare communication
US8661004B2 (en) Representing incomplete and uncertain information in graph data
Kagdi et al. Who can help me with this change request?
Pita et al. A Spark-based Workflow for Probabilistic Record Linkage of Healthcare Data.
WO2019016647A1 (fr) Système et procédé automatisés permettant d'améliorer la communication relative aux soins de santé
Welten et al. DAMS: A distributed analytics metadata schema
US8862609B2 (en) Expanding high level queries
Arch-Int et al. Graph‐Based Semantic Web Service Composition for Healthcare Data Integration
WO2021240832A1 (fr) Processing device, processing method, and processing program
US20230081891A1 (en) System and method of managing knowledge for knowledge graphs
Iacob et al. MARAM: tool support for mobile app review management.
Eken et al. Predicting defects with latent and semantic features from commit logs in an industrial setting
KR20140034350A (ko) Method for generating a personalized detailed clinical model for clinical concepts
Kock-Schoppenhauer et al. Practical extension of provenance to healthcare data based on the W3C PROV standard
Pérez Pupo et al. Linguistic data summarization with multilingual approach
Ahmed et al. Ontological Based Approach of Integrating Big Data: Issues and Prospects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937884

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022527476

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937884

Country of ref document: EP

Kind code of ref document: A1