WO2022157766A1 - System and method for natural language generation of a news story - Google Patents

System and method for natural language generation of a news story

Info

Publication number
WO2022157766A1
WO2022157766A1 (PCT/IL2022/050079; IL2022050079W)
Authority
WO
WIPO (PCT)
Prior art keywords
story
facts
fact
data
tree
Prior art date
Application number
PCT/IL2022/050079
Other languages
English (en)
Inventor
Amir ERELL
Udi NEVO
Original Assignee
Hoopsai Technologies Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hoopsai Technologies Ltd.
Publication of WO2022157766A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G06F 40/55 Rule-based translation
    • G06F 40/56 Natural language generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/166 Editing, e.g. inserting or deleting
    • G06F 40/186 Templates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks

Definitions

  • the invention is in the field of natural language generation, and in particular relates to generating a news story from data acquired from sources such as live videos, news feeds, and historical archives.
  • US patent 9,720,884 B2 discloses a system and method for automatically generating a narrative story that receive data and information pertaining to a domain event.
  • the received data and information and/or one or more derived features are then used to identify a plurality of angles for the narrative story.
  • the plurality of angles is then filtered, for example through use of parameters that specify a focus for the narrative story, length of the narrative story, etc. Points associated with the filtered plurality of angles are then assembled and the narrative story is rendered using the filtered plurality of angles and the assembled points.
  • China patent application 110309320A discloses a method for automatically generating NBA basketball news in combination with an NBA competition knowledge graph. The method preprocesses web-crawled NBA live text commentary data by removing webpage labels and stop words and representing the text as quintuples; segments the preprocessed live text data with a proposed segmentation algorithm to obtain the competition's development trend; extracts special events according to a proposed definition of basketball competition special events; defines basketball news description templates; combines the segmentation result, the special-event extraction result, and the corresponding news description template to generate a first news draft; and generates competition background information from the knowledge graph to obtain a final news draft. The method thereby realizes automatic generation of NBA competition news, improves the quality of the generated news, and allows the generated content to be better controlled.
  • US patent 9,721,207 B2 discloses a method for generating written content in an application; in accordance with an embodiment, the method includes: receiving a query from a user; importing data from at least one data source in response to the query; ranking the imported data based on a plurality of ranking factors to determine a relevance of the imported data; automatically generating written content using at least a portion of the imported data based on the determined relevance of the imported data; and automatically customizing the written content based on a file format of the application.
  • the present invention advances the technology for natural language generation of written content, as further described below.
  • NLG story-writing systems typically begin with a well-defined topic or story structure and attempt to find information that best fits the defined topic or structure.
  • the present invention begins with a general aspect of a particular subject area, and then follows the information itself, wherever it may lead, as a guide to the topic and story structure yielding the most significant (e.g., most important or most interesting) story.
  • a data point amalgamator searches one or more data sources for data points informing on a particular aspect of a subject area.
  • Data points are amalgamated into facts and the facts are scored for their significance.
  • a fact's significance score can be based on, for example, uniqueness of the fact or on the fact's appeal to biases of a user for whom a story is being prepared.
  • a story planning module selects story trees from a story tree library. Each story tree defines a story outline and how facts are assembled therein to form a story. The story planning module populates each one of the selected story trees with as many of the scored facts as possible given the constraints of the story tree. The story planning module sums the combined score of the inserted facts. The populated story tree with the highest summed score is selected as containing the most noteworthy story.
  • a story scripting module matches data points in each leaf node of the selected populated story tree to a best-fitting message template.
  • the message templates may comprise either textual components of the story or metadata of non-textual media files, such as images, sound, and video. It is therefore within the scope of the present invention to provide a computer-based system for natural language generation of a story, comprising a. one or more data point amalgamators, each configured to i. access one or more data sources comprising data points concerning a subject area; ii. establish one or more facts from the data sources, each fact determined from a combination of one or more of the data points, the data points selected according to selection rules for a predetermined aspect of the subject area; iii.
  • a story tree library comprising one or more story trees associated with one or more of the data point amalgamators, each story tree comprising i. section nodes, each section node corresponding to an outline heading of an article or a behavior of child nodes of the section node; ii. leaf nodes corresponding to article contents; each leaf node is characterized by a minimum and a maximum number of scored facts to be populated in the leaf node; and c. a story planning module, configured to i.
  • a story scripting module configured to obtain the message templates associated with the data point amalgamator or with a fact and, for each fact in each leaf of the story plan, i. find the best-matching message template for the fact; and ii. inject the textual contribution of the variables of the data points into the selected message.
  • the data sources comprise an AI analyzer of a live video stream, a live textual news feed, a historical archive, a news site, a social media site, or any combination thereof.
  • the story planning module is further configured to populate the story trees with facts established by two or more of the data point amalgamators.
  • the story scripting module is further configured to insert a non-textual media item into the story, the media item serving as the basis for one or more of the data points in a fact.
  • the scripting module is configured to find the best-matching template by a. eliminating message templates containing names of indicators or variables not found among the data points; b. testing the remaining message templates for how many indicators in the fact have matching values in the message template; and c. selecting message templates with the most matching indicator values.
  • the scripting module is further configured to select at random from among message templates that are tied for the most matching indicator values.
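  • By way of illustration only, the matching procedure described above can be sketched in Python roughly as follows; the representation of a fact as two dictionaries and of a template as a dictionary with "indicators", "variables", and "text" fields is an assumption made for the sketch, not the disclosure's data model.

```python
import random


def best_matching_template(fact_indicators, fact_variables, templates):
    """Pick the message template that best matches a fact's data points.

    fact_indicators: indicator name -> value (e.g. {"trend": "up", "period": "daily"})
    fact_variables:  variable name -> textual contribution (e.g. {"asset": "euro"})
    templates:       each template is a dict with "indicators" (name -> required value),
                     "variables" (names used in its skeletal text) and "text".
    """
    candidates = []
    for tpl in templates:
        # Step 1: eliminate templates naming indicators or variables absent from the fact.
        if not set(tpl["indicators"]).issubset(fact_indicators):
            continue
        if not set(tpl["variables"]).issubset(fact_variables):
            continue
        # Step 2: count how many indicators in the fact have matching values in the template.
        matches = sum(1 for name, value in tpl["indicators"].items()
                      if fact_indicators.get(name) == value)
        candidates.append((matches, tpl))

    if not candidates:
        return None
    # Step 3: keep the templates with the most matching indicator values
    # and break ties at random, as the disclosure suggests.
    best = max(m for m, _ in candidates)
    tied = [tpl for m, tpl in candidates if m == best]
    return random.choice(tied)
```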
  • a story planning stage comprising steps of a. providing the system for natural language generation of a story; b. providing the system with access to one or more data sources comprising data points concerning a subject area; c. establishing one or more facts from the data sources, each fact comprising a combination of one or more of the data points, the data points selected according to selection rules for a predetermined aspect of the subject area; d. for each fact, characterizing a data type of each data point as one or more of i. an indicator, characterizing the data point; and ii. a variable, comprising a textual contribution of the data point; e.
  • each story tree comprising i. section nodes, each section node corresponding to an outline heading of an article or a behavior of child nodes of the section node; ii. leaf nodes corresponding to article contents; each leaf node is characterized by a minimum and a maximum number of scored facts to be populated in the leaf node; g. for each story tree, populating each leaf node with the scored facts, according to the minimum and maximum number of scored facts and a story ruleset of the story tree; h. for each story tree, summing the significance scores of facts placed in the leaf nodes; and i.
  • the data sources comprise an AI analyzer of a live video stream, a historical archive, a news site, a social media site, or any combination thereof.
  • finding the best-matching template comprises steps of a. eliminating message templates containing names of indicators or variables not found among the data points; b. testing the remaining message templates for how many indicators in the fact have matching values in the message template; and c. selecting message templates with the most matching indicator values.
  • the scripting module is further configured to select at random from among message templates that are tied for the most matching indicator values.
  • Figs. 1A and 1B show a computer-based system for natural language generation of a story, according to some embodiments of the invention.
  • Fig. 2 shows steps of a computer-based method for natural language generation of a story, according to some embodiments of the invention.
  • Reference is made to Figs. 1A and 1B, showing a computer-based system 100 for natural language generation of a story.
  • Fig. 1A shows modules involved in the story planning stage, for which the system 100 comprises one or more data point amalgamators 112, a story tree library 117, and a story planning module 135.
  • Fig. 1B shows modules involved in the story realization stage, for which the system 100 comprises a message template library 140 and a story scripting module 150.
  • the modules of the system 100 are implemented by one or more processors and one or more non-transitory computer-readable media (CRMs).
  • The CRMs store instructions that the processors execute to perform the module functions.
  • the system 100 may be made available to users by any means; for example, as licensed software, software-as-a-service (SaaS), etc.
  • Computer configuration details such as the type(s) of computer(s), storage media, display devices, operating system(s), etc., are not specified in this disclosure. A person skilled in the art, given this disclosure, would know one or more configurations for implementing the system 100.
  • a data point amalgamator 112 has access to one or more data sources 105 containing data points 110 concerning a subject area.
  • a subject area can be, for example, a particular sport (e.g., basketball) or finance.
  • a data source 105 may be publicly available, subscribed, or proprietary.
  • the data source 105 can be, for example, an AI analyzer of a basketball game's live video stream and/or a live statistical feed from the game. The AI analyzer computes each team's probability of winning in real time, after each play.
  • Another possible data source 105 is an archive of historical data concerning the subject area, such as an archive of team and player statistics for present and/or previous seasons.
  • Yet other possible data sources 105, while not real-time and not historical, contain current newsworthy data; examples of such data sources 105 include news sites and social media sites. Live, current, and historical data points 110 can be combined, enabling comparison and contrast of outcomes over different time scales.
  • Possible data points 110 include a player with the ball, an expected outcome of a game or single play, a win probability of either team, an outcome (e.g., a basket, a final score), active and inactive injured players, the teams' league standings, past years' performances, etc.
  • Each data point amalgamator 112 is characterized by a particular aspect of the subject area.
  • a data point amalgamator 112 harvests data source(s) 105 for one or more predetermined data points 110 needed in order to establish a fact 115 pertaining to the particular aspect of the data point amalgamator 112.
  • a data point amalgamator 112 may be dedicated to seeking players who made notable performances during a game.
  • the data point amalgamator 112 expresses data points 110 of players' individual contributions to the team's win probability throughout the game.
  • the data point amalgamator 112 combines the data points to establish one or more facts 115.
  • one fact 115 can be a comparison between a player's performance in a play with a set of expectations based, for example, on the player's historical performance and league averages.
  • the data point amalgamator 112 may further establish facts 115 ranking each performance according to the likelihood of such a performance taking place.
  • the data point amalgamator 112 allocates a predetermined data type to each data point 110.
  • the data type of a data point 110 reflects the content of the data point 110 and determines how the data point 110 shall be processed later, as further described in connection with the realization stage.
  • the data types comprise 1) an indicator 110I, used to match facts 115 with message templates 145 (further described herein); 2) a variable 110V, providing a variable string value to a message template 145; and 3) a hybrid 110H, comprising both an indicator 110I and a variable 110V component.
  • Each data point 110 is characterized by a name and a value.
  • the data point amalgamator 112 calculates a significance score for each established fact 115. Computation of a fact's 115 significance score may be a function of uniqueness of the fact 115; for example, an injury to an important player, a sudden change in a team's winning probability, an unexpected win, etc. A significance score may be based, for example, on a user bias. If a particular user requesting the news story is known to be interested in a particular team or player, then an event that involves the team or player of interest receives a higher significance score than an event that does not. A data point amalgamator 112 may employ one or more statistical methods to compute significance scores of facts 115.
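  • As a minimal sketch only, the indicator/variable/hybrid typing and a bias-aware significance score might be represented as below; the class and field names, the bias weight, and the scoring formula are illustrative assumptions rather than the patent's definitions.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DataType(Enum):
    INDICATOR = auto()   # used to match facts with message templates
    VARIABLE = auto()    # contributes a text fragment to a message template
    HYBRID = auto()      # carries both an indicator and a variable component


@dataclass
class DataPoint:
    name: str            # each data point is characterized by a name ...
    value: object        # ... and a value
    dtype: DataType


@dataclass
class Fact:
    data_points: list
    significance: float = 0.0


def score_fact(base_uniqueness: float, involves_user_interest: bool,
               bias_weight: float = 1.5) -> float:
    """Hypothetical scoring rule: a fact's base uniqueness score is boosted
    when it involves a team or player the requesting user cares about."""
    return base_uniqueness * (bias_weight if involves_user_interest else 1.0)
```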
  • the data point amalgamator 112 may employ a neural network module (not shown) to predict, in combination with other data points 110, the win probability of each team.
  • a neural network module may also simulate a contrary play scenario — for example, a missed basket instead of a basket — and compute the win probabilities after the contrary play.
  • the impact of the play on the game outcome, and the significance score of the play's fact 115, are computed in correlation to the difference between the win-probability prediction for the real play and the contrary play.
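  • A minimal sketch of that computation, assuming a win-probability model is available as a callable; the function names win_prob and apply_play are hypothetical.

```python
def play_impact(win_prob, state_before, real_play, contrary_play, apply_play):
    """Score a play by how much the predicted win probability diverges
    between the play that actually happened and a simulated contrary play.

    win_prob:    callable returning the team's win probability for a game state
    apply_play:  callable returning the game state after a given play
    """
    p_real = win_prob(apply_play(state_before, real_play))
    p_contrary = win_prob(apply_play(state_before, contrary_play))
    # The significance score of the play's fact is taken in correlation
    # to this difference (e.g. proportional to it).
    return abs(p_real - p_contrary)
```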
  • a data point amalgamator 112 may similarly score a player's game performance by summing the impact on the predicted game outcome for each play the player participates in. For example, the data point amalgamator 112 may tally the impact sum of the player's points, assists, and rebounds during the game, and assign a significance score to the player's performance fact 115 in correlation.
  • a data point amalgamator 112 may assign a score to an asset's predicted daily returns. For each asset, the data point amalgamator 112 continuously calculates data points 110 comprising a distribution of the historical daily returns using a decay model, such as a Johnson's SU distribution. The data point amalgamator 112 then calculates the likelihood of achieving the actual daily return given the distribution fact 115. A significance score is calculated using the found likelihood and, typically, other contributing factors.
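  • A rough sketch of such a calculation using SciPy's Johnson SU distribution follows; the negative-log-likelihood scoring and the omission of the decay weighting are simplifying assumptions.

```python
import numpy as np
from scipy.stats import johnsonsu


def daily_return_significance(historical_returns: np.ndarray, actual_return: float):
    """Fit a Johnson SU distribution to an asset's historical daily returns and
    score today's actual return by how unlikely it is under that distribution.
    (A production system would also weight recent history more heavily via a
    decay model; that weighting is omitted here for brevity.)"""
    a, b, loc, scale = johnsonsu.fit(historical_returns)
    likelihood = johnsonsu.pdf(actual_return, a, b, loc=loc, scale=scale)
    # Rarer returns (lower likelihood) yield higher significance scores.
    return -np.log(max(likelihood, 1e-12))
```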
  • the story tree library 117 is a database containing templates of a type called a story tree 120. Some or all of the story trees 120 may be associated with a particular data point amalgamator 112.
  • a story tree 120 comprises leaf nodes 130 and section nodes 125.
  • Leaf nodes 130 are designated for placement therein of facts 115 scored by the data point amalgamator 112; such placement is further described herein in connection with the story planning module 135.
  • Leaf nodes 130 can be characterized by a minimum and maximum number of facts 115, designating a range of number of facts 115 that may populate each leaf node 130.
  • a section node 125 can serve two purposes: 1) correspondence to an outline sectional heading of a news story; and/or 2) behavior of child nodes of the section node.
  • a section node 125 may dictate that all facts 115 placed in children of the section node 125 be placed in chronological order; or, for example, facts 115 comprising “good news” (e.g., gaining possession of the basketball in the opponent's side of the court) be separated from facts 115 comprising “bad news” (e.g. a 3-point basket by the opposing team).
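  • One possible in-memory representation of such a story tree is sketched below; the class names, the placement_rule callable, and the use of Python dataclasses are illustrative assumptions, not the disclosure's data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class LeafNode:
    """Designated to hold scored facts that become article content."""
    min_facts: int
    max_facts: int
    facts: list = field(default_factory=list)


@dataclass
class SectionNode:
    """Corresponds to an outline heading and/or a behavior imposed on child nodes."""
    heading: Optional[str] = None
    # A placement rule applied to facts cascading through this node, e.g. sort
    # children's facts chronologically, or route "good news" and "bad news"
    # facts into different child nodes.
    placement_rule: Optional[Callable] = None
    children: list = field(default_factory=list)  # SectionNode or LeafNode instances
```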
  • the story planning module 135 receives a set of scored facts from the data point amalgamator 112.
  • the story planning module 135 selects appropriate story trees 120 from the story tree library 117. For one of the selected story trees 120, the story planning module 135 populates the leaf nodes 130 with the scored facts 115.
  • the story planning module 135 observes sectional and behavioral placement rules dictated by section nodes 125, as a fact 115 is cascaded from the top of the story tree 120 down to a leaf node 130.
  • the story planning module 135 further observes the maximum number of facts 115 of each leaf node 130, and will stop further populating of a leaf node 130 whose maximum has been reached.
  • the story planning module 135 adds the significance scores of facts 115 populating the leaf nodes 130, producing an overall score of the story tree 120.
  • the story tree's 120 overall score is penalized (reduced by a predetermined number of points) if the minimum number of facts 115 in a leaf node 130 is not met.
  • the story planning module 135 repeats the process of populating of leaf nodes 130 with the scored facts 115 for each of the selected story trees 120.
  • the story planning module 135 selects a story plan 120', the populated story tree 120 with the highest overall score.
  • the story plan 120' is deemed to contain the outline and content of the best story for the subject area aspect of the data point amalgamator 112.
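  • A compact sketch of that planning loop, reusing the LeafNode/SectionNode sketch above; the greedy fill, the fixed penalty value, and the omission of section-node placement rules are simplifying assumptions.

```python
def iter_leaves(node):
    """Yield every leaf node reachable from a story-tree node (duck-typed)."""
    if hasattr(node, "children"):          # section node
        for child in node.children:
            yield from iter_leaves(child)
    else:                                  # leaf node
        yield node


def plan_story(story_trees, scored_facts, penalty=10.0):
    """Populate each candidate story tree, score it, and return the populated
    tree (the "story plan") with the highest overall score."""
    best_plan, best_score = None, float("-inf")
    for tree in story_trees:
        remaining = sorted(scored_facts, key=lambda f: f.significance, reverse=True)
        for leaf in iter_leaves(tree):
            leaf.facts = []
            # Stop populating a leaf once its maximum number of facts is reached.
            while remaining and len(leaf.facts) < leaf.max_facts:
                leaf.facts.append(remaining.pop(0))
        # Overall score: sum of the significance scores of the placed facts,
        # penalized for every leaf left below its minimum number of facts.
        score = sum(f.significance for leaf in iter_leaves(tree) for f in leaf.facts)
        score -= penalty * sum(1 for leaf in iter_leaves(tree)
                               if len(leaf.facts) < leaf.min_facts)
        if score > best_score:
            best_plan, best_score = tree, score
    return best_plan
```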
  • the message template library 140 contains message templates 145.
  • a message template 145 comprises indicators 110I to be best matched with indicators 110I within the data points 110 of a fact 115. Additionally, a message template 145 contains a skeletal text message (e.g., one or more sentences or phrases) with variable text to be filled in by the textual contribution of variables 110V in the fact 115.
  • a message template 145 could, for example, carry the matching indicators up and daily, which specify that the message template 145 is a best match for a combination of data points 110 with indicators up and daily (showing that the combination of data points relates to an asset whose daily value has risen); the remaining skeletal text is then completed by the textual contributions of the variables 110V in the matched combination.
  • each message template 145 is associated with one or more data point amalgamators 112 or a fact 115 generated thereby.
  • the story scripting module 150 receives the story plan 120' from the story planning module 135, and receives the message templates 145 associated with the data point amalgamator 112 from the message template library 140. For each fact 115 in the story plan 120', the scripting module 150 finds the best-matching message template 145; for example, as follows: 1) the story scripting module 150 eliminates message templates 145 containing names of indicators 110I or variables 110V not found among the data points 110 constituting the fact 115; 2) the remaining message templates 145 are tested to see how many indicators constituting the fact 115 have matching values in the message template 145; and 3) the message template 145 with the most matching indicator values is selected. If two or more message templates 145 are tied for the most matching indicator values, a message template 145 may be selected at random from among them.
  • a fact 115 contains the data points 110 from a finance data source 105 listed in Table 1:
  • Message template 2 is eliminated because the indicator 110I named 'weekly' in message template 2 is not among the indicators 110I in the fact 115.
  • Message template 3 is eliminated because the variable 110V named 'open value' in message template 3 is not among the variables 110V in the fact 115.
  • the indicators 110I in the fact 115 (of Table 1) are compared with the indicators 110I in the remaining message templates, 1 and 4.
  • Message template 1 contains two indicators 110I in the fact 115, 'daily' and 'flat'.
  • Message template 4 contains one indicator 110I in the fact 115, 'daily'. Therefore, message template 1, with the most indicators 110I matching the indicators 110I in the fact 115 (among the non-eliminated message templates 145), is chosen as the message template 145 for the fact 115.
  • the story scripting module 150 injects the textual contribution from values of variables 110V in the fact 115 into the variable text of the selected message template 145.
  • the scripting module generates the text, “A mostly flat day for the euro, now at 1.1847 to the dollar.”
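  • Concretely, the injection step amounts to filling the template's blanks with the variables' textual contributions; in the sketch below, the data-point names and the skeletal text of message template 1 are assumptions reconstructed from this worked example rather than the actual Table 1 contents.

```python
# Hypothetical reconstruction of the fact's data points (assumed, not Table 1 itself):
fact_indicators = {"period": "daily", "trend": "flat"}
fact_variables = {"asset_name": "euro", "current_value": "1.1847"}

# Hypothetical skeletal text of message template 1; the indicators 'daily' and
# 'flat' selected this template, and the variables fill in the blanks.
template_text = "A mostly flat day for the {asset_name}, now at {current_value} to the dollar."

print(template_text.format(**fact_variables))
# -> A mostly flat day for the euro, now at 1.1847 to the dollar.
```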
  • the story scripting module 150 iteratively writes the story 155 as it traverses the data points 110 in the leaf nodes 130 of the story plan 120'.
  • the story scripting module 150 may employ orthographic realization to fix punctuation and casing.
  • the selected story plan 120' may alternatively or additionally be matched to nontextual media serving as the basis for data points 110 of a fact 115.
  • the media may be, for example, an image, an audio clip, a video clip, or any combination thereof.
  • the story scripting module 150 may insert the media file into the story, in lieu of or in addition to the textual message.
  • the textual message may serve as a caption for the media file content.
  • Reference is made to Fig. 2, showing steps of a computer-based method 200 for natural language generation of a story, according to some embodiments of the invention.
  • the method 200 comprises a story planning stage comprising steps of a. providing a system for natural language generation of a story 202; b.
  • each fact comprising a combination of one or more of the data points, the data points selected according to selection rules for a predetermined aspect of the subject area 210; d. for each fact, characterizing a data type of each data point 215 as one or more of i. an indicator, characterizing the data point; and ii. a variable, comprising a textual contribution of the data point; e. calculating significance scores of each fact 220; f. obtaining one or more story trees associated with the aspect 225, each story tree comprising i.
  • each section node corresponding to an outline heading of an article or a behavior of child nodes of the section node; ii. leaf nodes corresponding to article contents; each leaf node is characterized by a minimum and a maximum number of scored facts to be populated in the leaf node; g. for each story tree, populating its leaf nodes with the scored facts, according to the minimum and maximum number of scored facts and a story ruleset of the story tree 230; h. for each story tree, summing the significance scores of facts placed in the leaf nodes 235; and i.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Machine Translation (AREA)

Abstract

A computer-based system for natural language generation of a story uses data sources to collect data, determine facts from the data according to rules specific to predetermined aspects of a specific subject area, characterize the data types of those facts, and calculate significance scores for the facts. The system's facts and story tree generate a title and a story or article outline and populate them with facts based on the specific topics. The story is then fleshed out with sentences created using the previously extracted facts and scored data.
PCT/IL2022/050079 2021-01-21 2022-01-19 System and method for natural language generation of a news story WO2022157766A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163139832P 2021-01-21 2021-01-21
US63/139,832 2021-01-21

Publications (1)

Publication Number Publication Date
WO2022157766A1 true WO2022157766A1 (fr) 2022-07-28

Family

ID=82548695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2022/050079 WO2022157766A1 (fr) 2021-01-21 2022-01-19 System and method for natural language generation of a news story

Country Status (1)

Country Link
WO (1) WO2022157766A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200019592A1 (en) * 2013-09-16 2020-01-16 Arria Data2Text Limited Method And Apparatus For Interactive Reports
US20160232152A1 (en) * 2014-04-18 2016-08-11 Arria Data2Text Limited Method and apparatus for document planning
US20200401770A1 (en) * 2017-02-17 2020-12-24 Narrative Science Inc. Applied Artificial Intelligence Technology for Performing Natural Language Generation (NLG) Using Composable Communication Goals and Ontologies to Generate Narrative Stories
US20200293617A1 (en) * 2019-03-14 2020-09-17 International Business Machines Corporation Predictive natural language rule generation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NEIL MCINTYRE; MIRELLA LAPATA: "Learning to tell tales", Natural Language Processing of the AFNLP: Volume 1, Association for Computational Linguistics, Stroudsburg, PA, USA, 2-7 August 2009, pages 217-225, XP058111650, ISBN: 978-1-932432-45-9 *

Similar Documents

Publication Publication Date Title
US10176170B2 (en) Systems for dynamically generating and presenting narrative content
US20210173847A1 (en) Systems and methods for categorizing and presenting performance assessment data
Von Ahn et al. Verbosity: a game for collecting common-sense facts
US20080124687A1 (en) Virtual world aptitude and interest assessment system and method
US20120216115A1 (en) System of automated management of event information
CN109011580A (zh) Method and apparatus for acquiring an endgame hand, computer device, and storage medium
Dulačka et al. Validation of music metadata via game with a purpose
Agrawal et al. Predicting results of Indian premier league T-20 matches using machine learning
Elfrink Predicting the outcomes of MLB games with a machine learning approach
Wolf et al. A football player rating system
Ahamad et al. An OWA‐based model for talent enhancement in cricket
US20230206636A1 (en) Video processing device, video processing method, and recording medium
WO2022157766A1 (fr) System and method for natural language generation of a news story
KR102020012B1 (ko) System and method for automatic real-time writing of sports articles by artificial intelligence based on big data analysis
Rubleske et al. E-Sports Analytics: A Primer and Resource for Student Research Projects and Lesson Plans.
Kaur et al. Analyzing and Exploring the Impact of Big Data Analytics in Sports Science
Ekstrøm et al. Evaluating one-shot tournament predictions
CN110309415B (zh) News information generation method and apparatus, and electronic-device-readable storage medium
Zalewski et al. Recommender system for board games
Lee Automated story-based commentary for sports
Arif et al. Detection of bowler's strong and weak area in cricket through commentary
Indulkar PUBG Winner Ranking Prediction using R Interface 'h2o' Scalable Machine Learning Platform
Pincus et al. Towards automatic identification of effective clues for team word-guessing games
US20230106936A1 (en) Interactive Gaming in Sports
Baughman et al. Large Scale Generative AI Text Applied to Sports and Music

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22742373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.11.2023)