CN112101037A - Semantic similarity calculation method - Google Patents

Semantic similarity calculation method

Info

Publication number
CN112101037A
CN112101037A (Application No. CN201910451691.XA)
Authority
CN
China
Prior art keywords
word
words
sentence
feature
variable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910451691.XA
Other languages
Chinese (zh)
Inventor
黄本聪
陈建亨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unipattern Corp
Original Assignee
Unipattern Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unipattern Corp filed Critical Unipattern Corp
Priority to CN201910451691.XA priority Critical patent/CN112101037A/en
Publication of CN112101037A publication Critical patent/CN112101037A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Machine Translation (AREA)

Abstract

A semantic similarity calculation method comprises the following steps. First, a sentence to be analyzed is input and stripped of the miscellaneous words preset for each indicator word. The words in the sentence are then extracted and checked against the antonyms preset for each indicator word. Next, similar-word replacement is performed on the sentence using the similar words preset for each indicator word. The sentence is then checked against the default feature words of each indicator word to obtain a regular sentence after semantic parsing. Finally, similarity between the regular sentence and the indicator words is calculated so as to output a response sentence corresponding to the semantic meaning of the regular sentence.

Description

Semantic similarity calculation method
Technical Field
The present invention relates to a similarity calculation method, and more particularly to a semantic similarity calculation method.
Background
With the development of science and technology, communication between humans and intelligent electronic devices is increasingly performed through the most natural and convenient medium, voice, and robots whose main purpose is interaction have been released in recent years.
One well-known human-computer interaction technique is to build a preset dialogue database in the robot body for the utterances or questions a user may express, compare a received voice message against the built-in dialogue database to identify its meaning, and conduct an interactive dialogue.
Another technique is deep learning through a neural network, which in practice is mostly implemented with a supercomputer or a single-chip system. With a single-chip system, the same circuit is made to act, at different points in time, as the different operation layers of a multilayer artificial neural network. The more layers the network has, the more complex the functions (i.e., the more complex the decision rules) it can simulate; however, as the number of layers grows, the number of neurons required across the whole network grows with it, deriving a huge hardware cost burden, and the volume of input data, learnable parameters, and per-layer operation results is very considerable, more than most enterprises can bear.
The above disadvantages all derive from existing human-computer interaction techniques, and truly autonomous human-computer interaction remains difficult at the current state of artificial intelligence; after all, language is a cultural product of humanity's long-term learning and accumulated experience. How to use a finite dialogue database while quickly capturing and analyzing the user's meaning therefore becomes an important subject.
Disclosure of Invention
In view of the above, the present invention provides a semantic similarity calculation method, which includes the following steps.
A sentence to be analyzed is input and stripped of the miscellaneous words preset for each indicator word; the words in the sentence are extracted and checked against the antonyms preset for each indicator word; similar-word replacement is performed on the sentence using the similar words preset for each indicator word; the sentence is checked against the default feature words of each indicator word to obtain a regular sentence after semantic parsing; and similarity between the regular sentence and the indicator words is calculated so as to output a response sentence corresponding to the semantic meaning of the regular sentence.
Another technical means of the present invention is that the sentence is first checked against the default constant feature words of each indicator word and then against the default variable feature words of each indicator word; the feature words of an indicator word comprise at least one constant feature word, at least one variable feature word, or a combination of the two, and each variable feature word has a plurality of associated feature words related to it.
Another technical means of the present invention is that the sentence is first checked against the default constant feature word of each indicator word and then against the default variable feature words of each indicator word; the feature words of an indicator word comprise at least one constant feature word, at least one variable feature word, or a combination of the two, each variable feature word has a plurality of associated feature words related to it, and the plurality of variable feature words are in an intersection relationship with each other.
Another technical means of the present invention is that the plurality of variable feature words are arranged in a sequential order.
Another technical means of the present invention is that the constant feature words are checked first and the variable feature words afterwards.
Another technical means of the present invention is that the regular sentence is obtained only when the constant feature word and the variable feature word of the indicator word under feature check are simultaneously matched.
Another technical means of the present invention is that the response sentence extracts the constant feature words and variable feature words of the corresponding indicator word and is provided with at least one constant response feature word, at least one variable response feature word, or a combination thereof, the constant and variable response feature words being arranged in an order corresponding to the constant and variable feature words.
Another technical means of the present invention is that the similarity calculation performs the feature word check in the order of the constant feature words followed by the variable feature words.
Another technical means of the present invention is that, after the similarity between the regular sentence and the indicator word is calculated, the result is checked against a preset matching threshold, and the response sentence is output only when the similarity exceeds the threshold.
Another technical means of the present invention is that, when the sentence cannot be matched with any indicator word in the rule base, the sentence is matched, according to its words, against the indicator words in a broad rule base, so as to obtain a broad response sentence.
The beneficial effect of the present invention is that, by setting at least one constant feature word, at least one variable feature word, or a combination of the two in each indicator word, with each variable feature word carrying associated feature words related to itself, feature word checks can be performed over diversified user phrasings, and the correspondingly set variable response feature words can produce many different answers.
Drawings
FIG. 1 is a flow chart illustrating a preferred embodiment of the semantic similarity calculation method according to the present invention.
Detailed Description
The features and technical content of the present invention will become apparent from the following detailed description of a preferred embodiment, read in conjunction with the accompanying drawing.
Referring to FIG. 1, a semantic similarity calculation method according to a preferred embodiment of the present invention is adapted to parse a user's meaning during communication with a robot and to generate a corresponding response, and comprises the following steps.
First, step 91 is performed: a sentence to be parsed is input, and the miscellaneous words preset for each indicator word are removed from the sentence. De-wording means removing superfluous words from the question, and the preset miscellaneous words may number 0 or more, for example meaningless spoken fillers such as "excuse me", "if", "for example", and so on. The input sentence may be obtained by the user talking directly with the robot, or by capturing speech and converting it to text (or text to speech). Table 1 below, whose indicator word field reads "you like {xq0} {xq1}", serves as the illustration for this embodiment; its miscellaneous word field lists the preset words to be removed, such as "relatively", "comparatively", and so on.
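The de-wording of step 91 can be sketched as follows; the function name and word lists are illustrative assumptions, not the patent's own code.

```python
# Step 91 (sketch): strip an indicator word's preset miscellaneous (filler)
# words from the input sentence. The patent allows 0 or more such words.
def remove_miscellaneous(sentence: str, miscellaneous_words: list[str]) -> str:
    for word in miscellaneous_words:
        sentence = sentence.replace(word, "")
    return sentence.strip()

cleaned = remove_miscellaneous(
    "excuse me, do you like basketball",
    ["excuse me,", "if possible"],
)
print(cleaned)  # -> do you like basketball
```

Removal by plain substring replacement is a simplification; a real implementation would likely segment the sentence into words first.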
Then, step 92 is performed: the words in the sentence are extracted and checked against the antonyms preset for each indicator word. The antonyms are set as the opposite words of each indicator word, and the preset antonyms may number 0 or more; for example, the antonyms of "willing" in an indicator word may be "unwilling", "do not love", "do not want", "do not need", and so on. If a preset antonym of an indicator word appears in this step, the sentence's meaning differs from that indicator word, and the sentence is then checked against other indicator words. The preset indicator words can serve customer-service consultation in a specific field or venue such as a hospital, school, amusement park, or department store. The antonym field of the indicator word in Table 1 below lists its preset antonyms.
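The antonym check of step 92 amounts to rejecting an indicator word early whenever any of its preset antonyms appears in the sentence; a minimal sketch (names assumed):

```python
# Step 92 (sketch): a sentence containing any preset antonym of an indicator
# word is treated as semantically different from that indicator word, so the
# later replacement/check/parsing steps for it can be skipped.
def antonym_check(sentence: str, antonyms: list[str]) -> bool:
    """True when the sentence passes, i.e. contains no preset antonym."""
    return not any(antonym in sentence for antonym in antonyms)

antonyms_of_like = ["dislike", "hate", "unwilling"]
print(antonym_check("do you like football", antonyms_of_like))     # -> True
print(antonym_check("do you dislike football", antonyms_of_like))  # -> False
```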
TABLE 1
[Table 1 is reproduced as an image (Figure BDA0002075339070000041) in the original publication. As described in the surrounding text, its fields list, for the indicator word "you like {xq0} {xq1}", the preset miscellaneous words, antonyms, similar words, the constant feature word "you like", the variable feature words such as basketball and dance, and the preset response sentences.]
Then, step 93 is performed: similar words in the sentence are replaced with each indicator word's own wording. The similar words are set per indicator word and may number 0 or more; for example, the similar-word field for "like" in Table 1 above lists "favorite", "love", and so on, and terms such as "father" and its colloquial equivalents may likewise be paired. In practice, the similar-word replacement of step 93 may be performed before the antonym check of step 92, but the order is not limited thereto.
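The similar-word replacement of step 93 rewrites preset similar words into the indicator word's own wording so that diverse phrasings collapse onto one canonical form; a sketch with an assumed mapping:

```python
# Step 93 (sketch): replace each preset similar word/phrase with the indicator
# word's canonical wording. Longer phrases are replaced first so that, e.g.,
# "love watching" wins over a shorter overlapping entry.
def replace_similar_words(sentence: str, similar_map: dict[str, str]) -> str:
    for phrase in sorted(similar_map, key=len, reverse=True):
        sentence = sentence.replace(phrase, similar_map[phrase])
    return sentence

similar_map = {"love playing": "like", "love watching": "like", "adore": "like"}
print(replace_similar_words("do you love watching football", similar_map))
# -> do you like football
```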
Then, step 94 is performed: the sentence is checked against the default constant feature word of each indicator word. For example, the constant feature word field of Table 1 above defaults to "you like"; if the sentence contains "you like", it may share the indicator word's meaning and the next step is performed; otherwise the sentence is checked against other indicator words.
Then, step 95 is performed: the sentence is checked against the default variable feature words of each indicator word to obtain the regular sentence after semantic parsing. The feature words of an indicator word comprise at least one constant feature word, at least one variable feature word, or a combination of the two, and each variable feature word has a plurality of associated feature words related to it. A feature word may be a noun, verb, or adjective; for example, the variable feature word field of Table 1 above defaults to basketball, dance, and so on.
Furthermore, the variable feature words have an intersection relationship with each other and a sequential order, such as {xq0} {xq1} in the indicator word of Table 1 above. In this flow the constant feature words are checked first and the variable feature words afterwards; moreover, the regular sentence is obtained only when both the constant and the variable feature words of the indicator word under inspection are matched at the same time.
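Steps 94 and 95 — the constant feature word checked first, then each variable feature word against its associated feature words, with both required to match — can be sketched as follows (helper and field names assumed):

```python
# Steps 94-95 (sketch): the constant feature word gates the check; only when it
# matches are the variable feature words checked, in their sequential order,
# against their associated feature words. Both must match at once for the
# regular sentence to be produced.
def feature_word_check(sentence, constant_feature, variable_features):
    """variable_features: assumed map of slot name -> associated feature words."""
    if constant_feature not in sentence:
        return None  # step 94 fails: try other indicator words
    slots = {}
    for slot, associated in variable_features.items():
        match = next((word for word in associated if word in sentence), None)
        if match is None:
            return None  # step 95 fails: no associated feature word present
        slots[slot] = match
    return slots  # slot bindings used to build the regular sentence

slots = feature_word_check(
    "do you like basketball",
    constant_feature="you like",
    variable_features={"xq0": ["basketball", "football", "volleyball"]},
)
print(slots)  # -> {'xq0': 'basketball'}
```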
Then, step 96 is performed: similarity between the regular sentence and the indicator word is calculated so as to output a response sentence corresponding to the regular sentence's meaning, as preset in the response sentence field of Table 1 above: "I prefer {xq1} a little", "I like both sports", "{xq0} and {xq1} are both very good sports". When {xq0} and {xq1} are basketball and football respectively, the response sentence becomes: "I prefer football a little", "I like both sports", or "basketball and football are both very good sports".
In step 96, the response sentence extracts the constant and variable feature words of the corresponding indicator word and is provided with at least one constant response feature word, at least one variable response feature word, or a combination thereof, the constant and variable response feature words being arranged in an order matching the constant and variable feature words. Note in particular that the similarity calculation proceeds in the order of the constant feature words followed by the variable feature words.
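The slot substitution of step 96 can be sketched as below, using the {xq0}/{xq1} template style of Table 1; the helper name is an assumption:

```python
# Step 96 (sketch): the variable response feature words are arranged in the
# same order as the indicator word's variable feature words, so the matched
# slot values can be substituted directly into the response template.
def build_response(template: str, slots: dict[str, str]) -> str:
    for slot, value in slots.items():
        template = template.replace("{" + slot + "}", value)
    return template

response = build_response(
    "I prefer {xq1}; {xq0} and {xq1} are both very good sports",
    {"xq0": "basketball", "xq1": "football"},
)
print(response)
# -> I prefer football; basketball and football are both very good sports
```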
Finally, in step 97, after the similarity between the regular sentence and the indicator word is calculated, the result is checked against a preset matching threshold; the response sentence is output only when the similarity exceeds the threshold, and otherwise is not output. The threshold is generally set at about 70-80%. Although de-wording is performed at the start, useless filler that was not completely removed may remain, so the preset matching threshold safeguards the accuracy of the calculated result.
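Step 97's threshold gate can be sketched as below. The patent does not fix the similarity formula, so a simple word-overlap (Jaccard) measure is assumed purely for illustration:

```python
# Step 97 (sketch): gate the response behind a preset matching threshold
# (the text suggests roughly 70-80%). The Jaccard word overlap here is an
# assumed stand-in for whatever similarity measure is actually used.
def similarity(a: str, b: str) -> float:
    words_a, words_b = set(a.split()), set(b.split())
    return len(words_a & words_b) / len(words_a | words_b)

def output_response(regular, indicator, response, threshold=0.75):
    return response if similarity(regular, indicator) > threshold else None

print(output_response("do you like basketball", "do you like basketball",
                      "I love basketball"))   # -> I love basketball
print(output_response("what time is it", "do you like basketball",
                      "I love basketball"))   # -> None
```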
It is particularly noted that when the sentence matches none of the preset indicator words, that is, it fails one of the antonym check, the feature word check, or the similarity calculation, the sentence is matched by its feature words against the indicator word library of a broad rule base, so as to obtain a broad response sentence derived from the sentence's feature words.
Suppose the indicator words are X1, X2, X3, ..., and that X1 has n phrasings X11, X12, X13, ..., X1n (and likewise X2, X3, and so on). Given a user question or sentence Y1 to be analyzed, the conventional way to decide whether Y1 expresses the meaning of X1 is to compute the similarity of Y1 against each of X11, X12, X13, ..., X1n, so n labels must be created, consuming setup time and labor cost. The present invention dispenses with this complex practice of establishing n labeled entries: one indicator word represents all n phrasings X11, ..., X1n, and an entry such as X11 can stand for the m semantic groups X11-X1n, X21-X2n, X31-X3n, ..., up to Xmn, covering m x n phrasings in total. This saves rule-base construction time, shortens the calculation time, and improves the real-time performance of human-computer interaction.
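The rule-base entry this passage describes — one indicator word bundling the preset lists used by steps 91 to 96, so that a single entry stands in for the n phrasings X11 to X1n — could be organized as below; all field names are illustrative assumptions:

```python
# Sketch of one rule-base entry. A single IndicatorWord carries every preset
# list the pipeline needs, replacing n separately labeled phrasings.
from dataclasses import dataclass, field

@dataclass
class IndicatorWord:
    text: str                                                   # e.g. "you like {xq0}"
    miscellaneous: list[str] = field(default_factory=list)      # step 91
    antonyms: list[str] = field(default_factory=list)           # step 92
    similar: dict[str, str] = field(default_factory=dict)       # step 93
    constant_features: list[str] = field(default_factory=list)  # step 94
    variable_features: dict[str, list[str]] = field(default_factory=dict)  # step 95
    responses: list[str] = field(default_factory=list)          # step 96

rule = IndicatorWord(
    text="you like {xq0}",
    similar={"love playing": "like"},
    constant_features=["you like"],
    variable_features={"xq0": ["basketball", "football"]},
    responses=["I love {xq0}"],
)
# One entry now covers "do you like basketball", "do you love playing football", ...
print(len(rule.variable_features["xq0"]))  # -> 2
```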
First, as an example of the antonym check, let the indicator word be "do you like football". If the sentence is "do you like football", the calculated similarity is extremely high; but if the sentence is "do you not like football", the calculated similarity is also high, even though the two sentences differ by only a single word ("not") while being completely opposite in meaning. Through the antonym check step, the input sentence "do you not like football" can be marked in this step as not matching the indicator word "do you like football" and compared against other indicator words instead, without performing the subsequent replacement, check, or parsing steps for this indicator word, thereby saving analysis time and improving analysis accuracy.
Next, as an example in which the indicator word contains no variable feature word, let indicator word X11 be "do you {like} {football}", where "like" and "football" in braces are constant feature words. Language expression is diverse, for example: X12 "do you love playing football", X13 "do you love watching football", X14 "do you like football or not", X15 "in old-fashioned terms, do you like watching football matches", X16 "don't you like football". When sentences X12 to X16 are input, "watching" in X13 and "in old-fashioned terms" in X15 are preset meaningless miscellaneous words and are removed in step 91; "love playing" in X12, "love watching" in X13, "like or not" in X14, "love watching" in X15, and X16's "don't you like football" relative to the indicator word's "do you like football" are preset similar words and are replaced in step 93.
Then the feature word check is carried out. In this step the two feature words "like" and "football" must both be present, and the "football" feature word is checked only after the "like" feature word has been checked. X12 is thus changed from "do you love playing football" to "do you like football" for similarity calculation; X13 from "do you love watching football" to "do you like football"; X14 from "do you like football or not" to "do you like football"; X15 from "in old-fashioned terms, do you like watching football matches" to "do you like football"; and X16 from "don't you like football" to "do you like football". With this processing flow, X12 to X16 all obtain a very high similarity value in the similarity calculation and are recognized as having the same meaning as the indicator word X11.
In addition, as an example in which the indicator word contains a variable feature word, let the indicator word be "do you like {football/basketball/volleyball/yoga ...}", where the braces hold one variable feature word with m associated feature words related to football, such as basketball, volleyball, yoga, and so on. The variable response feature word of the response sentence is set to correspond to the variable feature word, so the response sentences may be designed as "I love {football/basketball/volleyball/yoga ...}", "{football/basketball/volleyball/yoga ...} is also one of my favorites", "{football/basketball/volleyball/yoga ...} is a bit difficult; I do not like it much", and the like.
Further, when the input sentence is "do you like basketball", the associated feature word "basketball" among the variable feature words is matched during the feature word check; the sentence becomes the regular sentence "do you like basketball", the similarity calculation then yields a high similarity, and the response sentence is one of several diversified answers such as "I love basketball", "basketball is also one of my favorites", or "basketball is a bit difficult; I do not like it much". If the input sentence is "do you like volleyball", the corresponding response sentence is "volleyball is a bit difficult; I do not like it much".
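The "do you like {basketball/volleyball/...}" example above can be walked through end to end with a toy pipeline; every helper, list, and response string here is an illustrative assumption, and the similarity calculation of steps 96-97 is elided:

```python
# End-to-end sketch of steps 91-96 for one indicator word, "do you like {xq0}".
def answer(sentence):
    miscellaneous = ["excuse me,"]
    antonyms = ["dislike", "hate"]
    similar = {"love playing": "like"}
    variable = ["basketball", "football", "volleyball"]
    responses = {"basketball": "I love basketball",
                 "volleyball": "volleyball is a bit hard, I do not like it much"}
    for word in miscellaneous:                      # step 91: de-wording
        sentence = sentence.replace(word, "").strip()
    if any(a in sentence for a in antonyms):        # step 92: antonym check
        return None
    for phrase, canon in similar.items():           # step 93: similar-word replacement
        sentence = sentence.replace(phrase, canon)
    if "you like" not in sentence:                  # step 94: constant feature word
        return None
    hit = next((w for w in variable if w in sentence), None)  # step 95: variable
    if hit is None:
        return None
    return responses.get(hit, f"I love {hit}")      # step 96 (similarity elided)

print(answer("do you like volleyball"))
# -> volleyball is a bit hard, I do not like it much
```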
In summary, the semantic similarity calculation method of the present invention sets at least one constant feature word, at least one variable feature word, or a combination of the two in each indicator word, the variable feature word further having associated feature words of its own; it performs feature word checks over diversified sentence phrasings and responds through the correspondingly set variable response feature words, yielding many different answers. This reduces the labor of setting indicator words and the computer's operation time, and greatly improves the flexibility of human-computer interaction to meet the needs of different fields and venues, thereby achieving the purpose of the present invention.
The above-mentioned embodiments are only preferred embodiments of the present invention, and the scope of the present invention should not be limited thereby, and all the simple equivalent changes and modifications made according to the claims and the description of the present invention are also within the scope of the present invention.

Claims (9)

1. A semantic similarity calculation method, characterized by comprising the following steps:
inputting a sentence to be analyzed, and removing from the sentence the miscellaneous words preset for each indicator word;
extracting the words in the sentence and checking them against the antonyms preset for each indicator word;
performing similar-word replacement on the sentence with the similar words preset for each indicator word;
checking the sentence against the default feature words of each indicator word to obtain a regular sentence after semantic parsing; and
performing similarity calculation between the regular sentence and the indicator word so as to output a response sentence corresponding to the semantic meaning of the regular sentence.
2. The semantic similarity calculation method according to claim 1, wherein during the feature word check, the sentence is first checked against the default constant feature words of each indicator word and then against the default variable feature words of each indicator word, the feature words of an indicator word comprising at least one constant feature word, at least one variable feature word, or a combination thereof, each variable feature word having a plurality of associated feature words related to it.
3. The semantic similarity calculation method according to claim 1, wherein during the feature word check, the sentence is first checked against the default constant feature word of each indicator word and then against the default variable feature words of each indicator word, the feature words of an indicator word comprising at least one constant feature word, at least one variable feature word, or a combination thereof, each variable feature word having a plurality of associated feature words related to it, and the plurality of variable feature words being in an intersection relationship with each other.
4. The semantic similarity calculation method according to claim 2 or 3, wherein a plurality of variable feature words are arranged in a sequential order when feature word check of the sentence is performed.
5. The semantic similarity calculation method according to claim 2 or 3, wherein, during the feature word check of the sentence, the regular sentence is obtained only when the constant feature word and the variable feature word of the indicator word under feature check are simultaneously matched.
6. The semantic similarity calculation method according to claim 2 or 3, wherein, in the similarity calculation, the response sentence extracts the constant feature words and variable feature words of the corresponding indicator word and is provided with at least one constant response feature word, at least one variable response feature word, or a combination thereof, the constant and variable response feature words being arranged in an order corresponding to the constant and variable feature words.
7. The semantic similarity calculation method according to claim 2 or 3, wherein in similarity calculation, feature word check is performed according to the sequence of the constant feature words and the variable feature words.
8. The semantic similarity calculation method according to claim 1, wherein, in the similarity calculation, after the similarity between the regular sentence and the indicator word is calculated, it is checked against a preset matching threshold, and the response sentence is output only when the similarity exceeds the matching threshold.
9. The semantic similarity calculation method according to claim 1, wherein when the sentence cannot be matched with any indicator word in the rule base, the sentence is matched, according to its words, with the indicator words in a broad rule base, so as to obtain a broad response sentence derived from the words of the sentence.
CN201910451691.XA 2019-05-28 2019-05-28 Semantic similarity calculation method Pending CN112101037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910451691.XA CN112101037A (en) 2019-05-28 2019-05-28 Semantic similarity calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910451691.XA CN112101037A (en) 2019-05-28 2019-05-28 Semantic similarity calculation method

Publications (1)

Publication Number Publication Date
CN112101037A true CN112101037A (en) 2020-12-18

Family

ID=73748294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910451691.XA Pending CN112101037A (en) 2019-05-28 2019-05-28 Semantic similarity calculation method

Country Status (1)

Country Link
CN (1) CN112101037A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200602908A * 2004-07-05 2006-01-16 Unipattern Co Ltd Data classification method and computer program
CN1734445A * 2004-07-26 2006-02-15 Sony Corp Method, apparatus, and program for dialogue, and storage medium including a program stored therein
CN102054116A * 2009-10-30 2011-05-11 Institute For Information Industry Emotion analysis method, emotion analysis system and emotion analysis device
CN102760153A * 2011-04-21 2012-10-31 Palo Alto Research Center Inc Incorporating lexicon knowledge to improve sentiment classification
CN105824797A * 2015-01-04 2016-08-03 Huawei Technologies Co Ltd Method, device and system for evaluating semantic similarity
US20170060854A1 * 2015-08-25 2017-03-02 Alibaba Group Holding Limited Statistics-based machine translation method, apparatus and electronic device
CN107291783A * 2016-04-12 2017-10-24 Yutou Technology (Hangzhou) Co Ltd A semantic matching method and smart device
CN107944027A * 2017-12-12 2018-04-20 AISpeech Co Ltd Method and system for creating a semantic key index
JP2018073411A * 2016-11-04 2018-05-10 Ricoh Co Ltd Natural language generation method, natural language generation device, and electronic apparatus
CN108874896A * 2018-05-22 2018-11-23 Dalian University of Technology A humor recognition method based on neural networks and humor features
CN109062892A * 2018-07-10 2018-12-21 Northeastern University A Chinese sentence similarity calculation method based on Word2Vec


Similar Documents

Publication Publication Date Title
CN109241524B (en) Semantic analysis method and device, computer-readable storage medium and electronic equipment
CN106599032B (en) Text event extraction method combining sparse coding and structure sensing machine
CN109840287A (en) A kind of cross-module state information retrieval method neural network based and device
Zhang et al. SG-Net: Syntax guided transformer for language representation
CN106503055A (en) A kind of generation method from structured text to iamge description
CN107247751B (en) LDA topic model-based content recommendation method
CN113220890A (en) Deep learning method combining news headlines and news long text contents based on pre-training
CN113821605A (en) Event extraction method
CN110795544A (en) Content search method, device, equipment and storage medium
Li et al. Intention understanding in human–robot interaction based on visual-NLP semantics
CN117216234A (en) Artificial intelligence-based speaking operation rewriting method, device, equipment and storage medium
US20220318506A1 (en) Method and apparatus for event extraction and extraction model training, device and medium
CN117272977A (en) Character description sentence recognition method and device, electronic equipment and storage medium
WO2023169301A1 (en) Text processing method and apparatus, and electronic device
Paduraru et al. Conversational Agents for Simulation Applications and Video Games.
CN116956902A (en) Text rewriting method, device, equipment and computer readable storage medium
CN116258147A (en) Multimode comment emotion analysis method and system based on heterogram convolution
CN113378826B (en) Data processing method, device, equipment and storage medium
CN113468311B (en) Knowledge graph-based complex question and answer method, device and storage medium
CN112101037A (en) Semantic similarity calculation method
TWI712949B (en) Method for calculating a semantic similarity
CN114492450A (en) Text matching method and device
Su et al. Automatic ontology population using deep learning for triple extraction
Alharahseheh et al. A survey on textual entailment: Benchmarks, approaches and applications
Li et al. Using GAN to generate sport news from live game stats

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination