WO2022130541A1 - Opinion aggregation device, opinion aggregation method, and program (意見集約装置、意見集約方法、およびプログラム) - Google Patents
Opinion aggregation device, opinion aggregation method, and program
- Publication number
- WO2022130541A1 (PCT application PCT/JP2020/047000)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text data
- sentence
- score
- chat
- chat text
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
Definitions
- This disclosure relates to an opinion aggregation device, an opinion aggregation method, and a program.
- A chat function is used to collect comments regarding a distribution. If the distributor can respond to the opinions or questions received on the spot, viewers' understanding and satisfaction are expected to improve. It also encourages a lively exchange of opinions, which can be expected to aid consensus building, especially in presentations. However, when a large number of comments arrive, it is practically impossible for the distributor to check all of them during the distribution, so a technique for collecting and organizing similar opinions or questions from chat sentences is required.
- Patent Document 1 discloses a microblog text classification technique that achieves high classification accuracy based on long-term tendencies while quickly adapting to changes in the tendency of a designated text set.
- The purpose of this disclosure, made in view of such circumstances, is to provide an opinion aggregation device, an opinion aggregation method, and a program capable of performing classification that captures semantic information.
- The opinion aggregation device according to one embodiment includes: a first determination unit that determines whether an input sentence is a declarative sentence or an interrogative sentence; a first generation unit that, when the input sentence is the declarative sentence, generates first text data in which the input sentence is converted into a question; a second generation unit that, when the input sentence is the interrogative sentence, generates second text data that gives a simple answer to the input sentence; a storage unit that stores a chat text database containing a plurality of chat text data; a calculation unit that calculates a first score indicating the sentence continuity between the first text data and the chat text data, or a second score indicating the sentence continuity between the chat text data and the second text data; and a second determination unit that outputs the chat text data having the first score or the second score when the first score or the second score is equal to or higher than a threshold value.
- The opinion aggregation method according to one embodiment includes: a step of determining whether an input sentence is a declarative sentence or an interrogative sentence; a step of generating, when the input sentence is the declarative sentence, first text data in which the input sentence is converted into a question; a step of generating, when the input sentence is the interrogative sentence, second text data that gives a simple answer to the input sentence; a step of storing a chat text database containing a plurality of chat text data; a step of calculating a first score indicating the sentence continuity between the first text data and the chat text data, or a second score indicating the sentence continuity between the chat text data and the second text data; and a step of outputting the chat text data having the first score or the second score when the first score or the second score is equal to or higher than a threshold value.
- The program according to one embodiment causes a computer to function as the above-described opinion aggregation device.
- According to this disclosure, it is possible to provide an opinion aggregation device, an opinion aggregation method, and a program capable of performing classification that captures semantic information.
- The opinion aggregation device 100 includes a control unit 110, a storage unit 120, an input unit 130, and an output unit 140.
- The control unit 110 may be configured by dedicated hardware, a general-purpose processor, or a processor specialized for a specific process.
- The control unit 110 includes a declarative/interrogative sentence determination unit (first determination unit) 10, an interrogative sentence generation unit (first generation unit) 20, an answer sentence generation unit (second generation unit) 30, a sentence continuity score calculation unit (calculation unit) 40, and a threshold value determination unit (second determination unit) 50.
- The storage unit 120 includes one or more memories, which may include, for example, semiconductor memories, magnetic memories, and optical memories. Each memory included in the storage unit 120 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory. Each memory does not necessarily have to be provided inside the opinion aggregation device 100 and may be provided outside it.
- The storage unit 120 stores arbitrary information used for the operation of the opinion aggregation device 100.
- The storage unit 120 stores, for example, a chat text database 121 containing a plurality of chat text data. Examples of the chat text data include, as shown in FIG. 2, "I like the red model", "Red is subtle", "Red is good", "I thought it was good to have abundant colors", and "I thought it would be better if it were a little smaller", and, as shown in FIG. 3, "What is injection?", "I don't understand injection", and "It seems that a stable supply is necessary".
- The storage unit 120 also stores, for example, various programs and data.
- The input unit 130 accepts input of various information.
- The input unit 130 may be any device that can be operated by the user, and may be, for example, a microphone, a touch panel, a keyboard, a mouse, or the like.
- The input sentence is input to the control unit 110.
- Examples of the input sentence include "I like the red model" (a declarative sentence) as shown in FIG. 2, and "What is injection?" (an interrogative sentence) as shown in FIG. 3.
- The input unit 130 may be provided outside the opinion aggregation device 100 or may be integrated with the opinion aggregation device 100.
- The output unit 140 outputs various information.
- The output unit 140 is, for example, a speaker, a liquid crystal display, an organic EL (Electro-Luminescence) display, or the like.
- The output unit 140 outputs, for example, similar sentences that are similar to the input sentence.
- For example, as shown in FIG. 2, similar sentences for the input sentence "I like the red model" include "I like the red model" and "Red is good".
- For example, as shown in FIG. 3, similar sentences for the input sentence "What is injection?" include "What is injection?" and "I don't understand injection".
- The output unit 140 may be provided outside the opinion aggregation device 100 or may be integrated with the opinion aggregation device 100.
- The declarative/interrogative sentence determination unit 10 determines whether the input sentence is a declarative sentence or an interrogative sentence. When the input sentence is a declarative sentence, the declarative/interrogative sentence determination unit 10 outputs a determination result indicating that the input sentence is a declarative sentence to the interrogative sentence generation unit 20. When the input sentence is an interrogative sentence, it outputs a determination result indicating that the input sentence is an interrogative sentence to the answer sentence generation unit 30. A minimal sketch of such a determination is shown below.
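- The disclosure does not prescribe how this determination is implemented. Purely as an illustrative sketch (assuming a simple check of sentence-final question markers in Python; a trained classifier could equally be used), the determination might look like the following.

```python
# Illustrative sketch only: rule-based declarative/interrogative determination.
# Assumption: sentence-final markers are a sufficient first-pass heuristic;
# the disclosure leaves the actual determination method open.
INTERROGATIVE_MARKERS = ("?", "？")
INTERROGATIVE_ENDINGS = ("ですか", "ますか", "でしょうか", "とは")

def is_interrogative(sentence: str) -> bool:
    s = sentence.strip()
    if s.endswith(INTERROGATIVE_MARKERS):
        return True
    return any(s.rstrip("?？。").endswith(e) for e in INTERROGATIVE_ENDINGS)

def classify(sentence: str) -> str:
    """Return "interrogative" or "declarative" for one input sentence."""
    return "interrogative" if is_interrogative(sentence) else "declarative"

print(classify("What is injection?"))    # interrogative
print(classify("I like the red model"))  # declarative
```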
- The interrogative sentence generation unit 20 converts the input sentence into a question based on the determination result input from the declarative/interrogative sentence determination unit 10, and generates first text data, which is text data obtained by converting the input sentence into a question.
- The interrogative sentence generation unit 20 outputs the first text data to the sentence continuity score calculation unit 40. Examples of the first text data include, as shown in FIG. 2, "Do you like the red model?" and "What color model do you like?".
- The interrogative sentence generation unit 20 may generate one piece of first text data or a plurality of pieces of first text data for one input sentence.
- The technique by which the interrogative sentence generation unit 20 generates the first text data is not particularly limited; for example, an automatic question generation technique may be used. For automatic question generation, the following document can be referred to: Sato Sato, Hiroyasu Itsui, Manabu Okumura, "Automatic generation of questions from product manuals," Proceedings of the 32nd Annual Conference of the Japanese Society for Artificial Intelligence (2018), The Japanese Society for Artificial Intelligence, 2018. A toy illustration of such a conversion is sketched after this paragraph.
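- Purely as an illustration (a hypothetical template rule, not the automatic question generation technique cited above), a declarative input such as "I like X" might be rewritten into interrogative candidates as follows.

```python
import re

# Hypothetical template-based question generation; the disclosure defers to an
# external automatic question generation technique, so this is only a sketch.
def generate_questions(declarative: str) -> list[str]:
    s = declarative.strip().rstrip(".")
    m = re.match(r"I like (?:the )?(.+)", s)
    if m:
        target = m.group(1)
        # Produce both a yes/no question and an open question, loosely mirroring
        # the "Do you like the red model?" / "What color model do you like?" example.
        return [f"Do you like the {target}?", f"Which {target} do you like?"]
    # Fallback: a generic confirmation question.
    return [f"Is it true that {s[0].lower() + s[1:]}?"]

print(generate_questions("I like the red model"))
# ['Do you like the red model?', 'Which red model do you like?']
```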
- The answer sentence generation unit 30 generates, based on the determination result input from the declarative/interrogative sentence determination unit 10, second text data, which is text data giving a simple answer to the input sentence.
- The answer sentence generation unit 30 outputs the second text data to the sentence continuity score calculation unit 40.
- An example of the second text data is, as shown in FIG. 3, "Injection is a fuel supply device".
- The answer sentence generation unit 30 may generate one piece of second text data or a plurality of pieces of second text data for one input sentence.
- The technique by which the answer sentence generation unit 30 generates the second text data is not particularly limited; for example, an FAQ search system may be used to search for an appropriate answer to the input sentence, and a summary of that answer may be used as the simple answer sentence. For FAQ search systems, for example, JP-A-2018-180938 and JP-A-2018-147102 can be referred to. A minimal retrieval sketch is shown below.
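- Purely as an illustrative sketch (the FAQ entries and the word-overlap scoring are assumptions, not the FAQ search systems cited above), generating a simple answer by retrieval could look like the following.

```python
# Hypothetical FAQ retrieval for generating the second text data (a simple
# answer sentence). The FAQ contents and the scoring are illustrative only.
FAQ = {
    "what is injection": "Injection is a fuel supply device.",
    "how do i change the engine oil": "Drain the old oil, replace the filter, and refill.",
}

def simple_answer(question: str) -> str:
    """Return the answer of the FAQ entry with the largest word overlap."""
    q_tokens = set(question.lower().rstrip("?").split())

    def overlap(entry: str) -> int:
        return len(q_tokens & set(entry.split()))

    best = max(FAQ, key=overlap)
    return FAQ[best] if overlap(best) > 0 else ""

print(simple_answer("What is injection?"))  # Injection is a fuel supply device.
```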
- The sentence continuity score calculation unit 40 calculates a first score indicating the sentence continuity between the first text data input from the interrogative sentence generation unit 20 and chat text data extracted from the chat text database 121 (for example, "I like the red model", "Red is subtle", "Red is good", "I thought it was good to have abundant colors", and "I thought it would be better if it were a little smaller").
- The sentence continuity score calculation unit 40 outputs the calculated first score to the threshold value determination unit 50.
- The sentence continuity score calculation unit 40 also calculates a second score indicating the sentence continuity between chat text data extracted from the chat text database 121 (for example, "What is injection?", "I don't understand injection", "It seems that a stable supply is necessary", and "I thought it would be better if it were a little smaller") and the second text data input from the answer sentence generation unit 30. The sentence continuity score calculation unit 40 outputs the calculated second score to the threshold value determination unit 50.
- The technique by which the sentence continuity score calculation unit 40 calculates the first score or the second score is not particularly limited; for example, the output value of Next Sentence Prediction, one of the learning tasks of natural language processing models, may be used as the score indicating sentence continuity. For Next Sentence Prediction, the following document can be referred to: Devlin, Jacob, et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805 (2018). A concrete sketch of such a score is given below.
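- As a minimal sketch of using Next Sentence Prediction as the continuity score (assuming the Hugging Face transformers library, the public "bert-base-uncased" checkpoint, and "is-next logit minus not-next logit" as the signed score; none of these choices is fixed by this disclosure), the calculation could look like the following. A positive value then roughly corresponds to True and a negative value to False.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Assumptions: an English BERT checkpoint and a signed score defined as
# (is-next logit) - (not-next logit); the disclosure only requires some
# Next Sentence Prediction output as the continuity score.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def continuity_score(first_sentence: str, second_sentence: str) -> float:
    inputs = tokenizer(first_sentence, second_sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 2): [is_next, not_next]
    return (logits[0, 0] - logits[0, 1]).item()

print(continuity_score("Do you like the red model?", "I like the red model"))
print(continuity_score("Do you like the red model?", "Probability statistics are an important subject."))
```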
- For example, the sentence continuity score calculation unit 40 calculates the score indicating the sentence continuity between the first sentence "Tomorrow's weather will be sunny" and the second sentence "Tomorrow's weather will be cloudy" as "8.5 (True)". This score indicates that the two sentences "Tomorrow's weather will be sunny" and "Tomorrow's weather will be cloudy" have high continuity.
- For example, the sentence continuity score calculation unit 40 calculates the score indicating the sentence continuity between the first sentence "Today's weather will be fine" and the second sentence "Probability statistics are an important subject" as "-5.4 (False)". This score indicates that the two sentences "Today's weather will be fine" and "Probability statistics are an important subject" have low continuity.
- The score indicating sentence continuity can take values in the range from -∞ to +∞.
- The sentence continuity score calculation unit 40 outputs true (True), for example, when the value of the score indicating sentence continuity is positive.
- The sentence continuity score calculation unit 40 outputs false (False), for example, when the value of the score indicating sentence continuity is negative.
- The threshold value determination unit 50 ranks the plurality of chat text data in order of score based on the first score or the second score input from the sentence continuity score calculation unit 40. For example, as shown in FIG. 2, the threshold value determination unit 50 ranks the plurality of chat text data for the first text data "Do you like the red model?" as "9.2: I like the red model", "8.8: Red is subtle", "8.5: Red is good", "1.9: I thought it was good to have abundant colors", "-5.1: I thought it would be better if it were a little smaller", and so on. Likewise, as shown in FIG. 2, it ranks the plurality of chat text data for the first text data "What color model do you like?" as "8.7: I like the red model", "6.5: Red is good", "0.3: Red is subtle", "-2.0: I thought it was good to have abundant colors", "-6.7: I thought it would be better if it were a little smaller", and so on.
- For example, as shown in FIG. 3, the threshold value determination unit 50 ranks the plurality of chat text data for the second text data "Injection is a fuel supply device" as "8.8: What is injection?", "8.5: I don't understand injection", "0.1: It seems that a stable supply is necessary", "-5.1: I thought it would be better if it were a little smaller", and so on.
- The threshold value determination unit 50 determines whether or not the first score is equal to or higher than the threshold value. When the first score is equal to or higher than the threshold value, the threshold value determination unit 50 outputs the chat text data having that first score to the output unit 140; when the first score is smaller than the threshold value, it does not output the chat text data having that first score to the output unit 140.
- The threshold value determination unit 50 likewise determines whether or not the second score is equal to or higher than the threshold value. When the second score is equal to or higher than the threshold value, the threshold value determination unit 50 outputs the chat text data having that second score to the output unit 140; when the second score is smaller than the threshold value, it does not output the chat text data having that second score to the output unit 140.
- The threshold value is not particularly limited and may be set to an arbitrary value in the opinion aggregation device 100.
- For the first text data, the threshold value determination unit 50 determines whether or not the first score of each of the one or more chat text data is equal to or higher than the threshold value. The threshold value determination unit 50 then outputs the chat text data whose first score is equal to or higher than the threshold value to the output unit 140, and does not output the chat text data whose first score is smaller than the threshold value.
- For example, for the second text data (for example, "Injection is a fuel supply device"), the threshold value determination unit 50 determines whether or not the second score of each of the one or more chat text data (for example, "What is injection?", "I don't understand injection", "It seems that a stable supply is necessary", and "I thought it would be better if it were a little smaller") is equal to or higher than the threshold value (for example, 5.0). When the second score is equal to or higher than the threshold value, the threshold value determination unit 50 outputs the chat text data having that second score (for example, "What is injection?" and "I don't understand injection") to the output unit 140; when the second score is smaller than the threshold value, it does not output the chat text data having that second score (for example, "It seems that a stable supply is necessary" and "I thought it would be better if it were a little smaller") to the output unit 140.
- When there are a plurality of first text data, the threshold value determination unit 50 determines, for all of the first text data (for example, "Do you like the red model?" and "What color model do you like?"), whether or not the first score of each of the one or more chat text data (for example, "I like the red model", "Red is subtle", "Red is good", "I thought it was good to have abundant colors", and "I thought it would be better if it were a little smaller") is equal to or higher than the threshold value (for example, 5.0). The threshold value determination unit 50 then outputs to the output unit 140 the chat text data (for example, "I like the red model" and "Red is good") whose first score is equal to or higher than the threshold value for all of the first text data, and does not output to the output unit 140 the chat text data (for example, "Red is subtle", "I thought it was good to have abundant colors", and "I thought it would be better if it were a little smaller") whose first score does not reach the threshold value for all of the first text data.
- Similarly, when there are a plurality of second text data, the threshold value determination unit 50 determines whether or not the second score is equal to or higher than the threshold value for all of the second text data. The threshold value determination unit 50 then outputs to the output unit 140 the chat text data whose second score is equal to or higher than the threshold value for all of the second text data, and does not output the chat text data whose second score does not reach the threshold value for all of the second text data. A sketch of this ranking and thresholding is given below.
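- As a minimal sketch of the ranking and thresholding performed by the threshold value determination unit 50 (assuming the continuity_score function from the earlier Next Sentence Prediction sketch and the example threshold of 5.0; both are assumptions, not requirements of this disclosure), including the condition that a chat text must clear the threshold for all of the first text data, the logic could look like the following.

```python
THRESHOLD = 5.0  # example value from the description; the threshold is otherwise arbitrary

def aggregate(first_texts: list[str], chat_texts: list[str],
              score_fn=continuity_score, threshold: float = THRESHOLD) -> list[str]:
    """Return chat texts whose score is >= threshold for ALL first text data,
    ranked in descending order of the score against the first generated question."""
    selected = []
    for chat in chat_texts:
        scores = [score_fn(question, chat) for question in first_texts]
        if all(score >= threshold for score in scores):
            selected.append((scores[0], chat))
    selected.sort(reverse=True)  # highest continuity first
    return [chat for _, chat in selected]

questions = ["Do you like the red model?", "What color model do you like?"]
chats = ["I like the red model", "Red is subtle", "Red is good",
         "I thought it was good to have abundant colors"]
print(aggregate(questions, chats))
```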
- When the input sentence is a declarative sentence, the opinion aggregation device 100 according to the first embodiment extracts chat sentences that have a high sentence continuity score with respect to the sentence obtained by converting the declarative sentence into a question; when the input sentence is an interrogative sentence, it extracts chat sentences that have a high sentence continuity score with respect to the sentence that simply answers the interrogative sentence. As a result, similar sentences that are similar to the input sentence can be output, so it is possible to realize an opinion aggregation device 100 capable of performing classification that captures semantic information, that is, of grouping sentences expressing the same opinion or the same meaning.
- In step 101, the input sentence is input to the opinion aggregation device 100.
- Examples of the input sentence include "I like the red model" and "What is injection?".
- In step 102, the opinion aggregation device 100 determines whether the input sentence is a declarative sentence or an interrogative sentence.
- When the input sentence is a declarative sentence such as "I like the red model" (step 102 → declarative sentence), the opinion aggregation device 100 performs the process of step 103.
- When the input sentence is an interrogative sentence such as "What is injection?" (step 102 → interrogative sentence), the opinion aggregation device 100 performs the process of step 104.
- In step 103, the opinion aggregation device 100 converts the input sentence into a question and generates first text data, which is text data obtained by converting the input sentence into a question. For example, the opinion aggregation device 100 converts the input sentence "I like the red model" into questions and generates the first text data "Do you like the red model?" and "What color model do you like?".
- In step 104, the opinion aggregation device 100 generates second text data, which is text data giving a simple answer to the input sentence. For example, the opinion aggregation device 100 answers the input sentence "What is injection?" and generates the second text data "Injection is a fuel supply device".
- In step 105, the opinion aggregation device 100 calculates sentence continuity scores. For example, the opinion aggregation device 100 calculates a first score indicating the sentence continuity between the first text data and each chat text data included in the chat text database 121. For example, the opinion aggregation device 100 calculates a second score indicating the sentence continuity between each chat text data included in the chat text database 121 and the second text data.
- For example, the opinion aggregation device 100 calculates the first score, which indicates the continuity of the two sentences, using the first text data as the first sentence and the chat text data as the second sentence of a sentence pair, as follows.
- For the first text data "Do you like the red model?": "I like the red model" gives a first score of "9.2", "Red is subtle" gives "8.8", "Red is good" gives "8.5", "I thought it was good to have abundant colors" gives "1.9", and "I thought it would be better if it were a little smaller" gives "-5.1".
- For the first text data "What color model do you like?": "I like the red model" gives a first score of "8.7", "Red is good" gives "6.5", "Red is subtle" gives "0.3", "I thought it was good to have abundant colors" gives "-2.0", and "I thought it would be better if it were a little smaller" gives "-6.7".
- Likewise, the opinion aggregation device 100 calculates the second score, which indicates the continuity of the two sentences, using the chat text data as the first sentence and the second text data "Injection is a fuel supply device" as the second sentence: "What is injection?" gives a second score of "8.8", "I don't understand injection" gives "8.5", "It seems that a stable supply is necessary" gives "0.1", and "I thought it would be better if it were a little smaller" gives "-5.1".
- In step 106, the opinion aggregation device 100 ranks the plurality of chat text data in order of score based on the first score or the second score.
- For example, the opinion aggregation device 100 ranks the plurality of chat text data for the first text data "Do you like the red model?" as "9.2: I like the red model", "8.8: Red is subtle", "8.5: Red is good", "1.9: I thought it was good to have abundant colors", "-5.1: I thought it would be better if it were a little smaller", and so on.
- For example, the opinion aggregation device 100 ranks the plurality of chat text data for the first text data "What color model do you like?" as "8.7: I like the red model", "6.5: Red is good", "0.3: Red is subtle", "-2.0: I thought it was good to have abundant colors", "-6.7: I thought it would be better if it were a little smaller", and so on.
- For example, the opinion aggregation device 100 ranks the plurality of chat text data for the second text data "Injection is a fuel supply device" as "8.8: What is injection?", "8.5: I don't understand injection", "0.1: It seems that a stable supply is necessary", "-5.1: I thought it would be better if it were a little smaller", and so on.
- The opinion aggregation device 100 then determines whether or not the first score or the second score is equal to or higher than the threshold value. When the first score or the second score is equal to or higher than the threshold value (step 106 → YES), the opinion aggregation device 100 performs the process of step 107. When the first score or the second score is smaller than the threshold value (step 106 → NO), the opinion aggregation device 100 ends the process.
- For example, the opinion aggregation device 100 determines whether or not the first score of the chat text data with respect to the first text data is equal to or higher than the threshold value. When there are a plurality of first text data, the opinion aggregation device 100 determines whether or not the first score of the chat text data is equal to or higher than the threshold value for all of the first text data.
- For example, the opinion aggregation device 100 determines that the first score "9.2" of the chat text data "I like the red model" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, and that the first score "8.7" of the chat text data "I like the red model" for the first text data "What color model do you like?" is also equal to or higher than the threshold value.
- For example, the opinion aggregation device 100 determines that the first score "8.5" of the chat text data "Red is good" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, and that the first score "6.5" of the chat text data "Red is good" for the first text data "What color model do you like?" is also equal to or higher than the threshold value.
- For example, the opinion aggregation device 100 determines that the first score "8.8" of the chat text data "Red is subtle" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, but that the first score "0.3" of the chat text data "Red is subtle" for the first text data "What color model do you like?" is smaller than the threshold value.
- For example, the opinion aggregation device 100 determines that the first score "1.9" of the chat text data "I thought it was good to have abundant colors" for the first text data "Do you like the red model?" is smaller than the threshold value, and that the first score "-2.0" of the same chat text data for the first text data "What color model do you like?" is also smaller than the threshold value.
- For example, the opinion aggregation device 100 determines that the first score "-5.1" of the chat text data "I thought it would be better if it were a little smaller" for the first text data "Do you like the red model?" is smaller than the threshold value, and that the first score "-6.7" of the same chat text data for the first text data "What color model do you like?" is also smaller than the threshold value.
- For example, the opinion aggregation device 100 determines whether or not the second score of the chat text data with respect to the second text data is equal to or higher than the threshold value. When there are a plurality of second text data, the opinion aggregation device 100 determines whether or not the second score of the chat text data is equal to or higher than the threshold value for all of the second text data.
- For example, the opinion aggregation device 100 determines that the second score "8.8" of the chat text data "What is injection?" for the second text data "Injection is a fuel supply device" is equal to or higher than the threshold value.
- For example, the opinion aggregation device 100 determines that the second score "8.5" of the chat text data "I don't understand injection" for the second text data "Injection is a fuel supply device" is equal to or higher than the threshold value.
- For example, the opinion aggregation device 100 determines that the second score "0.1" of the chat text data "It seems that a stable supply is necessary" for the second text data "Injection is a fuel supply device" is smaller than the threshold value.
- For example, the opinion aggregation device 100 determines that the second score "-5.1" of the chat text data "I thought it would be better if it were a little smaller" for the second text data "Injection is a fuel supply device" is smaller than the threshold value.
- In step 107, the opinion aggregation device 100 outputs similar sentences that are similar to the input sentence based on the determination results.
- For example, based on the determination result that their first scores are equal to or higher than the threshold value for all of the first text data, the opinion aggregation device 100 outputs "I like the red model" and "Red is good" as similar sentences that are similar to the input sentence. Specifically, the opinion aggregation device 100 classifies "I like the red model", "Red is subtle", and "Red is good" into an upper text group as chat text data whose first score is equal to or higher than the threshold value for the first text data "Do you like the red model?".
- The opinion aggregation device 100 also classifies "I like the red model" and "Red is good" into an upper text group as chat text data whose first score is equal to or higher than the threshold value for the first text data "What color model do you like?". The opinion aggregation device 100 then outputs the chat text data commonly included in both upper text groups, that is, "I like the red model" and "Red is good".
- For example, based on the determination result that their second scores with respect to the second text data are equal to or higher than the threshold value, the opinion aggregation device 100 outputs "What is injection?" and "I don't understand injection" as similar sentences that are similar to the input sentence.
- Specifically, the opinion aggregation device 100 classifies "What is injection?" and "I don't understand injection" into an upper text group as chat text data whose second score is equal to or higher than the threshold value for the second text data "Injection is a fuel supply device". The opinion aggregation device 100 then outputs all the chat text data included in that upper text group, that is, "What is injection?" and "I don't understand injection".
- As described above, the opinion aggregation method according to the first embodiment classifies similar texts based on the sentence continuity score. That is, the input sentence is converted, whether or not a given sentence holds together as a conversational exchange with the converted input sentence is calculated as a sentence continuity score, and the agreement or similarity between the input sentence and the given sentence is measured based on this score. For a declarative sentence, the sentence is converted into a question, and the sentence continuity score between that question and a given sentence is calculated to score the agreement with the original declarative sentence. For an interrogative sentence, an answer sentence is generated, and the sentence continuity score between a given sentence and that answer is calculated to score the similarity with the original interrogative sentence. The overall flow is sketched below.
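- Tying the earlier sketches together (this assumes the classify, generate_questions, simple_answer, aggregate, and continuity_score functions sketched above, all of which are illustrative assumptions rather than the claimed implementation), the flow of steps 101 to 107 could be orchestrated as follows.

```python
def collect_similar_sentences(input_sentence: str, chat_texts: list[str]) -> list[str]:
    """Sketch of steps 101-107: classify the input, convert it, score chat
    texts by sentence continuity, and keep those above the threshold."""
    if classify(input_sentence) == "declarative":
        # Declarative input: question it, then score (question, chat) pairs.
        first_texts = generate_questions(input_sentence)
        return aggregate(first_texts, chat_texts)
    # Interrogative input: answer it, then score (chat, answer) pairs.
    answer = simple_answer(input_sentence)
    return aggregate([answer], chat_texts,
                     score_fn=lambda ans, chat: continuity_score(chat, ans))

print(collect_similar_sentences("I like the red model",
                                ["I like the red model", "Red is subtle", "Red is good"]))
```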
- The difference between the opinion aggregation device 100A according to the second embodiment and the opinion aggregation device 100 according to the first embodiment is that the opinion aggregation device 100 according to the first embodiment does not include a similar grammar text search unit, whereas the opinion aggregation device 100A according to the second embodiment does. Since the other configurations are the same, duplicate explanations may be omitted.
- The opinion aggregation device 100A includes a control unit 110A, a storage unit 120, an input unit 130, and an output unit 140.
- The control unit 110A includes a declarative/interrogative sentence determination unit (first determination unit) 10, an interrogative sentence generation unit (first generation unit) 20, an answer sentence generation unit (second generation unit) 30, a sentence continuity score calculation unit (calculation unit) 40, a threshold value determination unit (second determination unit) 50, and a similar grammar text search unit (search unit) 60.
- The similar grammar text search unit 60 searches the chat text database 121 for chat text data that is grammatically similar to the input sentence. The similar grammar text search unit 60 then ranks the plurality of chat text data in order of similarity based on the similarity between the input sentence and each chat text data (for example, a value calculated by a distance calculation).
- For example, the similar grammar text search unit 60 ranks the plurality of chat text data that are grammatically similar to the input sentence "I like the red model" as "0.9: I like the red model", "1.4: Red is subtle", "1.5: Red is good", "11.7: I thought it was good to have abundant colors", "21.0: I thought it would be better if it were a little smaller", and so on.
- The similar grammar text search unit 60 determines whether or not the similarity is equal to or less than a threshold value. When the similarity is equal to or less than the threshold value, the similar grammar text search unit 60 outputs the chat text data having that similarity to the sentence continuity score calculation unit 40 as similar chat text data; when the similarity is larger than the threshold value, it does not output the chat text data having that similarity to the sentence continuity score calculation unit 40 as similar chat text data.
- The threshold value is not particularly limited and may be set to an arbitrary value in the opinion aggregation device 100A.
- For example, when the similarity is equal to or less than the threshold value (for example, 5.0), the similar grammar text search unit 60 outputs "0.9: I like the red model", "1.4: Red is subtle", and "1.5: Red is good" to the sentence continuity score calculation unit 40 as similar chat text data; when the similarity is larger than the threshold value (for example, 5.0), it does not output "11.7: I thought it was good to have abundant colors" and "21.0: I thought it would be better if it were a little smaller" to the sentence continuity score calculation unit 40 as similar chat text data.
- The technique by which the similar grammar text search unit 60 searches the chat text database 121 for chat text data grammatically similar to the input sentence is not particularly limited; for example, each text may be converted into a feature vector by BERT, a natural language processing model, and texts whose norm of the difference between feature vectors is smaller than a predetermined threshold may be returned as grammatically similar texts.
- For BERT, the following document can be referred to: Devlin, Jacob, et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805 (2018). A minimal sketch of such a search is shown below.
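- As a minimal sketch of the similar grammar text search (assuming mean-pooled BERT hidden states as the feature vector, the L2 norm of the difference as the distance, and the example threshold of 5.0; the disclosure only requires some BERT-derived feature vector and a norm threshold), the search could look like the following.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # assumption: English BERT
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT hidden states as a feature vector (one assumption among many)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

def search_similar(input_sentence: str, chat_texts: list[str],
                   threshold: float = 5.0) -> list[tuple[float, str]]:
    """Return (distance, chat_text) pairs whose L2 distance is <= threshold,
    ranked from most to least similar (smallest distance first)."""
    query = embed(input_sentence)
    hits = []
    for chat in chat_texts:
        distance = torch.norm(query - embed(chat)).item()
        if distance <= threshold:
            hits.append((distance, chat))
    hits.sort()
    return hits
```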
- The sentence continuity score calculation unit 40 calculates a first score indicating the sentence continuity between the first text data input from the interrogative sentence generation unit 20 and the similar chat text data input from the similar grammar text search unit 60.
- The sentence continuity score calculation unit 40 outputs the calculated first score to the threshold value determination unit 50.
- The sentence continuity score calculation unit 40 also calculates a second score indicating the sentence continuity between the similar chat text data input from the similar grammar text search unit 60 and the second text data input from the answer sentence generation unit 30.
- The sentence continuity score calculation unit 40 outputs the calculated second score to the threshold value determination unit 50.
- The threshold value determination unit 50 ranks the plurality of similar chat text data in order of score based on the first score input from the sentence continuity score calculation unit 40. For example, as shown in FIG. 6, the threshold value determination unit 50 ranks the plurality of similar chat text data for the first text data "Do you like the red model?" as "9.2: I like the red model", "8.8: Red is subtle", "8.5: Red is good". For example, as shown in FIG. 6, it ranks the plurality of similar chat text data for the first text data "What color model do you like?" as "8.7: I like the red model", "6.5: Red is good", "0.3: Red is subtle".
- The threshold value determination unit 50 determines whether or not the first score is equal to or higher than the threshold value. When the first score is equal to or higher than the threshold value, the threshold value determination unit 50 outputs the similar chat text data having that first score to the output unit 140; when the first score is smaller than the threshold value, it does not output the similar chat text data having that first score to the output unit 140.
- The threshold value determination unit 50 likewise determines whether or not the second score is equal to or higher than the threshold value. When the second score is equal to or higher than the threshold value, the threshold value determination unit 50 outputs the similar chat text data having that second score to the output unit 140; when the second score is smaller than the threshold value, it does not output the similar chat text data having that second score to the output unit 140.
- When there are a plurality of first text data, the threshold value determination unit 50 determines, for all of the first text data (for example, "Do you like the red model?" and "What color model do you like?"), whether or not the first score of each of the one or more similar chat text data (for example, "I like the red model", "Red is good", and "Red is subtle") is equal to or higher than the threshold value (for example, 5.0). The threshold value determination unit 50 then outputs to the output unit 140 the similar chat text data (for example, "I like the red model" and "Red is good") whose first score is equal to or higher than the threshold value for all of the first text data, and does not output to the output unit 140 the similar chat text data (for example, "Red is subtle") whose first score does not reach the threshold value for all of the first text data.
- When the input sentence is a declarative sentence, the opinion aggregation device 100A according to the second embodiment extracts chat sentences that have a high sentence continuity score with respect to the sentence obtained by converting the declarative sentence into a question; when the input sentence is an interrogative sentence, it extracts chat sentences that have a high sentence continuity score with respect to the sentence that simply answers the interrogative sentence. As a result, similar sentences that are similar to the input sentence can be output, so it is possible to realize an opinion aggregation device 100A capable of performing classification that captures semantic information, that is, of grouping sentences expressing the same opinion or the same meaning. Furthermore, since the sentence continuity score calculation unit 40 uses only the similar chat text data selected in advance for the score calculation, it is possible to realize an opinion aggregation device 100A that efficiently performs classification capturing semantic information while suppressing the calculation cost.
- In step S201, the input sentence is input to the opinion aggregation device 100A.
- Examples of the input sentence include "I like the red model".
- In step 202, the opinion aggregation device 100A determines whether the input sentence is a declarative sentence or an interrogative sentence.
- When the input sentence is a declarative sentence (step 202 → declarative sentence), the opinion aggregation device 100A performs the process of step 204. When the input sentence is an interrogative sentence (step 202 → interrogative sentence), the opinion aggregation device 100A performs the process of step 205.
- The opinion aggregation device 100A searches the chat text database 121 for chat text data that is grammatically similar to the input sentence. The opinion aggregation device 100A then determines whether or not the similarity between the input sentence and each searched chat text data is equal to or less than the threshold value, and when the similarity is equal to or less than the threshold value, uses the chat text data having that similarity as similar chat text data.
- For example, the opinion aggregation device 100A determines that the similarity "0.9" of the searched chat text data "I like the red model" is equal to or less than the threshold value and uses that chat text data as similar chat text data. For example, the opinion aggregation device 100A determines that the similarity "1.4" of the searched chat text data "Red is subtle" is equal to or less than the threshold value and uses that chat text data as similar chat text data. For example, the opinion aggregation device 100A determines that the similarity "1.5" of the searched chat text data "Red is good" is equal to or less than the threshold value and uses that chat text data as similar chat text data.
- For example, the opinion aggregation device 100A determines that the similarity "11.7" of the searched chat text data "I thought it was good to have abundant colors" is larger than the threshold value and does not use that chat text data as similar chat text data. For example, the opinion aggregation device 100A determines that the similarity "21.0" of the searched chat text data "I thought it would be better if it were a little smaller" is larger than the threshold value and does not use that chat text data as similar chat text data.
- In step 204, the opinion aggregation device 100A converts the input sentence into a question and generates first text data, which is text data obtained by converting the input sentence into a question.
- In step 205, the opinion aggregation device 100A generates second text data, which is text data giving a simple answer to the input sentence.
- The opinion aggregation device 100A then calculates sentence continuity scores. For example, the opinion aggregation device 100A calculates a first score indicating the sentence continuity between the first text data and the similar chat text data. For example, the opinion aggregation device 100A calculates a second score indicating the sentence continuity between the similar chat text data and the second text data.
- For example, the opinion aggregation device 100A calculates the first score, which indicates the continuity of the two sentences, using the first text data as the first sentence and the similar chat text data as the second sentence of a sentence pair, as follows.
- For the first text data "Do you like the red model?": "I like the red model" gives a first score of "9.2", "Red is subtle" gives "8.8", and "Red is good" gives "8.5".
- For the first text data "What color model do you like?": "I like the red model" gives a first score of "8.7", "Red is good" gives "6.5", and "Red is subtle" gives "0.3".
- In step 207, the opinion aggregation device 100A ranks the plurality of similar chat text data in order of score based on the first score or the second score.
- For example, the opinion aggregation device 100A ranks the plurality of similar chat text data for the first text data "Do you like the red model?" as "9.2: I like the red model", "8.8: Red is subtle", "8.5: Red is good".
- For example, the opinion aggregation device 100A ranks the plurality of similar chat text data for the first text data "What color model do you like?" as "8.7: I like the red model", "6.5: Red is good", "0.3: Red is subtle".
- The opinion aggregation device 100A then determines whether or not the first score or the second score is equal to or higher than the threshold value. When the first score or the second score is equal to or higher than the threshold value (step 207 → YES), the opinion aggregation device 100A performs the process of step 208. When the first score or the second score is smaller than the threshold value (step 207 → NO), the opinion aggregation device 100A ends the process.
- For example, the opinion aggregation device 100A determines whether or not the first score of the similar chat text data with respect to the first text data is equal to or higher than the threshold value. When there are a plurality of first text data, the opinion aggregation device 100A determines whether or not the first score of the similar chat text data is equal to or higher than the threshold value for all of the first text data.
- For example, the opinion aggregation device 100A determines whether or not the second score of the similar chat text data with respect to the second text data is equal to or higher than the threshold value. When there are a plurality of second text data, the opinion aggregation device 100A determines whether or not the second score of the similar chat text data is equal to or higher than the threshold value for all of the second text data.
- For example, the opinion aggregation device 100A determines that the first score "9.2" of the similar chat text data "I like the red model" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, and that the first score "8.7" of the similar chat text data "I like the red model" for the first text data "What color model do you like?" is also equal to or higher than the threshold value.
- For example, the opinion aggregation device 100A determines that the first score "8.5" of the similar chat text data "Red is good" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, and that the first score "6.5" of the similar chat text data "Red is good" for the first text data "What color model do you like?" is also equal to or higher than the threshold value.
- For example, the opinion aggregation device 100A determines that the first score "8.8" of the similar chat text data "Red is subtle" for the first text data "Do you like the red model?" is equal to or higher than the threshold value, but that the first score "0.3" of the similar chat text data "Red is subtle" for the first text data "What color model do you like?" is smaller than the threshold value.
- In step 208, the opinion aggregation device 100A outputs similar sentences that are similar to the input sentence based on the determination results.
- For example, based on the determination result that their first scores are equal to or higher than the threshold value for all of the first text data, the opinion aggregation device 100A outputs "I like the red model" and "Red is good" as similar sentences that are similar to the input sentence. Specifically, the opinion aggregation device 100A classifies "I like the red model", "Red is subtle", and "Red is good" into an upper text group as similar chat text data whose first score is equal to or higher than the threshold value for the first text data "Do you like the red model?".
- The opinion aggregation device 100A also classifies "I like the red model" and "Red is good" into an upper text group as similar chat text data whose first score is equal to or higher than the threshold value for the first text data "What color model do you like?". The opinion aggregation device 100A then outputs the similar chat text data commonly included in both upper text groups, that is, "I like the red model" and "Red is good".
- As described above, the opinion aggregation method according to the second embodiment classifies similar texts based on the sentence continuity score. That is, the input sentence is converted, whether or not a given similar sentence holds together as a conversational exchange with the converted input sentence is calculated as a sentence continuity score, and the agreement or similarity between the input sentence and the given similar sentence is measured based on this score. For a declarative sentence, the sentence is converted into a question, and the sentence continuity score between that question and a given similar sentence is calculated to score the agreement with the original declarative sentence. For an interrogative sentence, an answer sentence is generated, and the sentence continuity score between a given similar sentence and that answer is calculated to score the similarity with the original interrogative sentence.
- The control units 110 and 110A may be a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an SoC (System on a Chip), or the like, and may be composed of a plurality of processors of the same type or different types.
- The control units 110 and 110A read a program from the storage unit 120 and execute it to control each of the above components and perform various arithmetic processes. At least part of these processing contents may be realized by hardware.
- The program according to one embodiment causes a computer to execute: a step (S101, S102) of determining whether the input sentence is a declarative sentence or an interrogative sentence; a step (S103) of generating, when the input sentence is the declarative sentence, the first text data in which the input sentence is converted into a question; a step (S104) of generating, when the input sentence is the interrogative sentence, the second text data giving a simple answer to the input sentence; a step (S105) of calculating the first score indicating the sentence continuity between the first text data and the chat text data, or the second score indicating the sentence continuity between the chat text data and the second text data; and steps (S106, S107) of outputting the chat text data having the first score or the second score when the first score or the second score is equal to or higher than the threshold value.
- This program may be recorded on a computer-readable recording medium. Using such a recording medium, the program can be installed on a computer.
- The recording medium on which the program is recorded may be a non-transitory recording medium. The non-transitory recording medium may be, for example, a CD-ROM (Compact Disc Read-Only Memory), a DVD-ROM (Digital Versatile Disc Read-Only Memory), a BD-ROM (Blu-ray (registered trademark) Disc Read-Only Memory), or the like.
- The program can also be provided by download over a network.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Machine Translation (AREA)
Abstract
Description
<Configuration of Opinion Aggregation Device>
An example of the configuration of the opinion aggregation device according to the first embodiment will be described with reference to FIGS. 1 to 3.
佐藤紗都, 伍井啓恭, 奥村学, "Automatic generation of questions from product manuals" (製品マニュアル文からの質問自動生成), Proceedings of the 32nd Annual Conference of the Japanese Society for Artificial Intelligence (2018), The Japanese Society for Artificial Intelligence, 2018.
Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
An example of the opinion aggregation method according to the first embodiment will be described with reference to FIG. 4.
In addition, since whether or not sentences hold together as a conversational exchange is used as the classification criterion, an opinion aggregation method whose classification results are easy to interpret can be realized.
<Configuration of Opinion Aggregation Device>
An example of the configuration of the opinion aggregation device 100A according to the second embodiment will be described with reference to FIG. 5 or FIG. 6.
Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
An example of the opinion aggregation method according to the second embodiment will be described with reference to FIG. 7. Duplicate explanations of processes that are the same as in the opinion aggregation method according to the first embodiment may be omitted.
The present invention is not limited to the above embodiments and modifications. For example, the various processes described above may be executed not only in time series in the order described but also in parallel or individually, depending on the processing capability of the device executing the processes or as necessary. Other changes may be made as appropriate without departing from the spirit of the present invention.
A computer capable of executing program instructions may be used to function as the above embodiments and modifications. Here, the computer may be a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), an electronic notepad, or the like. The program instructions may be program code, code segments, or the like for executing the required tasks. The processor functioning as the control units 110 and 110A may be a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an SoC (System on a Chip), or the like, and may be composed of a plurality of processors of the same type or different types. The control units 110 and 110A read a program from the storage unit 120 and execute it to control each of the above components and perform various arithmetic processes. At least part of these processing contents may be realized by hardware.
20 Question sentence generation unit (first generation unit)
30 Answer sentence generation unit (second generation unit)
40 Sentence continuity score calculation unit (calculation unit)
50 Threshold determination unit (second determination unit)
60 Similar-grammar text search unit (search unit)
100, 100A Opinion aggregation device
110, 110A Control unit
120 Storage unit
130 Input unit
140 Output unit
Claims (7)
- An opinion aggregation device comprising:
a first determination unit that determines whether an input sentence is a declarative sentence or an interrogative sentence;
a first generation unit that, when the input sentence is the declarative sentence, generates first text data in which the input sentence is converted into an interrogative form;
a second generation unit that, when the input sentence is the interrogative sentence, generates second text data in which a simple answer to the input sentence is given;
a storage unit that stores a chat text database including a plurality of pieces of chat text data;
a calculation unit that calculates a first score indicating sentence continuity between the first text data and the chat text data, or a second score indicating sentence continuity between the chat text data and the second text data; and
a second determination unit that outputs, when the first score or the second score is equal to or higher than a threshold value, the chat text data having the first score or the second score.
- The opinion aggregation device according to claim 1, wherein, when there are a plurality of pieces of the first text data or of the second text data, the second determination unit outputs the chat text data whose first score or second score is equal to or higher than the threshold value for all of the first text data or all of the second text data.
- An opinion aggregation device comprising:
a first determination unit that determines whether an input sentence is a declarative sentence or an interrogative sentence;
a first generation unit that, when the input sentence is the declarative sentence, generates first text data in which the input sentence is converted into an interrogative form;
a second generation unit that, when the input sentence is the interrogative sentence, generates second text data in which a simple answer to the input sentence is given;
a storage unit that stores a chat text database including a plurality of pieces of chat text data;
a search unit that searches the chat text database for chat text data grammatically similar to the input sentence and outputs similar chat text data based on a degree of similarity between the retrieved chat text data and the input sentence;
a calculation unit that calculates a first score indicating sentence continuity between the first text data and the similar chat text data, or a second score indicating sentence continuity between the similar chat text data and the second text data; and
a second determination unit that outputs, when the first score or the second score is equal to or higher than a threshold value, the similar chat text data having the first score or the second score.
- The opinion aggregation device according to claim 3, wherein, when there are a plurality of pieces of the first text data or of the second text data, the second determination unit outputs the similar chat text data whose first score or second score is equal to or higher than the threshold value for all of the first text data or all of the second text data.
- An opinion aggregation method comprising:
a step of determining whether an input sentence is a declarative sentence or an interrogative sentence;
a step of generating, when the input sentence is the declarative sentence, first text data in which the input sentence is converted into an interrogative form;
a step of generating, when the input sentence is the interrogative sentence, second text data in which a simple answer to the input sentence is given;
a step of storing a chat text database including a plurality of pieces of chat text data;
a step of calculating a first score indicating sentence continuity between the first text data and the chat text data, or a second score indicating sentence continuity between the chat text data and the second text data; and
a step of outputting, when the first score or the second score is equal to or higher than a threshold value, the chat text data having the first score or the second score.
- An opinion aggregation method comprising:
a step of determining whether an input sentence is a declarative sentence or an interrogative sentence;
a step of generating, when the input sentence is the declarative sentence, first text data in which the input sentence is converted into an interrogative form;
a step of generating, when the input sentence is the interrogative sentence, second text data in which a simple answer to the input sentence is given;
a step of storing a chat text database including a plurality of pieces of chat text data;
a step of searching the chat text database for chat text data grammatically similar to the input sentence and outputting similar chat text data based on a degree of similarity between the retrieved chat text data and the input sentence;
a step of calculating a first score indicating sentence continuity between the first text data and the similar chat text data, or a second score indicating sentence continuity between the similar chat text data and the second text data; and
a step of outputting, when the first score or the second score is equal to or higher than a threshold value, the similar chat text data having the first score or the second score.
- A program for causing a computer to function as the opinion aggregation device according to any one of claims 1 to 4.
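Claims 3 and 6 add a retrieval step that narrows the chat text database to grammatically similar candidates before the continuity score is calculated. The claims do not specify how the similarity search is performed; the sketch below assumes a character n-gram TF-IDF cosine similarity purely for illustration (scikit-learn, the function name retrieve_similar_chat, and the similarity threshold are all assumptions, not the disclosed search unit).

```python
from typing import List
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_similar_chat(input_sentence: str,
                          chat_db: List[str],
                          similarity_threshold: float = 0.3) -> List[str]:
    """Return chat sentences whose surface-level similarity to the input
    sentence reaches the threshold; only these candidates would be passed on
    to the score calculation and threshold determination stages."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
    matrix = vectorizer.fit_transform([input_sentence] + chat_db)
    similarities = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [chat for chat, sim in zip(chat_db, similarities)
            if sim >= similarity_threshold]
```

The retrieved subset would then be scored with the same sentence continuity function sketched earlier, so that only grammatically similar chat sentences are compared against the converted input sentence.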
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/267,437 US20240046038A1 (en) | 2020-12-16 | 2020-12-16 | Opinion aggregation device, opinion aggregation method, and program |
PCT/JP2020/047000 WO2022130541A1 (ja) | 2020-12-16 | 2020-12-16 | 意見集約装置、意見集約方法、およびプログラム |
JP2022569400A JP7492166B2 (ja) | 2020-12-16 | 2020-12-16 | 意見集約装置、意見集約方法、およびプログラム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/047000 WO2022130541A1 (ja) | 2020-12-16 | 2020-12-16 | 意見集約装置、意見集約方法、およびプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022130541A1 true WO2022130541A1 (ja) | 2022-06-23 |
Family
ID=82059183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/047000 WO2022130541A1 (ja) | 2020-12-16 | 2020-12-16 | 意見集約装置、意見集約方法、およびプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240046038A1 (ja) |
JP (1) | JP7492166B2 (ja) |
WO (1) | WO2022130541A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012064073A (ja) * | 2010-09-17 | 2012-03-29 | Baazu Joho Kagaku Kenkyusho:Kk | 自動会話制御システム及び自動会話制御方法 |
JP2018055548A (ja) * | 2016-09-30 | 2018-04-05 | 株式会社Nextremer | 対話装置、学習装置、対話方法、学習方法、およびプログラム |
JP2020102193A (ja) * | 2018-12-20 | 2020-07-02 | 楽天株式会社 | 文章変換システム、文章変換方法、及びプログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005117155A (ja) | 2003-10-03 | 2005-04-28 | Nippon Telegr & Teleph Corp <Ntt> | 電子会議データ取得方法、装置、プログラム、および記録媒体ならびに電子会議データ検索方法、装置、プログラム、および記録媒体 |
JP2010048953A (ja) | 2008-08-20 | 2010-03-04 | Toshiba Corp | 対話文生成装置 |
- 2020
- 2020-12-16 WO PCT/JP2020/047000 patent/WO2022130541A1/ja active Application Filing
- 2020-12-16 JP JP2022569400A patent/JP7492166B2/ja active Active
- 2020-12-16 US US18/267,437 patent/US20240046038A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012064073A (ja) * | 2010-09-17 | 2012-03-29 | Baazu Joho Kagaku Kenkyusho:Kk | 自動会話制御システム及び自動会話制御方法 |
JP2018055548A (ja) * | 2016-09-30 | 2018-04-05 | 株式会社Nextremer | 対話装置、学習装置、対話方法、学習方法、およびプログラム |
JP2020102193A (ja) * | 2018-12-20 | 2020-07-02 | 楽天株式会社 | 文章変換システム、文章変換方法、及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022130541A1 (ja) | 2022-06-23 |
US20240046038A1 (en) | 2024-02-08 |
JP7492166B2 (ja) | 2024-05-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110442718B (zh) | 语句处理方法、装置及服务器和存储介质 | |
JP7127106B2 (ja) | 質問応答処理、言語モデルの訓練方法、装置、機器および記憶媒体 | |
KR102222451B1 (ko) | 텍스트 기반 사용자심리상태예측 및 콘텐츠추천 장치 및 그 방법 | |
US20190385599A1 (en) | Speech recognition method and apparatus, and storage medium | |
US10102275B2 (en) | User interface for a query answering system | |
Mariooryad et al. | Building a naturalistic emotional speech corpus by retrieving expressive behaviors from existing speech corpora | |
US20200035234A1 (en) | Generating interactive audio-visual representations of individuals | |
JP2017534941A (ja) | オーファン発話検出システム及び方法 | |
WO2023236252A1 (zh) | 答案生成方法、装置、电子设备及存储介质 | |
CN114968788B (zh) | 人工智能算法编程能力自动评估方法、装置、介质及设备 | |
JP2006146567A (ja) | 表現検出システム、表現検出方法、及びプログラム | |
EP3685245A1 (en) | Method, apparatus, and computer-readable media for customer interaction semantic annotation and analytics | |
JP7498129B2 (ja) | 情報をプッシュするための方法及び装置、電子機器、記憶媒体並びにコンピュータプログラム | |
US20210358476A1 (en) | Monotone Speech Detection | |
US11935315B2 (en) | Document lineage management system | |
CN107943940A (zh) | 数据处理方法、介质、系统和电子设备 | |
Milea et al. | Prediction of the msci euro index based on fuzzy grammar fragments extracted from european central bank statements | |
CN117475351A (zh) | 视频分类方法、装置、计算机设备及计算机可读存储介质 | |
US20210034809A1 (en) | Predictive model for ranking argument convincingness of text passages | |
Addepalli et al. | A proposed framework for measuring customer satisfaction and product recommendation for ecommerce | |
CN117744664A (zh) | 面向大模型场景的内容评估方法、装置、设备及存储介质 | |
CN117494814A (zh) | 提示词全生命周期管理方法、系统、电子设备、存储介质 | |
WO2022130541A1 (ja) | 意見集約装置、意見集約方法、およびプログラム | |
CN112102062A (zh) | 一种基于弱监督学习的风险评估方法、装置及电子设备 | |
CN114676227A (zh) | 样本生成方法、模型的训练方法以及检索方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20965929 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022569400 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18267437 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20965929 Country of ref document: EP Kind code of ref document: A1 |