WO2022030134A1 - Information processing device, information processing method, program, and recording medium - Google Patents

Information processing device, information processing method, program, and recording medium

Info

Publication number
WO2022030134A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
information
concept
unit
processing unit
Prior art date
Application number
PCT/JP2021/024124
Other languages
French (fr)
Japanese (ja)
Inventor
由仁 宮内
Original Assignee
Necソリューションイノベータ株式会社
Priority date
Filing date
Publication date
Application filed by Necソリューションイノベータ株式会社
Priority to JP2022541152A (JP7381143B2)
Publication of WO2022030134A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • the present invention relates to an information processing apparatus, an information processing method, a program and a recording medium.
  • Deep learning uses a large amount of training data to make a multi-layer neural network learn features.
  • Patent Documents 1 to 3 disclose a neural network processing apparatus capable of constructing a neural network with a small amount of labor and arithmetic processing by defining a large-scale neural network as a combination of a plurality of subnetworks. Further, Patent Document 4 discloses a structure optimizing device that optimizes a neural network.
  • Japanese Unexamined Patent Publication No. 2001-051968
  • Japanese Patent Application Laid-Open No. 2002-251601
  • Japanese Patent Application Laid-Open No. 2003-317573
  • Japanese Unexamined Patent Publication No. 09-091263
  • An object of the present invention is to provide an information processing device, an information processing method, a program, and a recording medium capable of processing information with an algorithm closer to human thinking.
  • there is provided an information processing apparatus having an input unit that generates first data representing a situation grasped from received information, a processing unit that generates second data representing a concept related to the situation based on the first data, and an output unit that generates and outputs a word expressing the concept based on the second data, wherein the output unit is configured to output the generated word as new information to the input unit when the generated word does not match the solution corresponding to the information.
  • there is also provided an information processing method having a first step of generating first data representing a situation grasped from received information, a second step of generating, based on the first data, second data representing a concept related to the situation, and a third step of generating and outputting a word expressing the concept based on the second data, wherein, when the generated word does not match the solution corresponding to the information, the generated word is used as new information and the processing is repeated from the first step.
  • there is also provided a program that causes a computer to function as means for generating first data representing a situation grasped from received information, means for generating, based on the first data, second data representing a concept related to the situation, and means for generating a word expressing the concept based on the second data and, when the generated word does not match the solution corresponding to the information, outputting the generated word as new information to the means for generating the first data.
  • FIG. 1 is a schematic diagram showing a configuration example of an information processing apparatus according to the first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing a configuration example of a data processing module in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 3 is a diagram illustrating the operation of the data processing module in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 4 is a flowchart showing an information processing method using the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 5 is a sequence diagram showing a first application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 6 is a diagram showing preconditions in a second application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 7 is a sequence diagram showing a second application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 8 is a schematic diagram (No. 1) showing a hardware configuration example of the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 9 is a schematic diagram (No. 2) showing a hardware configuration example of the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a data processing processor in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 11A is a diagram (No. 1) illustrating a data processing processor in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 11B is a diagram (No. 2) illustrating a data processing processor in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 11C is a diagram (No. 3) illustrating a data processing processor in the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 12 is a schematic diagram showing a configuration example of the information processing apparatus according to the second embodiment of the present invention.
  • FIG. 1 is a schematic diagram showing a configuration example of an information processing apparatus according to the present embodiment.
  • FIG. 2 is a schematic diagram showing a configuration example of a data processing module in the information processing apparatus according to the present embodiment.
  • FIG. 3 is a diagram illustrating the operation of the data processing module in the information processing apparatus according to the present embodiment.
  • the information processing apparatus 1000 may be composed of an input unit 100, a processing unit 200, and an output unit 300.
  • the input unit 100 has a function of generating status information data (first data) based on the received information.
  • the situation information data is data, in a predetermined format, representing the situation grasped from the received information.
  • the format of the situation information data is not particularly limited, but for example, bitmap format data or vector representation format data can be applied.
  • the information received by the input unit 100 includes information received from the outside of the information processing apparatus 1000 and information output by the output unit 300.
  • the information received by the input unit 100 from the outside is not particularly limited, and examples thereof include auditory information (words, sounds, etc.), visual information (images, etc.), and tactile information (sensible temperature, texture, etc.).
  • when voice input or text input is made to the input unit 100 as information from the outside, the input unit 100 can convert the input data into situation information data by using a known word identification AI system or the like.
  • when an image is input to the input unit 100 as information from the outside, the input unit 100 can convert the input data into situation information data by using a known image identification AI system or the like.
  • the information received by the input unit 100 from the output unit 300 is information related to "words" (thoughts) generated by the output unit 300.
  • the processing unit 200 has a function of performing predetermined conceptual processing on the status information data received from the input unit 100, generating conceptual information data (second data) representing a concept related to the situation represented in the status information data, and outputting the generated conceptual information data to the output unit 300.
  • the processing unit 200 may be configured to include a recognition processing unit 210 and a concept processing unit 220.
  • the recognition processing unit 210 has a function of converting the situation information data received from the input unit 100 into conceptual information data (third data).
  • the conceptual information data generated by the recognition processing unit 210 is data that conceptualizes the situation represented by the situation information data received from the input unit 100.
  • the conceptual information data may be data in the same format as the situation information data received from the input unit 100. Data in the same format is data having the same data structure and size. However, the situation information data and the conceptual information data do not necessarily have to be in the same format, and may be in different formats as long as there is at least a partially overlapping portion.
  • the recognition processing unit 210 performs conceptual processing such as identification and classification on the status information data received from the input unit 100, and generates conceptual information data.
  • the recognition processing unit 210 may include a plurality of types of recognition modules depending on the type of status information data.
  • the recognition processing unit 210 may include a word recognition module that processes status information data received from the word identification AI system, an image recognition module that processes status information data received from the image identification AI system, and the like.
  • the classification performed by the recognition processing unit 210 may include, for example, classification of objects, requirements, environments, and the like shown in the situation information data.
  • the concept processing unit 220 has a function of performing conceptual processing on the conceptual information data received from the recognition processing unit 210 and converting it into conceptual information data (second data) representing another concept.
  • the conceptual processing performed on the conceptual information data is a process of grasping the situation from the received conceptual information data, thinking about the grasped situation, and deriving the optimum solution for the grasped situation.
  • a proof assistant (certification support system) such as Coq can be applied to the concept processing unit 220. If no answer (solution) is obtained from the concept processing unit 220 (that is, the processing is incomplete), the output data is fed back to the input unit 100 via the output unit 300, which will be described later, so that self-questioning and self-answering are repeated and a more appropriate answer can be obtained.
  • the output unit 300 has a function of generating "words" (thoughts) based on the solution (conceptual information data) output from the processing unit 200.
  • the "word” is not particularly limited, but as an example, a sentence including at least a subject and a predicate expressing an idea corresponding to the concept information data output from the concept processing unit 220 can be mentioned.
  • the "word” generated by the output unit 300 can be output as character information or voice information, and can be output to the input unit 100.
  • the method of converting a concept into a "word" is not particularly limited. As a simple method, for example, a sentence can be generated by inserting words extracted from the concept into a fixed-frame sentence.
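  • as a minimal illustration of this fixed-frame approach (the function name and the frame string below are assumptions introduced for illustration, not part of the disclosure), words extracted from a concept can be slotted into a template sentence:

```python
def concept_to_word(words, frame="Consider {0} of {1}."):
    """Insert words extracted from a concept into a fixed-frame sentence."""
    return frame.format(*words)

# e.g. a concept containing "issuance" and "Mr. A's license"
print(concept_to_word(("issuance", "Mr. A's license")))
# -> Consider issuance of Mr. A's license.
```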
  • the output unit 300 may further include a function of generating an "action" based on the solution (conceptual information data) output from the processing unit 200.
  • the output unit 300 can be configured to output, for example, an instruction regarding the generated "behavior".
  • This instruction is not particularly limited, but may include, for example, a predetermined instruction for performing a task according to the generated "behavior", for example, execution of an application.
  • the concept information data received by the output unit 300 may be the concept information data output by the recognition processing unit 210 or the concept information data output by the concept processing unit 220.
  • FIG. 2 is a diagram showing a schematic configuration of a data processing module in the information processing apparatus according to the present embodiment.
  • the same data processing module can also be applied to the input unit 100 and the output unit 300.
  • one functional block may include a plurality of data processing modules 400.
  • one functional block can be configured to use different data processing modules 400 depending on the input data.
  • the data processing module 400 may include a data acquisition unit 410, a storage unit 420, an identification unit 430, and a data output unit 440.
  • the data acquisition unit 410 has a function of acquiring status information data and conceptual information data.
  • the situation information data and the conceptual information data may be data in a bitmap format having a predetermined size, as described above.
  • the storage unit 420 stores a learning model including a plurality of models.
  • in each model, a pattern representing specific information or a concept is associated with a value representing other information or a concept corresponding to that pattern.
  • specifically, each model may include a pattern that associates a plurality of elements representing a particular situation or concept with their element values, and a value that associates a plurality of elements representing the other information or concept corresponding to that particular situation or concept with their element values.
  • the identification unit 430 has a function of comparing the pattern of the data acquired by the data acquisition unit 410 with the patterns of the plurality of models stored in the storage unit 420, and selecting the value associated with the model having the best-fitting pattern among the plurality of models.
  • the patterns and values possessed by each of the plurality of models may be data in the same format as the situation information data and the conceptual information data, or data in a different format, as in the relationship between the situation information data and the conceptual information data.
  • the value associated with a pattern does not necessarily have to be bitmap-format data, and may be data including predetermined index information. In this case, the index information can be used so that the data corresponding to it is acquired from the outside.
  • the identification unit 430 may have a learning function for the learning model. For example, when the learning model contains no model whose degree of conformity is equal to or higher than a predetermined threshold, a model corresponding to the data acquired by the data acquisition unit 410 can be added to the learning model, thereby updating the learning model.
  • the technique described in Japanese Patent Application No. 2019-144121 by the same applicant can be applied.
  • the data output unit 440 has a function of outputting the value (data in bitmap format) selected by the identification unit 430 as output data.
  • the data output unit 440 typically outputs the selected value to the next-stage processing unit, but may be configured to return the selected value to the data acquisition unit 410 so that recursive processing is performed within the data processing module 400.
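  • the behavior of the data processing module 400 described above (pattern matching against a stored learning model, output of the value associated with the best-fitting model, and addition of a new model when no model fits well enough) can be summarized in the short sketch below. The class name, the fit threshold, the flat 0/1 list representation of bitmap data, and the value assigned to a newly added model are assumptions introduced for illustration, not part of the disclosure.

```python
class DataProcessingModule:
    """Sketch of the data processing module 400 (acquisition -> identification -> output)."""

    def __init__(self, learning_model, fit_threshold=0.5):
        # learning_model: list of (pattern, value) pairs held by the storage unit 420;
        # patterns and values are flat lists of 0/1 cells (bitmap format).
        self.learning_model = list(learning_model)
        self.fit_threshold = fit_threshold  # assumed threshold for the learning function

    @staticmethod
    def _goodness_of_fit(input_pattern, model_pattern):
        # Inner product of the two bitmaps, normalized by the number of
        # 1-valued cells in the input data (see the 3x3 example below).
        inner = sum(a * b for a, b in zip(input_pattern, model_pattern))
        return inner / max(sum(input_pattern), 1)

    def process(self, input_pattern):
        # Identification unit 430: find the model whose pattern fits best.
        fits = [self._goodness_of_fit(input_pattern, pattern)
                for pattern, _ in self.learning_model]
        best = max(range(len(fits)), key=fits.__getitem__) if fits else None
        if best is None or fits[best] < self.fit_threshold:
            # Learning function: no model fits well enough, so add a model
            # corresponding to the acquired data and update the learning model
            # (associating the input with itself is an assumption).
            self.learning_model.append((list(input_pattern), list(input_pattern)))
            best = len(self.learning_model) - 1
        # Data output unit 440: output the value associated with the selected model.
        return self.learning_model[best][1]
```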
  • FIG. 3 is a schematic diagram showing the operation of the data processing module 400 in the information processing apparatus according to the present embodiment.
  • the data acquisition unit 410 acquires status information data from the input unit 100. Alternatively, the data acquisition unit 410 acquires conceptual information data from the processing unit 200. The data acquisition unit 410 outputs the acquired data (input data) to the identification unit 430.
  • the identification unit 430 compares the pattern of the data received from the data acquisition unit 410 with each pattern of the plurality of models of the learning model stored in the storage unit 420. Then, the identification unit 430 extracts, from the plurality of models included in the learning model, the model having the pattern with the highest degree of conformity to the pattern of the received data. A model whose pattern has a high degree of conformity with the received data can be said to be a model whose information distance to the received data is small.
  • the method of determining the goodness of fit between the input data (situation information data or conceptual information data) and the learning model is not particularly limited; one example is a method using the inner product value of the input data pattern and the pattern of each model of the learning model.
  • here, it is assumed that each of the situation information data, the conceptual information data, and the patterns and values of the learning model is a bitmap-format pattern including nine cells arranged in a 3 × 3 matrix.
  • the value of each cell is 0 or 1.
  • the inner product value of the input data pattern and the learning model pattern is calculated by multiplying the values of cells having the same coordinates and summing the products over all coordinates.
  • for example, it is assumed that the values of the cells constituting the input data pattern are A, B, C, D, E, F, G, H, and I, and that the values of the cells constituting the pattern of the learning model to be compared are 1,0,0,0,1,0,0,0,1.
  • in this case, the inner product value of the input data pattern and the learning model pattern is A × 1 + B × 0 + C × 0 + D × 0 + E × 1 + F × 0 + G × 0 + H × 0 + I × 1.
  • the inner product value calculated in this way is normalized by dividing by the number of cells having a value of 1 among the cells included in the input data.
  • the calculation and normalization of the inner product value for the input data are performed for each of the plurality of models included in the learning model.
  • the model with the maximum normalized inner product value is extracted from the plurality of models of the learning model.
  • the normalized inner product value represents the likelihood, and the larger the value, the higher the goodness of fit to the input data. Therefore, by extracting the model with the maximum normalized inner product value, the model with the highest goodness of fit to the input data can be extracted.
  • here, it is assumed that a model having a pattern in which the value of each cell is 1,0,0,0,1,0,0,0,1 is extracted as the model with the highest goodness of fit to the input data.
  • the identification unit 430 outputs the value associated with the extracted model to the data output unit 440 as another conceptual information data converted from the input data.
  • it is assumed that the extracted model has a pattern in which the value of each cell is 1,0,0,0,1,0,0,0,1 and an associated value in which the value of each cell is 1,1,0,0,1,0,0,1,1.
  • in this case, the value in which the value of each cell is 1,1,0,0,1,0,0,1,1 becomes the output data to be output to the data output unit 440.
  • the data output unit 440 outputs the data acquired from the identification unit 430 to the next-stage processing unit.
  • in this way, input data in which the value of each cell is A, B, C, D, E, F, G, H, I can be converted into output data in which the value of each cell is 1,1,0,0,1,0,0,1,1.
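  • the normalized inner product used in this 3 × 3 example can be checked numerically. In the sketch below, the input bits A to I are assumed to form the diagonal pattern 1,0,0,0,1,0,0,0,1, and a second, made-up model is added for contrast; both are illustrative assumptions.

```python
# 3x3 bitmaps flattened row by row; the concrete bit values are assumptions.
input_data = [1, 0, 0,
              0, 1, 0,
              0, 0, 1]          # cells A..I

# Learning model: (pattern, associated value) pairs.
models = [
    ([1, 0, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 1, 0, 0, 1, 1]),  # from the text
    ([0, 1, 0, 1, 1, 1, 0, 1, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0]),  # made-up contrast
]

def normalized_inner_product(x, pattern):
    inner = sum(a * b for a, b in zip(x, pattern))  # A*1 + B*0 + ... + I*1
    return inner / sum(x)       # divide by the number of 1-cells in the input

scores = [normalized_inner_product(input_data, pattern) for pattern, _ in models]
print(scores)                   # [1.0, 0.333...] -> the first model fits best
print(models[scores.index(max(scores))][1])
# -> [1, 1, 0, 0, 1, 0, 0, 1, 1], the value passed to the data output unit 440
```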
  • by configuring the recognition processing unit 210 using such a data processing module 400, the situation information data received from the input unit 100 can be converted into conceptual information data. Further, by configuring the concept processing unit 220 using such a data processing module 400, the conceptual information data received from the recognition processing unit 210 can be converted into conceptual information data of another concept.
  • FIG. 4 is a flowchart showing an information processing method according to the present embodiment.
  • the input unit 100 acquires information from the outside (step S101).
  • the information acquired by the input unit 100 is not particularly limited, and examples thereof include auditory information (words, sounds, etc.), visual information (images, etc.), and tactile information (sensible temperature, texture, etc.).
  • the input unit 100 converts the information received from the outside into data in a predetermined format and generates status information data (step S102). For example, when voice input or text input is made to the input unit 100 as information from the outside, the input unit 100 converts the input data into the situation information data by using a known word identification AI system or the like. Alternatively, when an image is input to the input unit 100 as information from the outside, the input unit 100 converts the input data into situation information data using a known image identification AI system or the like.
  • the input unit 100 outputs the generated status information data to the recognition processing unit 210 of the processing unit 200.
  • the recognition processing unit 210 uses, for example, the above-mentioned data processing module 400 to convert the situation information data received from the input unit into the concept information data and output it to the concept processing unit 220 (step S103).
  • the concept processing unit 220 uses, for example, the above-mentioned data processing module 400 to convert the conceptual information data received from the recognition processing unit 210 into conceptual information data representing a concept different from the received conceptual information data (step S104).
  • the concept processing unit 220 outputs the generated concept information data to the output unit 300.
  • the output unit 300 generates "words” and "actions” based on the conceptual information data received from the processing unit 200 (step S105).
  • the output unit 300 determines whether or not the generated "word” is sufficient as an answer (solution) (step S106).
  • if the generated "word" is insufficient (incomplete) as a solution ("No" in step S106), the generated "word" is fed back to the input unit 100 and the process returns to step S102, after which the processing of steps S102 to S106 is repeated. That is, self-questioning and self-answering are repeated using the generated "words".
  • the "word” generated by the output unit 300 represents, so to speak, a result based on past experience. Output unit 300 By processing the generated "words" again, it is possible to make a judgment based on past experience.
  • if the generated "word" is sufficient as a solution ("Yes" in step S106), the output unit 300 outputs the generated "word" and / or "action" and ends the series of processing (step S107).
  • the output of "words” is not particularly limited, but for example, the generated “words” can be output as character information or voice information.
  • the output of the "behavior” is not particularly limited, but for example, a predetermined instruction for executing the task corresponding to the generated "behavior” can be output.
  • all the processing in the information processing apparatus according to the present embodiment is data-to-data conversion, that is, data-driven processing.
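  • gathering steps S101 to S107 into one loop, the method reduces to a data-driven sketch of the following kind. The unit interfaces, the sufficiency check, and the iteration limit are assumptions introduced for illustration; in the apparatus each conversion would be carried out by data processing modules 400 as described above.

```python
def run(information, input_unit, recognition_unit, concept_unit, output_unit,
        max_iterations=10):
    """Sketch of steps S101-S107; all method names are illustrative assumptions."""
    for _ in range(max_iterations):
        situation = input_unit.to_situation_data(information)   # step S102
        concept = recognition_unit.convert(situation)           # step S103
        concept = concept_unit.convert(concept)                 # step S104
        word, action = output_unit.generate(concept)            # step S105
        if output_unit.is_sufficient(word):                     # step S106
            return output_unit.emit(word, action)               # step S107
        information = word  # feed the generated "word" back (self-questioning)
    return None
```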
  • Next, application examples of information processing using the information processing apparatus 1000 according to the present embodiment will be described with reference to FIGS. 5 to 7.
  • two application examples will be given to more specifically explain the information processing in the present embodiment.
  • the first application example is an application example to a license issuing business process for the purpose of acquiring a license number in response to a license request from a user and reporting the acquired license number to the user. It is assumed that the information processing apparatus 1000 has knowledge of acquiring a license number by operating the numbering ledger.
  • FIG. 5 is a sequence diagram showing an information processing method in the first application example.
  • the input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data.
  • the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A”, “license”, and "issue”.
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates "words” based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation of "Mr. A”, “license”, and “issuance”, a word such as “think” is added to generate a "word”. As a result, for example, the "word” “Consider issuing Mr. A's license” is generated. The output unit 300 outputs the generated “word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the data is converted into conceptual information data including the concept (concept B) indicating the problems of "Mr. A”, “license”, and “issuance” according to the word "thinking”.
  • the recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
  • the concept processing unit 220 converts the input concept information data into another concept information data.
  • the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the problems of "Mr. A”, “license”, and “issuance”.
  • specifically, the method of "numbering ledger operation" is recalled from the information "license" and "issuance", and the input conceptual information data is converted into conceptual information data including the concepts of "Mr. A", "license", and "numbering ledger operation".
  • the concept processing unit 220 outputs the converted concept information data to the output unit 300.
  • the output unit 300 generates "words” based on the concept information data received from the concept processing unit 220. For example, a word such as “do” is added to the concept C indicating the behavior of "Mr. A", “license”, and “numbering ledger operation” to generate “words”. As a result, for example, the "word” of "operate the license numbering ledger of Mr. A” is generated.
  • the output unit 300 outputs the generated “word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the data is converted into conceptual information data including the concept (concept D) indicating the work for "Mr. A”, “license”, and “numbering ledger operation” according to the word "do".
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210.
  • here, an instruction to execute a schedule application for issuing a license number is output based on the information "license" and "numbering ledger operation" indicated in the conceptual information data.
  • the schedule application outputs the license number obtained by execution to the input unit 100.
  • the input unit 100 converts the license number received from the schedule application as it is or into status information data, and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 identifies that the received information is the result of the execution of the schedule application, and outputs the received information to the output unit 300.
  • the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, a word such as "is" is added to the license number "123-456-7890" output by the schedule application to generate a word. As a result, for example, the "word" "123-456-7890." is generated.
  • the output unit 300 reports the generated "words" to the user by displaying them on a display device, outputting them by voice, or the like, and ends a series of processes.
  • the second application example is an application to a schedule adjustment process for the purpose of comparing the schedules of two users (Mr. A and Mr. B) over the 5 days from Monday to Friday, adjusting the free time of both users, and reporting the result. Here, it is assumed that the schedules of Mr. A and Mr. B from Monday to Friday are as shown in FIG. 6. In the table of FIG. 6, it is assumed that the "morning" and "afternoon" time zones are working hours, and the "lunch break" is outside working hours. Further, "0" indicates a free time zone without a reservation, and "1", "2", and "3" indicate time zones with reservations.
  • the values of "1", “2”, and “3” indicate the priority of the reservation, "1" is the reservation that requires attendance, “2" is the reservation that attendance is preferable, and “3” is the reservation that can be absent.
  • the information processing apparatus 1000 is provided with knowledge of adjusting the schedule during working hours, but is not provided with knowledge of "lunch break” and "priority" in the initial state.
  • FIG. 7 is a sequence diagram showing an information processing method in the second application example.
  • the input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data.
  • the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A”, “Mr. B", and "schedule adjustment".
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates "words” based on the conceptual information data received from the recognition processing unit 210. For example, a word such as “think” is added to the concept A indicating the situation of "Mr. A”, “Mr. B", and “schedule adjustment” to generate “words”. As a result, for example, the "word” “Consider adjusting the schedule of Mr. A and Mr. B.” is generated. The output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the data is converted into conceptual information data including the concept (concept B) indicating the tasks of "Mr. A”, “Mr. B", and “schedule adjustment” according to the word "thinking”.
  • the recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
  • the concept processing unit 220 converts the input concept information data into another concept information data.
  • the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the tasks of "Mr. A”, “Mr. B”, and “schedule adjustment”.
  • specifically, the methods of "working" and "searching with the schedule application" are recalled from the information "schedule adjustment", and the input conceptual information data is converted into conceptual information data including concepts indicating the actions of "Mr. A", "Mr. B", "working", and "searching with the schedule application".
  • the concept processing unit 220 outputs the converted concept information data to the output unit 300.
  • the output unit 300 generates "words” based on the concept information data received from the concept processing unit 220. For example, for the concept C indicating the behaviors of "Mr. A”, “Mr. B", “working”, and “searching with the schedule application”, words such as “do” are added to generate “words”. As a result, for example, the “word” “Search with Mr. A and Mr. B's working schedule application” is generated.
  • the output unit 300 outputs the generated “word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is converted into conceptual information data including the concept (concept D) indicating the work of "Mr. A", "Mr. B", "working", and "searching with the schedule application" according to the word "do".
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210.
  • an instruction to execute a schedule application for adjusting the schedule during work of Mr. A and Mr. B is output.
  • the schedule application outputs the result obtained by execution to the input unit 100.
  • the input unit 100 converts the result received from the schedule application as it is or into status information data, and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 identifies that the received information is the result of the execution of the schedule application, and outputs the received information to the output unit 300.
  • the output unit 300 generates "words” based on the information received from the recognition processing unit 210. For example, as a result of output by the schedule application, a word such as “is” is added to “no space” to generate a word. As a result, for example, the "word” "There is no space” is generated.
  • the output unit 300 reports the generated "words" to the user by displaying them on a display device, outputting them by voice, or the like, and ends a series of processes.
  • the input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data.
  • the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A”, “Mr. B”, “lunch break”, and "schedule adjustment”.
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates "words” based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation of "Mr. A”, “Mr. B", “lunch break”, and “schedule adjustment”, words such as "think” are added to generate “words”. As a result, for example, the "word” “Mr. A and Mr. B consider adjusting the lunch break schedule” is generated.
  • the output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the data is converted into conceptual information data including the concept (concept B) indicating the tasks of "Mr. A”, “Mr. B", “lunch break”, and "schedule adjustment” according to the word "thinking”.
  • the recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
  • the concept processing unit 220 converts the input concept information data into another concept information data.
  • the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the tasks of "Mr. A”, “Mr. B”, “lunch break”, and “schedule adjustment”.
  • the information processing apparatus 1000 does not have "lunch break" as knowledge, but the methods of "lunch break" and "search with the schedule application" are recalled from the information "schedule adjustment".
  • the input conceptual information data is converted into conceptual information data including concepts indicating actions such as “Mr. A”, “Mr. B", “lunch break", and “search with the schedule application”.
  • the concept processing unit 220 outputs the converted concept information data to the output unit 300.
  • the output unit 300 generates "words” based on the concept information data received from the concept processing unit 220. For example, to the concept C indicating the actions of "Mr. A”, “Mr. B", “lunch break”, and “search with the schedule application”, words such as “do” are added to generate “words”. As a result, for example, the "word” “Search with Mr. A and Mr. B's lunch break schedule application” is generated.
  • the output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is converted into conceptual information data including the concept (concept D) indicating the work of "Mr. A", "Mr. B", "lunch break", and "search with the schedule application" according to the word "do".
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210.
  • an instruction to execute a schedule application for adjusting the schedule of lunch breaks of Mr. A and Mr. B is output.
  • the schedule application outputs the result obtained by execution to the input unit 100.
  • the input unit 100 converts the result received from the schedule application as it is or into status information data, and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 identifies that the received information is the result of execution of the schedule application, and outputs the received information to the output unit 300.
  • the result of "Tuesday, Thursday, Friday lunch break" is output. ..
  • the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, a word such as "is" is added to the result "Tue, Thu, Fri lunch break" output by the schedule application to generate a word. This produces, for example, the "word" "Tuesday, Thursday, and Friday lunch break."
  • the output unit 300 reports the generated "words" to the user by displaying them on a display device or outputting them by voice.
  • the output unit 300 generates "words” based on the concept information data received from the recognition processing unit 210 in parallel with the generation of the "action" for the concept D. For example, to the concept D indicating the work of "Mr. A”, “Mr. B", “lunch break", and “search with the schedule application”, words such as "done” are added to generate “words”. As a result, for example, a “word” such as “Mr. A and Mr. B searched with the lunch break schedule application” is generated. The output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the concept processing unit 220 trains the learning model stored in the storage unit 420 of the data processing module 400 so that "lunch break" is memorized as knowledge. In this way, the series of processes is completed.
  • the input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data.
  • the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A”, “Mr. B", “Priority 3", and "Schedule adjustment”.
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates "words” based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation of "Mr. A”, “Mr. B", “Priority 3", and “Schedule adjustment”, words such as “think” are added to generate “words”. As a result, for example, the "word” “Mr. A and Mr. B consider the priority 3 schedule adjustment” is generated.
  • the output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is converted into conceptual information data including the concept (concept B) indicating the tasks of "Mr. A", "Mr. B", "Priority 3", and "Schedule adjustment" according to the word "thinking".
  • the recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
  • the concept processing unit 220 converts the input concept information data into another concept information data.
  • the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the tasks of "Mr. A”, “Mr. B”, “Priority 3", and “Schedule adjustment”.
  • the information processing apparatus 1000 does not have "priority 3" as knowledge, but the methods of "priority 3" and "search by schedule application" are recalled from the information "schedule adjustment".
  • the input conceptual information data is converted into conceptual information data including concepts indicating actions such as “Mr. A”, “Mr. B", “Priority 3", and “Search by schedule application”.
  • the concept processing unit 220 outputs the converted concept information data to the output unit 300.
  • the output unit 300 generates "words” based on the concept information data received from the concept processing unit 220. For example, for the concept C indicating the actions of "Mr. A”, “Mr. B", “Priority 3", and “Search by schedule application”, words such as “do” are added to generate “words”. As a result, for example, the "word” “Search with Mr. A and Mr. B priority 3 schedule application” is generated.
  • the output unit 300 outputs the generated "word” to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that the data is converted into conceptual information data including the concept (concept D) indicating the work of "Mr. A", "Mr. B", "Priority 3", and "Search with the schedule application" according to the word "do".
  • the recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
  • the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210.
  • an instruction to execute a schedule application for adjusting the schedule of priority 3 of Mr. A and Mr. B is output.
  • the schedule application outputs the result obtained by execution to the input unit 100.
  • the input unit 100 converts the result received from the schedule application as it is or into status information data, and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 identifies that the received information is the result of the execution of the schedule application, and outputs the received information to the output unit 300.
  • here, since a free time zone or a priority 3 time zone common to the schedules of Mr. A and Mr. B exists on Monday afternoon and Friday morning, it is assumed that the result "Monday afternoon, Friday morning priority 3" is output.
  • the output unit 300 generates "words” based on the information received from the recognition processing unit 210. For example, as a result output by the schedule application, a word such as “is” is added to "priority 3 on Monday afternoon and Friday morning” to generate a word. As a result, for example, the "word” “Monday afternoon, Friday morning has priority 3" is generated.
  • the output unit 300 reports the generated "words" to the user by displaying them on a display device or outputting them by voice.
  • the output unit 300 generates "words" based on the concept information data received from the recognition processing unit 210 in parallel with the generation of the "action" for the concept D. For example, to the concept D indicating the work of "Mr. A", "Mr. B", "Priority 3", and "Search by schedule application", words such as "done" are added to generate "words". As a result, for example, a "word" such as "Mr. A and Mr. B searched with the priority 3 schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
  • the input unit 100 converts the input "word” into status information data and outputs it to the recognition processing unit 210.
  • the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data.
  • the result of "action” is referred to according to the word “done”
  • “priority 3” is obtained according to the result obtained even though “priority 3" is not provided as knowledge.
  • the concept processing unit 220 trains the learning model stored in the storage unit 420 of the data processing module 400 so that "priority 3" is memorized as knowledge. In this way, the series of processes is completed.
  • FIGS. 8 to 11C are schematic views showing a hardware configuration example of the information processing apparatus according to the present embodiment.
  • FIG. 10 is a diagram illustrating a data processing processor in the information processing apparatus according to the present embodiment.
  • FIGS. 11A, 11B, and 11C are diagrams showing pattern examples of input data and a learning model.
  • the information processing device 1000 can be realized by a hardware configuration similar to that of a general information processing device, as shown in FIG. 8, for example.
  • the information processing apparatus 1000 may include a CPU (Central Processing Unit) 500, a main storage unit 502, a communication unit 504, and an input / output interface unit 506.
  • the CPU 500 is a control / arithmetic unit that performs overall control and arithmetic processing of the information processing apparatus 1000.
  • the main storage unit 502 is a storage unit used for a data work area or a data temporary save area, and may be configured by a memory such as a RAM (Random Access Memory).
  • the communication unit 504 is an interface for transmitting and receiving data via a network.
  • the input / output interface unit 506 is an interface for connecting to an external output device 510, an input device 512, a storage device 514, and the like to transmit / receive data.
  • the CPU 500, the main storage unit 502, the communication unit 504, and the input / output interface unit 506 are connected to each other by the system bus 508.
  • the storage device 514 may be configured by, for example, a non-volatile memory such as a ROM (Read Only Memory), a hard disk device composed of a magnetic disk, or a semiconductor memory.
  • the main storage unit 502 can be used as a work area for executing an operation in the data processing module 400 or the like.
  • the CPU 500 can function as a control unit that controls arithmetic processing in the main storage unit 502.
  • the storage device 514 can be used as a storage unit 420 and can store a trained learning model.
  • the communication unit 504 is a communication interface based on standards such as Ethernet (registered trademark) and Wi-Fi (registered trademark), and is a module for communicating with other devices.
  • the learning model stored in the storage device 514 may be configured to receive from another device via the communication unit 504. For example, a frequently used learning model can be stored in the storage device 514, and infrequently used learning cell information can be configured to be read from another device.
  • the output device 510 may include a display such as a liquid crystal display device.
  • the output device 510 can be used as a display device for presenting the processing result to the user.
  • the input device 512 is a keyboard, a mouse, a touch panel, or the like, and can be used for a user to input a predetermined instruction to the information processing device 1000.
  • each part of the information processing apparatus 1000 can be realized in terms of hardware by mounting circuit components that are hardware components such as LSI (Large Scale Integration) in which a program is incorporated.
  • alternatively, each part can be realized by software by storing a program that provides its functions in the storage device 514, loading the program into the main storage unit 502, and executing it on the CPU 500.
  • the configuration of the information processing device 1000 shown in FIG. 1 does not necessarily have to be configured as one independent device.
  • a part of the input unit 100, the processing unit 200, and the output unit 300, for example, the processing unit 200 may be arranged on the cloud, and an information processing system may be constructed by these.
  • the information processing apparatus 1000 can be configured as a data-driven data flow machine, for example, as shown in FIG.
  • the information processing apparatus 1000 may include a plurality of data processing processors 600 and an input / output interface unit 620.
  • the plurality of data processing processors 600 are connected in series.
  • the first-stage data processing processor 600 in the series connection is connected to the input / output interface unit 620 and receives data from the input / output interface unit 620.
  • the final-stage data processing processor 600 in the series connection is connected to the input / output interface unit 620 and outputs data to the input / output interface unit 620.
  • the data processing processor 600 has a function of outputting the value associated with the pattern whose information distance to the input is the smallest.
  • the input / output interface unit 620 is an interface for connecting to an external output device 630, input device 640, storage device 650, or the like to transmit / receive data.
  • FIG. 9 shows one route from the input / output interface unit 620 to the input / output interface unit 620 via the plurality of data processing processors 600, but a plurality of routes may be provided in parallel. Further, a branching route or a merging route may be provided for a plurality of routes.
  • each of the plurality of data processing processors 600 may be configured to include an input processing unit 602, a plurality of (for example, m) inner product units 604-1 to 604-m, a comparator 606, a selector 608, and an output processing unit 610.
  • Each of the plurality of data processing processors 600 has a function of receiving data, performing predetermined processing on input data, and outputting the processed data.
  • the inner product units 604-1 to 604-m, the comparator 606, and the selector 608 may be configured by logic gate circuits or the like.
  • the patterns (PAT1 to PATm) and values (Value1 to Valuem) of the learning data can be stored in registers.
  • the patterns (PAT1 to PATm) and values (Value1 to Valuem) of the learning data can be stored in the storage device 650 in advance. In this case, when the information processing device 1000 is started, it can be read from the storage device 650 through the input / output interface unit 620 and set in each data processing processor 600.
  • the data output from the input / output interface unit 620 is input to the input processing unit 602 of the first stage data processing processor 600.
  • the data input to the input processing unit 602 is the above-mentioned situation information data and conceptual information data.
  • the input processing unit 602 outputs the input data in parallel to each of the plurality of inner product units 604 corresponding to the plurality of models constituting the learning model. For example, when the learning model includes m models, the input data is input in parallel to the m inner product units 604-1 to 604-m.
  • each of the inner product units 604-1 to 604-m performs an inner product calculation of the pattern of the input data and the pattern of the learning model.
  • the pattern of the input data is composed of data as shown in FIG. 11A.
  • the pattern of the learning model corresponding to the inner product unit 604 1 is composed of data as shown in FIG. 11B.
  • the value as shown in FIG. 11C is associated with the pattern of the learning model of FIG. 11B.
  • the sum of the element-wise multiplications is calculated; that is, the values of cells at the same position in the two patterns are multiplied and the products are summed.
  • the calculation results of the inner product units 604 1 to 604 m are input to the comparator 606.
  • the inner product calculation may be performed in a plurality of passes while exchanging the learning-data patterns supplied to each inner product unit 604.
  • the comparator 606 compares the output values of the inner product units 604 1 to 604 m and outputs to the selector 608 the learning model number (1 to m) of the model whose inner product value with respect to the input data is the largest.
  • the selector 608 selects, from the values associated with the patterns of the plurality of learning models, the value associated with the learning model corresponding to the number output from the comparator 606, and outputs that value to the output processing unit 610 (an illustrative software sketch of this selection step is given after these notes).
  • the output processing unit 610 outputs the selected value to the next-stage data processing processor 600.
  • data may be transmitted from the input processing unit 602 to the output processing unit 610. Further, when performing processing such as a state transition, the processing result by the data processing processor 600 may be returned from the output processing unit 610 to the input processing unit 602.
  • FIG. 12 is a schematic diagram showing a schematic configuration of an information processing apparatus according to the present embodiment.
  • the information processing apparatus 1000 has an input unit 100, a processing unit 200, and an output unit 300.
  • the input unit 100 has a function of generating first data representing a situation grasped from the information based on the received information.
  • the processing unit 200 has a function of generating second data representing a situation-related concept based on the first data received from the input unit 100.
  • the output unit 300 has a function of generating a word expressing the concept based on the second data received from the processing unit 200. Further, the output unit 300 is configured to output the generated words as new information to the input unit 100 when the generated words do not match the solution corresponding to the information.
  • an embodiment in which a part of the configuration of any of the embodiments is added to another embodiment, or in which a part of the configuration of an embodiment is replaced with a part of the configuration of another embodiment, is also an embodiment of the present invention.
  • a program that operates the configuration of an embodiment so as to realize the functions of the above-described embodiments may be recorded on a recording medium, the program recorded on the recording medium may be read out as code, and the program may be executed by a computer; such a computer-readable recording medium is also included in the scope of each embodiment.
  • not only the recording medium on which the above-mentioned program is recorded but also the program itself is included in each embodiment.
  • as the recording medium, for example, a floppy (registered trademark) disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a magnetic tape, a non-volatile memory card, or a ROM can be used.
  • not only the program that is recorded on the recording medium and executes the processing by itself, but also a program that operates on an OS and executes the processing in cooperation with other software or the functions of an expansion board, is included in the scope of each embodiment.
  • an information processing apparatus having: an input unit that generates, based on received information, first data representing a situation grasped from the information; a processing unit that generates, based on the first data, second data representing a concept related to the situation; and an output unit that generates and outputs, based on the second data, words expressing the concept, wherein the output unit is configured to output the generated words to the input unit as new information when the generated words do not match the solution corresponding to the information.
  • Appendix 2 The information processing apparatus according to Appendix 1, wherein the output unit is configured to further output an instruction regarding an action according to the concept represented by the second data.
  • the processing unit has a first processing unit that generates third data conceptualizing the situation represented by the first data, and a second processing unit that generates the second data by converting the third data into another concept.
  • in the information processing apparatus according to Appendix 3: the first processing unit includes a first learning model and a first identification unit; the first data is data that maps the relationship between a plurality of elements representing the situation and their element values; the first learning model includes a plurality of models each containing a pattern that maps the relationship between a plurality of elements representing a specific situation and their element values, and a value, associated with the pattern, that maps the relationship between a plurality of elements representing a concept corresponding to the specific situation and their element values; and the first identification unit selects, as the third data, the value associated with the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model.
  • in the information processing apparatus according to Appendix 3 or 4: the second processing unit includes a second learning model and a second identification unit; the third data is data that maps the relationship between a plurality of elements representing the concept and their element values; the second learning model includes a plurality of models each containing a pattern that maps the relationship between a plurality of elements representing a specific concept and their element values, and a value, associated with the pattern, that maps the relationship between a plurality of elements representing another concept assumed from the specific concept and their element values; and the second identification unit selects, as the second data, the value associated with the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model.
  • when a model corresponding to at least a part of the concept represented by the third data is not included in the second learning model, the second processing unit adds to the second learning model a model corresponding to the at least a part of the elements.
  • in the information processing apparatus according to any one of Appendices 1 to 6, the processing unit is composed of a plurality of data processing processors connected in series, and each of the plurality of data processing processors is configured to output the value associated with the pattern whose information distance to the input is the shortest.
  • in the information processing method according to Appendix 8 or 9, the second step includes a step of generating third data conceptualizing the situation represented by the first data, and a step of converting the third data into another concept to generate the second data.
  • in the information processing method according to Appendix 10: the first data is data that maps the relationship between a plurality of elements representing the situation and their element values; the first learning model includes a plurality of models each containing a pattern that maps the relationship between a plurality of elements representing a specific situation and their element values, and a value, associated with the pattern, that maps the relationship between a plurality of elements representing a concept corresponding to the specific situation and their element values; and the value associated with the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model is selected as the third data.
  • the third data is data that maps the relationship between a plurality of elements representing the concept and their element values; the second learning model includes a plurality of models each containing a pattern that maps the relationship between a plurality of elements representing a specific concept and their element values, and a value, associated with the pattern, that maps the relationship between a plurality of elements representing another concept assumed from the specific concept and their element values; and the value associated with the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model is selected as the second data.
  • (Appendix 14) A program that causes a computer to function as means for generating, based on received information, first data representing a situation grasped from the information.
  • (Appendix 15) A computer-readable recording medium on which the program according to Appendix 14 is recorded.
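For reference, the following is a minimal Python sketch of the data processing processor behavior summarized in the items above: an input pattern is compared with the stored model patterns by inner product units 604, a comparator 606 picks the best-matching model, a selector 608 emits the associated value, and data flows through series-connected data processing processors 600. It is an illustrative sketch only; the document describes a hardware implementation with registers and logic gate circuits, and the function and variable names used here are assumptions introduced for this sketch.

from typing import List, Tuple

Bitmap = List[int]  # flattened bitmap pattern, each cell 0 or 1

def inner_product(data: Bitmap, pattern: Bitmap) -> int:
    # inner product unit 604: multiply cells at the same position and sum the products
    return sum(d * p for d, p in zip(data, pattern))

def process(models: List[Tuple[Bitmap, Bitmap]], data: Bitmap) -> Bitmap:
    # comparator 606: find the model number with the largest inner product value;
    # selector 608: output the value associated with that model's pattern
    best = max(range(len(models)), key=lambda i: inner_product(data, models[i][0]))
    return models[best][1]

def data_flow_machine(stages: List[List[Tuple[Bitmap, Bitmap]]], data: Bitmap) -> Bitmap:
    # data received from the input / output interface unit 620 flows through the
    # series-connected data processing processors 600 and is returned to the interface
    for models in stages:
        data = process(models, data)
    return data

For example, data_flow_machine([[([1, 0, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 1, 0, 0, 1, 1])]], [1, 0, 0, 0, 1, 0, 0, 0, 1]) returns [1, 1, 0, 0, 1, 0, 0, 1, 1], the value of the single stored model.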

Abstract

This information processing device comprises: an input unit that, on the basis of received information, generates first data representing a situation that is grasped from the information; a processing unit that, on the basis of the first data, generates second data representing a concept related to the situation; and an output unit that, on the basis of the second data, generates and outputs language representing the concept. The output unit is configured such that, when the generated language does not match a solution in response to the information, the generated language is output to the input unit as new information.

Description

Information processing device, information processing method, program, and recording medium

 The present invention relates to an information processing apparatus, an information processing method, a program and a recording medium.

 In recent years, deep learning using a multi-layer neural network has been attracting attention as a machine learning method. Deep learning uses a large amount of training data to make a multi-layer neural network learn features.

 Patent Documents 1 to 3 disclose neural network processing apparatuses that make it possible to construct a neural network with a small amount of labor and arithmetic processing by defining a large-scale neural network as a combination of a plurality of subnetworks. Further, Patent Document 4 discloses a structure optimizing device that optimizes a neural network.

Japanese Unexamined Patent Publication No. 2001-051968; Japanese Patent Application Laid-Open No. 2002-251601; Japanese Patent Application Laid-Open No. 2003-317573; Japanese Unexamined Patent Publication No. 09-091263

 There is a demand for an information processing apparatus that aims to recognize external information and obtain a solution on its own. Application of deep learning technology to such an information processing apparatus has been considered. However, with conventional artificial information processing technology using deep learning, it is difficult to conceptualize information or to impart creativity, and the realization of an information processing technology capable of processing information with an algorithm closer to human thinking has been sought.

 An object of the present invention is to provide an information processing device, an information processing method, a program, and a recording medium capable of processing information with an algorithm closer to human thinking.
 According to one aspect of the present invention, there is provided an information processing apparatus having: an input unit that generates, based on received information, first data representing a situation grasped from the information; a processing unit that generates, based on the first data, second data representing a concept related to the situation; and an output unit that generates and outputs, based on the second data, words expressing the concept, wherein the output unit is configured to output the generated words to the input unit as new information when the generated words do not match the solution corresponding to the information.

 According to another aspect of the present invention, there is provided an information processing method having: a first step of generating, based on received information, first data representing a situation grasped from the information; a second step of generating, based on the first data, second data representing a concept related to the situation; and a third step of generating and outputting, based on the second data, words expressing the concept, wherein, when the words generated in the third step do not match the solution corresponding to the information, the generated words are used as new information and the method is repeated from the first step.

 According to still another aspect of the present invention, there is provided a program that causes a computer to function as: means for generating, based on received information, first data representing a situation grasped from the information; means for generating, based on the first data, second data representing a concept related to the situation; and means for generating, based on the second data, words expressing the concept and, when the generated words do not match the solution corresponding to the information, outputting the generated words as new information to the means for generating the first data.

 According to the present invention, it is possible to realize an information processing apparatus and an information processing method capable of processing information with an algorithm closer to human thinking.
FIG. 1 is a schematic diagram showing a configuration example of an information processing apparatus according to the first embodiment of the present invention.
FIG. 2 is a schematic diagram showing a configuration example of a data processing module in the information processing apparatus according to the first embodiment of the present invention.
FIG. 3 is a diagram illustrating the operation of the data processing module in the information processing apparatus according to the first embodiment of the present invention.
FIG. 4 is a flowchart showing an information processing method using the information processing apparatus according to the first embodiment of the present invention.
FIG. 5 is a sequence diagram showing a first application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
FIG. 6 is a diagram showing preconditions in a second application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
FIG. 7 is a sequence diagram showing the second application example of information processing using the information processing apparatus according to the first embodiment of the present invention.
FIG. 8 is a schematic diagram (No. 1) showing a hardware configuration example of the information processing apparatus according to the first embodiment of the present invention.
FIG. 9 is a schematic diagram (No. 2) showing a hardware configuration example of the information processing apparatus according to the first embodiment of the present invention.
FIG. 10 is a diagram illustrating a data processing processor in the information processing apparatus according to the first embodiment of the present invention.
FIG. 11A is a diagram (No. 1) illustrating the data processing processor in the information processing apparatus according to the first embodiment of the present invention.
FIG. 11B is a diagram (No. 2) illustrating the data processing processor in the information processing apparatus according to the first embodiment of the present invention.
FIG. 11C is a diagram (No. 3) illustrating the data processing processor in the information processing apparatus according to the first embodiment of the present invention.
FIG. 12 is a schematic diagram showing a configuration example of an information processing apparatus according to the second embodiment of the present invention.
[First Embodiment]
 The information processing apparatus according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 3. FIG. 1 is a schematic diagram showing a configuration example of the information processing apparatus according to the present embodiment. FIG. 2 is a schematic diagram showing a configuration example of the data processing module in the information processing apparatus according to the present embodiment. FIG. 3 is a diagram illustrating the operation of the data processing module in the information processing apparatus according to the present embodiment.

 As shown in FIG. 1, the information processing apparatus 1000 according to the present embodiment may be composed of an input unit 100, a processing unit 200, and an output unit 300.
 The input unit 100 has a function of generating situation information data (first data) based on the received information. The situation information data is data representing, in a predetermined format, the situation grasped from the received information. The format of the situation information data is not particularly limited; for example, bitmap-format data or vector-representation data can be used.

 The information received by the input unit 100 includes information received from outside the information processing apparatus 1000 and information output by the output unit 300. The information the input unit 100 receives from the outside is not particularly limited, and examples include auditory information (words, sounds, etc.), visual information (images, etc.), and tactile information (bodily sensation, temperature, texture, etc.). For example, when voice input or text input is given to the input unit 100 as information from the outside, the input unit 100 can convert the input data into situation information data by using a known word identification AI system or the like. Alternatively, when an image is input to the input unit 100 as information from the outside, the input unit 100 can convert the input data into situation information data using a known image identification AI system or the like. The information the input unit 100 receives from the output unit 300 is information on the "words" (thoughts) generated by the output unit 300.

 The processing unit 200 has a function of performing predetermined concept processing on the situation information data received from the input unit 100, generating conceptual information data (second data) representing a concept related to the situation represented by the situation information data, and outputting the generated conceptual information data to the output unit 300. The processing unit 200 may be configured to include a recognition processing unit 210 and a concept processing unit 220.

 The recognition processing unit 210 has a function of converting the situation information data received from the input unit 100 into conceptual information data (third data). The conceptual information data generated by the recognition processing unit 210 is data that conceptualizes the situation represented by the situation information data received from the input unit 100. The conceptual information data may be data in the same format as the situation information data received from the input unit 100. Data in the same format means data having the same data structure and size. However, the situation information data and the conceptual information data do not necessarily have to be in the same format, and may be in different formats as long as they at least partially overlap.

 The recognition processing unit 210 performs concept processing such as identification and classification on the situation information data received from the input unit 100 and generates conceptual information data. The recognition processing unit 210 may include a plurality of types of recognition modules depending on the type of situation information data. For example, the recognition processing unit 210 may include a word recognition module that processes situation information data received from a word identification AI system, an image recognition module that processes situation information data received from an image identification AI system, and the like. The classification performed by the recognition processing unit 210 may include, for example, classification of objects, requests, environments, and the like indicated in the situation information data.
 The concept processing unit 220 has a function of performing concept processing on the conceptual information data received from the recognition processing unit 210 and converting the received conceptual information data into conceptual information data of another concept (second data). The concept processing performed on the conceptual information data is processing that grasps the situation from the received conceptual information data, thinks about the grasped situation, and derives the optimum solution for the grasped situation.

 A proof assistant system such as the Coq system, for example, can be applied to the concept processing unit 220. When the concept processing unit 220 has no answer (solution), or the answer is incomplete, the output data is fed back to the input unit 100 via the output unit 300 described later, so that, so to speak, self-questioning and self-answering are repeated and a more appropriate answer can be obtained.

 The output unit 300 has a function of generating "words" (thoughts) based on the solution (conceptual information data) output from the processing unit 200. The "words" are not particularly limited; one example is a sentence, including at least a subject and a predicate, that expresses an idea corresponding to the conceptual information data output from the concept processing unit 220. The "words" generated by the output unit 300 can be output as text information or voice information, and can also be output to the input unit 100. The method of converting a concept into "words" is not particularly limited. A simple method is, for example, to generate a sentence by inserting words extracted from the concept into a fixed-frame (template) sentence.
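As a concrete illustration of the simple template-based method mentioned above, the following Python sketch inserts words extracted from a concept into a fixed-frame sentence. The dictionary-based concept representation and the template strings are assumptions made up for this example and are not prescribed by the embodiment.

def concept_to_words(concept: dict) -> str:
    # concept: e.g. {"kind": "situation", "elements": ["Mr. A", "license", "issue"]}
    templates = {
        "situation": "Consider {}.",  # phrase added to a situation concept
        "action": "Do {}.",           # phrase added to an action concept
        "result": "It is {}.",        # phrase added to a result
    }
    body = " ".join(concept["elements"])
    return templates.get(concept["kind"], "{}.").format(body)

For example, concept_to_words({"kind": "situation", "elements": ["Mr. A", "license", "issue"]}) returns "Consider Mr. A license issue.".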
 The output unit 300 may further have a function of generating an "action" based on the solution (conceptual information data) output from the processing unit 200. In this case, the output unit 300 can be configured, for example, to output an instruction regarding the generated "action". This instruction is not particularly limited, but may include, for example, a predetermined instruction for executing a task corresponding to the generated "action", such as execution of an application.

 The conceptual information data received by the output unit 300 may be the conceptual information data output by the recognition processing unit 210 or the conceptual information data output by the concept processing unit 220.

 Next, the basic data processing module constituting the recognition processing unit 210 and the concept processing unit 220 will be described with reference to FIG. 2. FIG. 2 is a diagram showing a schematic configuration of the data processing module in the information processing apparatus according to the present embodiment. Although an application example to the recognition processing unit 210 and the concept processing unit 220 is shown here, a similar data processing module can also be applied to the input unit 100 and the output unit 300. Further, one functional block may include a plurality of data processing modules 400. For example, one functional block can be configured to use different data processing modules 400 depending on the input data.

 As shown in FIG. 2, the data processing module 400 may include a data acquisition unit 410, a storage unit 420, an identification unit 430, and a data output unit 440.

 The data acquisition unit 410 has a function of acquiring situation information data and conceptual information data. In one example, the situation information data and the conceptual information data may be bitmap-format data of a predetermined size, as described above.

 The storage unit 420 stores a learning model including a plurality of models. Each of the plurality of models associates a pattern representing specific information or a specific concept with a value representing other information or another concept corresponding to that pattern. For example, each model may include a pattern that maps the relationship between a plurality of elements representing a specific situation or concept and their element values, and a value that maps the relationship between a plurality of elements representing other information or another concept corresponding to the specific situation or concept and their element values.

 The identification unit 430 has a function of comparing the pattern of the data acquired by the data acquisition unit 410 with the pattern of each of the plurality of models stored in the storage unit 420, and selecting the value associated with the model having the best-fitting pattern among the plurality of models. The pattern and value of each of the plurality of models may be data in the same format as the situation information data and the conceptual information data, or data in a different format, similarly to the relationship between the situation information data and the conceptual information data. The value associated with a pattern does not necessarily have to be bitmap-format data, and may be data including predetermined index information. In this case, the index information can be used to acquire the data corresponding to the index information from the outside.

 The identification unit 430 may have a learning function for the learning model. For example, when the learning model contains no model whose goodness of fit is equal to or higher than a predetermined threshold, a model corresponding to the data acquired by the data acquisition unit 410 can be added to the learning model and the learning model can be updated. For the construction and updating of the learning model, for example, the technique described in Japanese Patent Application No. 2019-144121 by the same applicant can be applied.

 The data output unit 440 has a function of outputting the value (bitmap-format data) selected by the identification unit 430 as output data. The data output unit 440 typically outputs the selected value to the next-stage processing unit, but it may also be configured to return the selected value to the data acquisition unit 410 so that recursive processing is performed within the data processing module 400.
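The following is a minimal Python sketch of the data processing module 400 described above, covering identification by goodness of fit and the threshold-based learning function. The goodness-of-fit measure shown here is the normalized inner product described with reference to FIG. 3 below, and the threshold value and method names are assumptions introduced for this sketch.

from typing import List, Optional, Tuple

Bitmap = List[int]  # bitmap pattern flattened to a list of 0/1 cell values

def goodness_of_fit(data: Bitmap, pattern: Bitmap) -> float:
    # normalized inner product: sum of cell-wise products divided by the
    # number of 1-cells in the input data (see the description of FIG. 3)
    ones = sum(data)
    return sum(d * p for d, p in zip(data, pattern)) / ones if ones else 0.0

class DataProcessingModule:
    def __init__(self, models: List[Tuple[Bitmap, Bitmap]], threshold: float = 0.5):
        self.models = models        # storage unit 420: (pattern, value) pairs
        self.threshold = threshold  # assumed fit threshold for the learning function

    def identify(self, data: Bitmap) -> Optional[Bitmap]:
        # identification unit 430: select the value of the best-fitting pattern
        fits = [goodness_of_fit(data, pattern) for pattern, _ in self.models]
        best = max(range(len(fits)), key=lambda i: fits[i])
        if fits[best] < self.threshold:
            return None             # no adequate model; the caller may trigger learning
        return self.models[best][1]

    def learn(self, pattern: Bitmap, value: Bitmap) -> None:
        # add a model corresponding to newly acquired data and update the learning model
        self.models.append((pattern, value))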
 Next, data processing in the data processing module 400 will be described concretely with reference to FIG. 3. FIG. 3 is a schematic diagram showing the operation of the data processing module 400 in the information processing apparatus according to the present embodiment.

 First, the data acquisition unit 410 acquires situation information data from the input unit 100. Alternatively, the data acquisition unit 410 acquires conceptual information data from the processing unit 200. The data acquisition unit 410 outputs the acquired data (input data) to the identification unit 430.

 The identification unit 430 compares the pattern of the data received from the data acquisition unit 410 with the pattern of each of the plurality of models of the learning model stored in the storage unit 420. Then, the identification unit 430 extracts, from the plurality of models included in the learning model, the model having the pattern with the highest goodness of fit to the pattern of the data received from the data acquisition unit 410. A model having a pattern with a high goodness of fit to the received data can be said to be a model whose information distance to the received data is short.

 The method of determining the goodness of fit between the input data (situation information data or conceptual information data) and the learning model is not particularly limited; one example is a method using the inner product value of the input data pattern and the learning model pattern.

 A method of determining the goodness of fit between the input data and the learning model using the inner product value of the input data pattern and the learning model pattern is described below. Here, for simplicity of explanation, each of the situation information data, the conceptual information data, the learning model patterns, and the learning model values is assumed to be a bitmap-format pattern including nine cells arranged in a 3 × 3 matrix. The value of each cell is 0 or 1.

 The inner product value of the input data pattern and a learning model pattern is calculated by multiplying the values of cells at the same coordinates and summing the products over all coordinates. For example, as shown in FIG. 3, suppose that the cells constituting the input data pattern have values A, B, C, D, E, F, G, H, I and that the cells constituting the learning model pattern to be compared have values 1, 0, 0, 0, 1, 0, 0, 0, 1. In this case, the inner product value of the input data pattern and the learning model pattern is A × 1 + B × 0 + C × 0 + D × 0 + E × 1 + F × 0 + G × 0 + H × 0 + I × 1. The inner product value calculated in this way is normalized by dividing it by the number of cells whose value is 1 among the cells included in the input data. The calculation and normalization of the inner product value for the input data are performed for each of the plurality of models included in the learning model.

 Next, the model with the largest normalized inner product value is extracted from the plurality of models of the learning model. The normalized inner product value represents a likelihood, and the larger the value, the higher the goodness of fit to the input data. Therefore, by extracting the model with the largest normalized inner product value, the model with the highest goodness of fit to the input data can be extracted. Here, as shown in FIG. 3, it is assumed that the model having the pattern whose cell values are 1, 0, 0, 0, 1, 0, 0, 0, 1 is extracted as the model with the highest goodness of fit to the input data.

 Next, the identification unit 430 outputs the value associated with the extracted model to the data output unit 440 as other conceptual information data converted from the input data. For example, suppose that the extracted model includes, as shown in FIG. 3, a pattern whose cell values are 1, 0, 0, 0, 1, 0, 0, 0, 1 and a value whose cell values are 1, 1, 0, 0, 1, 0, 0, 1, 1. In this case, the value whose cell values are 1, 1, 0, 0, 1, 0, 0, 1, 1 becomes the output data output to the data output unit 440.

 Next, the data output unit 440 outputs the data acquired from the identification unit 430 to the next-stage processing unit.

 With this configuration, input data whose cell values are A, B, C, D, E, F, G, H, I can be converted into output data whose cell values are 1, 1, 0, 0, 1, 0, 0, 1, 1.
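As a concrete numerical illustration (with a made-up input pattern, not one taken from the drawings): if the input data pattern has cell values 1, 1, 0, 0, 1, 0, 0, 0, 1, its inner product with the model pattern 1, 0, 0, 0, 1, 0, 0, 0, 1 is 1 + 0 + 0 + 0 + 1 + 0 + 0 + 0 + 1 = 3. The input data contains four cells whose value is 1, so the normalized inner product value is 3 / 4 = 0.75. If no other model yields a larger normalized value, this model is extracted and its associated value becomes the output data.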
 By configuring the recognition processing unit 210 with such a data processing module 400, the situation information data received from the input unit 100 can be converted into conceptual information data. Further, by configuring the concept processing unit 220 with such a data processing module 400, the conceptual information data received from the recognition processing unit 210 can be converted into conceptual information data of another concept.

 Next, an information processing method using the information processing apparatus 1000 according to the present embodiment will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the information processing method according to the present embodiment.

 First, the input unit 100 acquires information from the outside (step S101). The information acquired by the input unit 100 is not particularly limited, and examples include auditory information (words, sounds, etc.), visual information (images, etc.), and tactile information (bodily sensation, temperature, texture, etc.).

 The input unit 100 converts the information received from the outside into data in a predetermined format and generates situation information data (step S102). For example, when voice input or text input is given to the input unit 100 as information from the outside, the input unit 100 converts the input data into situation information data using a known word identification AI system or the like. Alternatively, when an image is input to the input unit 100 as information from the outside, the input unit 100 converts the input data into situation information data using a known image identification AI system or the like.

 Next, the input unit 100 outputs the generated situation information data to the recognition processing unit 210 of the processing unit 200.

 Next, the recognition processing unit 210 converts the situation information data received from the input unit into conceptual information data, for example using the data processing module 400 described above, and outputs it to the concept processing unit 220 (step S103).

 Next, the concept processing unit 220 converts the conceptual information data received from the recognition processing unit 210 into conceptual information data representing a concept different from the received conceptual information data, for example using the data processing module 400 described above (step S104).

 Next, the concept processing unit 220 outputs the generated conceptual information data to the output unit 300.

 Next, the output unit 300 generates "words" and an "action" based on the conceptual information data received from the processing unit 200 (step S105).

 Next, the output unit 300 determines whether or not the generated "words" are sufficient as an answer (solution) (step S106).

 As a result of the determination, when the generated "words" are insufficient (incomplete) as a solution ("No" in step S106), the process proceeds to step S102, the generated "words" are fed back to the input unit 100, and the processing from step S102 to step S106 is repeated. That is, self-questioning and self-answering are repeated using the generated "words". The "words" generated by the output unit 300 represent, so to speak, a result based on past experience. By processing the "words" generated by the output unit 300 again, it becomes possible to make a judgment based on past experience.

 On the other hand, when the generated "words" are sufficient as a solution ("Yes" in step S106), the output unit 300 outputs the generated "words" and/or "action", and the series of processing ends (step S107). The output of the "words" is not particularly limited; for example, the generated "words" can be output as text information or voice information. The output of the "action" is not particularly limited; for example, a predetermined instruction for executing a task corresponding to the generated "action" can be output.

 In this way, all the processing in the information processing apparatus according to the present embodiment is conversion from data to data, that is, data-driven processing.

 One hypothesis that explains human thinking is that the generation of words is thinking itself. This hypothesis explains that conscious activity is performed by repeating self-questioning and self-answering with the generated words as inner speech. Feeding back the "words" generated by the output unit 300 to the input unit 100 can be said to correspond to the self-questioning and self-answering in this hypothesis. This enables information processing by an algorithm closer to human thinking.
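The flow of FIG. 4 can be summarized in the following Python-style skeleton. The unit objects and method names are placeholders standing in for the input unit 100, the recognition processing unit 210, the concept processing unit 220, and the output unit 300, and the iteration limit is an assumption added here so that the sketch always terminates.

def run(information, input_unit, recognition_unit, concept_unit, output_unit, max_iters=10):
    # S101: information acquired from the outside is passed in as `information`
    for _ in range(max_iters):
        situation = input_unit.to_situation_data(information)   # S102
        concept = recognition_unit.convert(situation)           # S103
        other_concept = concept_unit.convert(concept)           # S104
        words, action = output_unit.generate(other_concept)     # S105
        if output_unit.is_sufficient(words):                    # S106: answer found
            return output_unit.emit(words, action)              # S107
        information = words  # otherwise feed the generated words back as new information
    return None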
 Next, application examples of information processing using the information processing apparatus 1000 according to the present embodiment will be described with reference to FIGS. 5 to 7. Two application examples are given here to explain the information processing in the present embodiment more concretely.

 The first application example is an application to a license issuing business process whose purpose is to acquire a license number in response to a license request from a user and report the acquired license number to the user. It is assumed that the information processing apparatus 1000 has, as knowledge, the fact that a license number is acquired by operating a numbering ledger.

 FIG. 5 is a sequence diagram showing the information processing method in the first application example.

 It is assumed that an instruction requesting the issuance of a license has been input from a user (Mr. A) to the input unit 100. The input unit 100 converts the input information into situation information data and outputs it to the recognition processing unit 210.

 Next, the recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that the input information has been converted into conceptual information data including a concept (concept A) indicating the situation "Mr. A", "license", "issue". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.

 Next, the output unit 300 generates "words" based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation "Mr. A", "license", "issue", a phrase such as "consider" is added to generate "words". As a result, for example, the "words" "Consider issuing a license for Mr. A." are generated. The output unit 300 outputs the generated "words" to the input unit 100.

 Next, the input unit 100 converts the input "words" into situation information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that, in response to the phrase "consider", the data has been converted into conceptual information data including a concept (concept B) indicating the task "Mr. A", "license", "issue". The recognition processing unit 210 outputs the converted conceptual information data to the concept processing unit 220.

 Next, the concept processing unit 220 converts the input conceptual information data into other conceptual information data. Here, it is assumed that the data has been converted into conceptual information data including a concept (concept C) indicating an action recalled from the task "Mr. A", "license", "issue". For example, the method "numbering ledger operation" is recalled from the information "license" and "issue", and the input conceptual information data is converted into conceptual information data including a concept indicating the action "Mr. A", "license", "numbering ledger operation". The concept processing unit 220 outputs the converted conceptual information data to the output unit 300.

 Next, the output unit 300 generates "words" based on the conceptual information data received from the concept processing unit 220. For example, for the concept C indicating the action "Mr. A", "license", "numbering ledger operation", a phrase such as "do" is added to generate "words". As a result, for example, the "words" "Operate the numbering ledger for Mr. A's license." are generated. The output unit 300 outputs the generated "words" to the input unit 100.

 Next, the input unit 100 converts the input "words" into situation information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that, in response to the phrase "do", the data has been converted into conceptual information data including a concept (concept D) indicating the work for "Mr. A", "license", "numbering ledger operation". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.

 Next, the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210. Here, as the work on the information "license" and "numbering ledger operation", it outputs, for example, an instruction to execute a schedule application for issuing a license number based on the information "license" and "numbering ledger operation" indicated in the conceptual information data. The schedule application outputs the license number obtained by its execution to the input unit 100.

 Next, the input unit 100 outputs the license number received from the schedule application to the recognition processing unit 210, either as it is or after converting it into situation information data. The recognition processing unit 210 identifies that the received information is the result of executing the schedule application and outputs the received information to the output unit 300.

 Next, the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, a phrase such as "It is" is added to the license number "123-456-7890" output by the schedule application to generate words. As a result, for example, the "words" "It is 123-456-7890." are generated.

 Next, the output unit 300 reports the generated "words" to the user, for example by displaying them on a display device or outputting them as voice, and the series of processing ends.

 The second application example is an application to schedule adjustment processing whose purpose is to compare the schedules of two users (Mr. A and Mr. B) over the five days from Monday to Friday, adjust the free time of both, and report the result. Here, it is assumed that the schedules of Mr. A and Mr. B from Monday to Friday are as shown in FIG. 6. In the table of FIG. 6, the "morning" and "afternoon" time slots are working hours, and the "lunch break" is outside working hours. "0" indicates a free time slot with no appointment, and "1", "2", and "3" indicate time slots with appointments. The values "1", "2", and "3" indicate the priority of the appointment: "1" is an appointment requiring attendance, "2" is an appointment for which attendance is preferable, and "3" is an appointment that may be missed. The information processing apparatus 1000 has, as knowledge, the ability to adjust schedules within working hours, but in the initial state it has no knowledge of the "lunch break" or the "priority".
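The schedule table of FIG. 6 can be pictured, for example, with the following Python sketch, which encodes each time slot with the priority values described above and searches for a working-hours slot in which both users are free. The data layout and function name are assumptions made for this illustration.

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
SLOTS = ["morning", "lunch break", "afternoon"]
WORKING_HOURS = {"morning", "afternoon"}  # the lunch break is outside working hours

def common_free_slots(schedule_a: dict, schedule_b: dict) -> list:
    # schedule_x[day][slot] holds 0 (free) or a priority value 1 to 3 (booked)
    return [(day, slot)
            for day in DAYS
            for slot in SLOTS
            if slot in WORKING_HOURS
            and schedule_a[day][slot] == 0
            and schedule_b[day][slot] == 0]

With the schedules shown in FIG. 6, this search over working hours returns an empty list, which corresponds to the "no free time" result described below.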
 図7は、第2の適用例における情報処理方法を示すシーケンス図である。 FIG. 7 is a sequence diagram showing an information processing method in the second application example.
 入力部100に対し、ユーザ(Aさん)からBさんとのスケジュール調整を要求する指示が入力されたものとする。入力部100は、入力された情報を状況情報データに変換し、認識処理部210へと出力する。 It is assumed that the user (Mr. A) has input to the input unit 100 an instruction requesting schedule adjustment with Mr. B. The input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
 次いで、認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、入力された情報が、「Aさん」「Bさん」「スケジュール調整」という状況を示す概念(概念A)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを、出力部300へと出力する。 Next, the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, it is assumed that the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A", "Mr. B", and "schedule adjustment". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「スケジュール調整」という状況を示す概念Aに対しては、「を考える」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさんスケジュール調整を考える。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the conceptual information data received from the recognition processing unit 210. For example, a word such as "think" is added to the concept A indicating the situation of "Mr. A", "Mr. B", and "schedule adjustment" to generate "words". As a result, for example, the "word" "Consider adjusting the schedule of Mr. A and Mr. B." is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「を考える」の語に応じて、「Aさん」「Bさん」「スケジュール調整」の課題を示す概念(概念B)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを概念処理部220へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that the data is converted into conceptual information data including the concept (concept B) indicating the tasks of "Mr. A", "Mr. B", and "schedule adjustment" according to the word "thinking". The recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
 次いで、概念処理部220は、入力された概念情報データを別の概念情報データへと変換する。ここでは、「Aさん」「Bさん」「スケジュール調整」の課題から想起される行動を示す概念(概念C)を含む概念情報データに変換されたものとする。例えば、「スケジュール調整」という情報から「就業中」「スケジュールアプリで検索」という手法を想起し、入力された概念情報データを、「Aさん」「Bさん」「就業中」「スケジュールアプリで検索」という行動を示す概念を含む概念情報データに変換する。概念処理部220は、変換した概念情報データを、出力部300へと出力する。 Next, the concept processing unit 220 converts the input conceptual information data into other conceptual information data. Here, it is assumed that the data is converted into conceptual information data including a concept (concept C) indicating actions recalled from the tasks of "Mr. A", "Mr. B", and "schedule adjustment". For example, the approach of "working" and "search with the schedule application" is recalled from the information "schedule adjustment", and the input conceptual information data is converted into conceptual information data including concepts indicating the actions "Mr. A", "Mr. B", "working", and "search with the schedule application". The concept processing unit 220 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、概念処理部220から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「就業中」「スケジュールアプリで検索」という行動を示す概念Cに対しては、「をする」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん就業中スケジュールアプリで検索をする。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the concept information data received from the concept processing unit 220. For example, for the concept C indicating the behaviors of "Mr. A", "Mr. B", "working", and "searching with the schedule application", words such as "do" are added to generate "words". As a result, for example, the "word" "Search with Mr. A and Mr. B's working schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「をする」の語に応じて、「Aさん」「Bさん」「就業中」「スケジュールアプリで検索」に対する作業を示す概念(概念D)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを出力部300へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "do", it is assumed that the data is converted into conceptual information data including a concept (concept D) indicating the work for "Mr. A", "Mr. B", "working", and "search with the schedule application". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「行動」を生成する。ここでは、概念情報データに示される情報に対する作業として、AさんとBさんの就業中のスケジュール調整をするためのスケジュールアプリを実行する指示を出力する。スケジュールアプリは、実行することにより得られた結果を、入力部100へと出力する。なお、図6の例ではAさんとBさんの就業中のスケジュールに共通の空き時間は存在しないため、「空きなし」という結果が出力されたものとする。 Next, the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210. Here, as work on the information shown in the conceptual information data, an instruction to execute a schedule application for adjusting the schedule during work of Mr. A and Mr. B is output. The schedule application outputs the result obtained by execution to the input unit 100. In the example of FIG. 6, since there is no common free time in the working schedules of Mr. A and Mr. B, it is assumed that the result of "no free time" is output.
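 The "schedule application" invoked here is not specified further in the text. The following Python sketch only illustrates, under that assumption, the kind of search it is taken to perform: finding time slots that are free (value 0) for both users within working hours. The function name and arguments are hypothetical.

def common_free_slots(sched_a, sched_b, slots=("am", "pm")):
    # Return (day, slot) pairs that are free (value 0) for both users.
    # By default only the working-hour slots are searched, matching the
    # initial knowledge of the information processing apparatus 1000.
    hits = []
    for day, slots_a in sched_a.items():
        for slot in slots:
            if slots_a[slot] == 0 and sched_b[day][slot] == 0:
                hits.append((day, slot))
    return hits

In the schedules of FIG. 6 no working-hour slot is free for both users, so such a search would return an empty list, corresponding to the "no free time" result above; the later lunch-break and priority-3 searches would amount to calling the same routine with slots=("lunch",) or with the free-slot test relaxed to accept the value 3 as well.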
 次いで、入力部100は、スケジュールアプリから受け取った結果をそのまま或いは状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、受け取った情報がスケジュールアプリの実行の結果であることを識別し、受け取った情報を出力部300へと出力する。 Next, the input unit 100 converts the result received from the schedule application as it is or into status information data, and outputs it to the recognition processing unit 210. The recognition processing unit 210 identifies that the received information is the result of the execution of the schedule application, and outputs the received information to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った情報に基づき「ことば」を生成する。例えば、スケジュールアプリが出力した結果「空きなし」に「です」等の語を付加しことばを生成する。これにより、例えば、「空きなしです。」という「ことば」が生成される。 Next, the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, as a result of output by the schedule application, a word such as "is" is added to "no space" to generate a word. As a result, for example, the "word" "There is no space" is generated.
 次いで、出力部300は、生成した「ことば」を、表示装置に表示し或いは音声出力する等によりユーザへと報告し、一連の処理を終了する。 Next, the output unit 300 reports the generated "words" to the user by displaying them on a display device, outputting them by voice, or the like, and ends a series of processes.
 次いで、報告を受け取ったユーザ(Aさん)から、入力部100に対し、Bさんとの昼休みのスケジュール調整を要求する指示が入力されたものとする。入力部100は、入力された情報を状況情報データに変換し、認識処理部210へと出力する。 Next, it is assumed that the user (Mr. A) who received the report has input an instruction to the input unit 100 to request the schedule adjustment of the lunch break with Mr. B. The input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
 次いで、認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、入力された情報が、「Aさん」「Bさん」「昼休み」「スケジュール調整」という状況を示す概念(概念A)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを、出力部300へと出力する。 Next, the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, it is assumed that the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A", "Mr. B", "lunch break", and "schedule adjustment". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「昼休み」「スケジュール調整」という状況を示す概念Aに対しては、「を考える」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん昼休みスケジュール調整を考える。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation of "Mr. A", "Mr. B", "lunch break", and "schedule adjustment", words such as "think" are added to generate "words". As a result, for example, the "word" "Mr. A and Mr. B consider adjusting the lunch break schedule" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「を考える」の語に応じて、「Aさん」「Bさん」「昼休み」「スケジュール調整」の課題を示す概念(概念B)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを概念処理部220へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the situation information data received from the input unit 100 into conceptual information data. Here, it is assumed that the data is converted into conceptual information data including the concept (concept B) indicating the tasks of "Mr. A", "Mr. B", "lunch break", and "schedule adjustment" according to the word "thinking". The recognition processing unit 210 outputs the converted concept information data to the concept processing unit 220.
 次いで、概念処理部220は、入力された概念情報データを別の概念情報データへと変換する。ここでは、「Aさん」「Bさん」「昼休み」「スケジュール調整」の課題から想起される行動を示す概念(概念C)を含む概念情報データに変換されたものとする。この段階において情報処理装置1000は「昼休み」を知識として備えていないが、「スケジュール調整」という情報から「昼休み」「スケジュールアプリで検索」という手法を想起する。そして、入力された概念情報データを、「Aさん」「Bさん」「昼休み」「スケジュールアプリで検索」という行動を示す概念を含む概念情報データに変換する。概念処理部220は、変換した概念情報データを、出力部300へと出力する。 Next, the concept processing unit 220 converts the input concept information data into another concept information data. Here, it is assumed that the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the tasks of "Mr. A", "Mr. B", "lunch break", and "schedule adjustment". At this stage, the information processing apparatus 1000 does not have "lunch break" as knowledge, but the information "schedule adjustment" recalls the methods of "lunch break" and "search with the schedule application". Then, the input conceptual information data is converted into conceptual information data including concepts indicating actions such as "Mr. A", "Mr. B", "lunch break", and "search with the schedule application". The concept processing unit 220 outputs the converted concept information data to the output unit 300.
 次いで、出力部300は、概念処理部220から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「昼休み」「スケジュールアプリで検索」という行動を示す概念Cに対しては、「をする」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん昼休みスケジュールアプリで検索をする。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the concept information data received from the concept processing unit 220. For example, to the concept C indicating the actions of "Mr. A", "Mr. B", "lunch break", and "search with the schedule application", words such as "do" are added to generate "words". As a result, for example, the "word" "Search with Mr. A and Mr. B's lunch break schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「をする」の語に応じて、「Aさん」「Bさん」「昼休み」「スケジュールアプリで検索」に対する作業を示す概念(概念D)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを出力部300へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "do", it is assumed that the data is converted into conceptual information data including a concept (concept D) indicating the work for "Mr. A", "Mr. B", "lunch break", and "search with the schedule application". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「行動」を生成する。ここでは、概念情報データに示される情報に対する作業として、AさんとBさんの昼休みのスケジュール調整をするためのスケジュールアプリを実行する指示を出力する。スケジュールアプリは、実行することにより得られた結果を、入力部100へと出力する。 Next, the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210. Here, as work on the information shown in the conceptual information data, an instruction to execute a schedule application for adjusting the schedule of lunch breaks of Mr. A and Mr. B is output. The schedule application outputs the result obtained by execution to the input unit 100.
 次いで、入力部100は、スケジュールアプリから受け取った結果をそのまま或いは状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、受け取った情報がスケジュールアプリの実行の結果であることを識別し、受け取った情報を出力部300へと出力する。なお、図6の例ではAさんとBさんの昼休みのスケジュールにおける共通の空き時間が火曜日、木曜日、金曜日に存在するため、「火、木、金の昼休み」という結果が出力されたものとする。 Next, the input unit 100 outputs the result received from the schedule application to the recognition processing unit 210, either as it is or after converting it into status information data. The recognition processing unit 210 identifies that the received information is the result of executing the schedule application, and outputs the received information to the output unit 300. In the example of FIG. 6, common free time in the lunch-break schedules of Mr. A and Mr. B exists on Tuesday, Thursday, and Friday, so it is assumed that the result "Tue, Thu, Fri lunch break" is output.
 次いで、出力部300は、認識処理部210から受け取った情報に基づき「ことば」を生成する。例えば、スケジュールアプリが出力した結果「火、木、金の昼休み」に「です」等の語を付加しことばを生成する。これにより、例えば、「火、木、金の昼休みです。」という「ことば」が生成される。 Next, the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, as a result of the schedule application output, a word such as "desu" is added to "Tue, Thu, Fri lunch break" to generate a word. This produces, for example, the "word" that says, "Tuesday, Thursday, and Friday lunch break."
 次いで、出力部300は、生成した「ことば」を、表示装置に表示し或いは音声出力する等によりユーザへと報告する。 Next, the output unit 300 reports the generated "words" to the user by displaying them on a display device or outputting them by voice.
 また、出力部300は、概念Dに対する「行動」の生成と並行して、認識処理部210から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「昼休み」「スケジュールアプリで検索」という作業を示す概念Dに対しては、「をした」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん昼休みスケジュールアプリで検索をした。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Further, the output unit 300 generates "words" based on the concept information data received from the recognition processing unit 210 in parallel with the generation of the "action" for the concept D. For example, to the concept D indicating the work of "Mr. A", "Mr. B", "lunch break", and "search with the schedule application", words such as "done" are added to generate "words". As a result, for example, a "word" such as "Mr. A and Mr. B searched with the lunch break schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「をした」の語に応じて「行動」の結果を参照し、「昼休み」を知識として備えていないのにかかわらず結果が得られたことに応じて、「昼休みを覚える」という学習を示す概念(概念E)を生成する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "done", the result of the "action" is referred to, and since a result was obtained even though "lunch break" is not held as knowledge, a concept (concept E) indicating the learning "remember lunch break" is generated.
 次いで、概念処理部220は、データ処理モジュール400の記憶部420に格納されている学習モデルの学習を行い、「昼休み」を知識として覚えさせる。こうして、一連の処理を終了する。 Next, the concept processing unit 220 learns the learning model stored in the storage unit 420 of the data processing module 400, and makes the "lunch break" memorized as knowledge. In this way, a series of processes is completed.
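 How the learning model in the storage unit 420 is updated is not detailed at this point. The following sketch merely illustrates, as an assumption consistent with the pattern/value structure of the learning models described elsewhere in this specification, the idea of registering a new pattern/value pair so that "lunch break" is available as knowledge afterwards; all names and the dictionary-based representation are hypothetical.

# Hypothetical sketch: a learning model as a list of (pattern, value) pairs.
learning_model = [
    # existing knowledge: schedule adjustment is performed within working hours
    ({"task": "schedule adjustment"},
     {"action": "search with schedule app", "scope": "working hours"}),
]

def learn(model, pattern, value):
    # Add a new model (pattern/value pair) if this pattern is not yet known.
    if all(existing != pattern for existing, _ in model):
        model.append((pattern, value))

# After the successful search, "lunch break" is memorized as knowledge.
learn(learning_model,
      {"task": "schedule adjustment", "scope": "lunch break"},
      {"action": "search with schedule app", "slots": ("lunch",)})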
 次いで、報告を受け取ったユーザ(Aさん)から更に、入力部100に対し、Bさんとの優先度3のスケジュール調整を要求する指示が入力されたものとする。入力部100は、入力された情報を状況情報データに変換し、認識処理部210へと出力する。 Next, it is assumed that the user (Mr. A) who received the report further inputs an instruction requesting the input unit 100 to adjust the schedule of priority 3 with Mr. B. The input unit 100 converts the input information into status information data and outputs it to the recognition processing unit 210.
 次いで、認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、入力された情報が、「Aさん」「Bさん」「優先度3」「スケジュール調整」という状況を示す概念(概念A)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを、出力部300へと出力する。 Next, the recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, it is assumed that the input information is converted into conceptual information data including the concept (concept A) indicating the situation of "Mr. A", "Mr. B", "Priority 3", and "Schedule adjustment". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「優先度3」「スケジュール調整」という状況を示す概念Aに対しては、「を考える」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん優先度3スケジュール調整を考える。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the conceptual information data received from the recognition processing unit 210. For example, for the concept A indicating the situation of "Mr. A", "Mr. B", "Priority 3", and "Schedule adjustment", words such as "think" are added to generate "words". As a result, for example, the "word" "Mr. A and Mr. B consider the priority 3 schedule adjustment" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「を考える」の語に応じて、「Aさん」「Bさん」「優先度3」「スケジュール調整」の課題を示す概念(概念B)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを概念処理部220へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "thinking", it is assumed that the data is converted into conceptual information data including a concept (concept B) indicating the tasks of "Mr. A", "Mr. B", "priority 3", and "schedule adjustment". The recognition processing unit 210 outputs the converted conceptual information data to the concept processing unit 220.
 次いで、概念処理部220は、入力された概念情報データを別の概念情報データへと変換する。ここでは、「Aさん」「Bさん」「優先度3」「スケジュール調整」の課題から想起される行動を示す概念(概念C)を含む概念情報データに変換されたものとする。この段階において情報処理装置1000は「優先度3」を知識として備えていないが、「スケジュール調整」という情報から「優先度3」「スケジュールアプリで検索」という手法を想起する。そして、入力された概念情報データを、「Aさん」「Bさん」「優先度3」「スケジュールアプリで検索」という行動を示す概念を含む概念情報データに変換する。概念処理部220は、変換した概念情報データを、出力部300へと出力する。 Next, the concept processing unit 220 converts the input concept information data into another concept information data. Here, it is assumed that the data is converted into conceptual information data including a concept (concept C) indicating an action recalled from the tasks of "Mr. A", "Mr. B", "Priority 3", and "Schedule adjustment". At this stage, the information processing apparatus 1000 does not have "priority 3" as knowledge, but the information "schedule adjustment" recalls the methods "priority 3" and "search by schedule application". Then, the input conceptual information data is converted into conceptual information data including concepts indicating actions such as "Mr. A", "Mr. B", "Priority 3", and "Search by schedule application". The concept processing unit 220 outputs the converted concept information data to the output unit 300.
 次いで、出力部300は、概念処理部220から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「優先度3」「スケジュールアプリで検索」という行動を示す概念Cに対しては、「をする」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん優先度3スケジュールアプリで検索をする。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Next, the output unit 300 generates "words" based on the concept information data received from the concept processing unit 220. For example, for the concept C indicating the actions of "Mr. A", "Mr. B", "Priority 3", and "Search by schedule application", words such as "do" are added to generate "words". As a result, for example, the "word" "Search with Mr. A and Mr. B priority 3 schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「をする」の語に応じて、「Aさん」「Bさん」「優先度3」「スケジュールアプリで検索」に対する作業を示す概念(概念D)を含む概念情報データに変換されたものとする。認識処理部210は、変換した概念情報データを出力部300へと出力する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "do", it is assumed that the data is converted into conceptual information data including a concept (concept D) indicating the work for "Mr. A", "Mr. B", "priority 3", and "search with the schedule application". The recognition processing unit 210 outputs the converted conceptual information data to the output unit 300.
 次いで、出力部300は、認識処理部210から受け取った概念情報データに基づき、「行動」を生成する。ここでは、概念情報データに示される情報に対する作業として、AさんとBさんの優先度3のスケジュール調整をするためのスケジュールアプリを実行する指示を出力する。スケジュールアプリは、実行することにより得られた結果を、入力部100へと出力する。 Next, the output unit 300 generates an "action" based on the conceptual information data received from the recognition processing unit 210. Here, as work on the information shown in the conceptual information data, an instruction to execute a schedule application for adjusting the schedule of priority 3 of Mr. A and Mr. B is output. The schedule application outputs the result obtained by execution to the input unit 100.
 次いで、入力部100は、スケジュールアプリから受け取った結果をそのまま或いは状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、受け取った情報がスケジュールアプリの実行の結果であることを識別し、受け取った情報を出力部300へと出力する。なお、図6の例ではAさんとBさんのスケジュールにおいて共通する空き時間又は優先度3の時間帯は月曜日の午後及び金曜日の午前に存在するため、「月曜午後、金曜午前の優先度3」という結果が出力されたものとする。 Next, the input unit 100 outputs the result received from the schedule application to the recognition processing unit 210, either as it is or after converting it into status information data. The recognition processing unit 210 identifies that the received information is the result of executing the schedule application, and outputs the received information to the output unit 300. In the example of FIG. 6, time zones that are free or have priority 3 in both Mr. A's and Mr. B's schedules exist on Monday afternoon and Friday morning, so it is assumed that the result "Monday afternoon, Friday morning, priority 3" is output.
 次いで、出力部300は、認識処理部210から受け取った情報に基づき「ことば」を生成する。例えば、スケジュールアプリが出力した結果「月曜午後、金曜午前の優先度3」に「です」等の語を付加しことばを生成する。これにより、例えば、「月曜午後、金曜午前の優先度3です。」という「ことば」が生成される。 Next, the output unit 300 generates "words" based on the information received from the recognition processing unit 210. For example, as a result output by the schedule application, a word such as "is" is added to "priority 3 on Monday afternoon and Friday morning" to generate a word. As a result, for example, the "word" "Monday afternoon, Friday morning has priority 3" is generated.
 次いで、出力部300は、生成した「ことば」を、表示装置に表示し或いは音声出力する等によりユーザへと報告する。 Next, the output unit 300 reports the generated "words" to the user by displaying them on a display device or outputting them by voice.
 また、出力部は、概念Dに対する「行動」の生成と並行して、認識処理部210から受け取った概念情報データに基づき、「ことば」を生成する。例えば、「Aさん」「Bさん」「優先度3」「スケジュールアプリで検索」という作業を示す概念Dに対しては、「をした」等の語を付加し、「ことば」を生成する。これにより、例えば、「AさんBさん優先度3スケジュールアプリで検索をした。」という「ことば」が生成される。出力部300は、生成した「ことば」を入力部100へと出力する。 Further, the output unit generates "words" based on the concept information data received from the recognition processing unit 210 in parallel with the generation of the "action" for the concept D. For example, to the concept D indicating the work of "Mr. A", "Mr. B", "Priority 3", and "Search by schedule application", words such as "done" are added to generate "words". As a result, for example, a "word" such as "Mr. A and Mr. B searched with the priority 3 schedule application" is generated. The output unit 300 outputs the generated "word" to the input unit 100.
 次いで、入力部100は、入力された「ことば」を状況情報データに変換し、認識処理部210へと出力する。認識処理部210は、入力部100から受け取った状況情報データを概念情報データに変換する。ここでは、「をした」の語に応じて「行動」の結果を参照し、「優先度3」を知識として備えていないのにかかわらず結果が得られたことに応じて、「優先度3を覚える」という学習を示す概念(概念E)を生成する。 Next, the input unit 100 converts the input "word" into status information data and outputs it to the recognition processing unit 210. The recognition processing unit 210 converts the status information data received from the input unit 100 into conceptual information data. Here, in response to the word "done", the result of the "action" is referred to, and since a result was obtained even though "priority 3" is not held as knowledge, a concept (concept E) indicating the learning "remember priority 3" is generated.
 次いで、概念処理部220は、データ処理モジュール400の記憶部420に格納されている学習モデルの学習を行い、「優先度3」を知識として覚えさせる。こうして、一連の処理を終了する。 Next, the concept processing unit 220 learns the learning model stored in the storage unit 420 of the data processing module 400, and makes the "priority 3" memorized as knowledge. In this way, a series of processes is completed.
 「昼休み」及び「優先度3」の学習が行われた後は、ユーザ(Aさん)からBさんとのスケジュール調整を要求する指示に対して、「空きなしです。火、木、金の昼休みです。月曜午後、金曜午前の優先度3です。」等の報告を同時に得ることも可能となる。 After the learning of "lunch break" and "priority 3" has been performed, it also becomes possible, in response to an instruction from the user (Mr. A) requesting schedule adjustment with Mr. B, to obtain at the same time a report such as "There is no space. Tue, Thu, Fri lunch break. Priority 3 on Monday afternoon and Friday morning."
 次に、本実施形態による情報処理装置1000のハードウェア構成例について、図8乃至図11Cを用いて説明する。図8及び図9は、本実施形態による情報処理装置のハードウェア構成例を示す概略図である。図10は、本実施形態による情報処理装置におけるデータ処理プロセッサを説明する図である。図11A、図11B及び図11Cは、入力データ及び学習モデルのパターン例を示す図である。 Next, a hardware configuration example of the information processing apparatus 1000 according to the present embodiment will be described with reference to FIGS. 8 to 11C. 8 and 9 are schematic views showing a hardware configuration example of the information processing apparatus according to the present embodiment. FIG. 10 is a diagram illustrating a data processing processor in the information processing apparatus according to the present embodiment. 11A, 11B and 11C are diagrams showing pattern examples of input data and a learning model.
 情報処理装置1000は、例えば図8に示すように、一般的な情報処理装置と同様のハードウェア構成によって実現することが可能である。例えば、情報処理装置1000は、CPU(Central Processing Unit)500、主記憶部502、通信部504、入出力インターフェース部506を備え得る。 The information processing device 1000 can be realized by a hardware configuration similar to that of a general information processing device, as shown in FIG. 8, for example. For example, the information processing apparatus 1000 may include a CPU (Central Processing Unit) 500, a main storage unit 502, a communication unit 504, and an input / output interface unit 506.
 CPU500は、情報処理装置1000の全体的な制御や演算処理を司る制御・演算装置である。主記憶部502は、データの作業領域やデータの一時退避領域に用いられる記憶部であり、RAM(Random Access Memory)等のメモリにより構成され得る。通信部504は、ネットワークを介してデータの送受信を行うためのインターフェースである。入出力インターフェース部506は、外部の出力装置510、入力装置512、記憶装置514等と接続してデータの送受信を行うためのインターフェースである。CPU500、主記憶部502、通信部504及び入出力インターフェース部506は、システムバス508によって相互に接続されている。記憶装置514は、例えばROM(Read Only Memory)、磁気ディスク、半導体メモリ等の不揮発性メモリから構成されるハードディスク装置等によって構成され得る。 The CPU 500 is a control / arithmetic unit that controls the overall control and arithmetic processing of the information processing apparatus 1000. The main storage unit 502 is a storage unit used for a data work area or a data temporary save area, and may be configured by a memory such as a RAM (Random Access Memory). The communication unit 504 is an interface for transmitting and receiving data via a network. The input / output interface unit 506 is an interface for connecting to an external output device 510, an input device 512, a storage device 514, and the like to transmit / receive data. The CPU 500, the main storage unit 502, the communication unit 504, and the input / output interface unit 506 are connected to each other by the system bus 508. The storage device 514 may be configured by, for example, a hard disk device composed of a non-volatile memory such as a ROM (Read Only Memory), a magnetic disk, or a semiconductor memory.
 主記憶部502は、データ処理モジュール400等における演算を実行するための作業領域として用いることができる。CPU500は、主記憶部502における演算処理を制御する制御部として機能し得る。記憶装置514は、記憶部420として利用可能であり、学習済みの学習モデルを保存することができる。 The main storage unit 502 can be used as a work area for executing an operation in the data processing module 400 or the like. The CPU 500 can function as a control unit that controls arithmetic processing in the main storage unit 502. The storage device 514 can be used as a storage unit 420 and can store a trained learning model.
 通信部504は、イーサネット(登録商標)、Wi-Fi(登録商標)等の規格に基づく通信インターフェースであり、他の装置との通信を行うためのモジュールである。記憶装置514に格納される学習モデルは、通信部504を介して他の装置から受信するように構成されていてもよい。例えば、頻繁に使用する学習モデルは記憶装置514に記憶しておき、使用頻度の低い学習セル情報は他の装置から読み込むように構成することができる The communication unit 504 is a communication interface based on standards such as Ethernet (registered trademark) and Wi-Fi (registered trademark), and is a module for communicating with other devices. The learning model stored in the storage device 514 may be configured to receive from another device via the communication unit 504. For example, a frequently used learning model can be stored in the storage device 514, and infrequently used learning cell information can be configured to be read from another device.
 出力装置510は、例えば液晶表示装置等のディスプレイを含み得る。出力装置510は、ユーザに対して処理結果を提示するための表示装置として利用可能である。入力装置512は、キーボード、マウス、タッチパネル等であって、ユーザが情報処理装置1000に所定の指示を入力するために用いられ得る。 The output device 510 may include a display such as a liquid crystal display device. The output device 510 can be used as a display device for presenting the processing result to the user. The input device 512 is a keyboard, a mouse, a touch panel, or the like, and can be used for a user to input a predetermined instruction to the information processing device 1000.
 本実施形態による情報処理装置1000の各部の機能は、プログラムを組み込んだLSI(Large Scale Integration)等のハードウェア部品である回路部品を実装することにより、ハードウェア的に実現することができる。或いは、その機能を提供するプログラムを、記憶装置514に格納し、そのプログラムを主記憶部502にロードしてCPU500で実行することにより、ソフトウェア的に実現することも可能である。 The functions of each part of the information processing apparatus 1000 according to the present embodiment can be realized in terms of hardware by mounting circuit components that are hardware components such as LSI (Large Scale Integration) in which a program is incorporated. Alternatively, it can be realized by software by storing the program providing the function in the storage device 514, loading the program in the main storage unit 502, and executing the program in the CPU 500.
 また、図1に示す情報処理装置1000の構成は、必ずしも独立した1つの装置として構成されている必要はない。例えば、入力部100、処理部200、出力部300のうちの一部、例えば処理部200をクラウド上に配し、これらによって情報処理システムを構築するようにしてもよい。 Further, the configuration of the information processing device 1000 shown in FIG. 1 does not necessarily have to be configured as one independent device. For example, a part of the input unit 100, the processing unit 200, and the output unit 300, for example, the processing unit 200 may be arranged on the cloud, and an information processing system may be constructed by these.
 或いは、情報処理装置1000は、例えば図9に示すように、データ駆動型のデータフローマシンとして構成することも可能である。 Alternatively, the information processing apparatus 1000 can be configured as a data-driven data flow machine, for example, as shown in FIG.
 例えば、情報処理装置1000は、複数のデータ処理プロセッサ600と、入出力インターフェース部620と、を備え得る。複数のデータ処理プロセッサ600は、直列に接続されている。データ処理プロセッサ600の直列接続体のうちの第1段のデータ処理プロセッサ600は、入出力インターフェース部620に接続されており、入出力インターフェース部620からデータを受ける。また、データ処理プロセッサ600の直列接続体のうちの最終段のデータ処理プロセッサ600は、入出力インターフェース部620に接続されており、入出力インターフェース部620へとデータを出力する。データ処理プロセッサ600は、入力に一番近い情報距離にあるパターンに紐付いたバリューを出力する機能を備える。入出力インターフェース部620は、外部の出力装置630、入力装置640、記憶装置650等と接続してデータの送受信を行うためのインターフェースである。 For example, the information processing apparatus 1000 may include a plurality of data processing processors 600 and an input / output interface unit 620. The plurality of data processing processors 600 are connected in series. The first-stage data processing processor 600 of the series connection of the data processing processor 600 is connected to the input / output interface unit 620 and receives data from the input / output interface unit 620. Further, the data processing processor 600 at the final stage of the series connection body of the data processing processor 600 is connected to the input / output interface unit 620 and outputs data to the input / output interface unit 620. The data processing processor 600 has a function of outputting a value associated with a pattern at the information distance closest to the input. The input / output interface unit 620 is an interface for connecting to an external output device 630, input device 640, storage device 650, or the like to transmit / receive data.
 なお、図9には、入出力インターフェース部620から複数のデータ処理プロセッサ600を介して入出力インターフェース部620に戻る1つの経路を示しているが、複数の経路を並列に設けてもよい。また、複数の経路に対し、分岐する経路や合流する経路を設けてもよい。 Note that FIG. 9 shows one route from the input / output interface unit 620 to the input / output interface unit 620 via the plurality of data processing processors 600, but a plurality of routes may be provided in parallel. Further, a branching route or a merging route may be provided for a plurality of routes.
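 Functionally, and setting the hardware details aside, each data processing processor in such a chain can be viewed as a stage that maps its input to the value tied to the stored pattern nearest to that input, with the stages connected in series. The following Python sketch is a simplified software analogue; the similarity measure and the data types are assumptions, not part of the configuration of FIG. 9.

from typing import Callable, List, Sequence, Tuple

def make_stage(models: Sequence[Tuple], similarity: Callable) -> Callable:
    # One stage: output the value tied to the pattern most similar to the input.
    def stage(data):
        _, best_value = max(models, key=lambda m: similarity(data, m[0]))
        return best_value
    return stage

def run_pipeline(stages: List[Callable], data):
    # Data flows through the serially connected stages, as along one route in FIG. 9.
    for stage in stages:
        data = stage(data)
    return data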
 複数のデータ処理プロセッサ600の各々は、例えば図10に示すように、入力処理部602と、複数(例えばm個)の内積器604_1~604_mと、比較器606と、セレクタ608と、出力処理部610と、を含んで構成され得る。複数のデータ処理プロセッサ600の各々は、データを受信し、入力データに対して所定の処理を施し、処理後のデータを出力する機能を備える。 Each of the plurality of data processing processors 600 may be configured to include, for example as shown in FIG. 10, an input processing unit 602, a plurality of (for example, m) inner product units 604_1 to 604_m, a comparator 606, a selector 608, and an output processing unit 610. Each of the plurality of data processing processors 600 has a function of receiving data, performing predetermined processing on the input data, and outputting the processed data.
 内積器604_1~604_m、比較器606及びセレクタ608は、論理ゲート回路などによって構成され得る。学習データのパターン(PAT1~PATm)及びバリュー(Value1~Valuem)は、レジスタに格納され得る。学習データのパターン(PAT1~PATm)及びバリュー(Value1~Valuem)は、予め記憶装置650に格納しておくことができる。この場合、情報処理装置1000の起動時に、記憶装置650から入出力インターフェース部620を通じて読み出し、各データ処理プロセッサ600に設定することができる。 The inner product units 604_1 to 604_m, the comparator 606, and the selector 608 may be configured by logic gate circuits or the like. The patterns (PAT1 to PATm) and values (Value1 to Valuem) of the learning data can be stored in registers. The patterns (PAT1 to PATm) and values (Value1 to Valuem) of the learning data can be stored in the storage device 650 in advance. In this case, when the information processing apparatus 1000 is started, they can be read from the storage device 650 through the input / output interface unit 620 and set in each data processing processor 600.
 入出力インターフェース部620から出力されたデータは、第1段目のデータ処理プロセッサ600の入力処理部602に入力される。入力処理部602に入力されるデータは、前述の状況情報データや概念情報データである。入力処理部602は、学習モデルを構成する複数のモデルに対応する複数の内積器604の各々に、入力データを並列に出力する。例えば、学習モデルがm個のモデルを含む場合、入力データは、m個の内積器604_1~604_mに並列に入力される。 The data output from the input / output interface unit 620 is input to the input processing unit 602 of the first-stage data processing processor 600. The data input to the input processing unit 602 is the above-described status information data or conceptual information data. The input processing unit 602 outputs the input data in parallel to each of the plurality of inner product units 604 corresponding to the plurality of models constituting the learning model. For example, when the learning model includes m models, the input data is input in parallel to the m inner product units 604_1 to 604_m.
 内積器604_1~604_mの各々は、入力データのパターンと学習モデルのパターンとの内積計算を行う。例えば、入力データのパターンが図11Aに示すようなデータで構成され、内積器604_1に対応する学習モデルのパターンが図11Bに示すようなデータで構成されているものとする。また、図11Bの学習モデルのパターンには、図11Cに示すようなバリューが紐付けられているものとする。この場合、内積器604_1は、
  IN_1×PAT1_1 + IN_2×PAT1_2 + … + IN_9×PAT1_9
 のように、各要素について乗算したものの総和計算を行う。内積器604_1~604_mの各々の計算結果は、比較器606に入力される。 Each of the inner product units 604_1 to 604_m performs an inner product calculation between the pattern of the input data and the pattern of the learning model. For example, assume that the pattern of the input data is composed of the data shown in FIG. 11A, and that the pattern of the learning model corresponding to the inner product unit 604_1 is composed of the data shown in FIG. 11B. Assume also that the value shown in FIG. 11C is associated with the pattern of the learning model of FIG. 11B. In this case, the inner product unit 604_1 computes the sum of the element-wise products
  IN_1 × PAT1_1 + IN_2 × PAT1_2 + … + IN_9 × PAT1_9
 and the calculation result of each of the inner product units 604_1 to 604_m is input to the comparator 606.
 このようにして、複数の学習モデルのパターンに対する入力データの内積計算を行うことで、1サイクルで総ての内積計算を完了することが可能となり、プログラムによる処理と比較して1処理のサイクル数を大幅に少なくすることができる。なお、プロセッサのリソース量が不足する場合は、各内積器604に供給する学習データのパターンを入れ替えながら、複数回に分けて内積計算処理を行ってもよい。 By performing the inner product calculation of the input data against the patterns of the plurality of learning models in this way, it becomes possible to complete all the inner product calculations in one cycle, and the number of cycles per processing can be significantly reduced compared with processing by a program. If the amount of processor resources is insufficient, the inner product calculation may be performed in a plurality of passes while swapping the learning data patterns supplied to each inner product unit 604.
 比較器606は、内積器604_1~604_mの出力値を比較し、入力データに対して最も大きな内積値を示す学習モデルの番号(1~m)を、セレクタ608に出力する。 The comparator 606 compares the output values of the inner product units 604_1 to 604_m, and outputs the learning model number (1 to m) indicating the largest inner product value with respect to the input data to the selector 608.
 セレクタ608は、複数の学習モデルのパターンに紐付けられたバリューの中から、比較器606から出力される番号に対応する学習モデルに紐付けられたバリューを選択し、出力処理部610へと出力する。出力処理部610は、選択したバリューを、次段のデータ処理プロセッサ600へと出力する。 The selector 608 selects, from among the values associated with the patterns of the plurality of learning models, the value associated with the learning model corresponding to the number output from the comparator 606, and outputs it to the output processing unit 610. The output processing unit 610 outputs the selected value to the next-stage data processing processor 600.
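 Restated in software terms as a rough sketch, one data processing processor of FIG. 10 matches the input vector against the m stored patterns by inner products, takes the largest inner product (the role of the comparator 606), and outputs the value tied to the winning pattern (the role of the selector 608). The 9-element vectors echo the example of FIGS. 11A to 11C, but the concrete numbers below are hypothetical.

def process(input_vec, patterns, values):
    # Inner product units 604_1..604_m (computed sequentially here,
    # in parallel in the hardware of FIG. 10).
    scores = [sum(i * p for i, p in zip(input_vec, pat)) for pat in patterns]
    # Comparator 606: index of the largest inner product.
    best = max(range(len(scores)), key=scores.__getitem__)
    # Selector 608: the value tied to the selected pattern.
    return values[best]

# Hypothetical 9-element example in the spirit of FIGS. 11A to 11C.
input_vec = [1, 0, 1, 0, 0, 1, 0, 1, 0]
patterns = [[1, 0, 1, 0, 0, 1, 0, 1, 0], [0, 1, 0, 1, 1, 0, 1, 0, 1]]
values = ["Value1", "Value2"]
print(process(input_vec, patterns, values))  # -> "Value1"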
 なお、当該データ処理プロセッサ600における処理をパスして次段のデータ処理プロセッサ600の処理へと移行する場合は、入力処理部602から出力処理部610へとデータを送信してもよい。また、状態遷移などの処理を行う場合は、当該データ処理プロセッサ600による処理結果を出力処理部610から入力処理部602へと戻してもよい。 When shifting to the processing of the next-stage data processing processor 600 after passing the processing in the data processing processor 600, data may be transmitted from the input processing unit 602 to the output processing unit 610. Further, when performing processing such as a state transition, the processing result by the data processing processor 600 may be returned from the output processing unit 610 to the input processing unit 602.
 このように、本実施形態によれば、人間の思考により近いアルゴリズムで情報処理が可能な情報処理装置及び情報処理方法を実現することができる。 As described above, according to the present embodiment, it is possible to realize an information processing device and an information processing method capable of information processing with an algorithm closer to human thinking.
 [第2実施形態]
 本発明の第2実施形態による情報処理装置について、図12を用いて説明する。第1実施形態による情報処理装置と同様の構成要素には同一の符号を付し、説明を省略し或いは簡潔にする。図12は、本実施形態による情報処理装置の概略構成を示す概略図である。
[Second Embodiment]
The information processing apparatus according to the second embodiment of the present invention will be described with reference to FIG. The same components as those of the information processing apparatus according to the first embodiment are designated by the same reference numerals, and the description thereof will be omitted or simplified. FIG. 12 is a schematic diagram showing a schematic configuration of an information processing apparatus according to the present embodiment.
 本実施形態による情報処理装置1000は、図12に示すように、入力部100と、処理部200と、出力部300と、を有している。 As shown in FIG. 12, the information processing apparatus 1000 according to the present embodiment has an input unit 100, a processing unit 200, and an output unit 300.
 入力部100は、受け取った情報に基づき、当該情報から把握される状況を表す第1のデータを生成する機能を備える。処理部200は、入力部100から受け取った第1のデータに基づき、状況に関連する概念を表す第2のデータを生成する機能を備える。出力部300は、処理部200から受け取った第2のデータに基づき、当該概念を表すことばを生成する機能を備える。また、出力部300は、生成したことばが当該情報に応じた解に合致していないとき、生成したことばを新たな情報として入力部100へと出力するように構成されている。 The input unit 100 has a function of generating first data representing a situation grasped from the information based on the received information. The processing unit 200 has a function of generating second data representing a situation-related concept based on the first data received from the input unit 100. The output unit 300 has a function of generating a word expressing the concept based on the second data received from the processing unit 200. Further, the output unit 300 is configured to output the generated words as new information to the input unit 100 when the generated words do not match the solution corresponding to the information.
 このように構成することにより、人間の思考により近いアルゴリズムで情報処理が可能な情報処理装置を実現することができる。 With this configuration, it is possible to realize an information processing device that can process information with an algorithm that is closer to human thinking.
 [変形実施形態]
 本発明は、上記実施形態に限らず種々の変形が可能である。
[Modification Embodiment]
The present invention is not limited to the above embodiment and can be modified in various ways.
 例えば、いずれかの実施形態の一部の構成を他の実施形態に追加した例や、他の実施形態の一部の構成と置換した例も、本発明の実施形態である。 For example, an example in which a partial configuration of any of the embodiments is added to another embodiment or an example in which a partial configuration of another embodiment is replaced with another embodiment is also an embodiment of the present invention.
 また、上述の実施形態の機能を実現するように該実施形態の構成を動作させるプログラムを記録媒体に記録させ、該記録媒体に記録されたプログラムをコードとして読み出し、コンピュータにおいて実行する処理方法も各実施形態の範疇に含まれる。すなわち、コンピュータ読取可能な記録媒体も各実施形態の範囲に含まれる。また、上述のプログラムが記録された記録媒体はもちろん、そのプログラム自体も各実施形態に含まれる。 A processing method in which a program that operates the configuration of the above-described embodiment so as to realize the functions of the embodiment is recorded on a recording medium, and the program recorded on the recording medium is read out as code and executed by a computer, is also included in the category of each embodiment. That is, a computer-readable recording medium is also included in the scope of each embodiment. Further, not only the recording medium on which the above-described program is recorded, but also the program itself is included in each embodiment.
 該記録媒体としては例えばフロッピー(登録商標)ディスク、ハードディスク、光ディスク、光磁気ディスク、CD-ROM、磁気テープ、不揮発性メモリカード、ROMを用いることができる。また該記録媒体に記録されたプログラム単体で処理を実行しているものに限らず、他のソフトウェア、拡張ボードの機能と共同して、OS上で動作して処理を実行するものも各実施形態の範疇に含まれる。 As the recording medium, for example, a floppy (registered trademark) disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a magnetic tape, a non-volatile memory card, or a ROM can be used. Moreover, not only a program recorded on the recording medium that executes processing by itself, but also one that operates on an OS and executes processing in cooperation with other software or with the functions of an expansion board is included in the category of each embodiment.
 上記実施形態は、いずれも本発明を実施するにあたっての具体化の例を示したものに過ぎず、これらによって本発明の技術的範囲が限定的に解釈されてはならない。すなわち、本発明はその技術思想、又はその主要な特徴から逸脱することなく、様々な形で実施することができる。 The above embodiments are merely examples of embodiment of the present invention, and the technical scope of the present invention should not be construed in a limited manner by these. That is, the present invention can be implemented in various forms without departing from the technical idea or its main features.
 上記実施形態の一部又は全部は、以下の付記のようにも記載されうるが、以下には限られない。 A part or all of the above embodiment may be described as in the following appendix, but is not limited to the following.
(付記1)
 受け取った情報に基づき、前記情報から把握される状況を表す第1のデータを生成する入力部と、
 前記第1のデータに基づき、前記状況に関連する概念を表す第2のデータを生成する処理部と、
 前記第2のデータに基づき、前記概念を表すことばを生成して出力する出力部と、を有し、
 前記出力部は、生成したことばが前記情報に応じた解に合致していないとき、生成したことばを新たな情報として前記入力部へと出力するように構成されている
 ことを特徴とする情報処理装置。
(Appendix 1)
 An information processing apparatus comprising:
 an input unit that, based on received information, generates first data representing the situation grasped from the information;
 a processing unit that generates second data representing a concept related to the situation based on the first data; and
 an output unit that generates and outputs words expressing the concept based on the second data,
 wherein the output unit is configured to output the generated words as new information to the input unit when the generated words do not match the solution corresponding to the information.
(付記2)
 前記出力部は、前記第2のデータが表す前記概念に応じた行動に関する指示を更に出力するように構成されている
 ことを特徴とする付記1記載の情報処理装置。
(Appendix 2)
The information processing apparatus according to Appendix 1, wherein the output unit is configured to further output an instruction regarding an action according to the concept represented by the second data.
(付記3)
 前記処理部は、前記第1のデータが表す前記状況を概念化した第3のデータを生成する第1の処理部と、前記第3のデータを別の概念に変換して前記第2のデータを生成する第2の処理部と、を有する
 ことを特徴とする付記1又は2記載の情報処理装置。
(Appendix 3)
 The information processing apparatus according to Appendix 1 or 2, wherein the processing unit has a first processing unit that generates third data conceptualizing the situation represented by the first data, and a second processing unit that generates the second data by converting the third data into another concept.
(付記4)
 前記第1の処理部は、第1の学習モデルと、第1の識別部と、を含み、
 前記第1のデータは、前記状況を表す複数の要素とそれらの要素値との関係をマッピングしたデータであり、
 前記第1の学習モデルは、特定の状況を表す複数の要素とそれらの要素値との関係をマッピングしたパターンと、前記パターンに紐付けられ、前記特定の状況に対応する概念を表す複数の要素とそれらの要素値との関係をマッピングしたバリューと、を各々が含む複数のモデルを含み、
 前記第1の識別部は、前記第1の学習モデルの前記複数のモデルのうち、前記第1のデータに対して最も適合度の高い前記パターンに紐付けられた前記バリューを、前記第3のデータとして選択する
 ことを特徴とする付記3記載の情報処理装置。
(Appendix 4)
The first processing unit includes a first learning model and a first identification unit.
The first data is data that maps the relationship between a plurality of elements representing the situation and their element values.
 The first learning model includes a plurality of models each including a pattern that maps the relationship between a plurality of elements representing a specific situation and their element values, and a value that is associated with the pattern and maps the relationship between a plurality of elements representing a concept corresponding to the specific situation and their element values.
 The information processing apparatus according to Appendix 3, wherein the first identification unit selects, as the third data, the value associated with the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model.
(付記5)
 前記第2の処理部は、第2の学習モデルと、第2の識別部と、を含み、
 前記第3のデータは、概念を表す複数の要素とそれらの要素値との関係をマッピングしたデータであり、
 前記第2の学習モデルは、特定の概念を表す複数の要素とそれらの要素値との関係をマッピングしたパターンと、前記パターンに紐付けられ、前記特定の概念から想定される別の概念を表す複数の要素とそれらの要素値との関係をマッピングしたバリューと、を各々が含む複数のモデルを含み、
 前記第2の識別部は、前記第2の学習モデルの前記複数のモデルのうち、前記第3のデータに対して最も適合度の高い前記パターンに紐付けられた前記バリューを、前記第2のデータとして選択する
 ことを特徴とする付記3又は4記載の情報処理装置。
(Appendix 5)
The second processing unit includes a second learning model and a second identification unit.
The third data is data that maps the relationship between a plurality of elements representing the concept and their element values.
 The second learning model includes a plurality of models each including a pattern that maps the relationship between a plurality of elements representing a specific concept and their element values, and a value that is associated with the pattern and maps the relationship between a plurality of elements representing another concept assumed from the specific concept and their element values.
 The information processing apparatus according to Appendix 3 or 4, wherein the second identification unit selects, as the second data, the value associated with the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model.
(付記6)
 前記第2の処理部は、前記第3のデータが表す概念の少なくとも一部の要素に対応するモデルが前記第2の学習モデルに含まれていない場合に、前記少なくとも一部の要素に対応するモデルを前記第2の学習モデルを構成するモデルの1つとして追加する
 ことを特徴とする付記5記載の情報処理装置。
(Appendix 6)
 The information processing apparatus according to Appendix 5, wherein, when a model corresponding to at least some elements of the concept represented by the third data is not included in the second learning model, the second processing unit adds a model corresponding to the at least some elements as one of the models constituting the second learning model.
(付記7)
 前記処理部は、直列に接続された複数のデータ処理プロセッサにより構成され、
 前記複数のデータ処理プロセッサの各々は、入力に一番近い情報距離にあるパターンに紐付いたバリューを出力するように構成されている
 ことを特徴とする付記1乃至6のいずれか1項に記載の情報処理装置。
(Appendix 7)
The processing unit is composed of a plurality of data processing processors connected in series.
5. The item according to any one of Supplementary note 1 to 6, wherein each of the plurality of data processing processors is configured to output a value associated with a pattern at the information distance closest to the input. Information processing device.
(付記8)
 受け取った情報に基づき、前記情報から把握される状況を表す第1のデータを生成する第1のステップと、
 前記第1のデータに基づき、前記状況に関連する概念を表す第2のデータを生成する第2のステップと、
 前記第2のデータに基づき、前記概念を表すことばを生成して出力する第3のステップと、を有し、
 前記第3のステップにおいて生成したことばが前記情報に応じた解に合致していない場合に、生成したことばを新たな情報として前記第1のステップから繰り返し行う
 ことを特徴とする情報処理方法。
(Appendix 8)
Based on the received information, the first step of generating the first data representing the situation grasped from the information, and
Based on the first data, a second step of generating second data representing the concept related to the situation, and
It has a third step of generating and outputting a word expressing the concept based on the second data.
An information processing method characterized by repeating the generated words as new information from the first step when the words generated in the third step do not match the solution corresponding to the information.
(付記9)
 前記第3のステップにおいて、前記第2のデータが表す前記概念に応じた行動に関する指示を更に出力する
 ことを特徴とする付記8記載の情報処理方法。
(Appendix 9)
The information processing method according to Appendix 8, wherein in the third step, an instruction regarding an action according to the concept represented by the second data is further output.
(付記10)
 前記第2のステップは、
  前記第1のデータが表す前記状況を概念化した第3のデータを生成するステップと、
前記第3のデータを別の概念に変換して前記第2のデータを生成するステップと、を有する
 ことを特徴とする付記8又は9記載の情報処理方法。
(Appendix 10)
The second step is
A step of generating a third data that conceptualizes the situation represented by the first data, and
The information processing method according to Supplementary note 8 or 9, further comprising a step of converting the third data into another concept to generate the second data.
(付記11)
 前記第1のデータは、前記状況を表す複数の要素とそれらの要素値との関係をマッピングしたデータであり、
 第1の学習モデルは、特定の状況を表す複数の要素とそれらの要素値との関係をマッピングしたパターンと、前記パターンに紐付けられ、前記特定の状況に対応する概念を表す複数の要素とそれらの要素値との関係をマッピングしたバリューと、を各々が含む複数のモデルを含み、
 前記第3のデータを生成するステップでは、前記第1の学習モデルの前記複数のモデルのうち、前記第1のデータに対して最も適合度の高い前記パターンに紐付けられた前記バリューを、前記第3のデータとして選択する
 ことを特徴とする付記10記載の情報処理方法。
(Appendix 11)
The first data is data that maps the relationship between a plurality of elements representing the situation and their element values.
The first learning model includes a pattern that maps the relationship between a plurality of elements representing a specific situation and their element values, and a plurality of elements associated with the pattern and representing a concept corresponding to the specific situation. Includes multiple models, each containing a value that maps the relationship to those element values,
In the step of generating the third data, the value associated with the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model is referred to. The information processing method according to Appendix 10, wherein the data is selected as the third data.
(付記12)
 前記第3のデータは、概念を表す複数の要素とそれらの要素値との関係をマッピングしたデータであり、
 第2の学習モデルは、特定の概念を表す複数の要素とそれらの要素値との関係をマッピングしたパターンと、前記パターンに紐付けられ、前記特定の概念から想定される別の概念を表す複数の要素とそれらの要素値との関係をマッピングしたバリューと、を各々が含む複数のモデルを含み、
 前記第2のデータを生成するステップでは、前記第2の学習モデルの前記複数のモデルのうち、前記第3のデータに対して最も適合度の高い前記パターンに紐付けられた前記バリューを、前記第2のデータとして選択する
 ことを特徴とする付記10又は11記載の情報処理方法。
(Appendix 12)
The third data is data that maps the relationship between a plurality of elements representing the concept and their element values.
The second learning model is a pattern that maps the relationship between a plurality of elements representing a specific concept and their element values, and a plurality of patterns that are associated with the pattern and represent another concept that is assumed from the specific concept. Contains multiple models, each containing a value that maps the relationship between the elements of and their element values.
In the step of generating the second data, the value associated with the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model is referred to. The information processing method according to Appendix 10 or 11, characterized in that it is selected as the second data.
(付記13)
 前記第3のデータが表す概念の少なくとも一部の要素に対応するモデルが前記第2の学習モデルに含まれていない場合に、前記少なくとも一部の要素に対応するモデルを前記第2の学習モデルを構成するモデルの1つとして追加するステップをさらに有する
 ことを特徴とする付記12記載の情報処理方法。
(Appendix 13)
When the model corresponding to at least a part of the elements of the concept represented by the third data is not included in the second learning model, the model corresponding to the at least a part of the elements is referred to as the second learning model. The information processing method according to Appendix 12, further comprising a step to be added as one of the models constituting the above.
(付記14)
 コンピュータを、
  受け取った情報に基づき、前記情報から把握される状況を表す第1のデータを生成する手段、
  前記第1のデータに基づき、前記状況に関連する概念を表す第2のデータを生成する手段、
  前記第2のデータに基づき、前記概念を表すことばを生成し、生成したことばが前記情報に応じた解に合致していないとき、生成したことばを新たな情報として前記第1のデータを生成する手段へと出力する手段、
 として機能させるプログラム。
(Appendix 14)
Computer,
A means for generating first data representing the situation grasped from the information based on the received information.
A means of generating second data representing a concept related to the situation based on the first data.
Based on the second data, a word expressing the concept is generated, and when the generated word does not match the solution corresponding to the information, the generated word is used as new information to generate the first data. Means to output to means,
A program that functions as.
(付記15)
 付記14記載のプログラムを記載したコンピュータが読み取り可能な記録媒体。
(Appendix 15)
A computer-readable recording medium in which the program described in Appendix 14 is described.
 この出願は、2020年8月4日に出願された日本出願特願2020-132276を基礎とする優先権を主張し、その開示の全てをここに取り込む。 This application claims priority based on Japanese application Japanese Patent Application No. 2020-132276 filed on August 4, 2020, and incorporates all of its disclosures here.
100…入力部
200…処理部
210…認識処理部
220…概念処理部
300…出力部
400…データ処理モジュール
410…データ取得部
420…記憶部
430…識別部
440…データ出力部
500…CPU
502…主記憶部
504…通信部
506…入出力インターフェース部
508…システムバス
510…出力装置
512…入力装置
514…記憶装置
1000…情報処理装置
100 ... Input unit 200 ... Processing unit 210 ... Recognition processing unit 220 ... Conceptual processing unit 300 ... Output unit 400 ... Data processing module 410 ... Data acquisition unit 420 ... Storage unit 430 ... Identification unit 440 ... Data output unit 500 ... CPU
502 ... Main storage unit 504 ... Communication unit 506 ... Input / output interface unit 508 ... System bus 510 ... Output device 512 ... Input device 514 ... Storage device 1000 ... Information processing device

Claims (15)

  1.  受け取った情報に基づき、前記情報から把握される状況を表す第1のデータを生成する入力部と、
     前記第1のデータに基づき、前記状況に関連する概念を表す第2のデータを生成する処理部と、
     前記第2のデータに基づき、前記概念を表すことばを生成して出力する出力部と、を有し、
     前記出力部は、生成したことばが前記情報に応じた解に合致していないとき、生成したことばを新たな情報として前記入力部へと出力するように構成されている
     ことを特徴とする情報処理装置。
    An information processing apparatus comprising:
    an input unit that, based on received information, generates first data representing the situation grasped from the information;
    a processing unit that generates second data representing a concept related to the situation based on the first data; and
    an output unit that generates and outputs words expressing the concept based on the second data,
    wherein the output unit is configured to output the generated words as new information to the input unit when the generated words do not match the solution corresponding to the information.
  2.  前記出力部は、前記第2のデータが表す前記概念に応じた行動に関する指示を更に出力するように構成されている
     ことを特徴とする請求項1記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the output unit is configured to further output an instruction regarding an action according to the concept represented by the second data.
  3.  前記処理部は、前記第1のデータが表す前記状況を概念化した第3のデータを生成する第1の処理部と、前記第3のデータを別の概念に変換して前記第2のデータを生成する第2の処理部と、を有する
     ことを特徴とする請求項1又は2記載の情報処理装置。
    The information processing apparatus according to claim 1 or 2, wherein the processing unit has a first processing unit that generates third data conceptualizing the situation represented by the first data, and a second processing unit that generates the second data by converting the third data into another concept.
  4.  前記第1の処理部は、第1の学習モデルと、第1の識別部と、を含み、
     前記第1のデータは、前記状況を表す複数の要素とそれらの要素値との関係をマッピングしたデータであり、
     前記第1の学習モデルは、特定の状況を表す複数の要素とそれらの要素値との関係をマッピングしたパターンと、前記パターンに紐付けられ、前記特定の状況に対応する概念を表す複数の要素とそれらの要素値との関係をマッピングしたバリューと、を各々が含む複数のモデルを含み、
     前記第1の識別部は、前記第1の学習モデルの前記複数のモデルのうち、前記第1のデータに対して最も適合度の高い前記パターンに紐付けられた前記バリューを、前記第3のデータとして選択する
     ことを特徴とする請求項3記載の情報処理装置。
    The first processing unit includes a first learning model and a first identification unit.
    The first data is data that maps the relationship between a plurality of elements representing the situation and their element values.
    The first learning model includes a plurality of models each including a pattern that maps the relationship between a plurality of elements representing a specific situation and their element values, and a value that is associated with the pattern and maps the relationship between a plurality of elements representing a concept corresponding to the specific situation and their element values.
    The information processing apparatus according to claim 3, wherein the first identification unit selects, as the third data, the value associated with the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model.
  5.  The information processing apparatus according to claim 3 or 4, wherein
      the second processing unit includes a second learning model and a second identification unit,
      the third data is data mapping a relationship between a plurality of elements representing a concept and their element values,
      the second learning model includes a plurality of models, each including a pattern mapping a relationship between a plurality of elements representing a specific concept and their element values, and a value linked to the pattern and mapping a relationship between a plurality of elements representing another concept assumed from the specific concept and their element values, and
      the second identification unit selects, as the second data, the value linked to the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model.
  6.  The information processing apparatus according to claim 5, wherein, when a model corresponding to at least some elements of the concept represented by the third data is not included in the second learning model, the second processing unit adds a model corresponding to the at least some elements as one of the models constituting the second learning model.
  7.  The information processing apparatus according to any one of claims 1 to 6, wherein the processing unit is composed of a plurality of data processing processors connected in series, and each of the plurality of data processing processors is configured to output the value linked to the pattern at the information distance closest to its input.
  8.  An information processing method comprising:
      a first step of generating, based on received information, first data representing a situation grasped from the information;
      a second step of generating, based on the first data, second data representing a concept related to the situation; and
      a third step of generating and outputting, based on the second data, a word expressing the concept,
      wherein, when the word generated in the third step does not match a solution corresponding to the information, the method is repeated from the first step with the generated word as new information.
  9.  The information processing method according to claim 8, wherein, in the third step, an instruction regarding an action according to the concept represented by the second data is further output.
  10.  The information processing method according to claim 8 or 9, wherein the second step includes a step of generating third data conceptualizing the situation represented by the first data, and a step of converting the third data into another concept to generate the second data.
  11.  The information processing method according to claim 10, wherein
      the first data is data mapping a relationship between a plurality of elements representing the situation and their element values,
      a first learning model includes a plurality of models, each including a pattern mapping a relationship between a plurality of elements representing a specific situation and their element values, and a value linked to the pattern and mapping a relationship between a plurality of elements representing a concept corresponding to the specific situation and their element values, and
      in the step of generating the third data, the value linked to the pattern having the highest goodness of fit to the first data among the plurality of models of the first learning model is selected as the third data.
  12.  The information processing method according to claim 10 or 11, wherein
      the third data is data mapping a relationship between a plurality of elements representing a concept and their element values,
      a second learning model includes a plurality of models, each including a pattern mapping a relationship between a plurality of elements representing a specific concept and their element values, and a value linked to the pattern and mapping a relationship between a plurality of elements representing another concept assumed from the specific concept and their element values, and
      in the step of generating the second data, the value linked to the pattern having the highest goodness of fit to the third data among the plurality of models of the second learning model is selected as the second data.
  13.  The information processing method according to claim 12, further comprising a step of adding, when a model corresponding to at least some elements of the concept represented by the third data is not included in the second learning model, a model corresponding to the at least some elements as one of the models constituting the second learning model.
  14.  A program causing a computer to function as:
      means for generating, based on received information, first data representing a situation grasped from the information;
      means for generating, based on the first data, second data representing a concept related to the situation; and
      means for generating, based on the second data, a word expressing the concept and, when the generated word does not match a solution corresponding to the information, outputting the generated word as new information to the means for generating the first data.
  15.  A computer-readable recording medium on which the program according to claim 14 is recorded.
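
By way of illustration only, the following Python sketches indicate how the mechanisms recited in the claims could be realized. None of the code comes from the specification; every function name, data layout, and metric below is an assumption made for the example. First, a minimal version of the feedback loop of claims 1 and 8: a word is generated from the received information and, if it does not match the solution corresponding to that information, it is fed back to the input unit as new information.

# Illustrative sketch only: a minimal loop mirroring claims 1 and 8.
# All identifiers (encode_situation, infer_concept, ...) are assumptions,
# not names taken from the specification.

def encode_situation(information: str) -> dict:
    """Input unit: map received information to first data (situation elements)."""
    return {"tokens": information.split()}

def infer_concept(first_data: dict) -> dict:
    """Processing unit: map first data to second data (a related concept)."""
    # Trivial stand-in: the "concept" is just the most frequent token.
    tokens = first_data["tokens"]
    return {"concept": max(set(tokens), key=tokens.count) if tokens else ""}

def express_concept(second_data: dict) -> str:
    """Output unit: generate a word expressing the concept."""
    return second_data["concept"]

def solve(information: str, matches_solution, max_iterations: int = 10) -> str:
    """Feed the generated word back as new information until it matches the solution."""
    for _ in range(max_iterations):
        word = express_concept(infer_concept(encode_situation(information)))
        if matches_solution(word):
            return word
        information = word  # re-enter the loop with the generated word as new information
    return word

if __name__ == "__main__":
    print(solve("red red apple", lambda w: w == "red"))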
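
Next, a sketch of the pattern/value learning models and the identification step of claims 4, 5, 11 and 12. The goodness-of-fit measure used here (the fraction of pattern entries reproduced in the input) is an assumption; the claims only require that the value linked to the best-fitting pattern be selected.

# Illustrative sketch of the pattern/value models in claims 4, 5, 11 and 12.
# The data layout and the fit measure are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Model:
    pattern: dict  # elements of a specific situation (or concept) -> element values
    value: dict    # elements of the linked concept -> element values

def goodness_of_fit(pattern: dict, data: dict) -> float:
    """Assumed fit measure: fraction of pattern entries reproduced in the input data."""
    if not pattern:
        return 0.0
    hits = sum(1 for k, v in pattern.items() if data.get(k) == v)
    return hits / len(pattern)

def identify(models: list[Model], data: dict) -> dict:
    """Identification unit: return the value linked to the best-fitting pattern."""
    best = max(models, key=lambda m: goodness_of_fit(m.pattern, data))
    return best.value

# First learning model: situation patterns -> corresponding concepts (third data).
first_learning_model = [
    Model(pattern={"sky": "dark", "wind": "strong"}, value={"weather": "storm"}),
    Model(pattern={"sky": "clear", "wind": "calm"}, value={"weather": "fine"}),
]
first_data = {"sky": "dark", "wind": "strong", "temperature": "low"}
third_data = identify(first_learning_model, first_data)   # -> {"weather": "storm"}

# Second learning model: concept patterns -> another, assumed concept (second data).
second_learning_model = [
    Model(pattern={"weather": "storm"}, value={"action": "stay indoors"}),
    Model(pattern={"weather": "fine"}, value={"action": "go outside"}),
]
second_data = identify(second_learning_model, third_data)  # -> {"action": "stay indoors"}
print(third_data, second_data)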
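
Claims 6 and 13 add a model to the second learning model when no existing model covers some element of the concept in the third data. A possible reading, again with an assumed coverage test and an empty placeholder value for the newly added model:

# Illustrative sketch of claims 6 and 13: if no model in the second learning model
# covers some element of the concept in the third data, add one for that element.

def covers(models: list, element: str) -> bool:
    """Assumed coverage test: some pattern in the learning model mentions the element."""
    return any(element in m["pattern"] for m in models)

def update_learning_model(models: list, third_data: dict) -> list:
    """Add a model for each concept element that no existing pattern accounts for."""
    for element, value in third_data.items():
        if not covers(models, element):
            models.append({"pattern": {element: value}, "value": {}})  # value filled in by later learning
    return models

second_learning_model = [{"pattern": {"weather": "storm"}, "value": {"action": "stay indoors"}}]
second_learning_model = update_learning_model(second_learning_model,
                                              {"weather": "storm", "season": "winter"})
print(second_learning_model)  # a new model keyed on "season" has been appended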
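
Finally, claim 7 arranges the processing unit as data processing processors connected in series, each returning the value tied to the pattern at the closest information distance to its input. The claim does not define that distance; a simple mismatch count over the pattern's elements is used here purely as a stand-in.

# Illustrative sketch of claim 7: serially connected data processing processors,
# each outputting the value linked to the pattern nearest to its input.
# "Information distance" is modelled as a mismatch count only as an assumption.

def information_distance(pattern: dict, data: dict) -> int:
    """Assumed distance: number of pattern entries the input does not reproduce."""
    return sum(1 for k, v in pattern.items() if data.get(k) != v)

class DataProcessor:
    def __init__(self, models):
        self.models = models  # list of (pattern, value) pairs

    def __call__(self, data: dict) -> dict:
        pattern, value = min(self.models, key=lambda m: information_distance(m[0], data))
        return value

# Two processors in series: situation -> concept -> word-level concept.
pipeline = [
    DataProcessor([({"sky": "dark"}, {"weather": "storm"}),
                   ({"sky": "clear"}, {"weather": "fine"})]),
    DataProcessor([({"weather": "storm"}, {"word": "storm"}),
                   ({"weather": "fine"}, {"word": "sunny"})]),
]

data = {"sky": "dark", "wind": "strong"}
for processor in pipeline:
    data = processor(data)  # the output of one processor feeds the next
print(data)  # {'word': 'storm'}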
PCT/JP2021/024124 2020-08-04 2021-06-25 Information processing device, information processing method, program, and recording medium WO2022030134A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022541152A JP7381143B2 (en) 2020-08-04 2021-06-25 Information processing device, information processing method, program and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-132276 2020-08-04
JP2020132276 2020-08-04

Publications (1)

Publication Number Publication Date
WO2022030134A1 true WO2022030134A1 (en) 2022-02-10

Family

ID=80119690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/024124 WO2022030134A1 (en) 2020-08-04 2021-06-25 Information processing device, information processing method, program, and recording medium

Country Status (2)

Country Link
JP (1) JP7381143B2 (en)
WO (1) WO2022030134A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014222504A (en) * 2014-05-24 2014-11-27 洋彰 宮崎 Autonomous thinking pattern generation mechanism
US20200066277A1 (en) * 2018-08-24 2020-02-27 Bright Marbles, Inc Idea scoring for creativity tool selection

Also Published As

Publication number Publication date
JP7381143B2 (en) 2023-11-15
JPWO2022030134A1 (en) 2022-02-10

Similar Documents

Publication Publication Date Title
Nelson Foundations and methods of stochastic simulation
Cioffi-Revilla et al. Computation and social science
Kucherbaev et al. Human-aided bots
Mills et al. Principles of information systems analysis and design
CN111985229B (en) Sequence labeling method and device and computer equipment
US7584160B2 (en) System and method for optimizing project subdivision using data and requirements focuses subject to multidimensional constraints
US20170293844A1 (en) Human-machine collaborative optimization via apprenticeship scheduling
US20190034785A1 (en) System and method for program induction using probabilistic neural programs
CN110807566A (en) Artificial intelligence model evaluation method, device, equipment and storage medium
RU2670781C2 (en) System and method for data storage and processing
CN107807968A (en) Question and answer system, method and storage medium based on Bayesian network
CN109035028A (en) Intelligence, which is thrown, cares for strategy-generating method and device, electronic equipment, storage medium
CN112256886B (en) Probability calculation method and device in atlas, computer equipment and storage medium
Luo et al. Diagnosing university student subject proficiency and predicting degree completion in vector space
Priore et al. Learning-based scheduling of flexible manufacturing systems using support vector machines
WO2022030134A1 (en) Information processing device, information processing method, program, and recording medium
Bibi et al. Sequential spiking neural P systems with local scheduled synapses without delay
US11562126B2 (en) Coaching system and coaching method
CN113342988B (en) Method and system for constructing service knowledge graph to realize service combination optimization based on LDA cross-domain
KR20210148877A (en) Electronic device and method for controlling the electronic deivce
US20210357791A1 (en) System and method for storing and processing data
Kvet et al. Use of machine learning for the unknown values in database transformation processes
Gummadi et al. Analysis of machine learning in education sector
Nguyen et al. From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent
Kadijević Data science for novice students: A didactic approach to data mining using neural networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21854457

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022541152

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21854457

Country of ref document: EP

Kind code of ref document: A1