WO2019053958A1 - Evaluation assistance system and evaluation assistance device
- Publication number: WO2019053958A1 (PCT/JP2018/020608)
- Authority: WO (WIPO PCT)
- Prior art keywords: information, evaluation, proposer, target information, presentation
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/02—Banking, e.g. interest calculation or account maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- The technique of Patent Document 1 can only evaluate, for example, how much of the lecture time was spent on each presentation material, and cannot evaluate the content of the presentation itself. Under such circumstances, it is desirable to be able to quantitatively evaluate the content of the proposer's presentation.
- An evaluation support system according to a first aspect is an evaluation support system for evaluating the content of a proposer's presentation via a network, comprising: acquisition means for acquiring target information generated based on a video of the proposer's presentation; a reference database storing previously acquired past target information, reference information used to evaluate the past target information, and three or more levels of association between the past target information and the reference information; evaluation means for referring to the reference database and acquiring evaluation information including a first degree of association, in three or more levels, between the target information and the reference information; and output means for generating an evaluation result based on the evaluation information and outputting the evaluation result.
- An evaluation support system according to a second aspect is the evaluation support system according to the first aspect, wherein the acquisition means includes question means for generating question information for the proposer based on the target information and outputting the question information, and addition means for acquiring answer information based on the proposer's answer to the question information.
- In another aspect, the acquisition means extracts from the target information at least one of: content data relating to the content of the materials used in the proposer's presentation, progress data relating to how the presentation proceeds, voice data relating to the proposer's voice, and facial expression data relating to the proposer's facial features.
- In the evaluation support system according to any one of the first to fifth aspects, the acquisition means acquires target information generated based on a video of the proposer's presentation lasting 30 seconds to 30 minutes.
- The evaluation support system according to an eighth aspect is characterized in that, in any one of the first to seventh aspects, the acquisition means acquires the proposer's degree of attention on a social networking service.
- An evaluation support apparatus is an evaluation support apparatus for evaluating the content of a proposer's presentation, comprising: an acquisition unit that acquires target information generated based on a video of the proposer's presentation; a reference database storing previously acquired past target information, reference information used to evaluate the past target information, and three or more levels of association between the past target information and the reference information; an evaluation unit that refers to the reference database and acquires evaluation information including a first degree of association, in three or more levels, between the target information and the reference information; and an output unit that generates an evaluation result based on the evaluation information and outputs the evaluation result.
- The evaluation means refers to the reference database to acquire evaluation information including the first degree of association between the target information and the reference information.
- The target information is generated based on the video of the proposer's presentation. For this reason, a quantitative evaluation result can be obtained for the content of the proposer's presentation. This makes it possible to link the quantitative evaluation result to the proposer's social credit and the like.
- the question means generates question information for the proposer and outputs the question information.
- The addition means acquires answer information based on the proposer's answer to the question information. For this reason, even when the amount of information included in the proposer's presentation is small, the answer information can be acquired using the question information and the amount of information can be supplemented. This makes it possible to improve the accuracy of the evaluation result.
- the question means generates question information with reference to the evaluation information. For this reason, it is possible to generate question information according to the evaluation information. This makes it possible to further improve the accuracy of the evaluation result.
- the acquisition means extracts at least one of content data, progress data, voice data, and facial expression data from the target information. Therefore, it is possible to generate an evaluation result independently for each data, or to generate an evaluation result combining the data. This makes it possible to generate an optimal evaluation result according to the field or situation of presentation.
- the evaluation result has an estimated health value of the proposer. Therefore, the health condition of the proposer at the time of presentation can also be included in the evaluation criteria. This makes it possible to further improve the accuracy of the evaluation result.
- The acquisition means acquires target information generated based on a video of the proposer's presentation lasting 30 seconds to 30 minutes. For this reason, variations in the amount of information included in each piece of target information can be suppressed. This makes it possible to easily obtain quantitative evaluation results.
- the evaluation unit refers to the reference database, and acquires evaluation information including the first degree of association between the target information and the reference information.
- The target information is generated based on the video of the proposer's presentation. For this reason, a quantitative evaluation result can be obtained for the content of the proposer's presentation. This makes it possible to link the quantitative evaluation result to the proposer's social credit and the like.
- FIG. 2(a) is a schematic diagram showing an example of target information.
- FIG. 2(b) is a schematic diagram showing an example of an evaluation result.
- FIG. 3(a) is a schematic diagram showing an example of the configuration of the evaluation support device in the embodiment.
- FIG. 3(b) is a schematic diagram showing an example of the functions of the evaluation support device in the embodiment.
- FIG. 4 is a schematic view showing an example of a reference database in the embodiment.
- FIG. 5 is a schematic view showing a first modified example of the reference database in the embodiment.
- FIG. 6 is a schematic view showing a second modification of the reference database in the embodiment.
- FIG. 7 is a schematic view showing an example of question information and answer information.
- FIG. 8 is a schematic view showing an example of the question database in the embodiment.
- FIG. 9 is a flowchart showing an example of the operation of the evaluation support system in the embodiment.
- FIG. 10 is a flowchart showing a first modified example of the operation of the evaluation support system in the embodiment.
- FIG. 11 is a flowchart showing a second modified example of the operation of the evaluation support system in the embodiment.
- FIG. 12 is a schematic view showing a modification of the question database in the embodiment.
- FIG. 1 is a block diagram showing an entire configuration of an evaluation support system 100 in the present embodiment.
- the evaluation support system 100 includes an evaluation support device 1.
- The evaluation support device 1 is connected to the user terminal 2 via, for example, the public communication network 4 (network), and may also be connected to, for example, the server 3.
- the evaluation support system 100 is mainly used to deploy services on the Internet.
- the evaluation support system 100 can output a quantitative evaluation result R with respect to the target information D generated based on the video of the presentation of the user (proposer).
- the manager or the like of the evaluation support system 100 can examine the social credit of the proposer based on the evaluation result R, for example, and can set a virtual stock or the like according to the social credit.
- the proposer can consider, for example, improvement of the presentation based on the evaluation result R.
- The target information D and the evaluation result R do not have to be disclosed on the Internet. For this reason, the content of a proposer's presentation and the like can be evaluated without being made public.
- The evaluation support system 100 may transmit, for example, the evaluation result R to the user terminal 2 owned by the proposer. Therefore, the proposer can grasp the quantitative evaluation of the presentation. Note that the evaluation support system 100 can also transmit the video of the proposer's presentation and the evaluation result R to another user terminal owned by another user. For this reason, when another user holds an open call for presentations or the like, quantitative screening support can be provided via the evaluation support system 100.
- the manager or the like of the evaluation support system 100 mainly uses the evaluation support apparatus 1 in the present embodiment.
- the evaluation support device 1 is for acquiring target information D and outputting an evaluation result R for the target information D.
- Other users, administrators, etc. can consider, for example, the social credit of the proposer and the quality of the video of the presentation based on the evaluation result R obtained by the evaluation support device 1.
- the evaluation support device 1 acquires, from the user terminal 2, target information D generated by the user terminal 2 as illustrated in FIG. 2A, for example.
- the target information D is generated using the camera and the microphone of the user terminal 2 based on the video of the proposer's presentation and the like.
- the target information D includes, for example, at least one of presentation data P, audio data S, and facial expression data F.
- the evaluation support device 1 extracts presentation data P and the like from the acquired target information D as necessary.
- The presentation data P includes, for example, at least one of content data relating to the content of the materials used in the proposer's presentation and progress data relating to how the proposer's presentation proceeds.
- The content data includes data obtained by converting the content of the materials into text data, and may also include image data such as diagrams and graphs.
- The progress data includes, for example, data on the time spent explaining each material and on the duration of the presentation as a whole.
- The progress data also includes, for example, data on the order in which the materials and the like are used.
- The voice data S includes, for example, at least one of utterance data relating to the proposer's utterances and tone data relating to the proposer's tone.
- The utterance data includes, for example, data obtained by converting the proposer's speech into text data.
- The tone data includes, for example, data relating to the proposer's speaking speed, voice volume, intonation, pauses between utterances, and the like.
- The facial expression data F includes, for example, at least one of first facial expression data relating to features of the proposer's eyes and face data relating to features of the proposer's entire face.
- The first facial expression data includes, for example, data on features such as the proposer's gaze, eye expressiveness, and line of sight.
- the face data includes data on features of the proposer's complexion, wrinkles, hair, mouth, nose, etc.
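- As an illustration of the structure of the target information D described above, the following is a minimal sketch in Python. The class and field names are hypothetical and are not taken from the patent disclosure; they merely group the presentation data P, voice data S, and facial expression data F described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PresentationData:
    """Presentation data P: material content and how the presentation proceeds."""
    content_text: str = ""                                          # material content as text
    content_images: List[bytes] = field(default_factory=list)       # diagrams, graphs
    time_per_material: List[float] = field(default_factory=list)    # seconds spent per material
    total_duration: float = 0.0                                     # overall presentation time (s)
    material_order: List[int] = field(default_factory=list)         # order in which materials were used

@dataclass
class VoiceData:
    """Voice data S: what was said (utterance data) and how (tone data)."""
    utterance_text: str = ""               # speech converted to text
    speech_rate: Optional[float] = None    # e.g. words per minute
    volume: Optional[float] = None
    intonation: Optional[float] = None
    pause_lengths: List[float] = field(default_factory=list)

@dataclass
class ExpressionData:
    """Facial expression data F: eye features and whole-face features."""
    eye_features: dict = field(default_factory=dict)   # e.g. gaze, line of sight
    face_features: dict = field(default_factory=dict)  # e.g. complexion, mouth, nose

@dataclass
class TargetInformation:
    """Target information D generated from the video of the proposer's presentation."""
    presentation: Optional[PresentationData] = None
    voice: Optional[VoiceData] = None
    expression: Optional[ExpressionData] = None
```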
- the evaluation support device 1 generates an evaluation result R for the target information D, and outputs the evaluation result R to the user terminal 2.
- The evaluation result R includes, for example, planability, sustainability, future potential, presentation ability, an estimated health value, an estimated age, and/or an evaluation price.
- The evaluation result R may include a comment for each item as well as a score.
- The evaluation result R may include, for example, an evaluation (first evaluation) selected from evaluation criteria, using at least one of planability, sustainability, future potential, presentation ability, estimated health value, estimated age, and evaluation price as the evaluation criterion.
- The evaluation criteria can be set in advance in two or more levels, for example in a score format with 101 levels from 0 to 100 points.
- In this case, for example, a first evaluation of "OO points" for the evaluation criterion "planability" is displayed.
- The evaluation result R may include a comment corresponding to each evaluation criterion as well as a score.
- Planability can be used, for example, to evaluate gaps, omissions, and the like in the planning of the proposer's proposal.
- the sustainability can be used, for example, to evaluate whether the proposal can be realized continuously in the proposal.
- Future potential can be used, for example, to objectively evaluate the future prospects of the proposer's proposal.
- Other evaluation criteria relate, for example, to the proposer himself or herself.
- Presentation ability can be used, for example, to evaluate how the proposer advances and explains the proposal.
- The estimated health value can be used to evaluate the health condition inferred from the proposer's appearance.
- The estimated age can be used, for example, to evaluate the age estimated from the proposer's appearance.
- FIG. 3A is a schematic view showing an example of the configuration of the evaluation support device 1.
- As the evaluation support device 1, an electronic device such as a personal computer (PC) is used.
- the evaluation support device 1 includes a housing 10, a CPU 101, a ROM 102, a RAM 103, a storage unit 104, and I / Fs 105 to 107.
- the respective configurations 101 to 107 are connected by an internal bus 110.
- a CPU (Central Processing Unit) 101 controls the entire evaluation support apparatus 1.
- a ROM (Read Only Memory) 102 stores an operation code of the CPU 101.
- a random access memory (RAM) 103 is a work area used when the CPU 101 operates.
- the storage unit 104 stores various types of information such as target information D and the like.
- a data storage device such as a solid state drive (SSD) or a floppy disk is used as the storage unit 104.
- The evaluation support device 1 may also have a GPU (Graphics Processing Unit), not shown. With a GPU, arithmetic processing can be performed at higher speed than usual.
- the I / F 105 is an interface for transmitting and receiving various information to and from the user terminal 2 via the public communication network 4.
- the I / F 106 is an interface for transmitting and receiving information with the input unit 108.
- a keyboard is used as the input unit 108, and a manager or the like of the evaluation support system 100 inputs various information or a control command of the evaluation support apparatus 1 via the input unit 108.
- the I / F 107 is an interface for transmitting and receiving various information to and from the output unit 109.
- the output unit 109 outputs various types of information stored in the storage unit 104, the processing status of the evaluation support apparatus 1, and the like.
- For example, a display is used as the output unit 109, and it may be, for example, a touch panel.
- FIG. 3B is a schematic view showing an example of the function of the evaluation support device 1.
- the evaluation support device 1 includes an acquisition unit 11, an evaluation unit 12, an output unit 14, an input unit 15, and an information DB 16.
- the evaluation support device 1 may include, for example, the update unit 13.
- the function shown in FIG. 3B is realized by the CPU 101 executing a program stored in the storage unit 104 or the like using the RAM 103 as a work area.
- each configuration 11 to 16 may be controlled by artificial intelligence, for example.
- artificial intelligence may be based on any known artificial intelligence technology.
- the information DB 16 includes a reference database in which past target information acquired in advance and reference information used for evaluation of the past target information are stored.
- the information DB 16 stores evaluation information obtained by evaluating the target information D, and also includes a database storing various information such as a format for displaying the evaluation result R based on the evaluation information and an evaluation standard.
- the reference database and the database are stored in the storage unit 104 embodied by an HDD, an SSD, or the like.
- Each of the configurations 11 to 15 stores various types of information in the information DB 16 or takes out various types of information as needed.
- the reference database stores three or more levels of association between past target information and reference information.
- The past target information and the reference information each comprise a plurality of data items and are linked by an association degree indicating the strength of their relationship, for example in three or more levels such as 10 levels or 5 levels (indicated by percentages and line characteristics in FIG. 4).
- For example, "content A" included in the past content data has an association degree of "80%" with "plan A" included in the planability of the reference information, and an association degree of "10%" with "sustainability A" included in the sustainability of the reference information.
- the reference database has, for example, an algorithm capable of calculating the degree of association.
- For example, a function (classifier) optimized based on the past target information, the reference information, and the association degrees may be used as the algorithm.
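- As a rough illustration (not part of the disclosure), the reference database described above could be represented as a mapping from items of past target information to reference-information items with percentage association degrees in three or more levels. All item names below are illustrative.

```python
# Association degrees (percentages, three or more levels) between items of
# past target information and items of reference information.
REFERENCE_DB = {
    "content A": {"plan A": 80, "sustainability A": 10},
    "content B": {"plan A": 20, "plan B": 50},
    "speech A":  {"presentation ability A": 70},
    "speech B":  {"presentation ability A": 30, "plan B": 15},
}

def association_degrees(past_item: str) -> dict:
    """Return the reference-information items linked to a past target item."""
    return REFERENCE_DB.get(past_item, {})

print(association_degrees("content A"))  # {'plan A': 80, 'sustainability A': 10}
```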
- the past target information includes data corresponding to the above-described target information D, and includes, for example, at least one of past presentation data, past voice data, and past facial expression data.
- the features of the respective data are the same as the respective data included in the target information D described above, and thus the description thereof is omitted.
- the reference information has data corresponding to the evaluation result R, and has, for example, planning, sustainability, future, presentation ability, estimated health value, estimated age, and evaluation price.
- As the reference information, the results obtained when the past target information was evaluated are used.
- the characteristic of each data is the same as each data included in the evaluation result R described above, and thus the description thereof is omitted.
- Because the reference information has the above data, an evaluation result R for the proposer's presentation can be generated.
- each said data is an example and can be set arbitrarily as needed.
- Past target information and reference information are stored, for example, in a video and audio data format in a reference database, and may be stored, for example, in a data format such as a numerical value, a matrix (vector), or a histogram.
- the degree of association is calculated based on the relationship between past target information and reference information.
- the degree of association is calculated using, for example, machine learning.
- For example, deep learning is used as the machine learning.
- Data previously processed by machine learning or the like may be used as the past presentation data, the past voice data, and the past facial expression data. That is, the result of applying machine learning to past target information acquired in advance may be used as these data.
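- The description states only that the association degree may be calculated by machine learning such as deep learning. As one hedged illustration of what such a calculation could look like, and not the patent's specific method, a small regression model can be fitted on numeric feature vectors derived from past target information, with the stored association degrees as targets. The feature construction and library choice below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical numeric feature vectors derived from past target information
# (e.g. from text, timing, voice, and facial expression features).
X_past = np.random.rand(200, 16)
# Association degrees (0..1) with one reference-information item, e.g. "plan A".
y_degree = np.random.rand(200)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_past, y_degree)

# Estimate the association degree of newly acquired target information D.
x_new = np.random.rand(1, 16)
estimated_degree = float(model.predict(x_new)[0])
print(f"estimated association degree with 'plan A': {estimated_degree:.2f}")
```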
- the degree of association may be calculated based on a relationship between reference information and a combination of two or more data included in past target information, as shown in FIG. 5, for example.
- For example, the combination of "content A" included in the past content data and "progress B" included in the past progress data of the past target information has an association degree of "80%" with "plan A" included in the planability of the reference information, and an association degree of "10%" with "plan B". In this case, when generating the evaluation result R, the accuracy can be improved and the range of options can be expanded.
- The past target information may also have intermediate data that is associated with each data item of the past target information and with the reference information.
- The intermediate data is linked to at least one of the past presentation data, the past voice data, and the past facial expression data through three or more levels of similarity, and is linked to the reference information through the association degree.
- the acquisition unit 11 acquires the target information D.
- the acquisition unit 11 may acquire the target information D from a storage medium such as, for example, a portable memory.
- the data format of the target information D is arbitrary, and for example, the acquisition unit 11 may convert it into an arbitrary data format.
- the acquisition unit 11 may extract, for example, only specific data included in the target information D.
- the acquisition unit 11 extracts at least one of content data, progress data, voice data S, and expression data F.
- An administrator or the like can arbitrarily set data to be extracted.
- the acquisition unit 11 includes, for example, a question unit 11 a.
- the question unit 11 a generates question information Q for the proposer based on the target information D, and outputs the question information Q to the user terminal 2 or the like via the output unit 14.
- the user terminal 2 or the like displays the acquired question information Q, for example, as shown in FIG.
- the question unit 11 a may generate and output, for example, the question information Q in the voice data format.
- The question unit 11a may generate and output the question information Q while acquiring the target information D (for example, in the middle of the proposer's presentation), or may generate and output the question information Q after acquiring the target information D (for example, at the end of the presentation).
- the number of times the question section 11 a generates and outputs the question information Q is arbitrary.
- the question unit 11 a generates question information Q with reference to, for example, a question database.
- the question database has the past target information acquired in advance and the past question information used for the question of the past target information, and is stored in the information DB 16.
- For example, character strings such as morphemes are stored in the question database, and the question unit 11a may generate the question information Q by combining the character strings stored in the question database.
- The question unit 11a may also generate the question information Q based on, for example, the degree of similarity between the target information D and past target information, or by referring to question sentences published on the Internet.
- In the question database, for example, as shown in FIG. 8, three or more levels of question association degrees are stored between the past target information and the past question information.
- The past target information and the past question information are linked by a question association degree indicating the strength of their relationship, for example in three or more levels such as 10 levels or 5 levels (indicated by percentages and line characteristics in FIG. 8).
- For example, "content A" included in the past content data has a question association degree of "60%" with "question A" included in the past question information, and a question association degree of "30%" with "question C".
- the question unit 11 a generates and outputs the past question information having the highest degree of question association as the question information Q.
- When generating the question information Q, it is also possible to set an arbitrary threshold or range of question association degrees to be referred to.
- the question database has, for example, an algorithm capable of calculating the degree of question association.
- For example, a function (classifier) optimized based on the past target information, the past question information, and the question association degrees may be used.
- The past target information and the past question information are stored in the question database, for example, in the form of text, video, or audio data, or may be stored, for example, in a data format such as numerical values or matrices (vectors).
- the question association degree is calculated based on the relationship between past target information and past question information.
- the question association degree is calculated using, for example, machine learning. For example, deep learning is used for machine learning.
- the degree of question association may be calculated based on a combination of two or more pieces of data of past target information.
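- A minimal sketch of the question database and of selecting the past question with the highest question association degree as the question information Q, following the description above. The data structure, names, and threshold handling are illustrative assumptions.

```python
from typing import Optional

# Question association degrees (percentages) between items of past target
# information and past question information.
QUESTION_DB = {
    "content A": {"question A": 60, "question C": 30},
    "content B": {"question B": 70, "question A": 20},
}

def select_question(past_item: str, threshold: int = 0) -> Optional[str]:
    """Return the past question with the highest question association degree
    (at or above an optional threshold) to be output as question information Q."""
    candidates = QUESTION_DB.get(past_item, {})
    eligible = {q: deg for q, deg in candidates.items() if deg >= threshold}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

print(select_question("content A"))                # 'question A'
print(select_question("content A", threshold=70))  # None
```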
- the acquisition unit 11 includes, for example, an addition unit 11b.
- The adding unit 11b acquires the answer information A based on the proposer's answer to the question information Q. For example, as shown in FIG. 7, the answer information A is generated based on the content of the proposer's answer to the question information Q displayed on the user terminal 2.
- the adding unit 11 b may extract the response information A from the acquired target information D, or may acquire the response information A as information different from the target information D, for example.
- the addition unit 11 b may obtain the answer information A in the text data format, or may obtain the answer information A in the voice data format, for example.
- the question unit 11a may generate the question information Q for the proposer again based on the answer information A.
- the adding unit 11 b may, for example, obtain the answer information A as a part of the target information D, and may classify the answer information A into any of the presentation data P, the voice data S, and the expression data F, for example. Further, when the adding unit 11 b acquires the response information A as information different from the target information D, the reference database has past response information corresponding to the response information A.
- the evaluation unit 12 acquires evaluation information including the first degree of association in three or more stages between the target information D and the reference information.
- For example, the evaluation unit 12 refers to the reference database stored in the storage unit 104, selects past target information that matches or is similar to the target information D, and calculates the association degree linked to the selected past target information as the first association degree.
- the evaluation unit 12 may calculate, for example, the first degree of association between the target information D and the reference data, using the reference database as an algorithm of the classifier or an optimized function.
- For example, when the presentation data P included in the target information D matches or is similar to "content A", a first association degree of "80%" with "plan A" of the reference information, "10%" with "sustainability A", and "1%" with "presentation B" is calculated.
- For example, when the voice data S is similar to "speech A" and "speech B", values obtained by multiplying the association degrees between "speech A" and "speech B" and the reference information by an arbitrary coefficient are calculated as the first association degrees.
- When the target information D has a plurality of data items, for example, a first association degree corresponding to each of the plurality of data items is calculated.
- After calculating the first association degree, the evaluation unit 12 acquires evaluation information including the target information D, the reference information, and the first association degree. Note that the evaluation unit 12 may calculate the first association degree with reference to, for example, the reference database illustrated in FIG. 5 or FIG. 6.
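- To make the evaluation step concrete, the following hedged sketch matches an item of target information D against past target information using a simple textual similarity and scales the stored association degrees by that similarity to obtain first association degrees. The similarity measure and the coefficient are illustrative assumptions, not the method prescribed by the description.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """A simple textual similarity in [0, 1] (illustrative only)."""
    return SequenceMatcher(None, a, b).ratio()

def first_association_degrees(target_item: str, reference_db: dict,
                              min_similarity: float = 0.5) -> dict:
    """Match a target-information item against past target information and
    scale the stored association degrees by the similarity coefficient."""
    result: dict = {}
    for past_item, degrees in reference_db.items():
        sim = similarity(target_item, past_item)
        if sim < min_similarity:
            continue
        for ref_item, degree in degrees.items():
            scaled = degree * sim
            result[ref_item] = max(result.get(ref_item, 0.0), scaled)
    return result

REFERENCE_DB = {
    "content A": {"plan A": 80, "sustainability A": 10},
    "speech A":  {"presentation ability A": 70},
}
evaluation_info = first_association_degrees("content A", REFERENCE_DB)
print(evaluation_info)  # e.g. {'plan A': 80.0, 'sustainability A': 10.0}
```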
- the updating unit 13 reflects the relationship in the degree of association, for example, when newly acquiring a relationship between past target information and reference information.
- As data to be reflected in the association degree, for example, update data comprising target information D newly acquired by the administrator or the like and reference information corresponding to the evaluation result R of that target information D is used.
- Alternatively, learning data or the like created by the administrator or the like based on the evaluation result R is used.
- the output unit 14 generates an evaluation result R based on the evaluation information, and outputs the evaluation result R.
- the output unit 14 generates an evaluation result R for the target information D based on, for example, the first degree of association of the evaluation information.
- the output unit 14 may generate the evaluation result R, for example, without processing the evaluation information.
- the output unit 14 outputs the generated evaluation result R.
- the output unit 14 may output the evaluation result R to the output unit 109 via the I / F 107, or may output the evaluation result R to an arbitrary device such as the user terminal 2 via the I / F 105, for example.
- the evaluation result R includes, for example, a first evaluation selected from two or more evaluation criteria set in advance.
- As the evaluation criteria, items similar to the above-described reference information (planability, sustainability, future potential, presentation ability, estimated health value, estimated age, etc.) are used; for example, determination criteria combining the above items may also be used.
- the evaluation criteria may be set to three or more levels, such as 0 to 100, or may be set to, for example, two stages of "pass" and "fail".
- the evaluation criteria are set in advance by a manager or the like and stored in the information DB 16.
- the first evaluation is selected from the evaluation criteria based on the reference information and the first degree of association included in the evaluation information.
- the first evaluation indicates, for example, an “OO point” or the like selected from evaluation criteria set at 0 to 100 points, and is acquired as a result of comprehensively evaluating the evaluation information.
- the first evaluation may be acquired, for example, for each evaluation criterion corresponding to each reference information.
- For example, the output unit 14 converts the first degree of association included in the evaluation information into a first evaluation and generates an evaluation result R.
- the output unit 14 outputs the generated evaluation result R.
- the output unit 14 outputs the evaluation result R to the output unit 109 via the I / F 107.
- the output unit 14 transmits the evaluation result R to the user terminal 2 owned by the proposer, for example, via the I / F 105.
- the output unit 14 transmits, for example, the video of the presentation of the proposer and the evaluation result R to another user terminal owned by another user.
- the input unit 15 receives the target information D transmitted from the user terminal 2 via the I / F 105, and also receives various information input from the input unit 108 via the I / F 106, for example. Besides, the input unit 15 may receive, for example, the target information D and the like stored in the server 3. The input unit 15 may receive, for example, the target information D via a storage medium such as a portable memory. The input unit 15 receives, for example, update data created by a manager or the like based on the evaluation result R, data for learning used to update the degree of association, and the like.
- the user terminal 2 is owned by a proposer who uses the evaluation support system 100.
- the user terminal 2 in addition to a personal computer (PC), an electronic device such as a smartphone or a tablet terminal is used, for example.
- the user terminal 2 includes a camera and a microphone for acquiring a video of a presentation of the proposer, and a generation unit that generates the target information D based on the acquired video.
- the user terminal 2 may have, for example, the same configuration and function as the evaluation support device 1 described above. That is, the evaluation support system 100 in the present embodiment may use, for example, the user terminal 2 instead of the evaluation support device 1.
- The user terminal 2 may also indicate, for example, a terminal (another user terminal) owned by another user who holds an open call for presentations.
- the server 3 stores data (database) related to various information.
- As the data, for example, information transmitted via the public communication network 4 is accumulated.
- information similar to the information DB 16 may be stored in the server 3, and transmission / reception of various information with the evaluation support apparatus 1 may be performed via the public communication network 4.
- a database server on a network may be used as the server 3.
- the server 3 may be used instead of the storage unit 104 and the information DB 16 described above.
- the public communication network 4 (network) is an Internet network or the like to which the evaluation support device 1 or the like is connected via a communication circuit.
- the public communication network 4 may be configured by a so-called optical fiber communication network.
- the public communication network 4 is not limited to a wired communication network, and may be realized by a wireless communication network.
- FIG. 9 is a flowchart showing an example of the operation of the evaluation support system 100 in the present embodiment.
- target information D to be evaluated is acquired (acquisition means S110).
- the acquisition unit 11 may acquire the target information D via the input unit 15 or may acquire the target information D via a storage medium such as a portable memory, for example.
- the acquisition unit 11 may acquire, for example, personal information in addition to the target information D.
- the acquisition unit 11 may store the acquired target information D and the like in the information DB 16.
- the acquisition unit 11 extracts specific data included in the target information D.
- the acquisition unit 11 may store the extracted data in the information DB 16.
- evaluation information including the first degree of association between the target information D and the reference information is acquired (evaluation means S120).
- the evaluation unit 12 acquires the target information D from the acquisition unit 11 or the information DB 16, and acquires a reference database from the information DB 16.
- an evaluation result R is generated based on the evaluation information, and the evaluation result R is output (output means S130).
- the output unit 14 may transmit, for example, the evaluation result R to the user terminal 2 owned by the proposer.
- the output unit 14 may acquire the evaluation data from the evaluation unit 12 or the information DB 16, and may acquire, for example, a format for displaying the evaluation result R from the information DB 16.
- the output unit 14 generates an evaluation result R with reference to, for example, a format based on the evaluation information.
- the evaluation result R may include, for example, a first evaluation selected from two or more evaluation criteria set in advance.
- For example, the output unit 14 selects a first evaluation from evaluation criteria set in advance in 2 to 100 levels. For example, when five levels are set in advance as the evaluation criteria corresponding to a first association degree expressed as a percentage, namely "very good: 80 to 100%", "good: 60 to 79%", "ordinary: 40 to 59%", "bad: 20 to 39%", and "very bad: 0 to 19%", the output unit 14 selects "ordinary" as the first evaluation when a first association degree of "45%" is acquired.
- the output unit 14 acquires, for example, an evaluation criterion from the information DB 16, and generates and outputs an evaluation result R based on the evaluation information.
- the output unit 14 outputs the evaluation result R to the user terminal 2 or the output unit 109.
- The output unit 14 may output the evaluation result R based on, for example, the result of comparing the first association degree with a preset threshold. In this case, for example, when the threshold is set to "80% or more", the evaluation result R is output only when the first association degree is 80% or more.
- the condition of the threshold can be set arbitrarily.
- The output unit 14 may, for example, calculate a score for each first association degree linked to the reference information, or may calculate a combined value or an average value of the first association degrees.
- The output unit 14 may also select the first evaluation based on, for example, the result of comparing the first association degree with a preset threshold. In this case, for example, when the evaluation criteria are set in two levels with "80%" as the threshold, one of the two levels (a first association degree of 80% or more, or less than 80%) is selected as the first evaluation. The output unit 14 may generate and output the evaluation result R based only on first evaluations of, for example, 80% or more.
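- The output step described above can be sketched as follows: a first association degree expressed as a percentage is mapped to the five preset evaluation criteria, and an optional threshold is applied before the evaluation result R is output. The band boundaries follow the example in the description; the function names are assumptions.

```python
from typing import Optional

EVALUATION_BANDS = [
    (80, 100, "very good"),
    (60, 79,  "good"),
    (40, 59,  "ordinary"),
    (20, 39,  "bad"),
    (0,  19,  "very bad"),
]

def first_evaluation(first_degree: int) -> str:
    """Select the first evaluation from the preset five-level criteria."""
    for low, high, label in EVALUATION_BANDS:
        if low <= first_degree <= high:
            return label
    raise ValueError("first association degree must be between 0 and 100")

def output_evaluation_result(first_degree: int,
                             threshold: Optional[int] = None) -> Optional[str]:
    """Generate the evaluation result R, or None if below the output threshold."""
    if threshold is not None and first_degree < threshold:
        return None  # e.g. output only when the degree is 80% or more
    return first_evaluation(first_degree)

print(output_evaluation_result(45))                # 'ordinary'
print(output_evaluation_result(45, threshold=80))  # None
```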
- the update unit 13 acquires, for example, update data newly acquired by a manager or the like, and reflects the update data in the degree of association.
- the updating unit 13 acquires, for example, learning data created by the administrator based on the evaluation result R, and reflects it on the degree of association.
- the updating unit 13 calculates and updates the degree of association using, for example, machine learning, and for example, deep learning is used for machine learning.
- In the update means S140, when a relationship between past target information and reference information is newly acquired, the relationship is reflected in the association degree. Therefore, the association degree can easily be updated, and the accuracy of the evaluation result R can be further enhanced.
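- As a hedged illustration of the update means S140, a newly acquired relationship between past target information and reference information could be blended into the stored association degree, for example with a simple weighted update. The blending rule and learning rate are assumptions and are not specified in the description.

```python
def update_association(reference_db: dict, past_item: str, ref_item: str,
                       observed_degree: float, learning_rate: float = 0.1) -> None:
    """Reflect a newly acquired relationship in the stored association degree."""
    degrees = reference_db.setdefault(past_item, {})
    current = degrees.get(ref_item, 0.0)
    # Blend the existing degree with the newly observed one.
    degrees[ref_item] = (1 - learning_rate) * current + learning_rate * observed_degree

REFERENCE_DB = {"content A": {"plan A": 80.0}}
# Update data created by the administrator based on a new evaluation result R.
update_association(REFERENCE_DB, "content A", "plan A", observed_degree=100.0)
print(REFERENCE_DB)  # {'content A': {'plan A': 82.0}}
```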
- the addition means S110b acquires the answer information A based on the answer of the proposer to the question information Q.
- the adding unit 11 b may add the response information A to the target information D.
- the evaluation unit S120 is implemented using the target information D to which the response information A is added.
- the adding unit 11 b may extract, for example, the response information A from the target information D.
- the acquisition unit 11 acquires the target information D including the response information A.
- Then, as in the operation described above, the evaluation means S120 refers to the reference database and acquires evaluation information including the first degree of association between the target information D and the reference information.
- the target information D is generated based on the video of the proposer's presentation. Therefore, it is possible to obtain a quantitative evaluation result R for the content of the presentation of the proposer. This makes it possible to connect the quantitative evaluation result R to the social credit etc. of the proposer.
- the question means S110a generates question information Q for the proposer, and outputs the question information Q.
- The addition means S110b acquires the answer information A based on the proposer's answer to the question information Q. Therefore, even when the amount of information contained in the proposer's presentation is small, the answer information A can be acquired using the question information Q and the amount of information can be supplemented. This makes it possible to improve the accuracy of the evaluation result R.
- the question means S110a refers to, for example, the question database shown in FIG. 12 and generates question information Q.
- the question database shown in FIG. 12 differs from the question database shown in FIG. 8 in that the reference information possessed by the reference database is linked with past question information and the degree of question association. That is, based on the evaluation information acquired in the evaluation means S120, the question means S110a and the addition means S110b are implemented.
- Then, as in the operation described above, the evaluation means S120 refers to the reference database and acquires evaluation information including the first degree of association between the target information D and the reference information.
- the target information D is generated based on the video of the proposer's presentation. Therefore, it is possible to obtain a quantitative evaluation result R for the content of the presentation of the proposer. This makes it possible to connect the quantitative evaluation result R to the social credit etc. of the proposer.
- The reference information and the like (past question information, intermediate data) may be displayed in descending order of association degree or the like. By displaying items in this order, an administrator or the like can preferentially review the tendencies that are most likely to apply to the proposer. On the other hand, since tendencies with a low likelihood of applying to the proposer are displayed rather than excluded, the administrator or the like can make a selection without overlooking anything.
- Evaluation can thus be performed without omission even when the association degree or the like is extremely low, such as 1%. Even reference information with a very low association degree indicates a connection as a weak signal, which makes it possible to suppress oversights and misrecognition.
- the acquiring unit 11 may acquire, for example, the degree of attention of the proposer in the social networking service.
- As the proposer's degree of attention, for example, the number of followers on Twitter (registered trademark) or Facebook (registered trademark), or the degree of connection with other users, is used. Therefore, information other than the content of the proposer's presentation can also be reflected in the evaluation result R. This makes it possible to easily generate an optimal evaluation result R according to the field or situation of the presentation.
- the video of the proposer is desirably 30 seconds or more and 30 minutes or less, and more preferably 2 minutes or more and 5 minutes or less. Thereby, it is possible to suppress the evaluation variation for each proposer, and it is possible to easily realize the quantitative evaluation result R.
- the target information D may be generated, for example, on the basis of a video of a presentation for the purpose of funding or supporting a world adventure in which the proposer takes several months to several years.
- The target information D may also be generated based on a video of a presentation whose purpose is to procure funds needed by an athlete; in this case, the proposer is, for example, the athlete or a supporter of the athlete.
- the target information D may be generated based on, for example, a video of a presentation for which the proposer aims for a support request or provision other than a fund.
- The requested or offered support may be, for example, credit, electronic money, points, and/or virtual currency, or may be, for example, personal support, educational support, technical guidance, and the like.
- the target information D may be generated based on a video of a presentation for the purpose of searching for a side job for a time when the proposer is open, searching for a job, or requesting personal assistance, for example.
- As the video of the presentation, for example, a video based on a description of the specific services to be provided may be used, such as "I will help you move", "I will accompany you shopping", "I will guide you on a walk around the city", "I will give you fashion advice", "I will give you advice on your crush", or "I will chat with you freely".
- A video based on a description of content to the effect of serving oats, pizza, and the like may also be used.
- the target information D may be generated based on the video of the presentation for the purpose of, for example, a human request or provision such as a periodical request such as a babysitter by the proposer.
- the target information D may be generated based on, for example, a video of a presentation intended to support a person, a service, or a system that the proposer wants to support.
- The target information D may be generated based on, for example, a video in which a performance, such as an entertainer's act or musical performance, serves as the presentation.
- the proposer generates the target information D for the purpose of, for example, attracting customers or financial support.
- As the video of the presentation, for example, a video based on content concerning the transfer or sale of an account used for personal information on the Internet (for example, a Facebook (registered trademark) account) may be used.
- Other users can also check information such as the texture of a product's fabric, its comfort, and its sizing, which cannot be known unless the product is actually at hand.
- Only the target information D of proposers who have won a game or quiz held in advance may be acquired and transmitted to other users.
- In this case, only the target information D of proposers selected in advance is provided to other users, so the quality of the target information D can be improved.
- the contents regarding business investment may be used as a video of a presentation. Therefore, other users can consider business investment decisions via the evaluation support system 100.
- As the video of the presentation, a video based on a description of content regarding production outsourcing or consignment may be used. In this case, a quantitative evaluation result R can be obtained based on presentation videos from a plurality of companies, making it easy to select a contractor company or the like.
- Another user may select a field of interest and acquire presentation videos and the like of proposers in the selected field. Moreover, an additional service may be provided after the presentation video and the like of a proposer are acquired. For example, another user may select a social issue of interest, and the social issues may include support activities such as sick-child support. Presentation videos and the like are grouped based on these social issues, and other users can make donations to a group of interest.
- As the video of the presentation, for example, a video based on content recommending another proposer may be used. In this case, in addition to the evaluation result R that quantitatively evaluates the content of the presentation, the degree to which a proposer is supported by other proposers can be confirmed.
- 1: Evaluation support device, 2: User terminal, 3: Server, 4: Public communication network, 10: Housing, 11: Acquisition unit, 11a: Question unit, 11b: Addition unit, 12: Evaluation unit, 13: Update unit, 14: Output unit, 15: Input unit, 16: Information DB, 100: Evaluation support system, 101: CPU, 102: ROM, 103: RAM, 104: Storage unit, 105: I/F, 106: I/F, 107: I/F, 108: Input unit, 109: Output unit, 110: Internal bus, S110: Acquisition means, S110a: Question means, S110b: Addition means, S120: Evaluation means, S130: Output means, S140: Update means
Abstract
[Problem] To provide an evaluation assistance system and evaluation assistance device capable of quantitatively evaluating the content of a proposer's presentation. [Solution] Provided is an evaluation assistance system 100 for evaluating the content of a proposer's presentation via a network 4, said system being characterized by comprising: an acquisition means S110 for acquiring information of interest D generated on the basis of a video of the proposer's presentation; a reference database wherein is stored previously acquired past information of interest, reference information used in the evaluation of the past information of interest, and associations of at least three levels between the past information of interest and the reference information; an evaluation means S120 for referring to the reference database and acquiring evaluation information including a first association which is an association of at least three levels between the information of interest D and the reference information; and an output means S130 for generating an evaluation result R on the basis of the evaluation information and outputting the evaluation result R.
Description
The present invention relates to an evaluation support system and an evaluation support apparatus for evaluating the content of a proposer's presentation.
In recent years, for services on the Internet, there has been active development of environments that can promote the exchange of services between individuals based on, for example, the social credit of a user (proposer). For example, services in which virtual stocks or the like are issued based on a proposer's social credit and other users buy and sell those virtual stocks have attracted attention. In such fields, the proposer's social credit is evaluated by the degree of attention the proposer receives on social networking services (SNS), for example by the number of followers on Twitter (registered trademark) or Facebook (registered trademark).
On the other hand, when virtual stocks or the like based on a proposer are issued, the users who buy and sell them mainly consider the proposer's ideas, including their future potential, as the basis for their trading decisions. For this reason, when the proposer receives a high degree of attention on other social networking services, the proposer's social credit may be valued more highly than warranted even if the proposer's ideas are questionable. Conversely, for a proposer who does not use other social networking services, the proposer's social credit may be valued low even if the ideas are promising. It is therefore difficult for conventional evaluation methods to appropriately evaluate a proposer's social credit.
One way to appropriately evaluate a proposer's social credit is to evaluate a presentation containing the proposer's ideas. However, for an operator or the like to evaluate each proposer's presentation, an enormous amount of time and evaluator labor cost must be spent. In addition, since large variations in evaluation between evaluators are expected, quantitative evaluation remains an issue.
この点、提案者のプレゼンテーションを評価するために、例えば特許文献1に開示されたプレゼンテーション評価装置等が提案されている。
In this regard, in order to evaluate the presenter's presentation, for example, a presentation evaluation device disclosed in Patent Document 1 has been proposed.
特許文献1に開示されたプレゼンテーション評価装置では、プレゼンテーションが開始されると、スライドを表示する。そして、前記視線方向検知装置により検知された視線の評価を加算する。そして、この装置は、各ページをめくるタイミングをチェックする。この結果を受けて、ページをめくるタイミングとアジェンダから抽出された時間との比較を行う。その結果、大きなずれが発生している場合には、プレゼンテーション画面にその旨が分かる警告の表示を行う。例えば、予定の時間より早く次のページに進んでしまった場合には、「早く進みました(X秒)」と表示する。
The presentation evaluation device disclosed in Patent Document 1 displays a slide when the presentation is started. Then, the evaluation of the sight line detected by the sight direction detection device is added. The device then checks the timing of turning each page. Based on this result, the page turning timing is compared with the time extracted from the agenda. As a result, when a large deviation occurs, a warning is displayed on the presentation screen to indicate that. For example, if the user advances to the next page earlier than the scheduled time, "Progressed early (X seconds)" is displayed.
しかしながら、特許文献1の開示技術では、発表資料に対してどの程度の時間講演を費やしたか等に基づく評価ができるに過ぎず、プレゼンテーションの内容を評価することができない。このような事情により、提案者のプレゼンテーションの内容を定量的に評価できることが望まれている。
However, the disclosed technology of Patent Document 1 can only evaluate based on how long a lecture has been spent on presentation material, etc., and can not evaluate the content of the presentation. Under such circumstances, it is desirable to be able to quantitatively evaluate the content of the presenter's presentation.
The present invention has therefore been devised in view of the above problems, and an object of the present invention is to provide an evaluation support system and an evaluation support device capable of quantitatively evaluating the content of a proposer's presentation.
An evaluation support system according to a first aspect of the present invention is an evaluation support system for evaluating the content of a proposer's presentation via a network, and comprises: acquisition means for acquiring target information generated based on a video of the proposer's presentation; a reference database storing previously acquired past target information, reference information used for evaluating the past target information, and degrees of association of three or more levels between the past target information and the reference information; evaluation means for referring to the reference database and acquiring evaluation information including a first degree of association of three or more levels between the target information and the reference information; and output means for generating an evaluation result based on the evaluation information and outputting the evaluation result.
In an evaluation support system according to a second aspect, in the first aspect, the acquisition means includes: question means for generating question information for the proposer based on the target information and outputting the question information; and addition means for acquiring answer information based on the proposer's answer to the question information.
In an evaluation support system according to a third aspect, in the second aspect, the question means generates the question information with reference to the evaluation information acquired by the evaluation means.
In an evaluation support system according to a fourth aspect, in any one of the first to third aspects, the acquisition means extracts, from the target information, at least one of content data relating to the content of the material used in the proposer's presentation, progress data relating to how the proposer's presentation proceeds, voice data relating to the proposer's voice, and expression data relating to the features of the proposer's face.
In an evaluation support system according to a fifth aspect, in the fourth aspect, the expression data includes first expression data relating to features of the proposer's eyes and face data relating to features of the proposer's entire face, and the evaluation result includes an estimated health value.
In an evaluation support system according to a sixth aspect, in any one of the first to fifth aspects, the acquisition means acquires target information generated based on a video of the proposer's presentation lasting from 30 seconds to 30 minutes.
An evaluation support system according to a seventh aspect, in any one of the first to sixth aspects, further comprises update means for, when a relationship between the past target information and the reference information is newly acquired, reflecting that relationship in the degrees of association.
In an evaluation support system according to an eighth aspect, in any one of the first to seventh aspects, the acquisition means acquires the degree of attention the proposer receives on a social networking service.
An evaluation support device according to a ninth aspect is an evaluation support device for evaluating the content of a proposer's presentation, and comprises: an acquisition unit that acquires target information generated based on a video of the proposer's presentation; a reference database storing previously acquired past target information, reference information used for evaluating the past target information, and degrees of association of three or more levels between the past target information and the reference information; an evaluation unit that refers to the reference database and acquires evaluation information including a first degree of association of three or more levels between the target information and the reference information; and an output unit that generates an evaluation result based on the evaluation information and outputs the evaluation result.
According to the first to eighth aspects, the evaluation means refers to the reference database and acquires evaluation information including the first degree of association between the target information and the reference information. Because the target information is generated based on a video of the proposer's presentation, a quantitative evaluation result can be obtained for the content of the proposer's presentation. This makes it possible to link the quantitative evaluation result to, for example, the proposer's social credit.
In particular, according to the second aspect, the question means generates question information for the proposer and outputs it, and the addition means acquires answer information based on the proposer's answer to the question information. Even when the proposer's presentation contains only a small amount of information, answer information can therefore be obtained through the question information to supplement it. This makes it possible to improve the accuracy of the evaluation result.
In particular, according to the third aspect, the question means generates the question information with reference to the evaluation information, so question information suited to the evaluation information can be generated. This makes it possible to further improve the accuracy of the evaluation result.
In particular, according to the fourth aspect, the acquisition means extracts at least one of content data, progress data, voice data, and expression data from the target information. An evaluation result can therefore be generated independently for each kind of data or from a combination of them, which makes it possible to generate an evaluation result best suited to the field and circumstances of the presentation.
In particular, according to the fifth aspect, the evaluation result includes the proposer's estimated health value, so the proposer's state of health at the time of the presentation can also be included in the evaluation criteria. This makes it possible to further improve the accuracy of the evaluation result.
In particular, according to the sixth aspect, the acquisition means acquires target information generated based on a presentation video lasting from 30 seconds to 30 minutes. Variation in the amount of information contained in each piece of target information can therefore be suppressed, which makes it easy to obtain quantitative evaluation results.
In particular, according to the seventh aspect, when a relationship between past target information and reference information is newly acquired, the update means reflects that relationship in the degrees of association. The degrees of association can therefore be updated easily, and the accuracy of the evaluation result can be further improved.
According to the ninth aspect, the evaluation unit refers to the reference database and acquires evaluation information including the first degree of association between the target information and the reference information. Because the target information is generated based on a video of the proposer's presentation, a quantitative evaluation result can be obtained for the content of the proposer's presentation. This makes it possible to link the quantitative evaluation result to, for example, the proposer's social credit.
An example of an evaluation support system according to an embodiment of the present invention is described below with reference to the drawings.
(Embodiment: Configuration of the Evaluation Support System 100)
An example of the configuration of the evaluation support system 100 according to the embodiment is described with reference to FIG. 1. FIG. 1 is a block diagram showing the overall configuration of the evaluation support system 100 in the present embodiment.
As shown in FIG. 1, the evaluation support system 100 includes an evaluation support device 1. The evaluation support device 1 is connected to a user terminal 2 via, for example, a public communication network 4 (network), and may also be connected to, for example, a server 3.
The evaluation support system 100 is used mainly to deploy services on the Internet. The evaluation support system 100 can output a quantitative evaluation result R for target information D generated based on a video of a user's (proposer's) presentation. The administrator of the evaluation support system 100 and others can therefore examine the proposer's social credit based on the evaluation result R and, for example, set virtual shares or the like corresponding to that social credit. The proposer can also use the evaluation result R to consider, for example, improvements to the presentation. Furthermore, the target information D and the evaluation result R do not have to be published on the Internet, so the content of the proposer's presentation can be evaluated without being made public. The evaluation support system 100 may also transmit the evaluation result R to, for example, the user terminal 2 owned by the proposer, so that the proposer can grasp the quantitative evaluation of the presentation. The evaluation support system 100 can also transmit the video of the proposer's presentation and the evaluation result R to other user terminals owned by other users, so that when another user holds an open call for presentations or the like, quantitative screening support can be provided through the evaluation support system 100.
<Evaluation support device 1>
The evaluation support device 1 in the present embodiment is used mainly by the administrator of the evaluation support system 100 and the like. The evaluation support device 1 acquires target information D and outputs an evaluation result R for the target information D. Based on the evaluation result R obtained by the evaluation support device 1, other users, the administrator, and so on can examine, for example, the proposer's social credit and the quality of the presentation video.
As shown for example in FIG. 2(a), the evaluation support device 1 acquires, from the user terminal 2, target information D generated by the user terminal 2. The target information D is generated from a video or the like of the proposer's presentation using the camera and microphone of the user terminal 2. The target information D includes, for example, at least one of presentation data P, voice data S, and expression data F. The evaluation support device 1 extracts the presentation data P and the like from the acquired target information D as necessary.
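Purely as an illustration of the data handled above, the target information D can be pictured as a container whose presentation data P, voice data S, and expression data F are all optional; the field names in the following Python sketch are assumptions made for the example and are not prescribed by the embodiment.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PresentationData:                     # presentation data P
    content_text: str = ""                  # content data: material converted to text
    slide_times_sec: List[float] = field(default_factory=list)  # progress data: time per slide

@dataclass
class TargetInformation:                    # target information D
    presentation: Optional[PresentationData] = None   # P
    voice_text: Optional[str] = None                   # voice data S, e.g. utterances as text
    face_features: Optional[List[float]] = None        # expression data F, e.g. a facial feature vector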
The presentation data P includes, for example, at least one of content data relating to the content of the material the proposer used in the presentation and progress data relating to how the proposer's presentation proceeds. The content data includes data obtained by converting the content of the material into text, and may also include image data such as figures and graphs. The progress data includes data on the time spent explaining each piece of material and on the running time of the whole presentation, and includes, for example, data on the order in which the materials are used.
The voice data S includes, for example, at least one of utterance data relating to the proposer's statements and tone data relating to the proposer's tone of voice. The utterance data includes, for example, data obtained by converting the proposer's speech into text. The tone data includes, for example, data on the proposer's speaking speed, volume, intonation, pauses between statements, and the like.
The expression data F includes, for example, at least one of first expression data relating to features of the proposer's eyes and face data relating to features of the proposer's entire face. The first expression data includes data on features such as the proposer's gaze, the expressiveness of the eyes, and the line of sight. The face data includes data on features such as the proposer's complexion, wrinkles, hair, mouth, and nose.
As shown for example in FIG. 2(b), the evaluation support device 1 generates an evaluation result R for the target information D and outputs the evaluation result R to the user terminal 2. The evaluation result R includes, for example, at least one of planning, sustainability, future potential, presentation ability, estimated health value, estimated age, and evaluation price. For each item, the evaluation result R may include a score and may also include a comment. The evaluation result R may include an evaluation (first evaluation) selected from evaluation criteria, using at least one of planning, sustainability, future potential, presentation ability, estimated health value, estimated age, and evaluation price as the criterion. In this case, the evaluation criteria can be set in advance to two or more levels, for example to a score format with 101 levels from 0 to 100 points. In FIG. 2(b), the first evaluation "XX points" for the evaluation criterion "planning" is displayed. The evaluation result R may include a score for each evaluation criterion and may also include a comment.
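For illustration only, the evaluation result R described above can be pictured as a score on the 0 to 100 point scale, together with an optional comment, for each evaluation criterion; the field names in this sketch are assumptions for the example.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CriterionResult:
    score: int                       # first evaluation, e.g. 0 to 100 points
    comment: Optional[str] = None    # optional comment for the criterion

@dataclass
class EvaluationResult:              # evaluation result R
    criteria: Dict[str, CriterionResult] = field(default_factory=dict)

r = EvaluationResult({"planning": CriterionResult(80, "few gaps in the plan")})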
Planning, sustainability, and future potential are evaluation criteria directed mainly at the content of the proposer's proposal. Planning can be used, for example, to evaluate gaps or omissions in the plan contained in the proposal. Sustainability can be used, for example, to evaluate whether the proposal can be carried out continuously. Future potential can be used, for example, to evaluate the objective prospects of the proposal.
Among the reference information, presentation ability, estimated health value, and estimated age are evaluation criteria directed at the proposer themselves. Presentation ability can be used, for example, to evaluate how the proposer advances and explains the proposal. The estimated health value can be used to evaluate the state of health inferred from the proposer's appearance. The estimated age can be used, for example, to evaluate the age inferred from the proposer's appearance.
FIG. 3(a) is a schematic diagram showing an example of the configuration of the evaluation support device 1. An electronic device such as a personal computer (PC) is used as the evaluation support device 1. The evaluation support device 1 includes a housing 10, a CPU 101, a ROM 102, a RAM 103, a storage unit 104, and I/Fs 105 to 107. The components 101 to 107 are connected by an internal bus 110.
The CPU (Central Processing Unit) 101 controls the entire evaluation support device 1. The ROM (Read Only Memory) 102 stores the operation code of the CPU 101. The RAM (Random Access Memory) 103 is a work area used while the CPU 101 operates. The storage unit 104 stores various kinds of information such as the target information D. As the storage unit 104, a data storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or a floppy disk is used. The evaluation support device 1 may also have a GPU (Graphics Processing Unit), not shown; having a GPU enables faster arithmetic processing than usual.
The I/F 105 is an interface for exchanging various kinds of information with the user terminal 2 and the like via the public communication network 4. The I/F 106 is an interface for exchanging information with an input part 108. A keyboard, for example, is used as the input part 108, and the administrator of the evaluation support system 100 or the like enters various kinds of information, control commands for the evaluation support device 1, and so on via the input part 108. The I/F 107 is an interface for exchanging various kinds of information with an output part 109. The output part 109 outputs the various kinds of information stored in the storage unit 104, the processing status of the evaluation support device 1, and the like. A display is used as the output part 109, and it may be, for example, a touch panel.
FIG. 3(b) is a schematic diagram showing an example of the functions of the evaluation support device 1. The evaluation support device 1 includes an acquisition unit 11, an evaluation unit 12, an output unit 14, an input unit 15, and an information DB 16, and may also include, for example, an update unit 13. The functions shown in FIG. 3(b) are realized by the CPU 101 executing programs stored in the storage unit 104 and the like, using the RAM 103 as a work area. The components 11 to 16 may also be controlled by artificial intelligence, for example, where "artificial intelligence" may be based on any known artificial intelligence technology.
<Information DB 16>
The information DB 16 includes a reference database that stores previously acquired past target information and the reference information used to evaluate that past target information. The information DB 16 also stores the evaluation information obtained by evaluating the target information D, and includes a database storing various other information such as the format used to display the evaluation result R based on the evaluation information and the evaluation criteria. The reference database and the other databases are stored in the storage unit 104 implemented by an HDD, an SSD, or the like. The components 11 to 15 store various kinds of information in, or retrieve it from, the information DB 16 as necessary.
As shown for example in FIG. 4, the reference database stores degrees of association of three or more levels between the past target information and the reference information. The past target information and the reference information each consist of multiple pieces of data, which are linked by degrees of association indicating how strongly they are related, expressed in three or more levels such as ten levels or five levels (shown in FIG. 4 as percentages and line styles). For example, "content A" included in the past content data has a degree of association of "80%" with "plan A" included in the planning category of the reference information, and a degree of association of "10%" with "sustainability A" included in the sustainability category of the reference information.
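As a minimal sketch of the structure just described, and not a prescribed implementation, the degrees of association of three or more levels can be pictured as a weighted mapping from items of past target information to items of reference information; the dictionary layout below mirrors the "content A" example, and its names are assumptions.

# Illustrative sketch: degrees of association (percentages expressed as 0.0-1.0)
# between past target information items and reference information items.
reference_db = {
    "content A": {"plan A": 0.80, "sustainability A": 0.10},
    "progress B": {"plan A": 0.50, "presentation B": 0.20},
}

def association(past_item: str, reference_item: str) -> float:
    # Return the stored degree of association, or 0.0 if none is recorded.
    return reference_db.get(past_item, {}).get(reference_item, 0.0)

print(association("content A", "plan A"))  # 0.8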
The reference database has, for example, an algorithm capable of calculating the degrees of association. As the reference database, for example, a function (classifier) optimized based on the past target information, the reference information, and the degrees of association may be used.
The past target information contains data corresponding to the target information D described above, for example at least one of past presentation data, past voice data, and past expression data. The features of each kind of data are the same as those of the corresponding data in the target information D described above, so their description is omitted.
The reference information contains data corresponding to the evaluation result R, for example at least one of planning, sustainability, future potential, presentation ability, estimated health value, estimated age, and evaluation price. The results obtained when the past target information was evaluated are used as the reference information. The features of each kind of data are the same as those of the corresponding data in the evaluation result R described above, so their description is omitted.
Because the reference information contains the data described above, an evaluation result R for the proposer's presentation can be generated. The data items listed above are examples and can be set arbitrarily as needed.
The past target information and the reference information are stored in the reference database in, for example, a video or audio data format, or may be stored in a data format such as numerical values, matrices (vectors), or histograms.
The degrees of association are calculated based on the relationships between the past target information and the reference information, using, for example, machine learning such as deep learning. Note that the data used as the past presentation data, past voice data, and past expression data may be data processed in advance by machine learning or the like; that is, the results of machine learning performed separately on previously acquired past target information may be used as the past presentation data, past voice data, and past expression data.
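The embodiment leaves the learning method open beyond machine learning such as deep learning. The following is only a rough sketch, under the assumption that past target information is expressed as feature vectors and reference information as scores, of how candidate degrees of association could be fitted from past data; it is not the claimed method.

import numpy as np

# Rows: past target information samples (e.g. features from content, voice, and expression data).
X = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.7],
              [1.0, 1.0, 0.1]])
# Columns: reference information items (e.g. planning, sustainability), scored between 0 and 1.
Y = np.array([[0.8, 0.1],
              [0.4, 0.6],
              [0.7, 0.3]])

# Least-squares fit; W[i, j] can be read as a candidate degree of association
# between past feature i and reference item j.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.clip(W, 0.0, 1.0))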
As shown for example in FIG. 5, a degree of association may be calculated based on the relationship between a combination of two or more pieces of data in the past target information and the reference information. For example, the combination of "content A" included in the past content data and "progress B" included in the past progress data has a degree of association of "80%" with "plan A" included in the planning category of the reference information and a degree of association of "10%" with "plan B". In this case, when the evaluation result R is generated, the accuracy can be improved and the range of options can be expanded.
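The combination-based variant of FIG. 5 can be sketched by simply keying the degrees of association on a pair of past data items instead of a single item; the tuple key below is an assumption made for the example.

# Illustrative sketch: association degrees keyed on a combination of two pieces
# of past target information, as in the "content A" and "progress B" example.
combined_reference_db = {
    ("content A", "progress B"): {"plan A": 0.80, "plan B": 0.10},
}

def combined_association(content_item: str, progress_item: str, reference_item: str) -> float:
    return combined_reference_db.get((content_item, progress_item), {}).get(reference_item, 0.0)

print(combined_association("content A", "progress B", "plan A"))  # 0.8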
Note that, as shown for example in FIG. 6, the past target information may have intermediate information linked to each piece of data of the past target information and to the reference information. In this case, the intermediate information is linked to at least one of the past presentation data, past voice data, and past expression data via degrees of similarity of three or more levels, and is linked to the reference information via the degrees of association. As a result, even when at least one of the past presentation data, past voice data, and past expression data is updated or added to, the degrees of association do not have to be updated. This makes it possible to greatly reduce the time spent rebuilding the reference database when the evaluation target is changed.
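The arrangement of FIG. 6 can be pictured as a two-stage lookup in which degrees of similarity link past data to intermediate information and degrees of association link the intermediate information to the reference information, so adding past data only touches the first stage. The node names and the max-product combination rule below are assumptions for the sketch.

# Illustrative two-stage sketch: past data -> intermediate information (similarity),
# intermediate information -> reference information (degree of association).
similarity = {
    "content A": {"node 1": 0.9, "node 2": 0.2},
}
association_from_node = {
    "node 1": {"plan A": 0.8},
    "node 2": {"plan A": 0.3},
}

def two_stage_association(past_item: str, reference_item: str) -> float:
    best = 0.0
    for node, sim in similarity.get(past_item, {}).items():
        degree = association_from_node.get(node, {}).get(reference_item, 0.0)
        best = max(best, sim * degree)   # assumed combination rule: strongest path
    return best

print(two_stage_association("content A", "plan A"))  # 0.72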
<Acquisition unit 11>
The acquisition unit 11 acquires the target information D. The acquisition unit 11 acquires the target information D from the user terminal 2, and may also acquire the target information D from, for example, a storage medium such as a portable memory. The data format of the target information D is arbitrary, and the acquisition unit 11 may, for example, convert it into any desired data format.
The acquisition unit 11 may, for example, extract only specific data contained in the target information D, for example at least one of the content data, progress data, voice data S, and expression data F. The administrator or the like can arbitrarily set which data is extracted.
<Question unit 11a>
The acquisition unit 11 has, for example, a question unit 11a. The question unit 11a generates question information Q for the proposer based on the target information D, and outputs the question information Q to the user terminal 2 or the like via the output unit 14. The user terminal 2 or the like displays the acquired question information Q, for example as shown in FIG. 7. The question unit 11a generates and outputs the question information Q in text data format, and may also generate and output it, for example, in audio data format. The question unit 11a may generate and output the question information Q while the target information D is still being acquired (for example, during the proposer's presentation), or after the target information D has been acquired (for example, after the proposer's presentation has ended). The number of times the question unit 11a generates and outputs question information Q is arbitrary.
The question unit 11a generates the question information Q with reference to, for example, a question database. The question database contains previously acquired past target information and the past question information used to question that past target information, and is stored in the information DB 16. The question database may store, for example, accumulated character strings such as morphemes, and the question unit 11a may generate the question information Q by combining character strings stored in the question database. In this case, the question unit 11a may generate the question information Q by referring to, for example, question sentences published on the Internet, or based on, for example, the degree of similarity between the target information D and the past target information.
As shown for example in FIG. 8, the question database stores question association degrees of three or more levels between the past target information and the past question information. The past target information and the past question information are linked by question association degrees indicating how strongly they are related, expressed in three or more levels such as ten levels or five levels (shown in FIG. 8 as percentages and line styles). For example, "content A" included in the past content data has a question association degree of "60%" with "question A" included in the past question information, and a question association degree of "30%" with "question C". The question unit 11a generates and outputs as the question information Q, for example, the past question information with the highest question association degree. When the question unit 11a generates the question information Q, it can be configured to refer only to question association degrees within an arbitrary threshold or range.
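A minimal sketch of the selection rule just described, choosing the past question with the highest question association degree subject to an optional threshold, could look as follows; the data layout is an assumption.

# Illustrative sketch: question association degrees and selection of question information Q.
question_db = {
    "content A": {"question A": 0.60, "question C": 0.30},
}

def select_question(past_item: str, threshold: float = 0.0):
    candidates = question_db.get(past_item, {})
    best = max(candidates, key=candidates.get, default=None)
    if best is not None and candidates[best] >= threshold:
        return best          # output as question information Q
    return None              # no past question reaches the threshold

print(select_question("content A", threshold=0.5))  # question A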
The question database has, for example, an algorithm capable of calculating the question association degrees. As the question database, for example, a function (classifier) optimized based on the past target information, the past question information, and the question association degrees may be used.
The past target information and the past question information are stored in the question database in, for example, text format or in a video or audio data format, or may be stored in a data format such as numerical values, matrices (vectors), or histograms.
The question association degrees are calculated based on the relationships between the past target information and the past question information, using, for example, machine learning such as deep learning. In the question database, as in the reference database shown in FIG. 5, a question association degree may be calculated based on a combination of two or more pieces of data in the past target information.
<Addition unit 11b>
The acquisition unit 11 has, for example, an addition unit 11b. The addition unit 11b acquires answer information A based on the proposer's answer to the question information Q. As shown for example in FIG. 7, the answer information A is generated based on the content of the proposer's answer to the question information Q displayed on the user terminal 2.
The addition unit 11b may extract the answer information A from the acquired target information D, or may acquire the answer information A as information separate from the target information D. The addition unit 11b acquires the answer information A in text data format, and may also acquire it, for example, in audio data format. After the addition unit 11b acquires the answer information A, the question unit 11a may, for example, generate further question information Q for the proposer based on the answer information A. The addition unit 11b may, for example, acquire the answer information A as part of the target information D, or may classify the answer information A into one of the presentation data P, voice data S, and expression data F. When the addition unit 11b acquires the answer information A as information separate from the target information D, the reference database holds past answer information corresponding to the answer information A.
In addition to the target information D, the acquisition unit 11 may acquire, for example, personal information about the subject stored in the target information D. This makes it possible to output an evaluation result R corresponding to the personal information.
<Evaluation unit 12>
The evaluation unit 12 acquires evaluation information including a first degree of association of three or more levels between the target information D and the reference information. The evaluation unit 12 refers to the reference database stored in the storage unit 104, selects past target information that matches or is similar to the target information D, and calculates the degree of association linked to the selected past target information as the first degree of association. Alternatively, the evaluation unit 12 may calculate the first degree of association between the target information D and the reference information by using the reference database as, for example, a classifier algorithm or an optimized function.
For example, when the reference database shown in FIG. 4 is used and the presentation data P in the target information D matches or is similar to "content A", first degrees of association of "80%" with "plan A", "10%" with "sustainability A", and "1%" with "presentation B" of the reference information are calculated. When the voice data S is similar to "utterance A" and "utterance B", for example, values obtained by multiplying the degrees of association between "utterance A" and "utterance B" and the reference information by an arbitrary coefficient are calculated as the first degrees of association. When the target information D contains multiple pieces of data, a first degree of association is calculated, for example, for each piece of data.
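The numerical example just given can be sketched as follows, where an exact match passes the stored degrees of association through unchanged and a merely similar match is scaled by an arbitrary coefficient; the coefficient value of 0.5 and the helper names are assumptions.

# Illustrative sketch: deriving first degrees of association for one piece of target information data.
reference_db = {
    "content A": {"plan A": 0.80, "sustainability A": 0.10, "presentation B": 0.01},
    "utterance A": {"plan A": 0.40},
}

def first_association(matched_past_item: str, exact: bool, coefficient: float = 0.5) -> dict:
    stored = reference_db.get(matched_past_item, {})
    scale = 1.0 if exact else coefficient     # arbitrary coefficient for a merely similar match
    return {ref: degree * scale for ref, degree in stored.items()}

print(first_association("content A", exact=True))     # presentation data P matches "content A"
print(first_association("utterance A", exact=False))  # voice data S is only similar to "utterance A"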
After calculating the first degrees of association, the evaluation unit 12 acquires evaluation information including the target information D, the reference information, and the first degrees of association. The evaluation unit 12 may also calculate the first degrees of association with reference to, for example, the reference database shown in FIG. 5 or FIG. 6.
<Update unit 13>
When the update unit 13 newly acquires, for example, a relationship between past target information and reference information, it reflects that relationship in the degrees of association. As the data reflected in the degrees of association, for example, update data consisting of target information D newly acquired by the administrator or the like and reference information corresponding to the evaluation result R of that target information D is used. In addition, for example, training data created by the administrator or the like based on the evaluation result R may be used as the data reflected in the degrees of association.
<Output unit 14>
The output unit 14 generates an evaluation result R based on the evaluation information and outputs the evaluation result R. The output unit 14 generates the evaluation result R for the target information D based on, for example, the first degrees of association in the evaluation information. The output unit 14 may also produce the evaluation result R, for example, without further processing of the evaluation information.
The output unit 14 outputs the generated evaluation result R. The output unit 14 outputs the evaluation result R to the output part 109 via the I/F 107, and may also output the evaluation result R to any device, such as the user terminal 2, via the I/F 105.
The evaluation result R includes, for example, a first evaluation selected from evaluation criteria of two or more levels set in advance. As the evaluation criteria, the same items as the reference information described above (planning, sustainability, future potential, presentation ability, estimated health value, estimated age, and so on) are used, or, for example, a criterion that combines those items is used. The evaluation criteria may be set to three or more levels such as 0 to 100 points, or may be set, for example, to two levels such as "pass" and "fail". The evaluation criteria are set in advance by the administrator or the like and stored in the information DB 16.
The first evaluation is selected from the evaluation criteria based on the reference information and the first degrees of association included in the evaluation information. The first evaluation indicates, for example, "XX points" selected from evaluation criteria set from 0 to 100 points, and is obtained as a result of evaluating the evaluation information as a whole. Alternatively, a first evaluation may be obtained, for example, for each evaluation criterion corresponding to each piece of reference information. In this case, the output unit 14 generates the evaluation result R by, for example, converting the first degrees of association included in the evaluation information into first evaluations.
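One conceivable way to convert first degrees of association into first evaluations on the 0 to 100 point scale described above is a simple rescaling per criterion; the rounding rule below is an assumption.

# Illustrative sketch: convert first degrees of association (0.0-1.0) into first
# evaluations on a 101-level scale from 0 to 100 points.
def to_first_evaluation(first_degrees: dict) -> dict:
    return {criterion: round(degree * 100) for criterion, degree in first_degrees.items()}

print(to_first_evaluation({"planning": 0.80, "sustainability": 0.10}))
# {'planning': 80, 'sustainability': 10}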
The output unit 14 outputs the generated evaluation result R. The output unit 14 outputs the evaluation result R to the output part 109 via the I/F 107. The output unit 14 transmits the evaluation result R, for example via the I/F 105, to the user terminal 2 owned by the proposer. The output unit 14 also transmits, for example, the video of the proposer's presentation and the evaluation result R to other user terminals owned by other users.
<Input unit 15>
The input unit 15 receives the target information D transmitted from the user terminal 2 via the I/F 105, and also receives, for example, various kinds of information entered from the input part 108 via the I/F 106. In addition, the input unit 15 may receive, for example, target information D and the like stored in the server 3, and may also receive the target information D and the like via a storage medium such as a portable memory. The input unit 15 also receives, for example, update data created by the administrator or the like based on the evaluation result R, training data used to update the degrees of association, and the like.
<User terminal 2>
The user terminal 2 is owned by a proposer who uses the evaluation support system 100. As the user terminal 2, an electronic device such as a smartphone or a tablet terminal may be used in addition to a personal computer (PC). The user terminal 2 has a camera and a microphone for capturing a video of the proposer's presentation, and a generation unit that generates the target information D based on the captured video. The user terminal 2 may have, for example, the same configuration and functions as the evaluation support device 1 described above; that is, the evaluation support system 100 in the present embodiment may use, for example, the user terminal 2 in place of the evaluation support device 1. The user terminal 2 may also refer to, for example, a terminal owned by another user who holds an open call for presentations or the like (another user terminal).
<Server 3>
The server 3 stores data (a database) relating to various kinds of information. This database accumulates, for example, information sent via the public communication network 4. The server 3 may store, for example, the same information as the information DB 16, and various kinds of information may be exchanged between the server 3 and the evaluation support device 1 via the public communication network 4. A database server on a network, for example, may be used as the server 3. The server 3 may be used in place of the storage unit 104 or the information DB 16 described above.
<Public communication network 4>
The public communication network 4 (network) is, for example, an Internet network to which the evaluation support device 1 and the like are connected via communication circuits. The public communication network 4 may be configured as a so-called optical-fiber communication network. The public communication network 4 is not limited to a wired communication network and may be realized as a wireless communication network.
(Embodiment: Operation of the Evaluation Support System 100)
Next, an example of the operation of the evaluation support system 100 in the present embodiment is described. FIG. 9 is a flowchart showing an example of the operation of the evaluation support system 100 in the present embodiment.
<Acquisition means S110>
First, the target information D to be evaluated is acquired (acquisition means S110). The acquisition unit 11 acquires the target information D generated by the user terminal 2 via the input unit 15, and may also acquire the target information D via, for example, a storage medium such as a portable memory. In addition to the target information D, the acquisition unit 11 may acquire, for example, personal information. The acquisition unit 11 may store the acquired target information D and the like in the information DB 16.
For example, the acquisition unit 11 extracts specific data contained in the target information D. In this case, the acquisition unit 11 may store each piece of extracted data in the information DB 16.
For example, the acquisition unit 11 may acquire the target information D generated by the user terminal 2 continuously, or may acquire the pieces of target information D stored over a fixed period. That is, the acquisition unit 11 may acquire the target information D in segments, in a state close to real time, during the proposer's presentation, or may acquire the target information D all at once after the proposer's presentation has ended. The timing at which the acquisition unit 11 acquires the target information D from the user terminal 2 and the number of segments are arbitrary.
<Evaluation means S120>
Next, the reference database is referred to, and evaluation information including the first degree of association between the target information D and the reference information is acquired (evaluation means S120). The evaluation unit 12 acquires the target information D from the acquisition unit 11 or the information DB 16, and acquires the reference database from the information DB 16.
By referring to the reference database, the evaluation unit 12 can calculate the first degree of association between the target information D and the reference information. For example, the evaluation unit 12 selects past target information that matches, partially matches, or is similar to the target information D, selects the reference information linked to the selected past target information, and calculates the first degree of association based on the degree of association between the selected past target information and that reference information. The evaluation unit 12 may store the calculated first degree of association and the acquired evaluation information in the information DB 16.
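The following is a minimal Python sketch of the lookup described above, under simplifying assumptions: past target information is matched by a naive textual similarity, and the degree of association stored for the best match is scaled into the first degree of association. The database contents and the similarity measure are hypothetical stand-ins for whatever the reference database actually holds.

```python
from difflib import SequenceMatcher

# Hypothetical reference database: past target information, the reference
# information linked to it, and the stored degree of association (0-100%).
REFERENCE_DB = [
    {"past_target": "funding request for sports training", "reference": "high planning ability", "association": 72},
    {"past_target": "advertisement for a new product", "reference": "strong persuasiveness", "association": 55},
    {"past_target": "request for childcare support", "reference": "high reliability", "association": 81},
]


def similarity(a: str, b: str) -> float:
    """Naive textual similarity standing in for a real matching method."""
    return SequenceMatcher(None, a, b).ratio()


def first_association_degree(target_info: str) -> dict:
    """Pick the most similar past target information and derive the first
    degree of association from the association stored for it."""
    best = max(REFERENCE_DB, key=lambda e: similarity(target_info, e["past_target"]))
    sim = similarity(target_info, best["past_target"])
    # A partial match lowers the first degree of association proportionally.
    return {
        "reference": best["reference"],
        "first_association": round(best["association"] * sim, 1),
    }


print(first_association_degree("funding request for an athletic training camp"))
```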
The evaluation unit 12 may refer to, for example, the reference database shown in FIG. 5 and calculate the first degree of association between a combination of the data included in the target information D and the reference information. The evaluation unit 12 may also refer to, for example, the reference database shown in FIG. 6 and calculate the first degree of association via intermediate information.
<Output means S130>
Next, an evaluation result R is generated based on the evaluation information, and the evaluation result R is output (output means S130). The output unit 14 may transmit the evaluation result R to, for example, the user terminal 2 owned by the proposer. The output unit 14 acquires the evaluation data from the evaluation unit 12 or the information DB 16, and may acquire, for example, a format for displaying the evaluation result R from the information DB 16.
The output unit 14 generates the evaluation result R based on the evaluation information, for example with reference to the format. The evaluation result R may include, for example, a first evaluation selected from evaluation criteria set in advance in two or more levels.
For example, as shown in FIG. 2(b), the output unit 14 generates and outputs, as the evaluation result R, the result of calculating scores for evaluation items such as "planning ability" and "sustainability" based on the evaluation information.
The output unit 14 selects the first evaluation from evaluation criteria set in advance in, for example, 2 to 100 levels. Suppose, for example, that five levels are set in advance as the evaluation criteria corresponding to the first degree of association expressed as a percentage: "very good: 80 to 100%", "good: 60 to 79%", "average: 40 to 59%", "bad: 20 to 39%", and "very bad: 0 to 19%". In this case, when the output unit 14 acquires a first degree of association of "45%", it selects "average" as the first evaluation.
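A minimal sketch of the band mapping just described; the five labels and percentage ranges follow the example in the text, while the function name is an arbitrary choice for illustration.

```python
def select_first_evaluation(first_association: float) -> str:
    """Map a first degree of association (0-100%) to one of five preset levels."""
    bands = [
        (80, "very good"),   # 80-100%
        (60, "good"),        # 60-79%
        (40, "average"),     # 40-59%
        (20, "bad"),         # 20-39%
        (0,  "very bad"),    # 0-19%
    ]
    for lower_bound, label in bands:
        if first_association >= lower_bound:
            return label
    raise ValueError("first_association must be between 0 and 100")


assert select_first_evaluation(45) == "average"  # the example given in the text
```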
The output unit 14 may, for example, calculate a score from the value of each first degree of association linked to the reference information, or may calculate a combined value, an average value, or the like of the first degrees of association. The output unit 14 may also, for example, select text data based on the evaluation information and generate and output it as a comment in the evaluation result R. In this case, the text data is stored in the storage unit 104 in advance in a state linked to the reference information.
The output unit 14 acquires the evaluation criteria from, for example, the information DB 16, and generates and outputs the evaluation result R based on the evaluation information. The output unit 14 outputs the evaluation result R to the user terminal 2 or the output part 109. The output unit 14 may output the evaluation result R based on, for example, the result of comparing the first degree of association with a preset threshold. In this case, for example, when the threshold is set to "80% or more", the evaluation result R is output only when the first degree of association is 80% or more. The threshold condition can be set arbitrarily.
The output unit 14 may, for example, calculate a score from the value of each first degree of association linked to the reference information, or may calculate a combined value, an average value, or the like of the first degrees of association. The output unit 14 may also select the first evaluation based on, for example, the result of comparing the first degree of association with a preset threshold. In this case, for example, when the evaluation criteria are set in two levels with "80%" as the threshold, one of the two levels, a first degree of association of 80% or more or of less than 80%, is selected as the first evaluation. The output unit 14 may generate and output the evaluation result R based only on first evaluations of, for example, 80% or more.
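The sketch below illustrates the score aggregation and threshold gate discussed here, assuming for simplicity that the evaluation information is just a list of first degrees of association; the 80% threshold matches the example in the text, and the averaging and the names are illustrative assumptions.

```python
from statistics import mean
from typing import List, Optional


def evaluation_result(first_associations: List[float], threshold: float = 80.0) -> Optional[dict]:
    """Combine the per-reference first degrees of association into a score and
    output an evaluation result only for values at or above the threshold."""
    passed = [value for value in first_associations if value >= threshold]
    if not passed:
        return None  # nothing to output: no association reached the threshold
    return {
        "score": round(mean(passed), 1),        # e.g. the averaged value
        "first_evaluation": "80% or more",      # two-level criterion from the text
        "n_references": len(passed),
    }


print(evaluation_result([92.0, 76.5, 85.0]))  # -> score 88.5 over two references
print(evaluation_result([45.0, 60.0]))        # -> None, below the 80% threshold
```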
<Updating means S140>
Thereafter, for example, when a relationship between past target information and the reference information is newly acquired, the relationship may be reflected in the degree of association (updating means S140). The update unit 13 acquires, for example, update data newly obtained by an administrator or the like, and reflects it in the degree of association. The update unit 13 may also acquire, for example, training data created by the administrator based on the evaluation result R, and reflect it in the degree of association.
The update unit 13 calculates and updates the degree of association using, for example, machine learning, and deep learning, for example, is used for the machine learning.
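The embodiment leaves the learning method open (machine learning, for example deep learning). Purely to illustrate what "reflecting a newly acquired relationship in the degree of association" could look like, the sketch below uses a simple incremental update rule in place of an actual learning model; the learning rate and the data layout are hypothetical and are not the claimed method.

```python
# Hypothetical in-memory stand-in for the reference database: the degree of
# association (0-100%) stored for each (past target information, reference
# information) pair.
associations = {
    ("funding request", "high planning ability"): 72.0,
}


def reflect_relationship(past_target: str, reference: str,
                         observed: float, learning_rate: float = 0.1) -> float:
    """Nudge the stored association toward a newly observed relationship value.

    `observed` is the association suggested by the new update/training data
    (0-100). A real system might instead retrain a deep-learning model here.
    """
    key = (past_target, reference)
    current = associations.get(key, 50.0)  # neutral prior for unseen pairs
    updated = current + learning_rate * (observed - current)
    associations[key] = round(updated, 2)
    return associations[key]


print(reflect_relationship("funding request", "high planning ability", 90.0))  # 73.8
print(reflect_relationship("product advert", "strong persuasiveness", 65.0))   # 51.5
```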
With this, the operation of the evaluation support system 100 in the present embodiment ends. Whether or not to carry out the updating means S140 described above is optional.
According to the present embodiment, the evaluation means S120 refers to the reference database and acquires evaluation information including the first degree of association between the target information D and the reference information. The target information D is generated based on a video of the proposer's presentation. It is therefore possible to obtain a quantitative evaluation result R for the content of the proposer's presentation, which makes it possible to link the quantitative evaluation result R to the proposer's social credibility and the like.
Also according to the present embodiment, the acquisition means S110 extracts at least one of content data, progress data, voice data S, and facial expression data F from the target information D. It is therefore possible to generate an evaluation result R independently for each of these data, or to generate an evaluation result R that combines them. This makes it possible to generate an evaluation result R best suited to the field and circumstances of the presentation.
Also according to the present embodiment, the evaluation result R includes an estimated health value of the proposer. The proposer's state of health at the time of the presentation can therefore also be included in the evaluation criteria, which makes it possible to further improve the accuracy of the evaluation result R.
Also according to the present embodiment, the acquisition means S110 acquires target information D generated based on a video of the proposer's presentation lasting 30 seconds or more and 30 minutes or less. Variation in the amount of information contained in each piece of target information D can therefore be suppressed, which makes it possible to obtain a quantitative evaluation result R easily.
Also according to the present embodiment, when a relationship between past target information and the reference information is newly acquired, the updating means S140 reflects the relationship in the degree of association. The degree of association can therefore be updated easily, and the accuracy of the evaluation result R can be further improved.
Also according to the present embodiment, the evaluation result R may include, for example, a first evaluation selected from evaluations set in advance in two or more levels. In this case, an evaluation result R based on common evaluation criteria can be output even for presentations in different fields, which makes it possible to obtain quantitative evaluation results R for presentations in a wide range of fields.
Also according to the present embodiment, the output means S130 may include, for example, transmitting the video of the proposer's presentation and the evaluation result R to another user terminal. Therefore, when another user who owns another user terminal holds an open call for presentations, quantitative support for the selection can be provided via the evaluation support system 100. This makes it possible to reduce variation in screening criteria attributable to individual screeners, and to reduce the labor costs and time spent on selection.
(First Modified Example of Operation of Evaluation Support System 100)
Next, a first modified example of the operation of the evaluation support system 100 in the present embodiment will be described with reference to FIG. 10. The difference between the operation described above and the operation of the first modified example is that question means S110a and addition means S110b are carried out. The other points are the same as in the operation described above, and their description is therefore omitted as appropriate. The question means S110a and the addition means S110b may be carried out after the acquisition means S110, or may be carried out, for example, during the acquisition means S110.
<Question means: S110a>
The question means S110a generates question information Q for the proposer based on the acquired target information D, and outputs the question information Q. The question unit 11a refers to, for example, the question database shown in FIG. 8, acquires a first question association degree in three or more levels between the target information D and past question information, and generates the question information Q based on the past question information and the first question association degree. The question unit 11a may, for example, generate and output question information Q corresponding to a plurality of questions. The question unit 11a stores, for example, the generated question information Q in the information DB 16.
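As a rough illustration of how such a question database lookup might be used, the sketch below scores past questions against the target information and returns those with the highest first question association degrees as the question information Q; the database contents, the keyword matching, and the cutoff are assumptions made only for this example.

```python
# Hypothetical question database: past questions and, per keyword that may
# appear in the target information, a first question association degree (%).
QUESTION_DB = [
    {"question": "What is your funding goal?",       "associations": {"funding": 85, "athlete": 40}},
    {"question": "How will you measure progress?",   "associations": {"plan": 75, "funding": 55}},
    {"question": "Who benefits from this proposal?", "associations": {"support": 80, "athlete": 60}},
]


def generate_questions(target_info: str, top_n: int = 2) -> list:
    """Score each past question against the target information and return the
    top_n questions as the question information Q."""
    scored = []
    for entry in QUESTION_DB:
        # Take the highest association among keywords found in the target info.
        hits = [deg for kw, deg in entry["associations"].items() if kw in target_info]
        if hits:
            scored.append((max(hits), entry["question"]))
    scored.sort(reverse=True)
    return [question for _, question in scored[:top_n]]


print(generate_questions("funding plan for a young athlete"))
```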
<Addition means: S110b>
Next, the addition means S110b acquires answer information A based on the proposer's answer to the question information Q. When the addition unit 11b acquires the answer information A as information separate from the target information D, it may add the answer information A to the target information D. In this case, the evaluation means S120 is carried out using the target information D to which the answer information A has been added. The addition unit 11b may also, for example, extract the answer information A from the target information D. In that case, in the addition means S110b, the acquisition unit 11 acquires target information D that includes the answer information A.
Thereafter, the evaluation means S120 and the subsequent means described above are carried out, and the operation of the evaluation support system 100 in the present embodiment ends.
According to the first modified example of the operation in the present embodiment, as in the operation described above, the evaluation means S120 refers to the reference database and acquires evaluation information including the first degree of association between the target information D and the reference information. The target information D is generated based on a video of the proposer's presentation. It is therefore possible to obtain a quantitative evaluation result R for the content of the proposer's presentation, which makes it possible to link the quantitative evaluation result R to the proposer's social credibility and the like.
In particular, according to the first modified example of the operation in the present embodiment, the question means S110a generates question information Q for the proposer and outputs the question information Q, and the addition means S110b acquires answer information A based on the proposer's answer to the question information Q. Therefore, even when the proposer's presentation contains only a small amount of information, the answer information A can be acquired using the question information Q and the amount of information can be supplemented. This makes it possible to improve the accuracy of the evaluation result R.
(Second Modified Example of Operation of Evaluation Support System 100)
Next, a second modified example of the operation of the evaluation support system 100 in the present embodiment will be described with reference to FIG. 11. The difference between the operation of the first modified example and the operation of the second modified example is that, after the evaluation means S120 is carried out, the question means S110a and the addition means S110b are carried out, and the evaluation means S120 is then carried out again. The other points are the same as in the operation described above, and their description is therefore omitted as appropriate.
The question means S110a generates the question information Q with reference to, for example, the question database shown in FIG. 12. The question database shown in FIG. 12 differs from the question database shown in FIG. 8 in that the reference information held in the reference database is linked to past question information by question association degrees. That is, the question means S110a and the addition means S110b are carried out based on the evaluation information acquired by the evaluation means S120.
In the operation of the second modified example, in the addition means S110b, the acquisition unit 11 acquires target information D that includes the answer information A. Thereafter, the evaluation means S120 is carried out again based on the target information D including the answer information A.
Thereafter, the output means S130 and the subsequent means described above are carried out, and the operation of the evaluation support system 100 in the present embodiment ends.
According to the second modified example of the operation in the present embodiment, as in the operation described above, the evaluation means S120 refers to the reference database and acquires evaluation information including the first degree of association between the target information D and the reference information. The target information D is generated based on a video of the proposer's presentation. It is therefore possible to obtain a quantitative evaluation result R for the content of the proposer's presentation, which makes it possible to link the quantitative evaluation result R to the proposer's social credibility and the like.
In particular, according to the second modified example of the operation in the present embodiment, the question means S110a generates the question information Q with reference to the evaluation information. Question information Q corresponding to the evaluation information can therefore be generated, which makes it possible to further improve the accuracy of the evaluation result R.
A feature of the embodiment described above is that the target information D can be evaluated based on degrees of association set in three or more levels (the first degree of association, the question association degree, the first question association degree, and the degree of similarity). A degree of association can be expressed, for example, as a numerical value from 0 to 100%, and may be configured in any number of levels as long as it can be expressed as a numerical value in three or more levels.
Based on such degrees of association, among the pieces of reference information selected as candidates for the evaluation result R for the target information D, the reference information and related items (past question information and intermediate information) can be displayed in descending or ascending order of the degree of association. By displaying them in order of the degree of association in this way, an administrator or the like can preferentially select tendencies that are highly likely to apply to the proposer. At the same time, tendencies that are unlikely to apply to the proposer are displayed rather than excluded, so the administrator or the like can make a selection without overlooking them.
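A short sketch of the candidate ordering described here, assuming the candidates are simple pairs of reference information and degree of association; as the text emphasizes, even very low-association candidates stay in the list instead of being filtered out.

```python
def order_candidates(candidates: dict, descending: bool = True) -> list:
    """Return every candidate reference, ordered by degree of association.

    Nothing is excluded: a 1% candidate still appears, merely at the end
    (or at the start, when ascending order is requested).
    """
    return sorted(candidates.items(), key=lambda item: item[1], reverse=descending)


candidates = {
    "high planning ability": 78,
    "strong persuasiveness": 42,
    "possible health concern": 1,   # extremely low, but still shown
}
for reference, degree in order_candidates(candidates):
    print(f"{degree:3d}%  {reference}")
```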
In addition, according to the embodiment described above, even cases in which the degree of association is extremely low, for example 1%, can be evaluated without being overlooked. Even reference information with an extremely low degree of association indicates a connection as a slight sign, making it possible to suppress oversights and misidentification.
Also according to the embodiment described above, in the acquisition means S110, the acquisition unit 11 may acquire, for example, the degree of attention the proposer receives on a social networking service. As the proposer's degree of attention, for example, the number of followers on Twitter (registered trademark), Facebook (registered trademark), or the like, or the degree of connection with other users, is used. Information other than the content of the proposer's presentation can therefore be reflected in the evaluation result R, which makes it easy to generate an evaluation result R best suited to the field and circumstances of the presentation.
Also according to the embodiment described above, the proposer's presentation may include an advertisement for a company. The content of the presentation can therefore be evaluated while it remains undisclosed to the public.
Also according to the embodiment described above, the target information D may be information that excludes video of a person. This makes it possible to quantitatively evaluate a company's products and creative works.
Also according to the embodiment described above, the target information D includes the proposer's movements. This makes it possible to quantitatively evaluate the proposer's skills and the like.
Also according to the embodiment described above, the video of the proposer is desirably 30 seconds or more and 30 minutes or less, and more preferably 2 minutes or more and 5 minutes or less. This suppresses variation in evaluation between proposers and makes it easy to obtain a quantitative evaluation result R.
Also according to the embodiment described above, the video of the proposer may instead be 5 seconds or more and 1 minute or less, and more preferably 5 seconds or more and 30 seconds or less. This makes it possible to obtain a quantitative evaluation result R easily while suppressing an increase in the amount of data.
Also according to the embodiment described above, the language the proposer uses for the presentation is arbitrary, and a quantitative evaluation result R can be obtained whatever language is used. That is, the target information D may be generated based on a presentation given by the proposer in their native language or in multiple languages.
Also according to the embodiment described above, the target information D may be generated based on a presentation video of one or more proposers, a company introduction, or the like. That is, the proposer does not have to appear in the presentation video.
Also according to the embodiment described above, the proposer may, for example, be allowed to give presentations repeatedly (for example, daily or weekly). In this case, separate target information D may be generated for each presentation, or target information D covering a plurality of presentations together may be generated.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation whose purpose is to obtain funding or support for a world expedition by the proposer lasting several months to several years.
Also according to the embodiment described above, the target information D may be generated based on a video of a presentation in which, for example, the proposer is an athlete or an athlete's supporter and the purpose is to procure the funds the athlete needs.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation in which the proposer requests or offers support other than funding. As support other than funding, for example, at least one of cash, credit, electronic money, points, and virtual currency may be used, or the support may take the form of, for example, personnel support, educational support, or technical guidance.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation whose purpose is for the proposer to find a side job or other work for their free time, or to request personal assistance. For example, videos based on descriptions of concrete offers such as "I will help you move", "I will accompany you shopping", "I will guide you around town", "I will give you fashion advice", "I will keep you company when you have time to kill", or "Feel free to talk to me about anything" may be used as the presentation video. A video based on a description of delivering soba noodles, pizza, or the like may also be used as the presentation video.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation in which the proposer requests or offers personal services such as caregiving or nursing.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation whose purpose is the buying and selling of products.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation whose purpose is to obtain estimates for places the proposer wants to visit or stay.
Also according to the embodiment described above, the target information D may be generated based on a video of a presentation whose purpose is exchanging time with the proposer or buying and selling time.
Also according to the embodiment described above, the target information D may be generated based on a video of a presentation whose purpose is, for example, a personal request or offer such as a regular babysitting arrangement.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video of a presentation whose purpose is to support a person, service, system, or the like that the proposer wishes to support.
Also according to the embodiment described above, the target information D may be generated based on, for example, a video in which a performance by the proposer, such as acting or playing music, serves as the presentation. In this case, the proposer generates the target information D for the purpose of, for example, attracting an audience or obtaining financial support.
Also according to the embodiment described above, proposers may be allowed to trade the numerical values of their evaluation results R freely among themselves, in the manner of a virtual currency, based on the numerical value of the evaluation result R acquired for each proposer. For example, a proposer with a high evaluation result R may be able to transfer, lend, or deposit that value to or with a proposer whose evaluation result R is low. A proposer may also purchase an evaluation offered for sale by another proposer based on the numerical value of their own evaluation result R, and even if the numerical value of the evaluation result R on hand temporarily becomes negative, it may be replenished, applied, or settled within a certain period.
In the above, when there is a proposer whose evaluation result R is extremely high, for example, liquidity of that value may not be expected. For this reason, that proposer may be allowed to transfer the value to another proposer or user. In this case, for example, the proposer can transfer it to a proposer in a field whose market value is equivalent to that of the field in which the value was earned.
Also according to the embodiment described above, at least one of the presentation video, the target information D, and the evaluation result R may be transmitted to a terminal owned by another user (another user terminal). At this time, two-way communication may be realized between the proposer's user terminal 2 and the other user terminal. The numbers of proposer user terminals 2 and of other user terminals are arbitrary.
Also according to the embodiment described above, the portions of the target information D that were treated as key points of the evaluation result R can be checked, for example in text format. This allows the proposer to improve the presentation easily.
Also according to the embodiment described above, other users can comment on the target information D, and the proposer can reply to the comments of other users. This makes it possible to take advantage of the sense of presence unique to presentation videos. In addition, the proposer can include in the presentation video, for example, the names of other users who support the proposer, which makes it possible to strengthen the rapport between the proposer and those users.
Also according to the embodiment described above, the presentation video may be based not only on products such as clothes, accessories, furniture, food, houses, land, and real estate, but also, for example, on content concerning the sale or rental of personal information on the Internet (for example, an account used for Facebook (registered trademark) or the like). For example, when a presentation video about clothes is used, other users can also check information that cannot be known without actually having the product at hand, such as the texture of the fabric, the comfort when worn, and the fit. The sale and rental of products can also be supported via the evaluation support system 100.
Also according to the embodiment described above, the schedule and content of the proposer's planned presentation recordings can be announced to other users. Other users can therefore check, without missing them, the presentation videos of proposers they are interested in, based on the presentation videos and the evaluation results R. A user who is interested in a proposer can also request the target information D in advance.
Also according to the embodiment described above, for example, only the target information D of proposers who have won a game, quiz, or the like held in advance may be acquired and transmitted to other users. In this case, because only the target information D of proposers selected in advance is provided to other users, the quality of the target information D can be improved.
Also according to the embodiment described above, presentation videos given among a plurality of proposers who are well versed in a specific field of expertise may be used, and presentation videos of a specified group of proposers such as a team, group, organization, or corporation may also be used. Quantitative evaluation results R can therefore be obtained for target information D in a wide variety of fields.
Also according to the embodiment described above, content concerning business investment may be used as the presentation video. Other users can therefore consider business investment decisions via the evaluation support system 100.
Also according to the embodiment described above, a video based on a description of commissioning or accepting outsourced manufacturing may be used as the presentation video. Quantitative evaluation results R based on the presentation videos of a plurality of companies can therefore be obtained, making it possible to easily select a contractor company or the like.
Also according to the embodiment described above, another user may select a field of interest and obtain the presentation videos and the like of proposers in the selected field. An additional service may also be provided after the proposer's presentation video and the like have been obtained. For example, when another user selects a social issue of interest, the social issues may include forms of support such as support for sick children. Presentation videos and the like are grouped according to such social issues, and other users can donate to a group they are interested in.
Also according to the embodiment described above, a video based on content recommending another proposer, for example, may be used as the presentation video. In addition to the evaluation result R, which quantitatively evaluates the content of the presentation, it is therefore possible to confirm the degree to which a proposer is supported by other proposers.
Although embodiments of the present invention have been described, each embodiment is presented by way of example and is not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and modifications can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are included in the invention described in the claims and its equivalents.
1: Evaluation support device
2: User terminal
3: Server
4: Public communication network
10: Case
11: Acquisition unit
11a: Question unit
11b: Addition unit
12: Evaluation unit
13: Update unit
14: Output unit
15: Input unit
16: Information DB
100: Evaluation support system
101: CPU
102: ROM
103: RAM
104: Storage unit
105: I/F
106: I/F
107: I/F
108: Input part
109: Output part
110: Internal bus
S110: Acquisition means
S110a: Question means
S110b: Addition means
S120: Evaluation means
S130: Output means
S140: Updating means
Claims (9)
- An evaluation support system for evaluating the content of a proposer's presentation via a network, comprising:
acquisition means for acquiring target information generated based on a video of the proposer's presentation;
a reference database storing past target information acquired in advance, reference information used for the evaluation of the past target information, and degrees of association in three or more levels between the past target information and the reference information;
evaluation means for referring to the reference database and acquiring evaluation information including a first degree of association in three or more levels between the target information and the reference information; and
output means for generating an evaluation result based on the evaluation information and outputting the evaluation result.
- The evaluation support system according to claim 1, wherein the acquisition means includes:
question means for generating question information for the proposer based on the target information and outputting the question information; and
addition means for acquiring answer information based on the proposer's answer to the question information.
- The evaluation support system according to claim 2, wherein the question means generates the question information with reference to the evaluation information acquired by the evaluation means.
- The evaluation support system according to any one of claims 1 to 3, wherein the acquisition means extracts, from the target information, at least one of content data relating to the content of the materials used in the proposer's presentation, progress data relating to how the proposer's presentation proceeds, voice data relating to the proposer's voice, and facial expression data relating to the features of the proposer's face.
- The evaluation support system according to claim 4, wherein the facial expression data includes first facial expression data relating to the features of the proposer's eyes and face data relating to the features of the proposer's entire face, and the evaluation result includes an estimated health value.
- The evaluation support system according to any one of claims 1 to 5, wherein the acquisition means acquires the target information generated based on a video of the proposer's presentation lasting 30 seconds or more and 30 minutes or less.
- The evaluation support system according to any one of claims 1 to 6, further comprising updating means for reflecting a relationship between the past target information and the reference information in the degree of association when the relationship is newly acquired.
- The evaluation support system according to any one of claims 1 to 7, wherein the acquisition means acquires the proposer's degree of attention on a social networking service.
- An evaluation support device for evaluating the content of a proposer's presentation, comprising:
an acquisition unit for acquiring target information generated based on a video of the proposer's presentation;
a reference database storing past target information acquired in advance, reference information used for the evaluation of the past target information, and degrees of association in three or more levels between the past target information and the reference information;
an evaluation unit for referring to the reference database and acquiring evaluation information including a first degree of association in three or more levels between the target information and the reference information; and
an output unit for generating an evaluation result based on the evaluation information and outputting the evaluation result.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-174923 | 2017-09-12 | ||
JP2017174923A JP6241698B1 (en) | 2017-09-12 | 2017-09-12 | Evaluation support system and evaluation support apparatus |
JP2017-230064 | 2017-11-30 | ||
JP2017230064A JP6288748B1 (en) | 2017-11-30 | 2017-11-30 | Evaluation support system and evaluation support apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019053958A1 (en) | 2019-03-21 |
Family
ID=65723647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/020608 WO2019053958A1 (en) | 2017-09-12 | 2018-05-29 | Evaluation assistance system and evaluation assistance device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019053958A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002117261A (en) * | 2000-10-06 | 2002-04-19 | Digital Dream:Kk | Method for raising investment for creating video contents using web, its server system and recording medium with the method programmed and recorded thereon |
US20020177115A1 (en) * | 2001-05-22 | 2002-11-28 | Moskowitz Paul A. | System to provide presentation evaluations |
JP2008139762A (en) * | 2006-12-05 | 2008-06-19 | Univ Of Tokyo | Presentation support device, method, and program |
JP2017130122A (en) * | 2016-01-22 | 2017-07-27 | ジャパンモード株式会社 | Problem solving support system |
- 2018-05-29 WO PCT/JP2018/020608 patent/WO2019053958A1/en active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112313641A (en) * | 2019-03-29 | 2021-02-02 | 艾思益信息应用技术股份公司 | Information providing system and information providing method |
JP2020160425A (en) * | 2019-09-24 | 2020-10-01 | 株式会社博報堂Dyホールディングス | Evaluation system, evaluation method, and computer program |
JP7160778B2 (en) | 2019-09-24 | 2022-10-25 | 株式会社博報堂Dyホールディングス | Evaluation system, evaluation method, and computer program. |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18856617; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/07/2020)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 18856617; Country of ref document: EP; Kind code of ref document: A1