WO2021084718A1 - Voice playback program, voice playback method, and voice playback system


Info

Publication number
WO2021084718A1
Authority
WO
WIPO (PCT)
Prior art keywords
scenario
performer
character
characters
voice
Prior art date
Application number
PCT/JP2019/042944
Other languages
French (fr)
Japanese (ja)
Inventor
直己 原田
Original Assignee
富士通株式会社
Application filed by 富士通株式会社
Priority to PCT/JP2019/042944
Publication of WO2021084718A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K 15/00 Acoustics not otherwise provided for
    • G10K 15/04 Sound-producing devices

Definitions

  • the present invention relates to a voice reproduction program, a voice reproduction method, and a voice reproduction system.
  • the social voice recording web service has a recording mode in which each performer records the lines of characters in a scenario, and a listener mode in which a scenario and performers can be specified in any desired combination and listened to.
  • there is a work publishing system that publishes story works via a network: recorded content for a story work is received from voice performers, recorded content determined to be above a predetermined standard level is managed in association with the story work, and the corresponding recorded content is added to the story work and published.
  • there is also a technique in which a selection screen for selecting the audio data set to be output when displaying cartoon data is presented on an audio playback device, and, once an audio data set is selected, audio reproduction using the selected audio data set is executed when the cartoon data is displayed on the audio playback device.
  • the present invention aims to make it easier for a user to specify a performer of his or her choice from among a plurality of performers.
  • in one aspect, when the designation of a scenario is received, the performer information associated with the characters in the designated scenario is acquired from the storage unit, and the acquired performer information corresponding to each character in the scenario is displayed so that each performer can be identified as a performer for that character.
  • FIG. 1 is an explanatory diagram showing an example of the voice reproduction method according to the embodiment.
  • FIG. 2 is an explanatory diagram showing a system configuration example of the audio reproduction system 200.
  • FIG. 3 is a block diagram showing a hardware configuration example of the information processing device 101.
  • FIG. 4 is a block diagram showing a hardware configuration example of the user terminal 202.
  • FIG. 5 is an explanatory diagram showing an example of the stored contents of the scenario DB 220.
  • FIG. 6 is an explanatory diagram showing an example of the stored contents of the registered voice DB 230.
  • FIG. 7 is an explanatory diagram showing an example of the stored contents of the extraction condition DB 240.
  • FIG. 8 is a block diagram showing a functional configuration example of the information processing device 101.
  • FIG. 9 is an explanatory diagram showing a screen example of the login screen.
  • FIG. 10 is an explanatory diagram showing a screen example of a new registration screen.
  • FIG. 11 is an explanatory diagram showing a screen example of the application home screen.
  • FIG. 12 is an explanatory diagram showing a screen example of the work introduction screen.
  • FIG. 13 is an explanatory diagram showing a screen example of the scenario reproduction screen.
  • FIG. 14 is an explanatory diagram showing a screen example of the sample reproduction screen.
  • FIG. 15 is a sequence diagram showing an operation example at the time of scenario registration of the voice reproduction system 200.
  • FIG. 16 is a sequence diagram showing an operation example at the time of login of the voice reproduction system 200.
  • FIG. 17 is a sequence diagram (No. 1) showing an operation example of the audio reproduction system 200 at the time of content reproduction.
  • FIG. 18 is a sequence diagram (No. 2) showing an operation example of the audio reproduction system 200 at the time of content reproduction.
  • FIG. 19 is a flowchart showing an example of a sample reproduction dialogue determination processing procedure of the information processing apparatus 101.
  • FIG. 1 is an explanatory diagram showing an example of the voice reproduction method according to the embodiment.
  • the information processing device 101 is a computer that reproduces voice data registered by the performer with respect to the dialogue of the characters in the scenario.
  • the scenario is a script or screenplay that describes the lines of the characters.
  • the scenario may describe the composition of the scene, the movement of the characters, and the like.
  • Examples of scenarios include novels, light novels, poems, dramas, stage scripts, essays, and manga.
  • the characters are the characters and animals that appear in the scenario.
  • the characters may include a narrator who explains the scene, the psychology of the characters, and the like.
  • a performer is a person who plays the role of a character.
  • the social voice recording web service is a service for creating voiced versions of various scenarios together. For example, a user can record and listen to his or her own voice in various stories, or enjoy combining voices recorded by other people.
  • the social voice recording web service is a user-participation type service in which voice actors and people who want to become voice actors, people who want to play with voice, people who want to listen to various voices, and so on gather.
  • the social voice recording web service has a recording mode and a listener mode.
  • in the recording mode, a performer can record the lines of the characters in a scenario.
  • in the listener mode, a user can specify and listen to a scenario and performers in any desired combination.
  • since the social voice recording web service is a user-participation type service, a situation occurs in which a plurality of performers play one character in one scenario. That is, multiple performers may record lines for one character in a scenario.
  • in the listener mode, when playing the content corresponding to a scenario, a performer can be specified for each character in the scenario.
  • as a result, the user can create and enjoy his or her own version of a scenario by designating the performer of his or her choice from among multiple performers.
  • however, unlike visual information, which can be presented in a list, it is difficult to display the content handled by the social voice recording web service in a list, because that content is voice.
  • when the information processing device 101 receives a scenario designation from the user, it acquires the performer information associated with the characters in the designated scenario from the storage unit 110.
  • the storage unit 110 stores the performer information that identifies the performer who plays the role of the character in association with each character in each scenario.
  • the information processing device 101 acquires, for example, the performer information associated with each character in "Scenario 001" from the storage unit 110.
  • the information processing device 101 then displays the acquired performer information corresponding to each character in the scenario so that it can be determined that each performer is a performer for that character.
  • the performer information is information that identifies the performer who plays the role of the character, and is, for example, the name of the performer or an icon representing the performer.
  • the information processing device 101 displays, for each character in "Scenario 001", the performer information corresponding to the character so that it can be determined that each performer is a performer for that character.
  • the characters in “Scenario 001" are referred to as "Character 001" and "Character 002".
  • the information processing device 101 displays, for example, the sample reproduction screen 120 on the terminal device 102.
  • the performer information "user 100, user 2501" corresponding to "character 001” is displayed so as to be discriminable as the performer for "character 001".
  • the performer information "user 556" corresponding to the "character 002” is displayed so as to be able to determine that the performer is the performer for the "character 002".
  • each time the information processing device 101 accepts the designation of one of the plurality of performers from the user, it reproduces the voice data registered by that performer for the dialogue to be reproduced.
  • specifically, each time the information processing apparatus 101 receives the designation of one of the plurality of performers for a character in "Scenario 001", it plays the voice data registered by the designated performer for the same line of that character.
  • the performer is designated, for example, by designating the performer information by the operation input of the user 001 on the sample reproduction screen 120.
  • the dialogue 130 to be reproduced for "Character 001” is defined as "Very delicious”.
  • when the information processing device 101 accepts the designation of "User 100" from among the plurality of performers for "Character 001", it plays back the voice data registered by "User 100" for the dialogue 130 of "Character 001".
  • likewise, when the information processing device 101 receives the designation of "User 2501" from among the plurality of performers for "Character 001", it plays back the voice data registered by "User 2501" for the dialogue 130 of "Character 001".
  • with the information processing device 101, even if there are a plurality of performers for a character in the scenario, the user can audition the voice recorded by a designated performer for the same line simply by designating one of the plurality of performers. In addition, even when the performer is switched, the same line of the same character is reproduced, so the voices of the performers can be easily compared. As a result, even if there are a plurality of performers for a character, it is possible to easily specify the performer of one's choice from the plurality of performers.
  • in the example of FIG. 1, the user 001 can audition the voice recorded by a designated performer for the dialogue 130 simply by designating one of the plurality of performers "User 100" and "User 2501" on the sample playback screen 120. Therefore, the user 001 can check the voice of the dialogue 130 for each performer for "Character 001" of "Scenario 001" and determine the performer of his or her preference.
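  • purely as an illustration of this flow (all names, data, and functions below are hypothetical and not taken from the publication), a minimal Python sketch of looking up performer information for a character and replaying the same line for whichever performer is designated could look as follows:

      # Hypothetical in-memory stand-in for the storage unit 110.
      # Maps (scenario, character) -> {performer: voice clip for the sample line}.
      REGISTERED_VOICES = {
          ("Scenario 001", "Character 001"): {
              "User 100": "user100_very_delicious.wav",
              "User 2501": "user2501_very_delicious.wav",
          },
          ("Scenario 001", "Character 002"): {
              "User 556": "user556_line1.wav",
          },
      }

      def performers_for(scenario: str, character: str) -> list[str]:
          """Return the performer information associated with a character."""
          return sorted(REGISTERED_VOICES.get((scenario, character), {}))

      def play_sample(scenario: str, character: str, performer: str) -> str:
          """Return the voice data the designated performer registered for the sample line."""
          return REGISTERED_VOICES[(scenario, character)][performer]

      # Each designation replays the same line; only the performer changes.
      print(performers_for("Scenario 001", "Character 001"))   # ['User 100', 'User 2501']
      print(play_sample("Scenario 001", "Character 001", "User 100"))
      print(play_sample("Scenario 001", "Character 001", "User 2501"))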
  • the voice reproduction system 200 is applied to, for example, a social voice recording web service for creating various scenarios together.
  • FIG. 2 is an explanatory diagram showing a system configuration example of the audio reproduction system 200.
  • the voice reproduction system 200 includes an information processing device 101, an administrator terminal 201, and a plurality of user terminals 202.
  • the information processing device 101, the administrator terminal 201, and the user terminal 202 are connected via a wired or wireless network 210.
  • the network 210 is, for example, the Internet, LAN (Local Area Network), WAN (Wide Area Network), or the like.
  • the information processing device 101 has a scenario DB (Database) 220, a registered voice DB 230, and an extraction condition DB 240.
  • the information processing device 101 is, for example, a server.
  • the stored contents of the various DBs 220, 230, and 240 will be described later with reference to FIGS. 5 to 7.
  • the administrator terminal 201 is a computer used by the administrator of the audio reproduction system 200. At the administrator terminal 201, the administrator registers, for example, scenario information.
  • the registered scenario information is stored in the scenario DB 220 included in the information processing device 101.
  • the administrator terminal 201 is, for example, a PC (Personal Computer), a tablet PC, or the like.
  • the user terminal 202 is a computer used by the user of the voice reproduction system 200.
  • an application for using the social voice recording web service is installed on the user terminal 202.
  • the user can use the social voice recording web service to record the lines of the characters in the scenario and to read and listen to the content corresponding to the scenario.
  • the user terminal 202 is, for example, a smartphone, a mobile phone, a tablet PC, or the like.
  • the terminal device 102 shown in FIG. 1 corresponds to, for example, a user terminal 202.
  • the information processing device 101 and the administrator terminal 201 are provided separately, but the present invention is not limited to this.
  • the information processing device 101 and the administrator terminal 201 may be realized by the same computer.
  • FIG. 3 is a block diagram showing a hardware configuration example of the information processing device 101.
  • the information processing device 101 includes a CPU (Central Processing Unit) 301, a memory 302, a disk drive 303, a disk 304, a communication I / F (Interface) 305, a portable recording medium I / F 306, and a portable recording medium 307. Each component is connected by a bus 300.
  • the CPU 301 controls the entire information processing device 101.
  • the CPU 301 may have a plurality of cores.
  • the memory 302 includes, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a flash ROM, and the like.
  • the flash ROM stores the OS (Operating System) program
  • the ROM stores the application program
  • the RAM is used as the work area of the CPU 301.
  • the program stored in the memory 302 is loaded into the CPU 301 to cause the CPU 301 to execute the coded process.
  • the disk drive 303 controls data read / write to the disk 304 according to the control of the CPU 301.
  • the disk 304 stores the data written under the control of the disk drive 303. Examples of the disk 304 include a magnetic disk and an optical disk.
  • the communication I / F 305 is connected to the network 210 through a communication line, and is connected to an external computer (for example, the administrator terminal 201 and the user terminal 202 shown in FIG. 2) via the network 210.
  • the communication I / F 305 controls the interface between the network 210 and the inside of the device, and controls the input / output of data from an external computer.
  • as the communication I / F 305, for example, a modem, a LAN adapter, or the like can be adopted.
  • the portable recording medium I / F 306 controls data read / write to the portable recording medium 307 according to the control of the CPU 301.
  • the portable recording medium 307 stores the data written under the control of the portable recording medium I / F 306.
  • Examples of the portable recording medium 307 include a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disc), and a USB (Universal Serial Bus) memory.
  • the information processing device 101 may include, for example, an SSD (Solid State Drive), an input device, a display, or the like, in addition to the above-mentioned components. Further, the information processing apparatus 101 does not have to have, for example, a disk drive 303, a disk 304, a portable recording medium I / F 306, and a portable recording medium 307 among the above-described components.
  • FIG. 4 is a block diagram showing a hardware configuration example of the user terminal 202.
  • the user terminal 202 includes a CPU 401, a memory 402, a communication I / F 403, a camera 404, a display 405, an input device 406, a microphone 407, a speaker 408, a portable recording medium I / F 409, and a portable recording medium 410. Each component is connected by a bus 400.
  • the CPU 401 controls the entire user terminal 202.
  • the CPU 401 may have a plurality of cores.
  • the memory 402 is a storage unit having, for example, a ROM, a RAM, a flash ROM, and the like. Specifically, for example, a flash ROM or ROM stores various programs, and the RAM is used as a work area of the CPU 401.
  • the program stored in the memory 402 is loaded into the CPU 401 to cause the CPU 401 to execute the coded process.
  • the communication I / F 403 is connected to the network 210 (see FIG. 2) through a communication line, and is connected to another computer (for example, the information processing device 101 shown in FIG. 2) via the network 210. Then, the communication I / F 403 controls the internal interface with the network 210 and controls the input / output of data from another computer.
  • the camera 404 is an imaging device that captures an image (still image or moving image) and outputs image data.
  • the display 405 is a display device that displays data such as a cursor, an icon, a toolbox, a document, an image, and functional information.
  • as the display 405, for example, a liquid crystal display, an organic EL (Electroluminescence) display, or the like can be adopted.
  • the input device 406 has keys for inputting characters, numbers, various instructions, etc., and inputs data.
  • the input device 406 may be a keyboard, a mouse, or the like, or may be a touch panel type input pad, a numeric keypad, or the like.
  • the microphone 407 is a device that converts sound into an electric signal.
  • the speaker 408 is a device that converts an electric signal into voice.
  • the portable recording medium I / F 409 controls data read / write to the portable recording medium 410 according to the control of the CPU 401.
  • the portable recording medium 410 stores data written under the control of the portable recording medium I / F409.
  • the user terminal 202 may not have, for example, a camera 404, a portable recording medium I / F409, a portable recording medium 410, or the like among the above-mentioned components.
  • the administrator terminal 201 shown in FIG. 2 can also be realized by the same hardware configuration as the user terminal 202.
  • the various DBs 220, 230, and 240 are realized, for example, by storage devices such as the memory 302 and the disk 304 shown in FIG.
  • FIG. 5 is an explanatory diagram showing an example of the stored contents of the scenario DB 220.
  • the scenario DB 220 stores scenario information 500-1 to 500-n of scenarios S1 to Sn (n: a natural number of 2 or more).
  • Scenario information 500-i includes scenario ID, title, synopsis, characters, and text information of scenario Si.
  • the scenario ID is an identifier that uniquely identifies the scenario Si.
  • the title is the title of scenario Si.
  • the synopsis is a synopsis of scenario Si.
  • the character information is information that identifies the characters in Scenario Si, and includes, for example, the names of the characters and the introduction of the characters.
  • the character information may include a character ID that uniquely identifies the character.
  • the text shows the lines of the characters in Scenario Si in the order of remarks.
  • the dialogue number indicates the order in which the dialogue is spoken by each character. Each line is identified by a combination of characters and line numbers.
  • the scenario information 500-i may include information indicating the points of performance.
  • for example, scenario information 500-1 includes the title "Daily story" of scenario S1, the synopsis "Daily conversation ...", and the characters "Taro: male, 15 years old, personality ..." and "Hanako: female, 16 years old, personality ...".
  • the scenario information 500-1 also includes a text showing the lines of the characters in the scenario S1 in the order of remarks. For example, the line with line number "1" of the character "Taro" is "It's hot", and the line with line number "1" of the character "Hanako" is "Eh?".
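  • as a sketch only, assuming simplified field names rather than the actual layout of FIG. 5, the scenario information 500-i could be modeled in Python roughly as follows:

      from dataclasses import dataclass, field

      @dataclass
      class Line:
          character: str   # name of the character speaking
          number: int      # line number, i.e. order of this character's remarks
          text: str        # the line itself

      @dataclass
      class ScenarioInfo:
          scenario_id: str
          title: str
          synopsis: str
          characters: dict[str, str] = field(default_factory=dict)  # name -> introduction
          lines: list[Line] = field(default_factory=list)           # text in speaking order

      scenario_s1 = ScenarioInfo(
          scenario_id="S1",
          title="Daily story",
          synopsis="Daily conversation ...",
          characters={"Taro": "male, 15 years old, ...", "Hanako": "female, 16 years old, ..."},
          lines=[Line("Taro", 1, "It's hot"), Line("Hanako", 1, "Eh?")],
      )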
  • FIG. 6 is an explanatory diagram showing an example of the stored contents of the registered voice DB 230.
  • the registered voice DB 230 has fields for scenario ID, title, characters, performers, and voice data, and by setting information in each field, registered voice information (for example, registered voice information 600-1 to 600-4) is stored as a record.
  • the scenario ID is an identifier that uniquely identifies the scenario Si.
  • the title is the title of scenario Si.
  • the characters are information that identifies the characters in the scenario Si.
  • the information that identifies the character is, for example, the name of the character or the character ID.
  • the performer is information that identifies the performer corresponding to the character.
  • the information that identifies the performer is, for example, the name of the performer or the performer ID.
  • the performer ID is an identifier that uniquely identifies the performer.
  • the voice data is the voice data of the characters' lines recorded by the performer.
  • for example, the registered voice information 600-1 indicates the voice data "Taro 1 voice" recorded by the performer "Hanako Yamada" for the line with line number "1" of the character "Taro" in scenario S1 (scenario ID: S1, title: Daily story).
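  • likewise, a minimal sketch of a registered voice record and its lookup, assuming a flat list keyed by scenario ID, character, performer, and line number (names are illustrative, not the actual schema):

      from dataclasses import dataclass

      @dataclass
      class RegisteredVoice:
          scenario_id: str
          title: str
          character: str
          performer: str
          line_number: int
          voice_data: str      # e.g. a file name or object key of the recorded audio

      REGISTERED_VOICE_DB = [
          RegisteredVoice("S1", "Daily story", "Taro", "Hanako Yamada", 1, "Taro 1 voice"),
      ]

      def find_voice(scenario_id: str, character: str, performer: str, line_number: int):
          """Return the voice data a performer registered for a given line, or None."""
          for rec in REGISTERED_VOICE_DB:
              if (rec.scenario_id, rec.character, rec.performer, rec.line_number) == (
                  scenario_id, character, performer, line_number
              ):
                  return rec.voice_data
          return None

      print(find_voice("S1", "Taro", "Hanako Yamada", 1))   # "Taro 1 voice"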
  • FIG. 7 is an explanatory diagram showing an example of the stored contents of the extraction condition DB 240.
  • the extraction condition DB 240 stores the conditions (i) and (ii).
  • the conditions (i) and (ii) indicate the conditions for extracting the lines for sample reproduction from the lines of the characters.
  • the sample reproduction is to reproduce the sound of each performer for audition when deciding the performer about the characters in the scenario Si.
  • the dialogue for performing sample reproduction may be referred to as "sample reproduction dialogue".
  • condition (i) indicates a condition that characters that are difficult to express by voice, such as "?", "!", the small "tsu", ",", and ".", are not counted in the number of characters.
  • condition (ii) indicates a condition that "the number of characters × 1" is counted as the score of each line and the line with the highest score is extracted.
  • FIG. 8 is a block diagram showing a functional configuration example of the information processing device 101.
  • the information processing device 101 includes a reception unit 801, a display control unit 802, a reproduction control unit 803, a determination unit 804, and a storage unit 810.
  • the functions of the reception unit 801 to the determination unit 804 are realized, specifically, for example, by causing the CPU 301 to execute a program stored in a storage device such as the memory 302, the disk 304, or the portable recording medium 307 shown in FIG. 3, or by the communication I / F 305.
  • the processing result of each functional unit is stored in a storage device such as a memory 302 or a disk 304, for example.
  • the storage unit 810 is realized by a storage device such as a memory 302 or a disk 304, for example. Specifically, for example, the storage unit 810 stores the scenario DB 220, the registered voice DB 230, and the extraction condition DB 240.
  • the reception unit 801 receives the designation of the scenario Si from the user.
  • the user is, for example, a user (logged-in user) who has logged in to the social voice recording web service.
  • the designated scenario Si is, for example, a scenario in which the user reads or listens to the content, or a scenario in which the user records the lines of the characters.
  • the scenario Si is specified, for example, on the application home screen displayed on the user terminal 202 shown in FIG.
  • the application home screen is an operation screen on which any scenario Si can be specified from a plurality of scenarios (for example, scenarios S1 to Sn).
  • keywords and categories may be specified to narrow down the scenarios to be displayed in a list from scenarios S1 to Sn.
  • a screen example of the application home screen will be described later with reference to FIG.
  • the reception unit 801 receives the designation of the scenario Si from the user by receiving the information specifying the scenario Si specified by the user's operation input on the application home screen from the user terminal 202.
  • the information that identifies the scenario Si is, for example, a scenario ID or a title.
  • the display control unit 802 acquires the scenario information of the designated scenario Si from the storage unit 810.
  • the scenario information includes information for identifying characters in the scenario Si and the text of the scenario Si. Then, the display control unit 802 refers to the acquired scenario information and displays the characters in the scenario Si in a selectable manner.
  • the display control unit 802 acquires the scenario information 500-i of the designated scenario Si from the scenario DB 220 (see FIG. 5). Then, the display control unit 802 displays an operation screen on the user terminal 202 in which the characters in the scenario Si can be selected with reference to the acquired scenario information 500-i. A screen example of an operation screen in which characters in scenario Si can be selected will be described later with reference to FIG.
  • the display control unit 802 acquires the performer information associated with the characters in the designated scenario Si from the storage unit 810. Then, the display control unit 802 displays the acquired performer information corresponding to each character in the scenario Si so that it can be determined that each performer is a performer for that character.
  • the performer information is information that identifies the performer who plays the role of the character.
  • the performer information is, for example, the name of the performer, an icon representing the performer, or the like.
  • the performer information may include, for example, performer attributes such as gender and age.
  • hereinafter, any character in the scenario Si may be referred to as "character C".
  • specifically, for example, when the display control unit 802 accepts the selection of a character C in the scenario Si, it acquires the performer information associated with the selected character C from the storage unit 810. Then, the display control unit 802 displays the acquired performer information corresponding to the character C so that each performer can be identified as a performer for the character C.
  • more specifically, for example, the display control unit 802 acquires the registered voice information corresponding to the selected character C from the registered voice DB 230 (see FIG. 6).
  • the registered voice information corresponding to the selected character C is the registered voice information in which the scenario ID of the designated scenario Si is set in the scenario ID field and the name of the selected character C is set in the character field.
  • the display control unit 802 then displays the sample playback screen on the user terminal 202. The sample playback screen is an operation screen that displays the performers identified from the registered voice information corresponding to the character C so that they can be recognized as performers for the character C.
  • the sample reproduction screen may be generated by the information processing device 101 or may be generated by the user terminal 202. A screen example of the sample reproduction screen will be described later with reference to FIG.
  • the display control unit 802 may also acquire the performer information associated with every character C of the designated scenario Si from the storage unit 810, and may display the acquired performer information corresponding to each character C in the scenario Si so that each performer can be identified as a performer for that character C.
  • the display control unit 802 may display the performer information for each character C in the scenario Si in a list instead of displaying the performer information only for the selected character C in the scenario Si.
  • the sample reproduction screen 120 shown in FIG. 1 is an example of an operation screen that displays a list of performer information for each character C in the scenario Si.
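  • the performer candidates shown per character could be derived from the registered voice information roughly as in the following sketch (an assumption about the grouping step, not the actual query used by the display control unit 802):

      from collections import defaultdict

      # (scenario_id, character, performer) tuples taken from the registered voice DB.
      registered = [
          ("S2", "Yamashita", "Hanako Yamada"),
          ("S2", "Yamashita", "Taro Yamada"),
          ("S2", "Yamashita", "Toru Fuji"),
          ("S2", "Suzuki", "Hanako Yamada"),
      ]

      def performers_by_character(scenario_id: str) -> dict[str, list[str]]:
          """Group the registered performers of a scenario by character."""
          grouped: dict[str, set[str]] = defaultdict(set)
          for sid, character, performer in registered:
              if sid == scenario_id:
                  grouped[character].add(performer)
          return {character: sorted(names) for character, names in grouped.items()}

      # Data the sample playback screen would render for each selectable character C.
      print(performers_by_character("S2"))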
  • the reception unit 801 accepts the designation of any one of the plurality of performers for the character C in the scenario Si.
  • the designated performer is, for example, a performer who wants to audition the voice when deciding the performer for the character C.
  • the performer is designated, for example, on the sample reproduction screen displayed on the user terminal 202.
  • the reception unit 801 receives the designation of the performer for the character C by receiving the information for identifying the performer specified by the user's operation input on the sample playback screen from the user terminal 202.
  • the information that identifies the performer is, for example, the name of the performer or the performer ID.
  • each time the playback control unit 803 accepts the designation of any one of the plurality of performers for the character C in the scenario Si, it plays the voice data registered by the designated performer for the same line of the character C.
  • the dialogue to be reproduced (the sample reproduction dialogue) is determined from the lines of the character C by, for example, the determination unit 804. However, the dialogue to be reproduced may be set in advance.
  • the voice data registered by the designated performer is specified, for example, from the registered voice information of the designated performer among the registered voice information corresponding to the character C.
  • that is, each time the reproduction control unit 803 receives from the user the designation of one of the plurality of performers for the one character C, it plays the voice data registered by that performer for the dialogue to be reproduced.
  • specifically, for example, the playback control unit 803 acquires the voice data of the sample playback dialogue from the registered voice information of the designated performer among the registered voice information corresponding to the character C. Then, the reproduction control unit 803 transmits the acquired voice data to the user terminal 202 and outputs it from the speaker 408 (see FIG. 4) of the user terminal 202.
  • as an example, assume that the scenario Si is "Scenario S1", the character C is "Taro", "Hanako Yamada" is designated as the performer, and the sample reproduction dialogue of the character "Taro" is the dialogue with line number "1".
  • in this case, the reproduction control unit 803 acquires the voice data "Taro 1 voice" by referring to the registered voice information 600-1 in the registered voice DB 230. Then, the reproduction control unit 803 transmits the acquired voice data "Taro 1 voice" to the user terminal 202 and outputs it from the speaker 408 of the user terminal 202.
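  • a schematic version of this lookup is sketched below (illustrative only; the actual reproduction control unit 803 streams the acquired data to the user terminal 202 rather than returning it):

      # Registered voice records: (scenario_id, character, performer, line_number, voice_data).
      RECORDS = [
          ("S1", "Taro", "Hanako Yamada", 1, "Taro 1 voice"),
      ]

      # Sample reproduction dialogue decided per character (e.g. by the determination unit 804).
      SAMPLE_LINE = {("S1", "Taro"): 1}

      def sample_voice(scenario_id: str, character: str, performer: str):
          """Voice data the designated performer registered for the character's sample line, or None."""
          line_no = SAMPLE_LINE[(scenario_id, character)]
          for sid, chara, perf, num, data in RECORDS:
              if (sid, chara, perf, num) == (scenario_id, character, performer, line_no):
                  return data
          return None

      print(sample_voice("S1", "Taro", "Hanako Yamada"))   # "Taro 1 voice"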
  • the determination unit 804 determines the dialogue to be reproduced (the sample reproduction dialogue) for the character C. Specifically, for example, the determination unit 804 may preferentially determine, as the sample reproduction dialogue, a line of the character C whose number of characters is a predetermined number α or more.
  • the predetermined number α can be arbitrarily set, and is set to a value of, for example, about 10 to 30.
  • in this case, each time the reproduction control unit 803 accepts the designation of any one of the plurality of performers for the character C in the scenario Si, it plays the voice data registered by the designated performer for the determined sample reproduction dialogue.
  • for example, the dialogue "It's hot" with line number "1" of the character "Taro" in scenario S1 shown in FIG. 5 is not determined as a sample reproduction line because its number of characters is 10 or less. As a result, a line whose number of characters is too small to judge the feel of the performer's voice can be prevented from being determined as a sample reproduction line.
  • the determination unit 804 may also preferentially determine, as the sample reproduction dialogue, a line of the character C whose number of characters is the predetermined number α or more and a predetermined number β or less.
  • the predetermined number β can be arbitrarily set, and is set to, for example, a value of about 50 to 100. As a result, lines whose number of characters is too small to judge the feel of the performer's voice, and lines whose number of characters is so large that the audition takes a long time, can be prevented from being determined as sample reproduction lines.
  • the determination unit 804 may also, for example, preferentially determine, as the sample reproduction dialogue, a line of the character C that does not include a predetermined character code.
  • the predetermined character code is a character code corresponding to a character that is difficult to express by voice, such as "?", "!", the small "tsu", ",", or ".".
  • the dialogue "" of the dialogue number "3" of the character “Taro” in scenario S1 shown in FIG. 5 is not determined as the dialogue for sample reproduction because it includes "".
  • characters that are difficult to express by voice are included, it is possible to prevent lines that are difficult to judge the feeling of the performer's voice from being determined as sample reproduction lines.
  • the determination unit 804 may decide, for example, the dialogue for sample reproduction by giving priority to the dialogue appearing in the first half of the scenario Si among the dialogue of the character C.
  • the first half of the scenario Si may be, for example, the front part obtained by dividing all the lines of the scenario Si into front and rear halves, or a predetermined number of lines from the first line of the scenario Si.
  • for example, the sample playback line for the character "Taro" is determined from the lines of the character "Taro" included in the first 10 lines of scenario S1.
  • as a result, the sample reproduction dialogue can be determined in consideration of the fact that lines that would be spoilers tend to appear in the latter half of the scenario Si.
  • the extraction conditions used when determining the sample reproduction dialogue are set in advance and stored in the extraction condition DB 240 shown in FIG. 7. More specifically, for example, the determination unit 804 identifies the lines of the selected character C with reference to the scenario information 500-i of the scenario Si. Next, the determination unit 804 identifies the conditions (i) and (ii) with reference to the extraction condition DB 240 shown in FIG. 7. Then, the determination unit 804 refers to the identified condition (i) and counts the number of characters of each line of the selected character C.
  • next, the determination unit 804 refers to the identified condition (ii), calculates "the number of characters × 1" as the score of each line of the selected character C, and extracts the line with the highest score. Then, the determination unit 804 determines the extracted line as the sample reproduction dialogue for the character C.
  • for example, the score of the dialogue of line number "3" is "0 points", because none of its characters are counted under condition (i).
  • the number of characters in the line "It doesn't look hot for that" of the line number "4" is "14". Therefore, the score of the dialogue "It doesn't look hot for that" of the dialogue number "4" is "14 points”.
  • therefore, the determination unit 804 determines the line "It doesn't look hot for that" of line number "4", which has the highest score among the lines "1" to "4" of the character "Taro", as the sample playback dialogue.
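  • the scoring described above can be reproduced in a short Python sketch (the excluded-character set, the threshold, and the English stand-in lines are illustrative assumptions; the real conditions are whatever is stored in the extraction condition DB 240):

      # Condition (i): characters that are hard to express by voice are not counted.
      NOT_COUNTED = set("?!っ、。,.!?")   # illustrative set only

      # Lower bound on counted characters (described as roughly 10 to 30).
      ALPHA = 10

      def score(line: str) -> int:
          """Condition (ii): number of counted characters x 1."""
          return sum(1 for ch in line if ch not in NOT_COUNTED)

      def choose_sample_line(lines: dict[int, str]) -> int:
          """Prefer lines scoring at least ALPHA; return the line number with the highest score."""
          candidates = {n: t for n, t in lines.items() if score(t) >= ALPHA} or lines
          return max(candidates, key=lambda n: score(candidates[n]))

      taro_lines = {            # line number -> text (English stand-ins for the example)
          1: "It's hot",
          3: "......",
          4: "It doesn't look hot for that",
      }
      best = choose_sample_line(taro_lines)
      print(best, score(taro_lines[best]))   # picks line 4, the highest-scoring line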
  • the determined sample reproduction dialogue is stored in the storage unit 810 in association with the character C, for example.
  • when the playback control unit 803 receives the designation of a performer for the character C in the scenario Si, it plays the voice data registered by the designated performer for the sample playback dialogue corresponding to the character C stored in the storage unit 810.
  • the count result of the number of characters (or points) for each line of the character C in the scenario Si may be stored in the scenario information 500-i as, for example, the attribute information of the line.
  • as a result, when determining the sample reproduction dialogue of the character C, the process of counting the number of characters (or the score) does not have to be repeated for the same line.
  • the reception unit 801 accepts the selection of the performer for the main reproduction from one or more performers for the character C in the scenario Si.
  • the performer for the main reproduction is a performer who plays the role of the character C when the content corresponding to the scenario Si is reproduced.
  • the content corresponding to the scenario Si is electronic data including the text of the scenario Si and the voice data of the dialogue for each character C in the scenario Si.
  • the performer for the main reproduction is selected, for example, on the sample reproduction screen displayed on the user terminal 202.
  • specifically, for example, the reception unit 801 accepts the selection of the performer for the main reproduction for the character C by receiving, from the user terminal 202, information identifying the performer selected by the user's operation input on the sample playback screen. Further, the reception unit 801 may accept the performer designated immediately before on the sample reproduction screen as the performer for the main reproduction.
  • the performer who first registered the voice data for the character C in the scenario Si may be selected as the performer for the main reproduction.
  • Information that identifies the combination of the character C and the performer for each character C in the scenario Si is stored in the storage unit 810 in association with, for example, the user (logged-in user) of the user terminal 202.
  • the display control unit 802 displays the lines of the character C in the scenario Si in the order of remarks when playing back the content corresponding to the scenario Si. Specifically, for example, the display control unit 802 displays the lines corresponding to the character C in the order of remarks so that it can be determined that the lines correspond to the character C.
  • the display control unit 802 refers to the scenario DB 220 and displays the scenario reproduction screen on the user terminal 202.
  • the scenario playback screen is an operation screen for playing back the content corresponding to the scenario Si.
  • the scenario reproduction screen may be generated by the information processing device 101 or may be generated by the user terminal 202. A screen example of the scenario reproduction screen will be described later with reference to FIG.
  • the reproduction control unit 803 reproduces the voice data registered by the selected performer for the dialogue of the character C. Instructions such as playback start, pause, and playback end of the content corresponding to the scenario Si are given, for example, on the scenario playback screen displayed on the user terminal 202.
  • specifically, for example, when the reproduction control unit 803 receives the instruction to start reproducing the content corresponding to the scenario Si, it refers to the scenario DB 220 and sequentially selects the dialogue to be reproduced. Next, each time a dialogue to be reproduced is selected, the reproduction control unit 803 refers to the registered voice DB 230 and acquires the voice data registered for that dialogue by the performer selected as the performer for the main reproduction. Then, the reproduction control unit 803 transmits the acquired voice data to the user terminal 202 and outputs it from the speaker 408 of the user terminal 202.
  • the reproduction control unit 803 may output, for example, an audio data group summarizing the audio data of the performer for the main reproduction for each character C in the scenario Si to the user terminal 202.
  • the user terminal 202 may sequentially output the voice data of the dialogue for each character C in the scenario Si in the order of speech based on the received voice data group.
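  • the continuous playback could look roughly like the following sketch (the helper functions stand in for the registered voice DB lookup and for transmitting data to the user terminal 202; they are not real APIs of the system):

      # Performer chosen for the main reproduction, per character.
      MAIN_PERFORMER = {"Taro": "Hanako Yamada", "Hanako": "Taro Yamada"}

      # Scenario text in speaking order: (character, line_number, text).
      SCRIPT = [("Taro", 1, "It's hot"), ("Hanako", 1, "Eh?")]

      def voice_of(character: str, performer: str, line_number: int) -> str:
          """Stand-in for the registered voice DB lookup."""
          return f"{character}-{line_number} recorded by {performer}"

      def send_to_terminal(voice_data: str) -> None:
          """Stand-in for transmitting the data and outputting it from the terminal's speaker."""
          print("playing:", voice_data)

      def play_scenario() -> None:
          # Select each line in speaking order and play the main performer's recording.
          for character, line_number, _text in SCRIPT:
              performer = MAIN_PERFORMER[character]
              send_to_terminal(voice_of(character, performer, line_number))

      play_scenario()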
  • the reception unit 801 may accept a share request for sharing the combination of the character C and the performer in the scenario Si with other users.
  • the share request is made, for example, when sharing a recommended combination of the character C and the performer with another user.
  • the share request is made, for example, on the scenario reproduction screen displayed on the user terminal 202.
  • when the reproduction control unit 803 receives the share request, it generates combination information that specifies, for each character C in the scenario Si, the combination of the character C and the performer selected as the performer for the main reproduction, and outputs the generated combination information.
  • the combination information is represented by, for example, a URL (Uniform Resource Locator).
  • specifically, for example, when the playback control unit 803 receives a share request from the user terminal 202, it identifies the combination of each character C in the scenario Si and the corresponding performer, which is stored in the storage unit 810 in association with the user (logged-in user) of the user terminal 202.
  • the reproduction control unit 803 generates combination information that specifies the combination of the character C and the performer in the specified scenario Si.
  • for example, the reproduction control unit 803 generates the URL "cm-app://service/scenario S1/Taro:Hanako Yamada/Hanako:Taro Yamada/" as the combination information of the scenario S1. Then, the reproduction control unit 803 transmits the generated combination information to the user terminal 202.
  • the user can thus provide other users with combination information indicating a recommended combination of the characters C and performers in the scenario Si by using, for example, SNS (Social Networking Service) or e-mail.
  • when the reproduction control unit 803 receives combination information from a user, it reproduces, when playing the content corresponding to the scenario Si, the voice data registered by the performer specified from the combination information for the dialogue of each character C. Specifically, for example, when the playback control unit 803 receives the URL "cm-app://service/scenario S1/Taro:Hanako Yamada/Hanako:Taro Yamada/", it reproduces, when playing the content corresponding to the scenario S1, the voice data registered by the performers specified from the URL (Taro: Hanako Yamada, Hanako: Taro Yamada) for the lines of each character C (Taro, Hanako).
  • the recommended combination of the character C and the performer in the scenario Si selected by a certain user can be shared by a plurality of users.
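  • the combination information could be serialized and parsed along the lines of the sketch below (the "cm-app://service/..." layout follows the example above; the exact parsing rules are an assumption, not taken from the publication):

      def build_combination_url(scenario_id: str, casting: dict[str, str]) -> str:
          """Encode the scenario and character->performer pairs, e.g. for sharing over SNS or e-mail."""
          pairs = "/".join(f"{character}:{performer}" for character, performer in casting.items())
          return f"cm-app://service/scenario {scenario_id}/{pairs}/"

      def parse_combination_url(url: str) -> tuple[str, dict[str, str]]:
          """Recover the scenario and the character->performer combination from the URL."""
          parts = url.removeprefix("cm-app://service/").strip("/").split("/")
          scenario_id = parts[0].removeprefix("scenario ").strip()
          casting = dict(p.split(":", 1) for p in parts[1:])
          return scenario_id, casting

      url = build_combination_url("S1", {"Taro": "Hanako Yamada", "Hanako": "Taro Yamada"})
      print(url)                          # cm-app://service/scenario S1/Taro:Hanako Yamada/...
      print(parse_combination_url(url))   # ('S1', {'Taro': 'Hanako Yamada', 'Hanako': 'Taro Yamada'})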
  • each functional unit of the information processing device 101 may be realized by, for example, the user terminal 202.
  • the user terminal 202 acquires, for example, various information used in each functional unit (for example, scenario information in the scenario DB 220, registered voice information in the registered voice DB 230, and conditions in the extraction condition DB 240) from the information processing device 101.
  • each functional unit of the information processing device 101 may be realized by a plurality of computers (for example, the information processing device 101 and the user terminal 202) in the voice reproduction system 200.
  • FIG. 9 is an explanatory diagram showing a screen example of the login screen.
  • the login screen 900 is an example of an operation screen for performing a login process to the social voice recording web service.
  • on the login screen 900, tapping the box 901 allows you to enter your e-mail address.
  • tapping the box 902 allows you to enter your password.
  • when the button 903 is tapped on the login screen 900, the login process can be performed using the entered e-mail address and password.
  • in the login process, the entered password is matched against the password registered in a user DB (not shown) in association with the entered e-mail address.
  • if the passwords match, the authentication is OK.
  • if the passwords do not match, or if the entered e-mail address does not exist, the authentication is NG.
  • on the login screen 900, tapping the button 904 allows you to log in using an SNS. Further, when the button 905 is tapped on the login screen 900, the new registration screen 1000 shown in FIG. 10, described later, can be displayed and the new registration process can be performed.
  • FIG. 10 is an explanatory diagram showing a screen example of the new registration screen.
  • the new registration screen 1000 is an example of an operation screen for newly registering to the social voice recording web service.
  • when the registration button 1006 is tapped on the new registration screen 1000, new registration can be performed using the information input in each of the boxes 1001 to 1005.
  • the newly registered user information is registered in, for example, a user DB (not shown).
  • FIG. 11 is an explanatory diagram showing a screen example of the application home screen.
  • the application home screen 1100 is an example of an operation screen (TOP screen) that is first displayed when the login process to the social voice recording web service is completed.
  • the application home screen 1100 includes notification 1110 and a scenario list 1120.
  • Notification 1110 displays a notification regarding the social voice recording web service.
  • in the scenario list 1120, a plurality of scenarios (for example, scenarios S1 to S3) are displayed and can be specified.
  • the scenario name and synopsis of each scenario are displayed in the scenario list 1120.
  • the scenarios displayed in the scenario list 1120 can be arbitrarily set. For example, in the scenario list 1120, a predetermined number of top scenarios may be displayed in order of newest, popularity, or title. Further, by tapping the button 1130 on the application home screen 1100, scenarios that are not currently displayed can be displayed.
  • on the application home screen 1100, the user can specify, from the plurality of scenarios (for example, scenarios S1 to Sn) registered in the social voice recording web service, a scenario Si whose content is to be read or listened to, or a scenario Si in which the lines of a character C are to be recorded.
  • when a scenario Si in the scenario list 1120 is tapped (designated) on the application home screen 1100, information identifying the tapped (designated) scenario Si is transmitted to the information processing device 101, and the work introduction screen 1200 shown in FIG. 12 is displayed.
  • here, it is assumed that scenario S2 is specified.
  • FIG. 12 is an explanatory diagram showing a screen example of the work introduction screen.
  • the work introduction screen 1200 is an example of an operation screen for introducing a work about scenario S2 (title: SENPAI ⁇ KOUHAI).
  • on the work introduction screen 1200, work introduction information 1210 such as the synopsis of scenario S2, the characters, and acting points is displayed.
  • the user can confirm the synopsis, characters, acting points, etc. of scenario S2 (title: SENPAI ⁇ KOUHAI).
  • from the work introduction screen 1200, the user can proceed either to the scenario Si for reading and listening to the content or to the scenario Si for recording the dialogue of the character C.
  • the voice recording screen is an operation screen for recording the dialogue of the character C.
  • FIG. 13 is an explanatory diagram showing a screen example of the scenario reproduction screen.
  • the scenario reproduction screen 1300 is an example of an operation screen for reproducing the content corresponding to the scenario S2.
  • on the scenario reproduction screen 1300, the lines corresponding to the characters C (Yamashita, Suzuki) in the scenario S2 are displayed in the order of remarks so that it can be determined which character C each line corresponds to.
  • the character C (Yamashita, Suzuki) can be selected and the sample playback screen for the character C can be displayed in a pull-down manner.
  • the sample playback screen is an operation screen that displays the performer corresponding to the character C so as to be able to determine that the performer is the performer for the character C.
  • for example, when the character C "Yamashita" is selected, the sample playback screen 1400 shown in FIG. 14 is pulled down.
  • the volume adjustment screen is an operation screen for adjusting the volume when reproducing the dialogue of the character C.
  • the lines of the character C in the scenario Si can be continuously played in the order of remarks.
  • the audio data registered by the performer selected for the main reproduction is reproduced.
  • when a play button (for example, one of the play buttons 1321 to 1329) is tapped, the voice data registered by the performer for the tapped dialogue can be reproduced.
  • the voice data registered by the performer is played for the line "... I searched for, Mr. Suzuki" of character C (Yamashita).
  • the user can listen to the voice data registered by the performer for the lines of each character C while reading the text of the scenario Si.
  • FIG. 14 is an explanatory diagram showing a screen example of the sample reproduction screen.
  • the sample playback screen 1400 is an operation screen that displays the plurality of performers (Hanako Yamada, Taro Yamada, Toru Fuji) corresponding to the character C "Yamashita" in scenario S2 so that they can be identified as performers for the character C "Yamashita".
  • on the sample playback screen 1400, a performer can be designated by tapping one of the sample playback buttons 1401, 1402, and 1403.
  • the voice data registered by the designated performer is played back for the same line of the character C "Yamashita" each time.
  • the dialogue to be reproduced is, for example, the dialogue for sample reproduction determined by the determination unit 804 for the character C "Yamashita".
  • the performer "Hanako Yamada” is designated, and the voice data registered by the performer "Hanako Yamada” is played back for the sample playback dialogue of the character C "Yamashita”. Will be done.
  • the performer "Taro Yamada” is designated, and the voice data registered by the performer “Taro Yamada” is played back for the sample playback dialogue of the character C "Yamashita”. Will be done.
  • the performer "Fuji Toru” is designated, and the voice data registered by the performer "Fuji Toru” is played for the sample playback dialogue of the character C "Yamashita”. Will be done.
  • the user can audition the voices of each performer (Hanako Yamada, Taro Yamada, Toru Fuji) when deciding the performer for the character "Yamashita" in the scenario S2.
  • at this time, even when the designated performer is switched, the voice of the same line of the character "Yamashita" is played, so the user can easily determine his or her favorite performer by comparing the voices of the performers.
  • on the sample playback screen 1400, the performer for the main reproduction can be selected from the plurality of performers "Hanako Yamada, Taro Yamada, Toru Fuji".
  • for example, the performer designated immediately before may be selected as the performer for the main reproduction, or a performer explicitly selected by the user may be selected as the performer for the main reproduction.
  • FIG. 15 is a sequence diagram showing an operation example at the time of scenario registration of the voice reproduction system 200.
  • the administrator terminal 201 accesses the management screen by the operation input of the administrator (step S1501).
  • the management screen (not shown) is an operation screen for performing various management tasks.
  • when the information processing device 101 detects access to the management screen from the administrator terminal 201, it displays a login screen on the administrator terminal 201 (step S1502).
  • the administrator terminal 201 performs a login process when login information is input by an operation input of the administrator on the displayed login screen (step S1503).
  • the information processing device 101 performs login authentication using the input login information (step S1504).
  • when the authentication is OK, the information processing device 101 displays the management menu on the administrator terminal 201 (step S1505).
  • the information processing device 101 displays a message such as "authentication failed" on the administrator terminal 201, for example.
  • the administrator terminal 201 displays the scenario registration screen when scenario registration is selected by the operation input of the administrator from the displayed management menu (step S1506).
  • the scenario registration screen (not shown) is an operation screen that accepts input of scenario information.
  • the scenario information includes, for example, a scenario ID, a title, a synopsis, characters, and a text.
  • the administrator terminal 201 transmits a registration request for the input scenario information to the information processing device 101 (step S1507).
  • when the information processing device 101 receives the scenario information registration request from the administrator terminal 201, it registers the scenario information in the scenario DB 220 (step S1508).
  • the administrator of the voice reproduction system 200 can newly register the scenario information provided by the social voice recording web service.
  • FIG. 16 is a sequence diagram showing an operation example at the time of login of the voice reproduction system 200.
  • the user terminal 202 activates the application for using the social voice recording web service by the operation input of the user (step S1601).
  • the user terminal 202 displays a login screen on the display 405 (step S1602).
  • when the login information is input by the user's operation input on the displayed login screen, the user terminal 202 performs the login process (step S1603).
  • the information processing device 101 performs login authentication using the input login information (step S1604).
  • when the authentication is OK, the information processing device 101 transmits screen information (information such as the notification and the scenario list) of the application home screen to the user terminal 202 (step S1605).
  • when the authentication is NG, the information processing device 101 displays a message such as "authentication failed" on the user terminal 202, for example.
  • when the user terminal 202 receives the screen information of the application home screen, it displays the application home screen on the display 405 based on the screen information (step S1606).
  • the application home screen may be generated by the user terminal 202 or may be generated by the information processing device 101.
  • when a scenario Si is designated by the user's operation input on the application home screen, the user terminal 202 transmits information identifying the designated scenario Si to the information processing device 101 (step S1607).
  • when the information processing device 101 receives the information identifying the designated scenario Si, it acquires the work introduction information (synopsis, characters, acting points, etc.) of the designated scenario Si from the scenario DB 220 (step S1608). Then, the information processing device 101 transmits the acquired work introduction information to the user terminal 202 (step S1609).
  • when the user terminal 202 receives the work introduction information, it displays the work introduction screen on the display 405 based on the work introduction information (step S1610).
  • the work introduction screen may be generated by the user terminal 202 or may be generated by the information processing device 101.
  • the user can log in to the social voice recording web service and browse the work introduction screen of the scenario Si specified from the scenario list.
  • FIGS. 17 and 18 are sequence diagrams showing an operation example of the audio reproduction system 200 at the time of content reproduction.
  • In FIG. 17, when the playback tab (see, for example, FIG. 12) is selected by the operation input of the user, the user terminal 202 transmits a playback request for the scenario Si to the information processing device 101 (step S1701).
  • When the information processing device 101 receives the playback request for the scenario Si, the information processing device 101 transmits the content corresponding to the scenario Si to the user terminal 202 (step S1702).
  • The content corresponding to the scenario Si includes, for example, the text of the scenario Si, the performer information corresponding to each character C, the voice data of the performers for the main reproduction of each character C, and the sample reproduction dialogue of each character C.
  • When the user terminal 202 receives the content corresponding to the scenario Si, the user terminal 202 displays the scenario reproduction screen on the display 405 based on the content (step S1703).
  • the scenario reproduction screen may be generated by the user terminal 202 or may be generated by the information processing device 101.
  • When the user terminal 202 accepts the selection of any character C in the scenario Si by the operation input of the user on the scenario reproduction screen (step S1704), the user terminal 202 displays a sample reproduction screen for the selected character C (step S1705).
  • When any performer is designated by the operation input of the user on the sample reproduction screen, the user terminal 202 transmits information identifying the designated performer to the information processing device 101 (step S1706).
  • When the information processing device 101 receives the information identifying the designated performer, the information processing device 101 refers to the registered voice DB 230 and acquires the voice data of the sample reproduction dialogue from the registered voice information of the designated performer, out of the registered voice information corresponding to the character C specified from the received information (step S1707).
  • the sample playback dialogue for character C is determined from the dialogue of character C.
  • the specific processing procedure for determining the sample reproduction dialogue for the character C will be described later with reference to FIG.
  • the information processing device 101 transmits the acquired voice data of the sample reproduction dialogue to the user terminal 202 (step S1708).
  • the user terminal 202 reproduces the sample reproduction dialogue for the selected character C by outputting the received voice data from the speaker 408 (step S1709).
  • In FIG. 18, when a performer for the main reproduction is selected by the operation input of the user on the sample reproduction screen, the user terminal 202 transmits information identifying the selected performer for the main reproduction to the information processing device 101 (step S1801).
  • When the information processing device 101 receives the information identifying the performer selected for the main reproduction, the information processing device 101 refers to the registered voice DB 230 and acquires, out of the registered voice information corresponding to the character C specified from the received information, the registered voice information of the selected performer (step S1802). Then, the information processing device 101 transmits the acquired registered voice information to the user terminal 202 (step S1803).
  • the user terminal 202 reproduces the content corresponding to the scenario Si when the play button is selected by the user's operation input on the scenario reproduction screen (step S1804). Specifically, for example, the user terminal 202 sequentially selects the dialogue to be reproduced, and outputs the voice data registered by the performer selected as the performer for the main reproduction from the speaker 408 for the selected dialogue.
  • In this way, the user can check the voice of the sample playback dialogue for each performer of the character C in the scenario Si and determine the performer of his or her preference. Further, it is possible to designate a performer of his or her preference for each character C in the scenario Si and play the content corresponding to the scenario Si; a minimal sketch of this main reproduction loop is given immediately below.
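  • The following Python sketch illustrates, under assumed data structures that are not defined in this description (the script list, the performer selection mapping, the registered voice mapping, and the play_audio callback are all illustrative), how the main reproduction of step S1804 could proceed: the lines of the scenario are traversed in spoken order and, for each line, the audio registered by the performer chosen for that line's character is output.

```python
from typing import Callable, Dict, List, Tuple

def play_scenario(
    script: List[Tuple[str, int]],                        # (character, line number) in spoken order
    main_performer: Dict[str, str],                       # character -> performer selected for the main reproduction
    registered_voice: Dict[Tuple[str, int, str], bytes],  # (character, line number, performer) -> recorded audio
    play_audio: Callable[[bytes], None],                  # outputs audio via the speaker (e.g. speaker 408)
) -> None:
    """Sequentially reproduce the scenario with the performers selected for the main reproduction."""
    for character, line_no in script:
        performer = main_performer[character]
        audio = registered_voice.get((character, line_no, performer))
        if audio is not None:  # skip lines that the selected performer has not recorded
            play_audio(audio)
```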
  • the sample reproduction dialogue determination process is a process for determining the sample reproduction dialogue for the character C in the scenario Si.
  • FIG. 19 is a flowchart showing an example of the sample reproduction dialogue determination processing procedure of the information processing apparatus 101.
  • In FIG. 19, first, the information processing apparatus 101 refers to the scenario information 500-i of the scenario Si in the scenario DB 220 and selects an unselected line from among the lines of the character C (step S1901).
  • Next, the information processing apparatus 101 refers to the extraction condition DB 240 and counts the number of characters in the selected line, without counting characters such as "…", "?", "!", "っ", "、", and "。" (step S1902). The information processing device 101 then determines whether or not there is an unselected line among the lines of the character C (step S1903).
  • If there is an unselected line (step S1903: Yes), the information processing device 101 returns to step S1901.
  • On the other hand, if there is no unselected line (step S1903: No), the information processing apparatus 101 refers to the extraction condition DB 240, uses the counted number of characters as the score of each line, and identifies the line with the highest score (step S1904).
  • the information processing device 101 determines the specified dialogue as the sample reproduction dialogue (step S1905). Then, the information processing apparatus 101 stores the determined sample reproduction dialogue in association with the character C in the scenario Si (step S1906), and ends a series of processes according to this flowchart.
  • In step S1901, the information processing device 101 may select only the lines that appear in the first half of the scenario Si from among the lines of the character C. This makes it possible to prevent spoiler lines from being determined as sample reproduction lines (a minimal code sketch of this determination procedure is given below).
  • the sample reproduction dialogue determination process may be executed, for example, when the scenario information of scenario Si is newly registered. Further, the sample reproduction dialogue determination process may be executed, for example, when the information processing apparatus 101 receives information specifying the performer specified on the sample reproduction screen from the user terminal 202.
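  • The following Python sketch, offered as an illustration rather than the actual implementation, applies the scoring of FIG. 19 and the extraction conditions (i) and (ii): characters such as "…", "?", "!", "っ", "、", and "。" are not counted, and the line with the highest score is chosen as the sample reproduction dialogue. The optional first-half restriction approximates the modification of step S1901 described above.

```python
from typing import List

NOT_COUNTED = set("…?!っ、。")  # condition (i): characters excluded from the count

def score(line: str) -> int:
    """Condition (ii): number of characters x 1, ignoring the excluded characters."""
    return sum(1 for ch in line if ch not in NOT_COUNTED)

def determine_sample_line(lines: List[str], first_half_only: bool = False) -> str:
    """Return the line of character C with the highest score (steps S1901 to S1905).

    lines -- the lines of character C, in the order they appear in scenario Si (assumed non-empty)
    """
    candidates = lines[: max(1, len(lines) // 2)] if first_half_only else lines
    return max(candidates, key=score)

# Example: determine_sample_line(["暑いね", "え?", "すごくおいしいね"]) returns "すごくおいしいね".
```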
  • As described above, according to the information processing device 101, when the designation of the scenario Si is received from the user, the performer information associated with the characters C of the designated scenario Si is acquired from the storage unit 810, and the acquired performer information corresponding to a character C of the scenario Si can be displayed so that it can be identified as the performer information for that character C. Further, according to the information processing device 101, each time the designation of any one of the plurality of performers for any character C in the scenario Si is accepted, the voice data registered by the designated performer can be played back for the same line (the sample reproduction dialogue) of that character C.
  • As a result, even when there are a plurality of performers for a character C in the scenario Si, the user can audition the voice recorded by a performer for the sample playback dialogue simply by designating that performer from among the plurality of performers. Further, even if the performer whose sample is to be reproduced is switched, the voice of the same line of the character C is reproduced, so that the voices of the performers can be easily compared.
  • Further, according to the information processing device 101, lines having at least a predetermined number of characters can be preferentially determined as the sample reproduction lines.
  • Further, according to the information processing device 101, lines whose number of characters falls within a predetermined range (at least a first predetermined number and at most a second predetermined number) can be preferentially determined as the sample reproduction lines.
  • Further, according to the information processing device 101, lines that do not include a predetermined character code can be preferentially determined as the sample reproduction lines.
  • Further, according to the information processing device 101, lines that appear in the first half of the scenario Si can be preferentially determined as the sample reproduction lines.
  • Further, according to the information processing device 101, when the selection of a performer for the main reproduction is received from among the plurality of performers for the character C, the voice data registered by the selected performer can be played back for the lines of the character C when the content corresponding to the scenario Si is reproduced.
  • Further, according to the information processing device 101, when a request to share the combination of characters C and performers in the scenario Si with another user is received, it is possible to generate combination information that specifies the combination of each character C and the performer selected as the performer for the main reproduction, and to output the generated combination information. Then, according to the information processing device 101, when the combination information is received from a user, the voice data registered by the performer specified from the combination information can be played back for the lines of the character C when the content corresponding to the scenario Si is reproduced. A minimal sketch of such combination information is given below.
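  • The following Python sketch shows one possible, assumed shape for the combination information described above: a mapping from each character C in the scenario Si to the performer selected for the main reproduction, serialized so that another user can reproduce the scenario with the same combination. The JSON layout and the function names are illustrative assumptions, not part of this description.

```python
import json
from typing import Dict

def generate_combination_info(scenario_id: str, selection: Dict[str, str]) -> str:
    """selection maps a character name to the performer selected for the main reproduction."""
    return json.dumps({"scenario_id": scenario_id, "combination": selection}, ensure_ascii=False)

def apply_combination_info(payload: str) -> Dict[str, str]:
    """Recover the character-to-performer mapping on the receiving user's side."""
    return json.loads(payload)["combination"]

# Example: generate_combination_info("S1", {"たろう": "山田花子"})
```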
  • Further, according to the information processing device 101, the performer information associated with each character C of the designated scenario Si can be acquired from the storage unit 810, and, for each character C of the scenario Si, the acquired performer information corresponding to that character C can be displayed so that it can be identified as the performer information for that character C.
  • As a result, the performer information for all the characters C in the scenario Si can be displayed in a list, and the performers that can be designated for each character C can be easily confirmed.
  • As described above, according to the information processing apparatus 101 and the voice reproduction system 200, when there are a plurality of performers for a character C in the scenario Si, the voice recorded by each performer can be auditioned for the same line of the character C simply by designating one of the plurality of performers.
  • Note that the user terminal 202 may be configured to accept the designation of any of the lines in the scenario Si before or after step S1706.
  • In this case, when the user terminal 202 accepts the designation of a line, the user terminal 202 transmits the information identifying the designated performer and the information identifying the designated line in the scenario Si to the information processing device 101.
  • When the information processing device 101 receives the information identifying the designated performer and the information identifying the designated line, the information processing device 101 refers to the registered voice DB 230 and acquires, from the registered voice information of the designated performer corresponding to the character C specified from the received information, the voice data of the designated line as the voice data of the sample playback dialogue.
  • In this way, the information processing device 101 acquires the voice data of the designated line as the sample reproduction dialogue, and the user can audition a line of his or her choice with the voice of the designated performer. A minimal sketch of such a request is given below.
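  • The following Python sketch illustrates, as an assumption rather than the described implementation, a sample playback request extended with an optional line designation: when the user designates a specific line of the scenario Si, that line takes precedence over the automatically determined sample reproduction dialogue. The class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SamplePlaybackRequest:
    scenario_id: str
    character: str
    performer: str
    line_no: Optional[int] = None  # None: use the automatically determined sample reproduction dialogue

def resolve_sample_line_no(req: SamplePlaybackRequest, default_line_no: int) -> int:
    """Pick the line number whose voice data will be returned for sample playback."""
    return req.line_no if req.line_no is not None else default_line_no

# Example: resolve_sample_line_no(SamplePlaybackRequest("S1", "たろう", "山田花子", line_no=2), 1) returns 2.
```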
  • the audio reproduction method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation.
  • This audio reproduction program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, a DVD, or a USB memory, and is executed by being read from the recording medium by the computer. Further, this audio reproduction program may be distributed via a network such as the Internet.
  • The information processing apparatus 101 described in the present embodiment can also be realized by an application-specific IC such as a standard cell or a structured ASIC (Application Specific Integrated Circuit), or by a PLD (Programmable Logic Device) such as an FPGA.
101 Information processing device
200 Voice playback system
201 Administrator terminal
202 User terminal
210 Network
220 Scenario DB
230 Registered voice DB
240 Extraction condition DB
300, 400 Bus
301, 401 CPU
302, 402 Memory
303 Disk drive
304 Disk
305, 403 Communication I/F
306, 409 Portable recording medium I/F
307, 410 Portable recording medium
404 Camera
405 Display
406 Input device
407 Microphone
408 Speaker
801 Reception unit
802 Display control unit
803 Playback control unit
804 Decision unit
900 Login screen
1000 New registration screen
1100 App home screen
1200 Work introduction screen
1300 Scenario playback screen
1400 Sample playback screen
C Characters
S1 to Sn, Si Scenario

Abstract

In the present invention, a sample playback screen (1400) is an operation screen on which a plurality of performers, "Hanako Yamada, Taro Yamada, Toru Fuji", who correspond to a character "Yamashita" in a scenario (S2), are displayed such that the performer performing the character "Yamashita" can be discerned. Each time a performer is designated by tapping a sample playback button (1401, 1402, 1403) on the sample playback screen (1400), voice data that was recorded by the designated performer is played back for the same dialogue of the character "Yamashita". For example, if the sample playback button (1401) is tapped, the performer "Hanako Yamada" is designated, and the voice data that was recorded by the performer "Hanako Yamada" is played back for the sample playback dialogue of the character "Yamashita". Due to this configuration, when determining a performer for the character "Yamashita", a user can listen to a voice sample of a designated performer by simply designating one of the plurality of performers "Hanako Yamada, Taro Yamada, Toru Fuji".

Description

Audio playback program, audio playback method and audio playback system
The present invention relates to a voice reproduction program, a voice reproduction method, and an audio reproduction system.
Conventionally, there is a social voice recording web service that allows everyone to create various scenarios. The social voice recording web service has a recording mode in which each performer records the lines of characters in the scenario, and a listener mode in which the scenario and the performer are specified in a desired combination and listened to.
As a prior art, it is a work publishing system that publishes a story work via a network, and the recorded content for the story work is received from the voice performer, and the recorded content determined to be above a predetermined standard level is described as a story. Some of them are managed in association with the work, and the corresponding recorded content is added to the story work and published. In addition, among a plurality of registered audio data sets, a selection screen for accepting selection of an audio data set to be used as an audio data set to be output when displaying cartoon data is output to the audio playback device, and the audio data set is selected. Then, there is a technique for executing audio reproduction using the selected audio data set when displaying cartoon data in the audio reproduction device.
JP-A-2002-304482 JP-A-2018-169691
However, with the conventional technology, there is a problem that it is difficult to specify a performer of your choice when there are multiple performers for one character in the scenario.
In one aspect, the present invention aims to make it easier to specify a performer of his choice from a plurality of performers.
In one embodiment, when a scenario designation is received from the user, the performer information associated with the designated characters in the scenario is acquired from the storage unit, and the performer information corresponding to the acquired characters in the scenario is displayed so that it can be determined that the performer is a performer for the character. Each time the designation of any one of a plurality of performers for any of the characters in the scenario is accepted, a voice reproduction program is provided that reproduces the voice data registered by the designated performer for the same dialogue of that character.
According to one aspect of the present invention, there is an effect that it is possible to easily specify a performer of his or her choice from a plurality of performers.
FIG. 1 is an explanatory diagram showing an embodiment of an audio reproduction method according to an embodiment.
FIG. 2 is an explanatory diagram showing a system configuration example of the audio reproduction system 200.
FIG. 3 is a block diagram showing a hardware configuration example of the information processing device 101.
FIG. 4 is a block diagram showing a hardware configuration example of the user terminal 202.
FIG. 5 is an explanatory diagram showing an example of the stored contents of the scenario DB 220.
FIG. 6 is an explanatory diagram showing an example of the stored contents of the registered voice DB 230.
FIG. 7 is an explanatory diagram showing an example of the stored contents of the extraction condition DB 240.
FIG. 8 is a block diagram showing a functional configuration example of the information processing device 101.
FIG. 9 is an explanatory diagram showing a screen example of the login screen.
FIG. 10 is an explanatory diagram showing a screen example of a new registration screen.
FIG. 11 is an explanatory diagram showing a screen example of the application home screen.
FIG. 12 is an explanatory diagram showing a screen example of the work introduction screen.
FIG. 13 is an explanatory diagram showing a screen example of the scenario reproduction screen.
FIG. 14 is an explanatory diagram showing a screen example of the sample reproduction screen.
FIG. 15 is a sequence diagram showing an operation example at the time of scenario registration of the voice reproduction system 200.
FIG. 16 is a sequence diagram showing an operation example at the time of login of the voice reproduction system 200.
FIG. 17 is a sequence diagram (No. 1) showing an operation example of the audio reproduction system 200 at the time of content reproduction.
FIG. 18 is a sequence diagram (No. 2) showing an operation example of the audio reproduction system 200 at the time of content reproduction.
FIG. 19 is a flowchart showing an example of a sample reproduction dialogue determination processing procedure of the information processing apparatus 101.
The audio reproduction program, the audio reproduction method, and the embodiment of the audio reproduction system according to the present invention will be described in detail below with reference to the drawings.
(Embodiment)
FIG. 1 is an explanatory diagram showing an embodiment of an audio reproduction method according to an embodiment. In FIG. 1, the information processing device 101 is a computer that reproduces voice data registered by the performer with respect to the dialogue of the characters in the scenario. The scenario is a script or screenplay that describes the lines of the characters. The scenario may describe the composition of the scene, the movement of the characters, and the like.
Examples of scenarios include novels, light novels, poems, dramas, stage scripts, essays, and manga. The characters are the characters and animals that appear in the scenario. The characters may include a narrator who explains the scene, the psychology of the characters, and the like. A performer is a person who plays the role of a character.
The social voice recording web service creates various scenarios together. For example, you can record and listen to your own voice in various stories, or you can enjoy combining voices recorded by other people. The social voice recording web service is a user-participation type service in which voice actors, people who aspire to be voice actors, people who want to play with their voice, people who want to listen to various voices, and so on gather.
The social voice recording web service has a recording mode and a listener mode. In recording mode, the performer can record the lines of the characters in the scenario. In the listener mode, you can specify and listen to the scenario and performer in any combination you like.
Here, since the social voice recording web service is a user-participation type service, a situation occurs in which a plurality of performers play one character in one scenario. That is, multiple performers may record lines for one character in a scenario. In order to deal with this situation, in the listener mode, when playing the content corresponding to the scenario, the performer can be specified for each character in the scenario.
Since the atmosphere changes depending on the person who plays the same character, the user can create and enjoy his own scenario by designating the performer of his choice from multiple performers. At this time, for example, it is conceivable to display a list of information of each performer as a judgment material so that the user can easily judge his or her favorite performer from a plurality of performers. However, although it is possible to display a list of visual information, it is difficult to display a list because the content handled by the social voice recording web service is voice.
In addition, when the user judges his or her favorite performer, it is conceivable to play back a sample of the dialogue recorded by the performer. However, when there are multiple performers for one character in the scenario, if complicated and troublesome operations are required to check the lines recorded by each of the multiple performers, it takes time and effort, and it becomes difficult to specify a favorite performer.
Therefore, in the present embodiment, a voice reproduction method will be described in which, when there are a plurality of performers for one character in the scenario, the user can check the voice recorded by a designated performer simply by designating one of the plurality of performers, making it easier to designate a performer of his or her choice from among the plurality of performers. Hereinafter, a processing example of the information processing apparatus 101 will be described.
(1) When the information processing device 101 receives a scenario designation from the user, the information processing device 101 acquires the performer information associated with the characters in the designated scenario from the storage unit 110. Here, for each scenario, the storage unit 110 stores the performer information that identifies the performer who plays the role of the character in association with each character in each scenario.
In the example of FIG. 1, it is assumed that the designation of "scenario 001" is received from the terminal device 102 of the user 001. In this case, the information processing device 101 acquires, for example, the performer information associated with each character in "Scenario 001" from the storage unit 110.
(2) The information processing device 101 displays the performer information corresponding to the characters in the acquired scenario so that it can be determined that the performer is the performer for the characters. Here, the performer information is information that identifies the performer who plays the role of the character, and is, for example, the name of the performer or an icon representing the performer.
In the example of FIG. 1, the information processing device 101 displays, for each character in "Scenario 001", the performer information corresponding to the character so that it can be determined that the performer is a performer for that character. Here, the characters in "Scenario 001" are referred to as "Character 001" and "Character 002". In this case, the information processing device 101 displays, for example, the sample reproduction screen 120 on the terminal device 102. On the sample playback screen 120, the performer information "user 100, user 2501" corresponding to "character 001" is displayed so as to be discriminable as the performers for "character 001". Further, on the sample reproduction screen 120, the performer information "user 556" corresponding to "character 002" is displayed so as to be discriminable as the performer for "character 002".
(3) Each time the information processing device 101 accepts the designation of one of a plurality of performers for any of the characters in the scenario, the information processing device 101 plays back, for the same line of that character, the voice data registered by the designated performer. Here, the line to be reproduced is determined from the lines of the characters in the scenario.
That is, when there are a plurality of performers for one of the characters in the scenario, the information processing device 101 reproduces, each time it accepts the designation of one of the plurality of performers from the user, the voice data registered by that performer for the determined line to be reproduced.
In the example of FIG. 1, each time the information processing apparatus 101 receives the designation of any one of the plurality of performers for any of the characters in "Scenario 001", the information processing device 101 plays back, for the same line of that character, the voice data registered by the designated performer. The performer is designated, for example, by designating the performer information by the operation input of the user 001 on the sample reproduction screen 120.
Here, the line 130 to be reproduced for "Character 001" is assumed to be "Very delicious". In this case, for example, when the information processing device 101 accepts the designation of "user 100" from among the plurality of performers for "character 001", the information processing device 101 plays back the voice data registered by "user 100" for the line 130 of "character 001". Further, when the information processing device 101 receives the designation of "user 2501" from among the plurality of performers for "character 001", the information processing device 101 plays back the voice data registered by "user 2501" for the line 130 of "character 001".
As described above, according to the information processing device 101, even when there are a plurality of performers for a character in the scenario, the user can audition the voice recorded by a designated performer for the same line of the character simply by designating one of the plurality of performers. In addition, even if the performers are switched, the voice of the same line of the character is reproduced, so that the voices of the performers can be easily compared. As a result, even when there are a plurality of performers for a character, it is possible to make it easier to designate a performer of the user's choice from among the plurality of performers.
In the example of FIG. 1, there are a plurality of performers "user 100, user 2501" for "character 001" in "scenario 001". The user 001 can audition the voice recorded by a designated performer for the line 130 to be reproduced simply by designating one of the plurality of performers "user 100, user 2501" on the sample playback screen 120. Therefore, the user 001 can check the voice of the line 130 for each performer of "character 001" in "scenario 001" and determine the performer of his or her preference.
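The flow of (1) to (3) above can be summarized in the following Python sketch. It is only an illustration under assumed data structures (an in-memory mapping standing in for the storage unit 110 and a play_audio callback), not the actual implementation; the example keys follow the names used in FIG. 1.

```python
from typing import Callable, Dict, List, Tuple

# (scenario, character) -> performer names             ... stands in for the storage unit 110
performers_by_character: Dict[Tuple[str, str], List[str]] = {
    ("scenario 001", "character 001"): ["user 100", "user 2501"],
    ("scenario 001", "character 002"): ["user 556"],
}
# (scenario, character, performer) -> recorded audio of the line to be reproduced
sample_voice: Dict[Tuple[str, str, str], bytes] = {}

def on_scenario_designated(scenario: str) -> Dict[str, List[str]]:
    """Steps (1) and (2): gather the performer information to display per character."""
    return {
        character: names
        for (sc, character), names in performers_by_character.items()
        if sc == scenario
    }

def on_performer_designated(scenario: str, character: str, performer: str,
                            play_audio: Callable[[bytes], None]) -> None:
    """Step (3): play the designated performer's recording of the same line."""
    audio = sample_voice.get((scenario, character, performer))
    if audio is not None:
        play_audio(audio)
```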
(System configuration example of the audio reproduction system 200)
Next, a system configuration example of the audio reproduction system 200 according to the embodiment will be described. The voice reproduction system 200 is applied to, for example, a social voice recording web service for creating various scenarios together.
FIG. 2 is an explanatory diagram showing a system configuration example of the audio reproduction system 200. In FIG. 2, the voice reproduction system 200 includes an information processing device 101, an administrator terminal 201, and a plurality of user terminals 202. In the voice reproduction system 200, the information processing device 101, the administrator terminal 201, and the user terminal 202 are connected via a wired or wireless network 210. The network 210 is, for example, the Internet, a LAN (Local Area Network), a WAN (Wide Area Network), or the like.
Here, the information processing device 101 has a scenario DB (Database) 220, a registered voice DB 230, and an extraction condition DB 240. The information processing device 101 is, for example, a server. The stored contents of the various DBs 220, 230, and 240 will be described later with reference to FIGS. 5 to 7.
The administrator terminal 201 is a computer used by the administrator of the audio reproduction system 200. At the administrator terminal 201, the administrator registers, for example, scenario information. The registered scenario information is stored in the scenario DB 220 included in the information processing device 101. The administrator terminal 201 is, for example, a PC, a tablet-type PC (Personal Computer), or the like.
The user terminal 202 is a computer used by the user of the voice reproduction system 200. For example, an application for using the social voice recording web service is installed on the user terminal 202. The user can use the social voice recording web service to record the lines of the characters in the scenario and to read and listen to the content corresponding to the scenario. The user terminal 202 is, for example, a smartphone, a mobile phone, a tablet PC, or the like. The terminal device 102 shown in FIG. 1 corresponds to, for example, a user terminal 202.
In the example of FIG. 2, the information processing device 101 and the administrator terminal 201 are provided separately, but the present invention is not limited to this. For example, the information processing device 101 and the administrator terminal 201 may be realized by the same computer.
(Example of hardware configuration of information processing device 101)
FIG. 3 is a block diagram showing a hardware configuration example of the information processing device 101. In FIG. 3, the information processing device 101 includes a CPU (Central Processing Unit) 301, a memory 302, a disk drive 303, a disk 304, a communication I/F (Interface) 305, a portable recording medium I/F 306, and a portable recording medium 307. Further, each component is connected by a bus 300.
Here, the CPU 301 controls the entire information processing device 101. The CPU 301 may have a plurality of cores. The memory 302 includes, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a flash ROM, and the like. Specifically, for example, the flash ROM stores the OS (Operating System) program, the ROM stores the application program, and the RAM is used as the work area of the CPU 301. The program stored in the memory 302 is loaded into the CPU 301 to cause the CPU 301 to execute the coded process.
The disk drive 303 controls data read/write to the disk 304 according to the control of the CPU 301. The disk 304 stores the data written under the control of the disk drive 303. Examples of the disk 304 include a magnetic disk and an optical disk.
The communication I/F 305 is connected to the network 210 through a communication line, and is connected to an external computer (for example, the administrator terminal 201 and the user terminal 202 shown in FIG. 2) via the network 210. The communication I/F 305 controls the interface between the network 210 and the inside of the device, and controls the input/output of data from an external computer. For the communication I/F 305, for example, a modem, a LAN adapter, or the like can be adopted.
The portable recording medium I/F 306 controls data read/write to the portable recording medium 307 according to the control of the CPU 301. The portable recording medium 307 stores the data written under the control of the portable recording medium I/F 306. Examples of the portable recording medium 307 include a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disk), and a USB (Universal Serial Bus) memory.
The information processing device 101 may include, for example, an SSD (Solid State Drive), an input device, a display, or the like, in addition to the above-mentioned components. Further, the information processing apparatus 101 does not have to have, for example, the disk drive 303, the disk 304, the portable recording medium I/F 306, and the portable recording medium 307 among the above-described components.
(Hardware configuration example of user terminal 202)
FIG. 4 is a block diagram showing a hardware configuration example of the user terminal 202. In FIG. 4, the user terminal 202 includes a CPU 401, a memory 402, a communication I/F 403, a camera 404, a display 405, an input device 406, a microphone 407, a speaker 408, a portable recording medium I/F 409, and a portable recording medium 410. Further, each component is connected by a bus 400.
Here, the CPU 401 controls the entire user terminal 202. The CPU 401 may have a plurality of cores. The memory 402 is a storage unit having, for example, a ROM, a RAM, a flash ROM, and the like. Specifically, for example, the flash ROM and the ROM store various programs, and the RAM is used as a work area of the CPU 401. The program stored in the memory 402 is loaded into the CPU 401 to cause the CPU 401 to execute the coded process.
The communication I/F 403 is connected to the network 210 (see FIG. 2) through a communication line, and is connected to another computer (for example, the information processing device 101 shown in FIG. 2) via the network 210. Then, the communication I/F 403 controls the internal interface with the network 210 and controls the input/output of data from another computer.
The camera 404 is an imaging device that captures an image (still image or moving image) and outputs image data. The display 405 is a display device that displays data such as a cursor, an icon, a toolbox, a document, an image, and functional information. As the display 405, for example, a liquid crystal display, an organic EL (Electroluminescence) display, or the like can be adopted.
The input device 406 has keys for inputting characters, numbers, various instructions, and the like, and inputs data. The input device 406 may be a keyboard, a mouse, or the like, or may be a touch panel type input pad, a numeric keypad, or the like.
The microphone 407 is a device that converts sound into an electric signal. The speaker 408 is a device that converts an electric signal into voice. The portable recording medium I/F 409 controls data read/write to the portable recording medium 410 according to the control of the CPU 401. The portable recording medium 410 stores the data written under the control of the portable recording medium I/F 409.
Note that the user terminal 202 may not have, for example, the camera 404, the portable recording medium I/F 409, the portable recording medium 410, or the like among the above-mentioned components. Further, the administrator terminal 201 shown in FIG. 2 can also be realized by the same hardware configuration as the user terminal 202.
(Stored contents of the various DBs 220, 230, and 240)
Next, the stored contents of the various DBs 220, 230, and 240 included in the information processing apparatus 101 will be described with reference to FIGS. 5 to 7. The various DBs 220, 230, and 240 are realized, for example, by storage devices such as the memory 302 and the disk 304 shown in FIG. 3.
FIG. 5 is an explanatory diagram showing an example of the stored contents of the scenario DB 220. In FIG. 5, the scenario DB 220 stores scenario information 500-1 to 500-n of scenarios S1 to Sn (n: a natural number of 2 or more). In the following description, any scenario among the scenarios S1 to Sn may be referred to as "scenario Si" (i = 1, 2, ..., n).
The scenario information 500-i includes the scenario ID, title, synopsis, characters, and text of the scenario Si. The scenario ID is an identifier that uniquely identifies the scenario Si. The title is the title of the scenario Si. The synopsis is a synopsis of the scenario Si.
The character information is information that identifies the characters in the scenario Si, and includes, for example, the names of the characters and the introduction of the characters. The character information may include a character ID that uniquely identifies the character. The text shows the lines of the characters in the scenario Si in the order of remarks. The line number indicates the order in which the lines are spoken by each character. Each line is identified by a combination of a character and a line number. The scenario information 500-i may include information indicating the points of performance.
For example, the scenario information 500-1 includes the title "daily story" of the scenario S1, the synopsis "daily conversation ...", and the characters "Taro: male, 15 years old, personality ..." and "Hanako: female, 16 years old, personality ...". Further, the scenario information 500-1 includes a text showing the lines of the characters in the scenario S1 in the order of remarks. For example, the line of the character "Taro" with the line number "1" is "It's hot". The line of the character "Hanako" with the line number "1" is "Eh?".
FIG. 6 is an explanatory diagram showing an example of the stored contents of the registered voice DB 230. In FIG. 6, the registered voice DB 230 has fields for scenario ID, title, character, performer, and voice data, and by setting information in each field, stores registered voice information (for example, registered voice information 600-1 to 600-4) as records.
Here, the scenario ID is an identifier that uniquely identifies the scenario Si. The title is the title of the scenario Si. The character is information that identifies a character in the scenario Si. The information that identifies the character is, for example, the name of the character or the character ID. The performer is information that identifies the performer corresponding to the character. The information that identifies the performer is, for example, the name of the performer or the performer ID. The performer ID is an identifier that uniquely identifies the performer. The voice data is the voice data of the character's lines recorded by the performer.
For example, the registered voice information 600-1 indicates the voice data "Taro 1 voice" recorded by the performer "Hanako Yamada" for the line with the line number "1" of the character "Taro" in the scenario S1 (scenario ID: S1, title: daily story).
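As an aside, the records of the scenario DB 220 and the registered voice DB 230 described above could be modeled as in the following Python sketch. This is only an illustration: the field names follow the description, while the Python layout itself (and the explicit line-number field on the voice record) is an assumption.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ScenarioInfo:
    """One entry of the scenario DB 220 (scenario information 500-i)."""
    scenario_id: str
    title: str
    synopsis: str
    characters: Dict[str, str]                                        # character name -> character introduction
    script: List[Tuple[str, int, str]] = field(default_factory=list)  # (character, line number, line text)

@dataclass
class RegisteredVoice:
    """One record of the registered voice DB 230."""
    scenario_id: str
    title: str
    character: str
    performer: str
    line_no: int      # line number of the recorded line (assumed field)
    voice_data: bytes

# Example record, following registered voice information 600-1:
example = RegisteredVoice("S1", "daily story", "たろう", "山田花子", 1, b"...")
```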
FIG. 7 is an explanatory diagram showing an example of the stored contents of the extraction condition DB 240. In FIG. 7, the extraction condition DB 240 stores the conditions (i) and (ii). The conditions (i) and (ii) indicate the conditions for extracting the lines for sample reproduction from the lines of the characters. The sample reproduction is to reproduce the voice of each performer for audition when deciding the performer for a character in the scenario Si. In the following description, the line for which sample reproduction is performed may be referred to as a "sample reproduction dialogue".
Here, the condition (i) indicates a condition that characters such as "…", "?", "!", "っ", "、", and "。" are not counted when counting the number of characters in a line. The condition (ii) indicates a condition that "the number of characters x 1" is counted as the score of each line and the line with the highest score is extracted.
(Example of functional configuration of information processing device 101)
FIG. 8 is a block diagram showing a functional configuration example of the information processing device 101. In FIG. 8, the information processing device 101 includes a reception unit 801, a display control unit 802, a reproduction control unit 803, a determination unit 804, and a storage unit 810. Specifically, for example, the reception unit 801 to the determination unit 804 realize their functions by causing the CPU 301 to execute a program stored in a storage device such as the memory 302, the disk 304, or the portable recording medium 307 shown in FIG. 3, or by means of the communication I/F 305. The processing result of each functional unit is stored in a storage device such as the memory 302 or the disk 304, for example. The storage unit 810 is realized by a storage device such as the memory 302 or the disk 304, for example. Specifically, for example, the storage unit 810 stores the scenario DB 220, the registered voice DB 230, and the extraction condition DB 240.
The reception unit 801 receives the designation of the scenario Si from the user. The user is, for example, a user (logged-in user) who has logged in to the social voice recording web service. The designated scenario Si is, for example, a scenario in which the user reads or listens to the content, or a scenario in which the user records the lines of the characters.
The scenario Si is designated, for example, on the application home screen displayed on the user terminal 202 shown in FIG. 2. The application home screen is an operation screen on which any scenario Si can be designated from among a plurality of scenarios (for example, scenarios S1 to Sn). On the application home screen, for example, keywords and categories may be specified to narrow down the scenarios to be displayed in a list from the scenarios S1 to Sn. A screen example of the application home screen will be described later with reference to FIG. 11.
Specifically, for example, the reception unit 801 receives the designation of the scenario Si from the user by receiving, from the user terminal 202, the information identifying the scenario Si designated by the user's operation input on the application home screen. The information that identifies the scenario Si is, for example, a scenario ID or a title.
The display control unit 802 acquires the scenario information of the designated scenario Si from the storage unit 810. The scenario information includes information for identifying the characters in the scenario Si and the text of the scenario Si. Then, the display control unit 802 refers to the acquired scenario information and displays the characters in the scenario Si in a selectable manner.
Specifically, for example, the display control unit 802 acquires the scenario information 500-i of the designated scenario Si from the scenario DB 220 (see FIG. 5). Then, the display control unit 802 displays, on the user terminal 202, an operation screen on which the characters in the scenario Si can be selected, with reference to the acquired scenario information 500-i. A screen example of the operation screen on which the characters in the scenario Si can be selected will be described later with reference to FIG. 13.
Further, the display control unit 802 acquires the performer information associated with the characters in the designated scenario Si from the storage unit 810. Then, the display control unit 802 displays the acquired performer information corresponding to a character in the scenario Si so that it can be determined that the performer is a performer for that character.
Here, the performer information is information that identifies the performer who plays the role of the character. The performer information is, for example, the name of the performer, an icon representing the performer, or the like. The performer information may include, for example, performer attributes such as gender and age.
In the following description, any character in the scenario Si may be referred to as "character C".
Specifically, for example, when the display control unit 802 accepts the selection of a character C in the scenario Si, the display control unit 802 acquires the performer information associated with the selected character C from the storage unit 810. Then, the display control unit 802 displays the acquired performer information corresponding to the character C so that it can be determined that the performer is a performer for that character C.
More specifically, for example, when the display control unit 802 accepts the selection of any character C in the scenario Si, the display control unit 802 acquires the registered voice information corresponding to the selected character C from the registered voice DB 230 (see FIG. 6). The registered voice information corresponding to the selected character C is the registered voice information in which the scenario ID of the designated scenario Si is set in the scenario ID field and the name of the selected character C is set in the character field.
Then, the display control unit 802 displays the sample playback screen on the user terminal 202 with reference to the acquired registered voice information. The sample playback screen is an operation screen that displays the performers specified from the registered voice information corresponding to the character C so that they can be identified as performers for the character C. However, the sample playback screen may be generated by the information processing device 101 or may be generated by the user terminal 202. A screen example of the sample playback screen will be described later with reference to FIG. 14.
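As an illustration only, the filtering of the registered voice information described here could look like the following Python sketch, which reuses the RegisteredVoice record sketched earlier; the function name and the in-memory iteration (instead of an actual database query) are assumptions.

```python
from typing import Iterable, List

def performers_for_character(records: Iterable["RegisteredVoice"],
                             scenario_id: str, character: str) -> List[str]:
    """Performers to display on the sample playback screen for character C of scenario Si."""
    seen: List[str] = []
    for r in records:
        if r.scenario_id == scenario_id and r.character == character and r.performer not in seen:
            seen.append(r.performer)  # keep first-seen order, no duplicates
    return seen
```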
 Further, for example, the display control unit 802 may acquire, from the storage unit 810, the performer information associated with each character C in the designated scenario Si. The display control unit 802 may then display, for each character C in the scenario Si, the acquired performer information corresponding to that character C in such a way that the performer can be identified as a performer for that character C.
 That is, rather than displaying performer information only for the selected character C in the scenario Si, the display control unit 802 may display a list of performer information for every character C in the scenario Si. For example, the sample playback screen 120 shown in FIG. 1 is an example of an operation screen that lists the performer information for each character C in the scenario Si.
 The reception unit 801 also accepts the designation of any one of one or more performers for a character C in the scenario Si. The designated performer is, for example, a performer whose voice the user wants to audition when deciding on a performer for the character C. The performer is designated, for example, on the sample playback screen displayed on the user terminal 202.
 Specifically, for example, the reception unit 801 accepts the designation of a performer for the character C by receiving, from the user terminal 202, information identifying the performer designated by the user's operation input on the sample playback screen. The information identifying a performer is, for example, the performer's name or performer ID.
 Each time the playback control unit 803 accepts the designation of one of the one or more performers for the character C in the scenario Si, it plays back the voice data registered by the designated performer for the same line of the character C. The line to be played (the sample playback line) is determined by the determination unit 804 from among the lines of the character C, for example. The line to be played may, however, be set in advance. The voice data registered by the designated performer is identified, for example, from the registered voice information of the designated performer among the registered voice information corresponding to the character C.
 For example, suppose that there are multiple performers for one character C in the scenario Si. In this case, each time the playback control unit 803 accepts the user's designation of one of the multiple performers for that character C, it plays back the voice data registered by that performer for the line to be played.
 More specifically, for example, when a performer is designated on the sample playback screen, the playback control unit 803 acquires the voice data of the sample playback line from the registered voice information of the designated performer among the registered voice information corresponding to the character C. The playback control unit 803 then transmits the acquired voice data to the user terminal 202 and causes the speaker 408 of the user terminal 202 (see FIG. 4) to output the voice data.
 As an example, assume that the scenario Si is "scenario S1", the character C is "Taro", and "Hanako Yamada" is designated as the performer. Also assume that the sample playback line of the character "Taro" is the line with line number "1". In this case, the playback control unit 803 refers to the registered voice information 600-1 in the registered voice DB 230 and acquires the voice data "Taro 1 voice". The playback control unit 803 then transmits the acquired voice data "Taro 1 voice" to the user terminal 202 and causes the speaker 408 of the user terminal 202 to output the voice data "Taro 1 voice".
 As a result, simply by designating the performer "Hanako Yamada" for the character "Taro" on the sample playback screen, the user can audition the voice data "Taro 1 voice" recorded by the performer "Hanako Yamada" for the sample playback line of the character "Taro" (the line with line number "1").
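 A minimal sketch, under an assumed record layout, of what happens when a performer is designated on the sample playback screen: the voice data that the designated performer registered for the character's sample playback line is looked up and returned for output. The dictionary keys, function name, and example values are assumptions made for illustration.

```python
from typing import Optional

def sample_voice_for_performer(records: list[dict], performer: str,
                               sample_line_no: int) -> Optional[bytes]:
    """Given the registered voice records for one character (each a dict with
    "performer", "line_no", and "voice_data" keys), return the voice data the
    designated performer registered for the sample playback line."""
    for rec in records:
        if rec["performer"] == performer and rec["line_no"] == sample_line_no:
            return rec["voice_data"]
    return None

# Each tap that designates a different performer replays the same sample line
# with that performer's registered audio.
records = [
    {"performer": "Hanako Yamada", "line_no": 1, "voice_data": b"taro-line1-hanako"},
    {"performer": "Toru Fuji", "line_no": 1, "voice_data": b"taro-line1-toru"},
]
assert sample_voice_for_performer(records, "Hanako Yamada", 1) == b"taro-line1-hanako"
```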
 The determination unit 804 determines the line to be played (the sample playback line) for the character C. Specifically, for example, the determination unit 804 may preferentially determine, as the sample playback line, a line of the character C whose number of characters is equal to or greater than a predetermined number α. Here, the predetermined number α can be set arbitrarily and is set to a value of, for example, about 10 to 30.
 In this case, each time the playback control unit 803 accepts the designation of one of the one or more performers for the character C in the scenario Si, it plays back the voice data registered by the designated performer for the determined sample playback line.
 For example, if the predetermined number α is set to "α = 10", the line with line number "1" of the character "Taro" in scenario S1 shown in FIG. 5, "暑いね" ("It's hot"), has 10 or fewer characters and is therefore not determined as the sample playback line. This prevents a line whose character count is too small to convey the feel of the performer's voice from being determined as the sample playback line.
 The determination unit 804 may also preferentially determine, as the sample playback line, a line of the character C whose number of characters is equal to or greater than the predetermined number α and equal to or less than a predetermined number β. Here, the predetermined number β can be set arbitrarily and is set to a value of, for example, about 50 to 100. This prevents both lines whose character count is too small to convey the feel of the performer's voice and lines whose character count is so large that auditioning them takes too long from being determined as the sample playback line.
 The determination unit 804 may also, for example, preferentially determine, as the sample playback line, a line of the character C that does not contain a predetermined character code. Here, the predetermined character code is a character code corresponding to a character that is difficult to express by voice, such as "…", "?", "!", "っ", "、", or "。".
 For example, the line with line number "3" of the character "Taro" in scenario S1 shown in FIG. 5, "……", contains "…" and is therefore not determined as the sample playback line. This prevents a line that contains characters difficult to express by voice, and therefore makes it hard to judge the feel of the performer's voice, from being determined as the sample playback line.
 The determination unit 804 may also, for example, preferentially determine, as the sample playback line, a line of the character C that appears in the first half of the scenario Si. The first half of the scenario Si may be, for example, the front part when all the lines of the scenario Si are divided into a front part and a rear part, or a predetermined number of lines from the first line of the scenario Si.
 For example, if the total number of lines in scenario S1 is "20", the sample playback line for the character "Taro" is determined from among the lines of the character "Taro" included in the first 10 lines of scenario S1. This makes it possible to determine the sample playback line while taking into account that lines containing spoilers tend to appear in the latter half of the scenario Si.
 Which kind of line is given priority when determining the sample playback line can be set arbitrarily. For example, the extraction conditions used when determining the sample playback line are set in advance and stored in the extraction condition DB 240 shown in FIG. 7. More specifically, for example, the determination unit 804 refers to the scenario information 500-i of the scenario Si and identifies the lines of the selected character C. Next, the determination unit 804 refers to the extraction condition DB 240 shown in FIG. 7 and identifies conditions (i) and (ii). The determination unit 804 then refers to the identified condition (i) and counts the number of characters of each identified line of the character C. Next, the determination unit 804 refers to the identified condition (ii), scores each identified line of the character C at one point per counted character, and extracts the line with the highest score. The determination unit 804 then determines the extracted line as the sample playback line for the character C.
 Here, assume that the sample playback line is to be determined from among the lines with line numbers "1" to "4" of the character "Taro" in scenario S1 shown in FIG. 5. The line "暑いね" ("It's hot") with line number "1" has 3 characters, so its score is 3 points. The line "今日も暑いね!" ("It's hot again today!") with line number "2" has 6 characters because "!" is not counted, so its score is 6 points. The line "……" with line number "3" has 0 characters because "…" is not counted, so its score is 0 points. The line "その割には暑そうに見えないね" ("You don't look that hot, though") with line number "4" has 14 characters, so its score is 14 points. In this case, the determination unit 804 determines the line with line number "4", "その割には暑そうに見えないね" ("You don't look that hot, though"), which has the highest score among the lines with line numbers "1" to "4" of the character "Taro", as the sample playback line.
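 As an illustrative sketch of the scoring just described (one point per counted character, with characters such as "…", "?", "!", "っ", "、", and "。" excluded from the count), the following Python code reproduces the worked example; the function names and the list-of-strings representation of the lines are assumptions, not part of the embodiment.

```python
# Characters that are hard to express by voice; they are not counted.
EXCLUDED_CHARS = set("…?!っ、。")

def line_score(text: str) -> int:
    """Score a line at one point per counted character."""
    return sum(1 for ch in text if ch not in EXCLUDED_CHARS)

def choose_sample_line(lines: list[str]) -> str:
    """Return the line with the highest score as the sample playback line."""
    return max(lines, key=line_score)

# Worked example matching the lines of the character "Taro" in scenario S1.
taro_lines = ["暑いね", "今日も暑いね!", "……", "その割には暑そうに見えないね"]
assert [line_score(t) for t in taro_lines] == [3, 6, 0, 14]
assert choose_sample_line(taro_lines) == "その割には暑そうに見えないね"
```

 In this sketch a tie would be resolved by taking the earliest line, which is one possible choice; the embodiment does not specify how ties are broken.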
 The determined sample playback line is stored in the storage unit 810 in association with the character C, for example. In this case, each time the playback control unit 803 accepts the designation of a performer for the character C in the scenario Si, it plays back the voice data registered by the designated performer for the sample playback line corresponding to the character C stored in the storage unit 810.
 Note that the counted number of characters (or score) of each line of the character C in the scenario Si may be stored in the scenario information 500-i, for example, as attribute information of the line. This makes it unnecessary to repeatedly count the number of characters (or score) of the same line when determining the sample playback line of the character C.
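 As a small illustration of this caching idea (the representation here is an assumption, not part of the embodiment), the counted score could be stored as attribute information of the line on first use and simply reused afterwards.

```python
def cached_line_score(line: dict, count_chars) -> int:
    """Count the line's score once and keep it in the line's attribute
    information (a hypothetical "score" field) so that later determinations
    for the same line do not recount it."""
    if "score" not in line:
        line["score"] = count_chars(line["text"])
    return line["score"]
```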
 The reception unit 801 also accepts the selection of a performer for main playback from among the one or more performers for the character C in the scenario Si. The performer for main playback is the performer who plays the role of the character C when the content corresponding to the scenario Si is played back. The content corresponding to the scenario Si is electronic data including the text of the scenario Si and the voice data of the lines of each character C in the scenario Si.
 The performer for main playback is selected, for example, on the sample playback screen displayed on the user terminal 202. Specifically, for example, the reception unit 801 accepts the selection of a performer for the character C by receiving, from the user terminal 202, information identifying the performer selected for main playback by the user's operation input on the sample playback screen. The reception unit 801 may also accept the performer designated most recently on the sample playback screen as the performer for main playback.
 In the initial state, however, the performer who first registered voice data for the character C in the scenario Si may be selected as the performer for main playback. Information identifying the combination of each character C in the scenario Si and its performer is stored in the storage unit 810 in association with, for example, the user (logged-in user) of the user terminal 202.
 When playing back the content corresponding to the scenario Si, the display control unit 802 displays the lines of the characters C in the scenario Si in the order in which they are spoken. Specifically, for example, the display control unit 802 displays the lines corresponding to each character C in speaking order in such a way that each line can be identified as a line of that character C.
 More specifically, for example, the display control unit 802 refers to the scenario DB 220 and displays a scenario playback screen on the user terminal 202. The scenario playback screen is an operation screen for playing back the content corresponding to the scenario Si. The scenario playback screen may be generated by the information processing device 101 or by the user terminal 202. A screen example of the scenario playback screen will be described later with reference to FIG. 13.
 When playing back the content corresponding to the scenario Si, the playback control unit 803 plays back, for each line of the character C, the voice data registered by the selected performer. Instructions such as starting, pausing, and ending playback of the content corresponding to the scenario Si are given, for example, on the scenario playback screen displayed on the user terminal 202.
 Specifically, for example, when the playback control unit 803 receives an instruction to start playback of the content corresponding to the scenario Si, it refers to the scenario DB 220 and sequentially selects the lines to be played. Next, each time the playback control unit 803 selects a line to be played, it refers to the registered voice DB 230 and acquires, for the selected line, the voice data registered by the performer selected as the performer for main playback. The playback control unit 803 then transmits the acquired voice data to the user terminal 202 and causes the speaker 408 of the user terminal 202 to output the voice data.
 The playback control unit 803 may also, for example, output to the user terminal 202 a group of voice data in which the voice data of the performer for main playback for each character C in the scenario Si is collected. In this case, the user terminal 202 may sequentially output, based on the received group of voice data, the voice data of the lines of each character C in the scenario Si in speaking order.
 This makes it possible to continuously play back the lines of each character C in the scenario Si in speaking order.
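 As an illustrative sketch only, the continuous playback described above can be pictured as the following loop, which plays each line in speaking order using the performer selected for main playback for the speaking character. The data structures and the voice_of / play_audio helpers are hypothetical placeholders, not APIs of the embodiment.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScenarioLine:
    character: str    # speaking character, e.g. "Taro"
    line_no: int      # line number within the scenario
    text: str         # the line's text

def play_scenario(lines: list[ScenarioLine],
                  main_performer: dict[str, str],
                  voice_of: Callable[[str, int, str], bytes],
                  play_audio: Callable[[bytes], None]) -> None:
    """Play every line in speaking order, using for each line the voice data
    registered by the performer selected for main playback for its character."""
    for line in lines:                              # lines are assumed to be in speaking order
        performer = main_performer[line.character]  # performer selected for main playback
        audio = voice_of(line.character, line.line_no, performer)
        play_audio(audio)                           # e.g. output from the terminal's speaker
```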
 The reception unit 801 may also accept a share request for sharing the combination of a character C in the scenario Si and a performer with other users. A share request is made, for example, when sharing a recommended combination of a character C and a performer with other users. The share request is made, for example, on the scenario playback screen displayed on the user terminal 202.
 When the playback control unit 803 accepts a share request, it generates combination information identifying the combination of each character C in the scenario Si and the performer selected as the performer for main playback, and outputs the generated combination information. The combination information is represented by, for example, a URL (Uniform Resource Locator).
 Specifically, for example, when the playback control unit 803 receives a share request from the user terminal 202, it identifies the combination of each character C in the scenario Si and its performer stored in the storage unit 810 in association with the user (logged-in user) of the user terminal 202. Next, the playback control unit 803 generates combination information identifying the identified combination of each character C in the scenario Si and its performer.
 More specifically, for example, the playback control unit 803 generates the URL "cm-app://service/シナリオS1/たろう:山田花子/はなこ:山田太郎/" as the combination information of scenario S1. The playback control unit 803 then transmits the generated combination information to the user terminal 202.
 This allows the user to provide other users with combination information indicating a recommended combination of characters C and performers in the scenario Si, for example, via an SNS (Social Networking Service) or e-mail.
 When the playback control unit 803 receives combination information from a user, it plays back, when playing the content corresponding to the scenario Si, the voice data registered by the performers identified from the combination information for the lines of each character C. Specifically, for example, when the playback control unit 803 receives the URL "cm-app://service/シナリオS1/たろう:山田花子/はなこ:山田太郎/", it plays back, when playing the content corresponding to scenario S1, the voice data registered by the performers identified from the URL (Taro: Hanako Yamada, Hanako: Taro Yamada) for the lines of each character C (Taro, Hanako).
 As a result, a recommended combination of characters C and performers in the scenario Si selected by one user can be shared by multiple users.
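 As an illustrative sketch of how such URL-style combination information could be generated and parsed back into a character-to-performer mapping, assuming the layout shown in the example above ("cm-app://service/<scenario>/<character>:<performer>/..."); the helper names are assumptions, and escaping of special characters in names is omitted for brevity.

```python
def build_combination_url(scenario_id: str, casting: dict[str, str]) -> str:
    """Encode a scenario ID and its character-to-performer mapping as a URL
    in the "cm-app://service/<scenario>/<character>:<performer>/..." layout."""
    pairs = "/".join(f"{c}:{p}" for c, p in casting.items())
    return f"cm-app://service/{scenario_id}/{pairs}/"

def parse_combination_url(url: str) -> tuple[str, dict[str, str]]:
    """Recover the scenario ID and the character-to-performer mapping.
    (Names containing ":" or "/" would need escaping in a real implementation.)"""
    path = url.removeprefix("cm-app://service/").strip("/")
    scenario_id, *pairs = path.split("/")
    casting = dict(pair.split(":", 1) for pair in pairs)
    return scenario_id, casting

url = build_combination_url("シナリオS1", {"たろう": "山田花子", "はなこ": "山田太郎"})
assert url == "cm-app://service/シナリオS1/たろう:山田花子/はなこ:山田太郎/"
assert parse_combination_url(url) == ("シナリオS1", {"たろう": "山田花子", "はなこ": "山田太郎"})
```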
 Each functional unit of the information processing device 101 may also be realized by, for example, the user terminal 202. In this case, the user terminal 202 acquires, for example, the various information used by each functional unit (for example, the scenario information in the scenario DB 220, the registered voice information in the registered voice DB 230, and the conditions in the extraction condition DB 240) from the information processing device 101. Each functional unit of the information processing device 101 may also be realized by a plurality of computers in the voice playback system 200 (for example, the information processing device 101 and the user terminal 202).
(Screen examples of various screens of the user terminal 202)
 Next, screen examples of the various screens displayed on the display 405 of the user terminal 202 will be described with reference to FIGS. 9 to 14. In the following description, a tap operation performed using the input device 406 shown in FIG. 4 is taken as an example of the user operation for selecting a button or the like displayed on an operation screen.
 FIG. 9 is an explanatory diagram showing a screen example of the login screen. In FIG. 9, the login screen 900 is an example of an operation screen for logging in to the social voice recording web service. Tapping the box 901 on the login screen 900 allows the user to enter an e-mail address. Tapping the box 902 allows the user to enter a password. Tapping the button 903 performs the login process using the entered e-mail address and password.
 In the login process, the entered password is checked against the password registered in a user DB (not shown) in association with the entered e-mail address. If the passwords match, the authentication succeeds. If the passwords do not match, or if the entered e-mail address does not exist, the authentication fails.
 Tapping the button 904 on the login screen 900 allows the user to log in using an SNS. Tapping the button 905 on the login screen 900 displays the new registration screen 1000 shown in FIG. 10, described later, so that the new registration process can be performed.
 FIG. 10 is an explanatory diagram showing a screen example of the new registration screen. In FIG. 10, the new registration screen 1000 is an example of an operation screen for newly registering with the social voice recording web service. Tapping each of the boxes 1001 to 1005 on the new registration screen 1000 allows the user to enter an e-mail address, a password, a name, a gender, and attributes.
 As attributes, information such as occupation, educational background, annual income, family structure, hobbies, and lifestyle can be entered, for example. Tapping the registration button 1006 on the new registration screen 1000 performs new registration using the information entered in the boxes 1001 to 1005. The information of the newly registered user is registered in, for example, a user DB (not shown).
 FIG. 11 is an explanatory diagram showing a screen example of the application home screen. In FIG. 11, the application home screen 1100 is an example of the operation screen (top screen) displayed first when the login process to the social voice recording web service is completed. The application home screen 1100 includes a notice 1110 and a scenario list 1120.
 The notice 1110 displays announcements about the social voice recording web service. The scenario list 1120 displays a plurality of scenarios (for example, scenarios S1 to S3) so that they can be designated. The scenario list 1120 displays the scenario name and synopsis of each scenario.
 Which scenarios are displayed in the scenario list 1120 can be set arbitrarily. For example, the scenario list 1120 may display a predetermined number of top scenarios in order of newest, popularity, or title. Tapping the button 1130 on the application home screen 1100 displays scenarios that are not currently displayed.
 With the application home screen 1100, the user can designate, from among the plurality of scenarios registered in the social voice recording web service (for example, scenarios S1 to Sn), a scenario Si whose content the user wants to read or listen to, or a scenario Si for which the user wants to record the lines of a character C.
 When any scenario Si in the scenario list 1120 is tapped (designated) on the application home screen 1100, information identifying the tapped (designated) scenario Si is transmitted to the information processing device 101, and a work introduction screen 1200 as shown in FIG. 12 is displayed. Here, it is assumed that scenario S2 is designated.
 FIG. 12 is an explanatory diagram showing a screen example of the work introduction screen. In FIG. 12, the work introduction screen 1200 is an example of an operation screen for introducing the work of scenario S2 (title: SENPAI × KOUHAI). The work introduction screen 1200 displays work introduction information 1210 such as the synopsis, characters, and acting points of scenario S2.
 With the work introduction screen 1200, the user can check the synopsis, characters, acting points, and the like of scenario S2 (title: SENPAI × KOUHAI). This allows the user to decide on a scenario Si whose content to read or listen to, or a scenario Si for which to record the lines of a character C.
 Tapping the playback tab tb2 on the work introduction screen 1200 displays a scenario playback screen 1300 as shown in FIG. 13. Tapping the recording tab tb3 on the work introduction screen 1200 displays a voice recording screen (not shown). The voice recording screen is an operation screen for recording the lines of a character C.
 FIG. 13 is an explanatory diagram showing a screen example of the scenario playback screen. In FIG. 13, the scenario playback screen 1300 is an example of an operation screen for playing back the content corresponding to scenario S2. On the scenario playback screen 1300, the lines corresponding to the characters C (Yamashita, Suzuki) of scenario S2 are displayed in speaking order in such a way that each line can be identified as a line of the corresponding character C.
 Tapping the buttons 1301 and 1302 on the scenario playback screen 1300 selects a character C (Yamashita or Suzuki) and pulls down a sample playback screen for that character C. The sample playback screen is an operation screen that displays the performers corresponding to the character C in such a way that they can be identified as performers for the character C.
 For example, tapping the button 1301 on the scenario playback screen 1300 pulls down a sample playback screen 1400 as shown in FIG. 14.
 Tapping the buttons 1303 and 1304 on the scenario playback screen 1300 pulls down a volume adjustment screen (not shown) for the character C (Yamashita or Suzuki). The volume adjustment screen is an operation screen for adjusting the volume at which the lines of the character C are played.
 Tapping the continuous playback button 1310 on the scenario playback screen 1300 continuously plays back the lines of the characters C in the scenario Si in speaking order. At this time, for each character C in the scenario Si, the voice data registered by the performer selected for main playback is played.
 Tapping a playback button corresponding to a line of a character C (for example, the playback buttons 1321 to 1329) on the scenario playback screen 1300 plays back the voice data registered by the performer for the tapped line. For example, tapping the playback button 1321 plays back the voice data registered by the performer for the line ".......I've been looking for you, Suzuki-san" of the character C (Yamashita).
 With the scenario playback screen 1300, the user can listen to the voice data registered by the performers for the lines of each character C while reading the text of the scenario Si.
 When the share button 1330 is tapped on the scenario playback screen 1300, a share request is transmitted to the information processing device 101, and a recommended combination of characters C and performers in scenario S2 can be shared with other users. Tapping the work tab tb1 on the scenario playback screen 1300 displays the work introduction screen 1200 shown in FIG. 12. Tapping the recording tab tb3 on the scenario playback screen 1300 displays a voice recording screen (not shown).
 FIG. 14 is an explanatory diagram showing a screen example of the sample playback screen. In FIG. 14, the sample playback screen 1400 is an operation screen that displays a plurality of performers (Hanako Yamada, Taro Yamada, Toru Fuji) corresponding to the character C "Yamashita" of scenario S2 in such a way that they can be identified as performers for the character C "Yamashita".
 Tapping the sample playback buttons 1401, 1402, and 1403 on the sample playback screen 1400 designates a performer. Each time a performer is designated on the sample playback screen 1400, the voice data registered by the designated performer is played back for the same line of the character C "Yamashita".
 The line to be played is, for example, the sample playback line determined by the determination unit 804 for the character C "Yamashita". For example, tapping the sample playback button 1401 on the sample playback screen 1400 designates the performer "Hanako Yamada", and the voice data registered by the performer "Hanako Yamada" for the sample playback line of the character C "Yamashita" is played back.
 Similarly, tapping the sample playback button 1402 on the sample playback screen 1400 designates the performer "Taro Yamada", and the voice data registered by the performer "Taro Yamada" for the sample playback line of the character C "Yamashita" is played back. Tapping the sample playback button 1403 on the sample playback screen 1400 designates the performer "Toru Fuji", and the voice data registered by the performer "Toru Fuji" for the sample playback line of the character C "Yamashita" is played back.
 With the sample playback screen 1400, the user can audition the voices of the performers (Hanako Yamada, Taro Yamada, Toru Fuji) when deciding on a performer for the character "Yamashita" in scenario S2. Even when the user switches the performer whose sample is played, the voice of the same line of the character "Yamashita" is played, so the user can easily compare the performers' voices and decide on a performer of his or her choice.
 On the sample playback screen 1400, the performer for main playback for the character C "Yamashita" can also be selected from among the plurality of performers "Hanako Yamada, Taro Yamada, and Toru Fuji". For example, the performer designated most recently on the sample playback screen 1400 may be selected as the performer for main playback. Alternatively, by selecting a performer's name or icon on the sample playback screen 1400, the selected performer may be selected as the performer for main playback.
(Various operation examples of the voice playback system 200)
 Next, various operation examples of the voice playback system 200 will be described. First, an operation example in which the administrator of the voice playback system 200 newly registers scenario information will be described with reference to FIG. 15.
 FIG. 15 is a sequence diagram showing an operation example of the voice playback system 200 at the time of scenario registration. In the sequence diagram of FIG. 15, the administrator terminal 201 accesses the management screen in response to the administrator's operation input (step S1501). The management screen (not shown) is an operation screen for performing various management tasks.
 When the information processing device 101 detects access to the management screen from the administrator terminal 201, it displays a login screen on the administrator terminal 201 (step S1502). When login information is entered on the displayed login screen by the administrator's operation input, the administrator terminal 201 performs the login process (step S1503).
 The information processing device 101 performs login authentication using the entered login information (step S1504). If the authentication succeeds, the information processing device 101 displays a management menu on the administrator terminal 201 (step S1505). If the authentication fails, the information processing device 101 displays a message such as "Authentication failed" on the administrator terminal 201, for example.
 When scenario registration is selected from the displayed management menu by the administrator's operation input, the administrator terminal 201 displays a scenario registration screen (step S1506). The scenario registration screen (not shown) is an operation screen that accepts input of scenario information. The scenario information includes, for example, a scenario ID, a title, a synopsis, characters, and a text.
 When scenario information is entered on the scenario registration screen by the administrator's operation input, the administrator terminal 201 transmits a registration request for the entered scenario information to the information processing device 101 (step S1507). When the information processing device 101 receives the scenario information registration request from the administrator terminal 201, it registers the scenario information in the scenario DB 220 (step S1508).
 This allows the administrator of the voice playback system 200 to newly register the scenario information provided by the social voice recording web service.
 Next, an operation example in which a user of the voice playback system 200 logs in to the social voice recording web service will be described with reference to FIG. 16.
 FIG. 16 is a sequence diagram showing an operation example of the voice playback system 200 at the time of login. In the sequence diagram of FIG. 16, the user terminal 202 starts the application for using the social voice recording web service in response to the user's operation input (step S1601).
 When the application is started, the user terminal 202 displays a login screen on the display 405 (step S1602). When login information is entered on the displayed login screen by the user's operation input, the user terminal 202 performs the login process (step S1603).
 The information processing device 101 performs login authentication using the entered login information (step S1604). If the authentication succeeds, the information processing device 101 transmits screen information of the application home screen (information such as the notice and the scenario list) to the user terminal 202 (step S1605). If the authentication fails, the information processing device 101 displays a message such as "Authentication failed" on the user terminal 202, for example.
 When the user terminal 202 receives the screen information of the application home screen, it displays the application home screen on the display 405 based on the screen information (step S1606). The application home screen may be generated by the user terminal 202 or by the information processing device 101.
 When any scenario Si is designated from the scenario list on the application home screen by the user's operation input, the user terminal 202 transmits information identifying the designated scenario Si to the information processing device 101 (step S1607).
 When the information processing device 101 receives the information identifying the designated scenario Si, it acquires the work introduction information (synopsis, characters, acting points, and the like) of the designated scenario Si from the scenario DB 220 (step S1608). The information processing device 101 then transmits the acquired work introduction information to the user terminal 202 (step S1609).
 When the user terminal 202 receives the work introduction information, it displays the work introduction screen on the display 405 based on the work introduction information (step S1610). The work introduction screen may be generated by the user terminal 202 or by the information processing device 101.
 This allows the user to log in to the social voice recording web service and view the work introduction screen of the scenario Si designated from the scenario list.
 Next, an operation example in which a user of the voice playback system 200 plays back the content corresponding to a scenario Si will be described with reference to FIGS. 17 and 18.
 FIGS. 17 and 18 are sequence diagrams showing an operation example of the voice playback system 200 at the time of content playback. In the sequence diagram of FIG. 17, when the playback tab (see, for example, FIG. 12) is selected on the work introduction screen by the user's operation input, the user terminal 202 transmits a playback request for the scenario Si to the information processing device 101 (step S1701).
 When the information processing device 101 receives the playback request for the scenario Si, it transmits the content corresponding to the scenario Si to the user terminal 202 (step S1702). The content corresponding to the scenario Si includes, for example, the text of the scenario Si, the performer information corresponding to each character C, the voice data of the performer for main playback for each character C, and the sample playback line of each character C.
 When the user terminal 202 receives the content corresponding to the scenario Si, it displays the scenario playback screen on the display 405 based on the content (step S1703). The scenario playback screen may be generated by the user terminal 202 or by the information processing device 101.
 When the user terminal 202 accepts the selection of any character C in the scenario Si on the scenario playback screen by the user's operation input (step S1704), it displays the sample playback screen for the selected character C (step S1705).
 When any performer for the character C is designated on the sample playback screen by the user's operation input, the user terminal 202 transmits information identifying the designated performer to the information processing device 101 (step S1706).
 When the information processing device 101 receives the information identifying the designated performer, it refers to the registered voice DB 230 and acquires the voice data of the sample playback line from the registered voice information of the designated performer among the registered voice information corresponding to the character C identified from the information (step S1707).
 The sample playback line for the character C is determined from among the lines of the character C. A specific processing procedure for determining the sample playback line for the character C will be described later with reference to FIG. 19.
 The information processing device 101 transmits the acquired voice data of the sample playback line to the user terminal 202 (step S1708). The user terminal 202 plays back the sample playback line for the selected character C by outputting the received voice data from the speaker 408 (step S1709).
 In the sequence diagram of FIG. 18, when the performer for main playback for the character C is selected on the sample playback screen by the user's operation input, the user terminal 202 transmits information identifying the selected performer for main playback to the information processing device 101 (step S1801).
 When the information processing device 101 receives the information identifying the selected performer for main playback, it refers to the registered voice DB 230 and acquires the registered voice information of the selected performer among the registered voice information corresponding to the character C identified from the information (step S1802). The information processing device 101 then transmits the acquired registered voice information to the user terminal 202 (step S1803).
 When the playback button is selected on the scenario playback screen by the user's operation input, the user terminal 202 plays back the content corresponding to the scenario Si (step S1804). Specifically, for example, the user terminal 202 sequentially selects the lines to be played and, for each selected line, outputs from the speaker 408 the voice data registered by the performer selected as the performer for main playback.
 This allows the user to check the voice of the sample playback line for each performer of a character C in the scenario Si and decide on a performer of his or her choice. The user can also designate a preferred performer for each character C in the scenario Si and play back the content corresponding to the scenario Si.
(Sample playback line determination processing procedure of the information processing device 101)
 Next, the sample playback line determination processing procedure of the information processing device 101 will be described with reference to FIG. 19. The sample playback line determination process is a process for determining the sample playback line for a character C in the scenario Si.
 FIG. 19 is a flowchart showing an example of the sample playback line determination processing procedure of the information processing device 101. In the flowchart of FIG. 19, the information processing device 101 first refers to the scenario information 500-i of the scenario Si in the scenario DB 220 and selects a not-yet-selected line from among the lines of the character C (step S1901).
 Then, referring to the extraction condition DB 240, the information processing device 101 counts the number of characters in the selected line without counting characters such as "…", "?", "!", "っ", "、", and "。" (step S1902). Next, the information processing device 101 determines whether any of the lines of the character C remain unselected (step S1903).
 If there is an unselected line (step S1903: Yes), the information processing device 101 returns to step S1901. If there is no unselected line (step S1903: No), the information processing device 101 refers to the extraction condition DB 240, uses the counted number of characters as the score of each line, and identifies the line with the highest score (step S1904).
 Next, the information processing device 101 determines the identified line as the sample playback line (step S1905). The information processing device 101 then stores the determined sample playback line in association with the character C in the scenario Si (step S1906) and ends the series of processes according to this flowchart.
 In this way, a line with a large number of characters among the lines of the character C can be determined as the sample playback line. Moreover, by not counting characters such as "…", "?", "!", "っ", "、", and "。", a line from which it is hard to judge the feel of the performer's voice can be prevented from being determined as the sample playback line.
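 For illustration, the selection logic of steps S1901 to S1906 can be sketched as follows. The excluded characters follow the list above, extended with their full-width variants as an assumption; the function and variable names are illustrative and do not appear in the embodiment.

```python
# Minimal sketch of steps S1901 to S1906. Half-width and full-width "?" / "!"
# are both excluded here as an assumption.
EXCLUDED_CHARS = frozenset("…?？!！っ、。")   # characters ignored by the count (S1902)


def choose_sample_line(lines):
    """Return the line of character C with the highest effective character count."""
    best_line, best_score = None, -1
    for line in lines:                                               # S1901/S1903: visit every line once
        score = sum(1 for ch in line if ch not in EXCLUDED_CHARS)    # S1902: count only voiced characters
        if score > best_score:                                       # S1904: keep the highest score
            best_line, best_score = line, score
    return best_line                                                 # S1905: the sample playback line


if __name__ == "__main__":
    lines_of_character_c = ["えっ…!?", "おはよう、今日もいい天気だね。", "うん。"]
    print(choose_sample_line(lines_of_character_c))                  # -> おはよう、今日もいい天気だね。
```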
 Note that, in step S1901, the information processing device 101 may select, from among the lines of the character C, only the lines that appear in the first half of the scenario Si. This prevents a line that would be a spoiler from being determined as the sample playback line.
 The sample playback line determination process may be executed, for example, when the scenario information of the scenario Si is newly registered. Alternatively, it may be executed, for example, when the information processing device 101 receives from the user terminal 202 the information identifying the performer designated on the sample playback screen.
 As described above, according to the information processing device 101 of the embodiment, when the designation of a scenario Si is accepted from the user, the performer information associated with the characters C of the designated scenario Si is acquired from the storage unit 810, and the acquired performer information corresponding to a character C of the scenario Si can be displayed so that it can be identified as the performers for that character C. Furthermore, according to the information processing device 101, each time the designation of one of the plural performers for one of the characters C in the scenario Si is accepted, the voice data registered by the designated performer can be played back for the same line (the sample playback line) of that character C.
 As a result, even when there are plural performers for a character C in the scenario Si, the user can audition the voice recorded by a performer for the sample playback line simply by designating that performer. Moreover, because the voice of the same line of the character C is played back even when the performer used for sample playback is switched, the voices of the performers can be compared easily.
 Furthermore, according to the information processing device 101, among the lines of the character C, a line whose number of characters is equal to or greater than a predetermined number α can be preferentially determined as the sample playback line.
 This prevents a line whose number of characters is too small to judge the feel of the performer's voice from being determined as the sample playback line.
 Furthermore, according to the information processing device 101, among the lines of the character C, a line whose number of characters is equal to or greater than a predetermined number α and equal to or less than a predetermined number β can be preferentially determined as the sample playback line.
 This prevents a line whose number of characters is too small to judge the feel of the performer's voice, or a line whose number of characters is so large that auditioning takes a long time, from being determined as the sample playback line.
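 A minimal sketch of this α/β variant is shown below; the threshold values, the fallback when no line fits the range, and all names are assumptions for illustration.

```python
# Sketch of the variant that prefers lines whose effective length lies in
# [alpha, beta]. Thresholds and fallback behaviour are illustrative assumptions.
def choose_sample_line_bounded(lines, alpha=10, beta=40,
                               excluded=frozenset("…?？!！っ、。")):
    def effective_len(line):
        return sum(1 for ch in line if ch not in excluded)

    in_range = [line for line in lines if alpha <= effective_len(line) <= beta]
    candidates = in_range or lines          # fall back to all lines if none qualifies
    return max(candidates, key=effective_len)
```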
 Furthermore, according to the information processing device 101, among the lines of the character C, a line that does not include a predetermined character code can be preferentially determined as the sample playback line.
 This prevents a line containing characters that are difficult to express by voice, such as "…", "?", and "!", from being determined as the sample playback line.
 Furthermore, according to the information processing device 101, among the lines of the character C, a line that appears in the first half of the scenario Si can be preferentially determined as the sample playback line.
 This prevents a line that would be a spoiler from being determined as the sample playback line.
 Furthermore, according to the information processing device 101, the selection of a performer for the main playback can be accepted from among the plural performers for the character C, and when the content corresponding to the scenario Si is played back, the voice data registered by the selected performer can be played back for the lines of the character C.
 As a result, when reading or listening to the content corresponding to the scenario Si, the user can designate and enjoy a preferred combination of the character C and a performer.
 Furthermore, according to the information processing device 101, when a request to share the combination of a character C and a performer in the scenario Si with another user is accepted, combination information specifying the combination of the character C and the performer selected for the main playback can be generated and output. Then, according to the information processing device 101, when the combination information is accepted from a user, the voice data registered by the performer specified from the combination information can be played back for the lines of the character C when the content corresponding to the scenario Si is played back.
 This makes it possible for plural users to share a recommended combination of a character C and a performer in the scenario Si designated by a certain user.
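 For illustration, the combination information could be represented as follows. This is a sketch only; the JSON layout, the field names, and the tuple-keyed lookup standing in for the registered voice data are assumptions, since the embodiment only requires that the character-performer combination be identifiable.

```python
# Sketch of the shared character-performer combination information.
import json


def build_combination_info(scenario_id, cast):
    """cast: character_id -> performer selected for main playback."""
    return json.dumps({"scenario": scenario_id, "cast": cast}, ensure_ascii=False)


def resolve_voice(combination_info, registered_voices, character_id, line_id):
    """Return the shared performer's registered voice data for one line."""
    info = json.loads(combination_info)
    performer_id = info["cast"][character_id]
    return registered_voices[(info["scenario"], character_id, performer_id, line_id)]
```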
 Furthermore, according to the information processing device 101, when the designation of a scenario Si is accepted from the user, the performer information associated with each character C of the designated scenario Si is acquired from the storage unit 810, and for each character C of the scenario Si, the acquired performer information corresponding to the character C can be displayed so that it can be identified as the performers for that character C.
 As a result, the performer information for all the characters C in the scenario Si can be displayed as a list, making it easy to check which performers can be designated for each character C.
 For these reasons, according to the information processing device 101 and the voice playback system 200 of the embodiment, when there are plural performers for a character C in the scenario Si, the voice recorded by each performer for the same line of the character C can be auditioned simply by designating one of the plural performers. As a result, even when there are plural performers for the character C, it becomes easier for the user to designate a preferred performer from among the plural performers.
 As another example of the configuration described above, the user terminal 202 may be configured to accept, before or after step S1706, the designation of one of the lines in the scenario Si. When the designation of a line has been accepted and one of the performers for the character C is then designated by the user's operation input on the subsequent sample playback screen, the user terminal 202 transmits to the information processing device 101 the information identifying the designated performer and the information identifying the designated line of the scenario Si. In step S1707, upon receiving the information identifying the designated performer and the information identifying the designated line, the information processing device 101 refers to the registered voice DB 230 and acquires, from the registered voice information of the designated performer among the registered voice information corresponding to the character C identified from that information, the designated line as the voice data of the sample playback line. Thereafter, each time a performer is designated, the user terminal 202 transmits the information identifying that performer to the information processing device 101, and the information processing device 101 acquires the designated line as the voice data of the sample playback line.
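 A minimal sketch of the request handling in this variant is shown below: the terminal designates both a performer and a line, and the server returns that performer's registered recording of exactly that line. The request fields and the dictionary standing in for the registered voice DB 230 are assumptions for illustration.

```python
# Sketch of the server-side lookup corresponding to step S1707 in this variant.
from dataclasses import dataclass


@dataclass(frozen=True)
class SampleRequest:
    scenario_id: str
    character_id: str
    performer_id: str
    line_id: str


def get_sample_voice(request, registered_voices):
    """Return the designated performer's voice data for the designated line."""
    key = (request.scenario_id, request.character_id,
           request.performer_id, request.line_id)
    voice_data = registered_voices.get(key)
    if voice_data is None:
        raise KeyError("no registered voice for the requested combination")
    return voice_data
```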
 The voice playback method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation. The voice playback program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, a DVD, or a USB memory, and is executed by being read from the recording medium by the computer. The voice playback program may also be distributed via a network such as the Internet.
 The information processing device 101 described in the present embodiment can also be realized by an application-specific IC such as a standard cell or a structured ASIC (Application Specific Integrated Circuit), or by a PLD (Programmable Logic Device) such as an FPGA.
101 Information processing device
102 Terminal device
110, 810 Storage unit
120, 1400 Sample playback screen
200 Voice playback system
201 Administrator terminal
202 User terminal
210 Network
220 Scenario DB
230 Registered voice DB
240 Extraction condition DB
300, 400 Bus
301, 401 CPU
302, 402 Memory
303 Disk drive
304 Disk
305, 403 Communication I/F
306, 409 Portable recording medium I/F
307, 410 Portable recording medium
404 Camera
405 Display
406 Input device
407 Microphone
408 Speaker
801 Reception unit
802 Display control unit
803 Playback control unit
804 Determination unit
900 Login screen
1000 New registration screen
1100 App home screen
1200 Work introduction screen
1300 Scenario playback screen
C Character
S1 to Sn, Si Scenario

Claims (11)

  1.  A voice playback program that causes a computer to execute a process comprising:
     upon accepting the designation of a scenario from a user, acquiring, from a storage unit, performer information associated with a character in the designated scenario;
     displaying the acquired performer information corresponding to the character in the scenario in a manner that allows the performer information to be identified as performers for that character; and
     each time the designation of one of a plurality of performers for any one of the characters in the scenario is accepted, playing back, for the same line of the one of the characters, voice data registered by the designated performer.
  2.  The voice playback program according to claim 1, causing the computer to further execute a process of preferentially determining, from among the lines of the one of the characters, a line having at least a predetermined number of characters as a line to be played back, wherein
     the process of playing back comprises, each time the designation of the one of the performers is accepted, playing back, for the determined line to be played back, the voice data registered by the designated one of the performers.
  3.  The voice playback program according to claim 1, causing the computer to further execute a process of preferentially determining, from among the lines of the one of the characters, a line that does not include a predetermined character code as a line to be played back, wherein
     the process of playing back comprises, each time the designation of the one of the performers is accepted, playing back, for the determined line to be played back, the voice data registered by the designated one of the performers.
  4.  The voice playback program according to claim 1, causing the computer to further execute a process of preferentially determining, from among the lines of the one of the characters, a line that appears in the first half of the scenario as a line to be played back, wherein
     the process of playing back comprises, each time the designation of the one of the performers is accepted, playing back, for the determined line to be played back, the voice data registered by the designated one of the performers.
  5.  The voice playback program according to claim 1, causing the computer to further execute a process comprising:
     accepting selection of a performer for main playback from among the plurality of performers for the one of the characters; and
     playing back, when content corresponding to the scenario is played back, the voice data registered by the selected performer for the lines of the one of the characters.
  6.  The voice playback program according to claim 5, causing the computer to further execute a process comprising:
     upon accepting a request to share a combination of a character and a performer in the scenario with another user, generating combination information that specifies the combination of the one of the characters and the selected performer, and outputting the generated combination information; and
     upon accepting the combination information from a user, playing back, when the content corresponding to the scenario is played back, the voice data registered by the performer specified from the combination information for the lines of the one of the characters.
  7.  The voice playback program according to claim 2, wherein
     the process of determining preferentially determines, from among the lines of the one of the characters, a line whose number of characters is at least a first number and at most a second number as the line to be played back.
  8.  The voice playback program according to claim 1, causing the computer to further execute a process of displaying, upon accepting the designation of a scenario from the user, the characters in the designated scenario in a selectable manner, wherein
     the process of acquiring comprises, upon accepting selection of any one of the characters in the scenario, acquiring, from the storage unit, the performer information associated with the selected one of the characters, and
     the process of displaying in an identifiable manner comprises displaying the acquired performer information corresponding to the one of the characters in a manner that allows the performer information to be identified as performers for that character.
  9.  The voice playback program according to claim 1, wherein
     the process of acquiring comprises, upon accepting the designation of a scenario from the user, acquiring, from the storage unit, the performer information associated with each character in the designated scenario, and
     the process of displaying in an identifiable manner comprises displaying, for each character in the scenario, the acquired performer information corresponding to the character in a manner that allows the performer information to be identified as performers for that character.
  10.  A voice playback method in which a computer executes a process comprising:
     upon accepting the designation of a scenario from a user, acquiring, from a storage unit, performer information associated with a character in the designated scenario;
     displaying the acquired performer information corresponding to the character in the scenario in a manner that allows the performer information to be identified as performers for that character; and
     each time the designation of one of a plurality of performers for any one of the characters in the scenario is accepted, playing back, for the same line of the one of the characters, voice data registered by the designated performer.
  11.  A voice playback system that:
     upon accepting the designation of a scenario from a user, acquires, from a storage unit, performer information associated with a character in the designated scenario;
     displays the acquired performer information corresponding to the character in the scenario in a manner that allows the performer information to be identified as performers for that character; and
     each time the designation of one of a plurality of performers for any one of the characters in the scenario is accepted, plays back, for the same line of the one of the characters, voice data registered by the designated performer.
PCT/JP2019/042944 2019-10-31 2019-10-31 Voice playback program, voice playback method, and voice playback system WO2021084718A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/042944 WO2021084718A1 (en) 2019-10-31 2019-10-31 Voice playback program, voice playback method, and voice playback system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/042944 WO2021084718A1 (en) 2019-10-31 2019-10-31 Voice playback program, voice playback method, and voice playback system

Publications (1)

Publication Number Publication Date
WO2021084718A1 true WO2021084718A1 (en) 2021-05-06

Family

ID=75714967

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/042944 WO2021084718A1 (en) 2019-10-31 2019-10-31 Voice playback program, voice playback method, and voice playback system

Country Status (1)

Country Link
WO (1) WO2021084718A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002202786A (en) * 2000-12-28 2002-07-19 Casio Comput Co Ltd Digital book apparatus and voice reproducing system
JP2002304482A (en) * 2001-04-06 2002-10-18 Casio Comput Co Ltd Work disclosing system, work disclosing method and program
JP2018169691A (en) * 2017-03-29 2018-11-01 富士通株式会社 Reproduction control device, cartoon data provision program, voice reproduction program, reproduction control program, and reproduction control method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19950824

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19950824

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP