WO2019038807A1 - Information processing system and information processing program - Google Patents

Information processing system and information processing program

Info

Publication number
WO2019038807A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
voice
processing
speech
Prior art date
Application number
PCT/JP2017/029794
Other languages
French (fr)
Japanese (ja)
Inventor
哲生 塩飽
穂積 金子
一真 岡本
Original Assignee
リーズンホワイ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by リーズンホワイ株式会社
Priority to JP2019537438A priority Critical patent/JPWO2019038807A1/en
Priority to PCT/JP2017/029794 priority patent/WO2019038807A1/en
Publication of WO2019038807A1 publication Critical patent/WO2019038807A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers

Definitions

  • The present invention relates to an information processing system and an information processing program.
  • A medical information input device that stores information related to medical treatment and assists medical workers has been proposed (see Patent Document 1).
  • With this device, the medical worker stores information related to medical treatment (for example, data on treatment performed on a patient by the medical worker), and that information is reflected in the medical field.
  • The present invention has been made in view of the above, and aims to provide an information processing system and an information processing program capable of reliably reflecting the content of an utterance.
  • An information processing system serves a first user using a first call terminal and a second user using a second call terminal.
  • It comprises receiving means for receiving the utterance voice of an utterance made between the first user and the second user using the first call terminal or the second call terminal, and converting means for converting voice data corresponding to the utterance voice received by the receiving means into text data corresponding to that voice data.
  • It further comprises processing means for performing utterance-voice-related processing, which is processing related to the utterance voice received by the receiving means, based on at least one of the voice data corresponding to the received utterance voice and the text data converted by the converting means.
  • The processing means performs, as the utterance-voice-related processing, data recognition processing that presents the voice data or the text data to the first user or the second user for confirmation.
  • The receiving means receives, as the utterance voice, an utterance related to a request from the first user to the second user.
  • The processing means performs, as the utterance-voice-related processing, request execution support processing for helping the second user execute the request corresponding to the utterance voice received by the receiving means.
  • The processing means performs, as the request execution support processing, request recognition processing for making the second user aware of the request corresponding to the utterance voice received by the receiving means so that the request is executed.
  • In the information processing system according to any one of claims 1 to 5, the receiving means receives, as the utterance voice, an utterance related to the use of an object, and the processing means performs, as the utterance-voice-related processing, usage information output processing for outputting information related to the use of the object.
  • The information processing system according to any one of claims 1 to 6 further comprises speech storage means for storing at least the text data converted by the converting means.
  • The first user or the second user is a medical worker, and the receiving means receives, as the utterance voice, an utterance related to the state of a patient of the first user or the second user.
  • The processing means performs, as the utterance-voice-related processing, state information output processing for outputting information on the state of the patient based on at least the text data stored in the speech storage means.
  • An information processing program causes a computer to function as: receiving means for receiving the utterance voice of an utterance made between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal; converting means for converting voice data corresponding to the utterance voice received by the receiving means into text data corresponding to that voice data; and processing means for performing utterance-voice-related processing, which is processing related to the utterance voice received by the receiving means, based on at least one of the voice data and the text data converted by the converting means.
  • According to the information processing system, the utterance-voice-related processing can be performed based on at least one of the voice data corresponding to the utterance voice received by the receiving means and the text data converted by the converting means, so the content of the utterance can be reliably reflected.
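The receive/convert/process pipeline described by these means can be sketched as follows. This is a minimal illustration only: the class and method names are hypothetical, and a real system would plug an actual speech-recognition service in as the transcriber rather than the toy callable used here.

```python
from dataclasses import dataclass


@dataclass
class Utterance:
    source_id: str     # terminal ID of the caller, e.g. "IDd1"
    dest_id: str       # terminal ID of the callee, e.g. "IDd2"
    voice_data: bytes  # waveform data of the utterance voice


class InformationProcessingSystem:
    """Receiving, converting, and processing means in one object (sketch)."""

    def __init__(self, transcriber):
        # Any callable bytes -> str stands in for a speech recognizer.
        self.transcriber = transcriber

    def receive(self, utterance: Utterance) -> bytes:
        # Receiving means: accept the utterance voice from a call terminal.
        return utterance.voice_data

    def convert(self, voice_data: bytes) -> str:
        # Converting means: voice data -> text data.
        return self.transcriber(voice_data)

    def process(self, voice_data: bytes = None, text_data: str = None) -> str:
        # Processing means: utterance-voice-related processing based on
        # at least one of the voice data and the text data (here, a simple
        # data recognition notification).
        if text_data is not None:
            return f"notify: {text_data}"
        return "notify: (voice data attached)"


# Usage: a trivial "transcriber" that just decodes bytes stands in for ASR.
system = InformationProcessingSystem(transcriber=lambda b: b.decode("utf-8"))
u = Utterance("IDd1", "IDd2", "administer drug M2 by 9:00 am".encode())
voice = system.receive(u)
text = system.convert(voice)
result = system.process(voice_data=voice, text_data=text)
```

The three methods correspond one-to-one with the receiving, converting, and processing means of the claims; the split makes it possible to run the processing on the voice data alone when no transcription is available.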
  • For example, the content of an utterance at a medical site can be reflected, making it possible to facilitate activities at the medical site or to make effective use of the content of the utterance there.
  • By performing the data recognition processing, the first user or the second user can be made to confirm the voice data or the text data, so the content of the utterance can be reflected to the first user or the second user. It is also possible, for example, to remind the first user or the second user that an utterance was made using the first call terminal or the second call terminal.
  • By performing the request execution support processing, it becomes possible to help the second user execute the request corresponding to the utterance voice, facilitating the activities of the first user and the second user.
  • By performing the request recognition processing, the second user can be made to recognize the request, and can thus be reminded that the request was made.
  • When the second user is made to recognize a request at a medical site, activities at the medical site can be facilitated.
  • By performing the request information request processing, information missing from the request corresponding to the utterance voice can be supplied, so an appropriate request can be made, further facilitating the activities of the first user and the second user.
  • By performing the usage information output processing, information on the use of the object can be output, so the content of the utterance can be used, for example, to improve the object; the content of the utterance can thus be used effectively.
  • According to the information processing system of the seventh aspect, by performing the state information output processing, information on the state of the patient can be output, so the content of the utterance can be used for the treatment of the patient; the content of the utterance can thus be used effectively.
  • Likewise, according to the information processing program, the utterance-voice-related processing can be performed, so the content of the utterance can be reliably reflected.
  • For example, the content of an utterance at a medical site can be reflected, making it possible to facilitate activities at the medical site or to make effective use of the content of the utterance there.
  • FIG. 1 is a diagram showing an application example of the medical system. FIG. 2 is a block diagram of the medical system. FIG. 3 is a diagram illustrating terminal information. FIG. 4 is a diagram illustrating utterance record information. FIG. 5 is a flowchart of the storage notification process. A further figure is a flowchart of the medicine information output process.
  • The present embodiment relates generally to an information processing system and an information processing program.
  • The "information processing system" is a system that processes arbitrary information, specifically a system that processes information on utterances; this concept includes, for example, a dedicated system for processing utterance information, as well as a general-purpose system to which a function for processing utterance information has been added.
  • This "information processing system" includes, for example, a system realized by a single integrated computer, or a system realized by a plurality of distributed computers that can communicate with each other, and as an example comprises the receiving means, the converting means, and the processing means.
  • The "receiving means" is a means for receiving the utterance voice of an utterance between the first user using the first call terminal and the second user using the second call terminal, the utterance being performed using the first call terminal or the second call terminal.
  • An utterance between the first user using the first call terminal and the second user using the second call terminal means an utterance from the first user to the second user, or from the second user to the first user; specifically, this concept includes a dialog in which both speak interactively at the same time, as well as a one-way utterance from one user to the other at the same time or at a different time.
  • The "first call terminal" is a call terminal used by the first user; this concept includes, for example, a portable terminal such as a mobile phone (for example, a smartphone), a PHS telephone, or a tablet device, as well as a fixed terminal such as a stationary telephone or a personal computer.
  • The "second call terminal" is a call terminal different from the first call terminal and used by the second user; this concept likewise includes the portable terminals and fixed terminals described above.
  • The "first user" is a user who uses the first call terminal, for example a person registered as the user of the first call terminal; this concept includes a medical worker such as a doctor or a nurse, as well as any person other than a medical worker (for example, a clerk of a carrier, a driver, etc.).
  • The "second user" is a person different from the first user who uses the second call terminal, for example a person registered as the user of the second call terminal; this concept likewise includes the medical workers described above, as well as any person other than a medical worker (for example, a carrier clerk, a driver, etc.).
  • The "utterance voice" is the voice produced by an utterance, for example a voice that the information processing system can receive via the first call terminal or the second call terminal.
  • The "converting means" is a means for converting the voice data corresponding to the utterance voice received by the receiving means into text data corresponding to that voice data.
  • The "voice data" is, for example, information specifying the waveform corresponding to the vibration of air caused by the voice.
  • the “text data” is, for example, character information.
  • The "processing means" is a means for performing utterance-voice-related processing, which is processing related to the utterance voice received by the receiving means, based on at least one of the voice data corresponding to that utterance voice and the text data converted by the converting means.
  • The "utterance-voice-related processing" is processing related to the utterance voice received by the receiving means, more specifically processing performed based on at least one of the voice data and the text data; this concept includes, for example, data recognition processing, request execution support processing, usage information output processing, and state information output processing.
  • The "data recognition processing" is processing that outputs information for causing the first user or the second user to confirm the voice data or the text data; this concept includes, for example, processing that notifies the user of the voice data or the text data.
  • The "request execution support processing" is processing that outputs information for causing the second user to execute the request corresponding to the utterance voice; this concept includes, for example, request recognition processing and request information request processing.
  • The "request recognition processing" is processing that outputs information for making the second user recognize the request corresponding to the utterance voice; this concept includes, for example, processing that stores the content of the request as a memorandum (for example, a so-called reminder or ToDo list) and notifies the second user.
  • The "request information request processing" is processing that determines whether information is missing from the request corresponding to the utterance voice and, when it is determined that information is missing, outputs information requesting the first user to supply the missing information; this concept includes, for example, processing that determines whether required information such as the execution deadline of the request is present.
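The request recognition processing and the request information request processing just defined could be sketched as follows. The field names, the required-field set, and the function names are hypothetical illustrations; the embodiment does not specify how a request is represented internally.

```python
# Fields a request is assumed to need before it can be executed
# (hypothetical: patient, drug, and execution deadline).
REQUIRED_FIELDS = ("patient", "drug", "deadline")


def check_missing_info(request: dict) -> list:
    """Request information request processing: list fields the utterance lacked,
    so the first user can be asked to supply them."""
    return [f for f in REQUIRED_FIELDS if not request.get(f)]


def recognize_request(todo_list: list, request: dict) -> None:
    """Request recognition processing: store the request as a memorandum
    (reminder/ToDo entry) so the second user is made aware of it."""
    todo_list.append(dict(request))


# Usage with the drug-administration example from the embodiment.
todos = []
request = {"patient": "P1", "drug": "M2", "deadline": "07/03 09:00"}
missing = check_missing_info(request)
if missing:
    # Output information requesting the first user to supply what is missing.
    print(f"please provide: {', '.join(missing)}")
else:
    recognize_request(todos, request)
```

Checking for missing information before registering the reminder mirrors the order implied by the text: an incomplete request is sent back to the first user rather than being put on the second user's list.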
  • The "usage information output processing" is processing that outputs information on the use of an object, for example information on the use of a drug, information on the use of medical equipment, or information on the use of other things; this concept includes, for example, information on the side effects of a drug and information specifying whether a medical device can be used (specifically, information specifying the content of a planned surgery).
  • The "object" is the object of the utterance; this concept includes, for example, an object related to medical treatment, such as a drug or a medical device, as well as any object unrelated to medical treatment.
  • The "state information output processing" is processing that outputs information related to the condition of a patient; this concept includes, for example, processing that outputs information on the state of the patient's treatment.
  • FIG. 1 is a view showing an application example of the medical system according to the present embodiment.
  • FIG. 2 is a block diagram of the medical system.
  • Although the number of call terminals included in the medical system 100 of FIG. 2 is arbitrary, the first call terminal 1 and the second call terminal 2 will be specifically exemplified and described here.
  • The first call terminal 1 and the second call terminal 2 have the same configuration as each other, but are labeled "first" and "second" for convenience.
  • The medical system 100 of FIG. 2 schematically includes the first call terminal 1, the second call terminal 2, and the management server 3.
  • The first call terminal 1, the second call terminal 2, and the management server 3 are communicably connected to each other wirelessly via a network.
  • The medical system 100 may be realized as a system separate from an existing medical system (for example, a system in which a plurality of doctors or a plurality of hospitals can exchange various information using call terminals, stationary terminals, or the like), may be realized by being incorporated into such an existing medical system, or may be realized as part of a so-called social networking service (SNS). Here, the medical system 100 will be described as being incorporated into an existing medical system, for example.
  • The installation location and implementation of the management server 3 are arbitrary; it may be configured as a plurality of distributed servers, or as one or more servers installed in a specific place such as the server management room of a hospital. For convenience of explanation, it will be described as a single server.
  • The first call terminal 1 and the second call terminal 2 will be described below as being carried and used by doctors working at a relatively large hospital such as a general hospital.
  • The first call terminal 1 is, for example, a smartphone assigned to and used by a doctor in the hospital (Dr. AA), who is the first user, and as an example comprises a communication unit 11, an operation unit 12, a display 13, a speaker 14, a microphone 15, a recording unit 16, and a control unit 17.
  • The communication unit 11 is communication means for communicating with the other call terminals of the medical system 100, and performs, for example, voice communication with the second call terminal 2 via the management server 3.
  • Although the specific type and configuration of the communication unit 11 are arbitrary, it can be configured, for example, to include a known wireless communication circuit.
  • The operation unit 12 is operation means that receives operation inputs from the user.
  • Although the specific type and configuration of the operation unit 12 are arbitrary, the operation unit 12 is configured, for example, as a known touch pad; the touch pad is formed to be transparent or translucent and is provided on the front surface of the display 13 so as to overlap its display surface, thereby constituting a touch panel together with the display 13.
  • The display 13 is display means for displaying various images under the control of the control unit 17. Although its specific type and configuration are arbitrary, the display 13 can be configured, for example, using a flat panel display such as a known liquid crystal display or organic EL display.
  • The speaker 14 is audio output means that outputs various sounds under the control of the control unit 17. Although its specific type and configuration are arbitrary, the speaker 14 can be configured, for example, using a known audio output circuit.
  • The microphone 15 is sound collection means that receives the utterance voice.
  • Although the specific type and configuration of the microphone 15 are arbitrary, it can be configured using known microphone components (for example, a diaphragm and a coil).
  • The recording unit 16 is recording means that records the programs and various data necessary for the operation of the first call terminal 1.
  • The recording unit 16 is configured, for example, using a flash memory as an external recording device.
  • However, instead of or together with the flash memory, any other recording medium may be used, including a magnetic recording medium such as a hard disk or an optical recording medium such as a DVD or Blu-ray disc.
  • The control unit 17 is control means that controls the first call terminal 1; specifically, it is a computer comprising a CPU, various programs interpreted and executed on the CPU (including basic control programs such as an OS and application programs that realize specific functions), and an internal memory such as a RAM for storing the programs and various data.
  • The information processing program is installed in the first call terminal 1 via an arbitrary recording medium or network, thereby substantially configuring each part of the control unit 17 (the same applies to the control unit 33 of the management server 3 described later).
  • The second call terminal 2 is, for example, a smartphone assigned to and used by another doctor in the hospital (Dr. BB), who is the second user, and as an example comprises a communication unit 21, an operation unit 22, a display 23, a speaker 24, a microphone 25, a recording unit 26, and a control unit 27.
  • The components of the second call terminal 2 are configured in the same manner as the identically named components of the first call terminal 1.
  • The management server 3 is the information processing system, and comprises, for example, a communication unit 31, a recording unit 32, and a control unit 33.
  • The communication unit 31 is communication means that communicates with each call terminal of the medical system 100 and relays, for example, voice communication between the call terminals.
  • Although the specific type and configuration of the communication unit 31 are arbitrary, it can be configured, for example, to include known wireless communication circuits and relay circuits.
  • The recording unit 32 is recording means that records the programs and various data necessary for the operation of the management server 3.
  • The recording unit 32 is configured, for example, using a hard disk (not shown) as an external recording device.
  • However, instead of or together with the hard disk, any other recording medium may be used, including a magnetic recording medium such as a magnetic disk or an optical recording medium such as a DVD or Blu-ray disc.
  • The recording unit 32 includes, for example, a terminal information database 321 (hereinafter, "database" is abbreviated as "DB"), an utterance record information DB 322, a voice data DB 323, and a text data DB 324.
  • The terminal information DB 321 is terminal information storage means that stores terminal information.
  • The "terminal information" is information related to each call terminal of the medical system 100.
  • FIG. 3 is a diagram illustrating terminal information. As shown in FIG. 3, the terminal information is configured, for example, by associating the items "terminal ID" and "doctor name information" with the information corresponding to each item.
  • The information corresponding to the item "terminal ID" is terminal identification information that uniquely identifies each call terminal of the medical system 100 (hereinafter, "identification information" is abbreviated as "ID"); in FIG. 3 these are terminal IDs such as "IDd1" and "IDd2".
  • The information corresponding to the item "doctor name information" is doctor name information specifying the name of the doctor using each call terminal of the medical system 100 (in FIG. 3, for convenience of description, "AA" is the name of the doctor illustrated on the left side of FIG. 1 who uses the first call terminal 1, and "BB" is the name of the doctor illustrated on the right side of FIG. 1 who uses the second call terminal 2).
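The terminal information of FIG. 3 is, in effect, a mapping from terminal ID to doctor name. As a purely illustrative sketch (the DB implementation is not specified by the embodiment, so a plain dictionary stands in for the terminal information DB 321):

```python
# Terminal information DB 321 (sketch): terminal ID -> doctor name information.
terminal_info = {
    "IDd1": "AA",  # first call terminal 1, used by Dr. AA
    "IDd2": "BB",  # second call terminal 2, used by Dr. BB
}


def doctor_name(terminal_id: str) -> str:
    """Look up the doctor using the call terminal with the given terminal ID."""
    return terminal_info[terminal_id]
```

Such a lookup is what lets later processing translate the source and destination terminal IDs of a recorded call back into doctor names.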
  • The utterance record information DB 322 is utterance record information storage means that stores utterance record information.
  • The "utterance record information" is information in which utterances are recorded, more specifically information recording the calls (communications) between the call terminals of the medical system 100.
  • FIG. 4 is a diagram illustrating utterance record information. As shown in FIG. 4, the utterance record information is configured, for example, by associating the items "source information", "destination information", "date and time information", "voice data identification information", and "text data identification information" with the information corresponding to each item.
  • The information corresponding to the item "source information" is source information specifying the source of the call (communication); in FIG. 4 it is the terminal ID of the source call terminal, such as "IDd2".
  • The information corresponding to the item "destination information" is destination information specifying the destination of the call (communication); in FIG. 4 it is the terminal ID of the destination call terminal, such as "IDd1".
  • The information corresponding to the item "date and time information" is date and time information specifying the date and time when the call (communication) was made; in FIG. 4 it is an 8-digit number specifying the month, day, hour, and minute, such as "06151401", which identifies 14:01 on June 15.
  • The information corresponding to the item "voice data identification information" is voice data identification information identifying voice data stored in the voice data DB 323; in FIG. 4 it is a voice data file name such as "vFile1".
  • The information corresponding to the item "text data identification information" is text data identification information identifying text data stored in the text data DB 324; in FIG. 4 it is a text data file name such as "tFile1". Such utterance record information is stored by executing the storage notification process described later.
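One record of the utterance record information of FIG. 4 associates these five items. A minimal sketch, using the example values given in the text (the record type itself is hypothetical; the embodiment only specifies the items and their meanings):

```python
from dataclasses import dataclass


@dataclass
class UtteranceRecord:
    """One entry in the utterance record information DB 322 (sketch)."""
    source: str       # terminal ID of the source call terminal
    destination: str  # terminal ID of the destination call terminal
    datetime: str     # 8 digits, MMDDhhmm, e.g. "06151401" = June 15, 14:01
    voice_file: str   # file name of the voice data in the voice data DB 323
    text_file: str    # file name of the text data in the text data DB 324


# The example record from FIG. 4 as described in the text.
record = UtteranceRecord(
    source="IDd2",
    destination="IDd1",
    datetime="06151401",
    voice_file="vFile1",
    text_file="tFile1",
)
```

Keeping only file names in the record, rather than the voice and text data themselves, matches the text's separation of the utterance record information DB 322 from the voice data DB 323 and text data DB 324.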
  • The voice data DB 323 is voice data storage means that stores voice data.
  • The "voice data" is, for example, information specifying the waveform corresponding to the vibration of air caused by the voice, more specifically data specifying the voice of a conversation between call terminals of the medical system 100, stored, for example, as a file of audio data compressed by a known compression method. Such voice data is stored by executing the storage notification process described later.
  • The text data DB 324 is text data storage means that stores text data, and serves as the speech storage means that stores the text data converted by the conversion unit 332 described later.
  • The "text data" is, for example, character information converted from the voice data by the conversion unit 332, and is stored, for example, as a text data file. Such text data is stored by executing the storage notification process described later.
  • The control unit 33 of FIG. 2 is control means that controls the management server 3, and functionally comprises a receiving unit 331, a conversion unit 332, and a processing unit 333.
  • The receiving unit 331 is receiving means that receives the utterance voice of utterances made between users using the call terminals of the medical system 100; more specifically, it receives the utterance voice of utterances made between the first user using the first call terminal 1 and the second user using the second call terminal 2, performed using the first call terminal 1 or the second call terminal 2.
  • The conversion unit 332 is conversion means that converts the voice data corresponding to the utterance voice received by the receiving unit 331 into text data corresponding to that voice data.
  • The processing unit 333 is processing means that performs utterance-voice-related processing, which is processing related to the utterance voice received by the receiving unit 331, based on at least one of the voice data corresponding to that utterance voice and the text data converted by the conversion unit 332. The processing performed by each unit of the control unit 33 will be described later.
  • Next, the processing executed by the medical system 100 configured as described above will be described.
  • Here, the storage notification process and the medicine information output process will be described.
  • FIG. 5 is a flowchart of the storage notification process (steps will be abbreviated as “S” in the following description of each process).
  • The "storage notification process" is a process including the data recognition processing; for example, it stores the voice data and the text data and also causes the first user or the second user to confirm the voice data or the text data.
  • The timing for executing the storage notification process is arbitrary. For example, its execution is started when a call is made to a call-destination terminal using a call terminal of the medical system 100 (that is, when communication of the utterance voice between the call terminals via the management server 3 has started), or when a call-destination terminal is called using a call terminal of the medical system 100 and the call switches to answering-machine mode without being answered within a predetermined time (for example, 30 seconds), so that voice communication between the call terminal and the management server 3 is started.
  • Here, the storage notification process will be described as being executed when a call is made to the call terminal of the call destination.
  • Note that each call terminal records a list of call-destination candidates (for example, a so-called address book): the recording unit 16 of the first call terminal 1 records a list of call-destination candidates including the terminal ID "IDd1" of its own terminal and "IDd2", and the recording unit 26 of the second call terminal 2 records a list of call-destination candidates including "IDd2" and "IDd1". The call terminal that makes a call notifies the management server 3 of its own terminal ID and the terminal ID of the call destination. The storage notification process will be described from the point at which its execution has started.
  • The following case will be described as an example: the requesting doctor, who is the first user in FIG. 1, uses the first call terminal 1 to call the second call terminal 2 of the requested doctor, who is the second user, and makes a request stating that patient P1, to whom drug M1 was administered, developed a pressure pleural effusion complication on July 1 and also experienced the side effect of anorexia, and that drug M2 should therefore be administered to patient P1 by 9:00 a.m. on July 3 (hereinafter, the "request utterance regarding drug administration").
  • Since a well-known call process can be applied to the call between the first call terminal 1 and the second call terminal 2, only the processing characteristic of the present application performed by the management server 3 will be described here.
  • In SA1, the reception unit 331 starts recording. Specifically, although the method is arbitrary, for example, it starts a recording process that receives the utterance voice from the first call terminal 1 and the utterance voice from the second call terminal 2 via the communication unit 31 and keeps storing them in the voice data DB 323 as voice data. When the requesting doctor speaks into the first call terminal 1, the utterance voice is collected by the microphone 15 of the first call terminal 1, transmitted to the management server 3 via the communication unit 11, relayed by the management server 3, and transmitted to the second call terminal 2; once the recording process has started, the reception unit 331 of the management server 3 receives and records the utterance voice transmitted from the first call terminal 1. Recording by the management server 3 when the requested doctor speaks into the second call terminal 2 is performed in the same manner.
  • Here, for example, when the requesting doctor makes the "request utterance regarding drug administration", the uttered voice is collected by the microphone 15 of the first call terminal 1, the collected uttered voice is transmitted to the management server 3 via the communication unit 11, and the transmitted uttered voice is relayed by the management server 3 and transmitted to the second call terminal 2. Further, the reception unit 331 of the management server 3 receives the uttered voice transmitted from the first call terminal 1, generates voice data corresponding to the "request utterance regarding drug administration" from the received uttered voice, and stores it in the voice data DB 323 in a file format compressed by a known compression method.
  • Next, in SA2, the reception unit 331 determines whether the call has ended. Specifically, although the method is arbitrary, for example, it is determined by a known method whether the first call terminal 1 or the second call terminal 2 has disconnected the telephone communication, and whether the call has ended is determined based on the determination result. When neither the first call terminal 1 nor the second call terminal 2 has disconnected (that is, when telephone communication between the first call terminal 1 and the second call terminal 2 is still in progress), it is determined that the call has not ended (NO in SA2), and SA2 is repeatedly executed until it is determined that the call has ended.
  • When it is determined that the call has ended (YES in SA2), the reception unit 331 ends the recording in SA3 of FIG. 5. Specifically, although the method is arbitrary, for example, the voice data stored by the above-described recording process between the start of recording at SA1 and the end of recording at SA3 is put together as one file, a file name (a file name that can be uniquely identified in the voice data DB 323) is attached to this file according to a predetermined algorithm, and the voice data is stored in the voice data DB 323 under the attached file name.
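The recording flow of SA1 through SA3 can be sketched as follows. All class and method names here are hypothetical, since the embodiment leaves the implementation, including the file-naming algorithm, to any known method.

```python
import uuid

class RecordingSession:
    """Sketch of SA1-SA3: accumulate received uttered voice, then store it
    as one uniquely named file when the call ends."""

    def __init__(self, voice_data_db):
        # voice_data_db stands in for the voice data DB 323
        self.voice_data_db = voice_data_db
        self.chunks = []

    def receive_chunk(self, audio_bytes):
        # SA1: uttered voice received via the communication unit 31 is
        # continuously accumulated while the call is in progress.
        self.chunks.append(audio_bytes)

    def end_call(self):
        # SA3: the accumulated audio is put together as one file and given
        # a file name that can be uniquely identified in the DB.
        data = b"".join(self.chunks)
        file_name = "vFile_" + uuid.uuid4().hex  # any collision-free naming scheme
        self.voice_data_db[file_name] = data
        return file_name
```

Compression by a known method, as the embodiment mentions, could be applied to `data` before storage.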
  • Here, for example, the file name "vFile3" is attached to the one file of voice data corresponding to the "request utterance regarding drug administration", and the file is stored in the voice data DB 323.
  • Next, in SA4, the conversion unit 332 converts the voice data corresponding to the uttered voice received by the reception unit 331 into text data corresponding to the voice data, and stores the converted text data. Specifically, the voice data stored at SA3 is acquired, the acquired voice data is converted into text data by a known conversion algorithm, a file name (a file name that can be uniquely identified in the text data DB 324) is attached to the converted text data according to a predetermined algorithm, and the text data is stored in the text data DB 324 under the attached file name.
  • Since a known method can be used as the specific method of converting voice data into text data, a detailed description is omitted.
  • Here, for example, "vFile3" of the voice data DB 323 is acquired, the acquired "vFile3" is converted into text data forming one file, the file name "tFile3" is attached to the converted text data, and the text data is stored in the text data DB 324.
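The conversion step SA4 amounts to the following sketch. The `transcribe` callable stands in for whatever known speech-to-text engine is used, and the "vFile3" to "tFile3" naming convention is inferred from the example.

```python
def convert_and_store(voice_file_name, voice_data_db, text_data_db, transcribe):
    """SA4 sketch: convert stored voice data to text data and store it
    under a matching unique file name."""
    voice_data = voice_data_db[voice_file_name]
    text = transcribe(voice_data)  # any known conversion algorithm
    # Mirror the example naming: "vFile3" -> "tFile3", so the voice and
    # text records can be cross-referenced later.
    text_file_name = "t" + voice_file_name[1:]
    text_data_db[text_file_name] = text
    return text_file_name
```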
  • Next, in SA5, the processing unit 333 stores utterance recording information. Specifically, each item of information in FIG. 4 (that is, the source information, the destination information, the date and time information, the voice data identification information, and the text data identification information) is specified as follows and then stored.
  • As for the source information and the destination information, for example, the terminal ID of the calling terminal and the terminal ID of the call destination, which are transmitted from the calling terminal of the medical system 100 to the management server 3 when the storage notification process is started, are acquired; the acquired terminal ID of the calling terminal is specified as the source information, and the acquired terminal ID of the call destination is specified as the destination information. Here, for example, the first call terminal 1 making the call transmits "IDd1", which is its own terminal ID, and "IDd2", which is the called party's terminal ID, to the management server 3, so "IDd1" is specified as the source information and "IDd2" is specified as the destination information.
  • As for the date and time information, the date and time at which the storage notification process was started (that is, for example, the date and time of the call to the call-destination terminal) is acquired by accessing a clocking unit (for example, a clock circuit such as a known timer circuit), not shown, and the date and time information corresponding to the acquired date and time is specified. Here, for example, "07021714" (July 2, 17:14) is specified.
  • As for the voice data identification information and the text data identification information, the file name of the voice data stored at SA3 and the file name of the text data stored at SA4 are identified. Here, for example, "vFile3" is identified as the voice data identification information and "tFile3" is identified as the text data identification information.
  • Then, the specified information is stored in the utterance recording information DB 322, whereby the utterance recording information is stored. Here, for example, each item of information in the third row from the top of FIG. 4 is stored.
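The utterance recording information of FIG. 4 can be modeled as a simple record. The field names below are hypothetical, but the values follow the running example.

```python
from dataclasses import dataclass

@dataclass
class UtteranceRecord:
    """One row of the utterance recording information DB 322 (sketch)."""
    source: str         # terminal ID of the calling terminal, e.g. "IDd1"
    destination: str    # terminal ID of the call destination, e.g. "IDd2"
    datetime_info: str  # "MMDDhhmm", e.g. "07021714" = July 2, 17:14
    voice_file: str     # voice data identification information, e.g. "vFile3"
    text_file: str      # text data identification information, e.g. "tFile3"

def store_record(db, record):
    # The DB is modeled as a plain list of rows.
    db.append(record)
    return record
```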
  • Next, in SA6, the processing unit 333 sends a notification for confirming the voice data or the text data to the first user or the second user. Specifically, the notification is performed by displaying the source-side notification image G1 on the display 13 of the first call terminal 1 and displaying the destination-side notification image G2 on the display 23 of the second call terminal 2.
  • The "source-side notification image" G1 is information related to the uttered voice, and is an image displayed on the call terminal that made the call in the medical system 100; for example, it is an image including the source-side message information G11, the source-side voice reproduction button G12, and the source-side text display button G13.
  • The source-side message information G11 is information for specifying that the first user has made a request, and more specifically, is text information specifying the date and time of the request and the doctor of the request destination. The source-side voice reproduction button G12 is a button for reproducing the stored voice data, and the source-side text display button G13 is a button for displaying the stored text data.
  • The "destination-side notification image" G2 is information related to the uttered voice, and is an image displayed on the call terminal that was called (that is, the call destination) in the medical system 100; it is an image including the destination-side message information G21, the destination-side voice reproduction button G22, and the destination-side text display button G23.
  • The destination-side message information G21 is information for specifying that the second user has received a request, and more specifically, is text information specifying the date and time when the request was made and the doctor who is the request source. The destination-side voice reproduction button G22 is a button for reproducing the stored voice data, and the destination-side text display button G23 is a button for displaying the stored text data.
  • Then, the processing unit 333 displays the source-side notification image G1 and the destination-side notification image G2 of FIG. 1 based on the terminal information of FIG. 3 and the utterance recording information of FIG. 4.
  • Specifically, as for the source-side notification image G1, the date and time information of FIG. 4 is acquired, and the date and time corresponding to the acquired date and time information is displayed on the left side of the ":" of the source-side message information G11 of FIG. 1. Further, the destination information of FIG. 4 is acquired, the doctor name information corresponding to the terminal ID of the acquired destination information is acquired with reference to FIG. 3, and the acquired doctor name information is displayed on the right side of the ":" together with a message addressed to that doctor (for example, "To Dr. ..."), thereby forming the source-side message information G11.
  • Further, the voice data identification information of FIG. 4 is acquired, a link to the voice data specified by the acquired voice data identification information among the voice data stored in the voice data DB 323 of FIG. 2 is generated, the generated link is associated with the source-side voice reproduction button G12, and then the source-side voice reproduction button G12 is displayed. Similarly, the text data identification information of FIG. 4 is acquired, a link to the text data specified by the acquired text data identification information among the text data stored in the text data DB 324 of FIG. 2 is generated, the generated link is associated with the source-side text display button G13, and then the source-side text display button G13 is displayed.
  • Here, for example, the date and time information "07021714" is acquired, and "July 2, 17:14", the date and time corresponding to the acquired date and time information, is displayed on the left side of the ":" of the source-side message information G11 of FIG. 1.
  • As for the destination-side notification image G2, similarly, the date and time information of FIG. 4 is acquired, and the date and time corresponding to the acquired date and time information is displayed on the left side of the ":" of the destination-side message information G21 of FIG. 1. Further, the source information of FIG. 4 is acquired, the doctor name information corresponding to the terminal ID of the acquired source information is acquired with reference to FIG. 3, and the acquired doctor name information is displayed on the right side of the ":", thereby forming the destination-side message information G21. Further, the voice data identification information and the text data identification information of FIG. 4 are acquired and associated with the destination-side voice reproduction button G22 and the destination-side text display button G23 in the same manner as described above.
  • Here, for example, the date and time information "07021714", the source information "IDd1", the voice data identification information "vFile3", and the text data identification information "tFile3" are acquired, and the destination-side notification image G2 of FIG. 1 is displayed.
  • In addition, a link to the voice data corresponding to the "request utterance regarding drug administration" is associated with the destination-side voice reproduction button G22 of the destination-side notification image G2, and a link to the text data corresponding to the "request utterance regarding drug administration" is associated with the destination-side text display button G23. This completes the storage notification process.
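Assembling the source-side notification image contents from the stored information can be sketched as below. The date format, message wording, link paths, and the example doctor name are illustrative assumptions only.

```python
def build_source_notification(record, terminal_db):
    """Sketch of SA6: build the pieces of the source-side notification
    image G1 (message G11, link targets for buttons G12 and G13)."""
    dt = record["datetime"]  # "MMDDhhmm", e.g. "07021714"
    when = f"{int(dt[:2])}/{int(dt[2:4])} {dt[4:6]}:{dt[6:8]}"
    # FIG. 3 lookup: doctor name for the call-destination terminal ID
    doctor = terminal_db[record["destination"]]
    return {
        "message": f"{when}: requested Dr. {doctor}",     # G11
        "voice_link": "/voice/" + record["voice_file"],   # G12 target
        "text_link": "/text/" + record["text_file"],      # G13 target
    }
```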
  • Then, the doctor who is the request source (the first user) can recognize that he or she has made a request by telephone by visually recognizing the source-side message information G11 of the source-side notification image G1. Further, when the source-side voice reproduction button G12 is pressed, the control unit 17 of the first call terminal 1 outputs the voice data (information related to the uttered voice) corresponding to the pressed source-side voice reproduction button G12 via the speaker 14 and reproduces it, so that the doctor who is the request source can confirm the content of the "request utterance regarding drug administration" by voice.
  • Further, when the source-side text display button G13 is pressed, the control unit 17 of the first call terminal 1 displays and outputs the text data (information related to the uttered voice) corresponding to the pressed source-side text display button G13 via the display 13, so that the doctor who is the request source can confirm the content of the "request utterance regarding drug administration" as text.
  • Likewise, the doctor who is the request destination (the second user) can recognize that he or she has received a request by telephone by visually recognizing the destination-side message information G21 of the destination-side notification image G2.
  • Further, when the destination-side voice reproduction button G22 is pressed, the control unit 27 of the second call terminal 2 outputs the voice data (information related to the uttered voice) corresponding to the pressed destination-side voice reproduction button G22 via the speaker 24 and reproduces it, so that the doctor who is the request destination can confirm the content of the "request utterance regarding drug administration" by voice.
  • Further, when the destination-side text display button G23 is pressed, the control unit 27 of the second call terminal 2 displays and outputs the text data (information related to the uttered voice) corresponding to the pressed destination-side text display button G23 via the display 23, so that the doctor who is the request destination can confirm the content of the "request utterance regarding drug administration" as text.
  • FIG. 6 is a flowchart of the drug information output process.
  • The "drug information output process" is a process including the usage information output process, for example, a process of outputting information related to the use of a drug; as an example, it is a process of outputting information on the side effects of a drug. The timing at which the drug information output process is executed is arbitrary, but, for example, it is repeatedly executed every predetermined time (for example, every 12 to 24 hours); the following description assumes that execution of the drug information output process has been started.
  • First, the processing unit 333 acquires data in SB1 of FIG. 6. Specifically, for example, the text data of the text data DB 324 of the recording unit 32 of FIG. 2 is acquired. Since the drug information output process is repeatedly performed at predetermined time intervals as described above, in order to prevent the same text data of the text data DB 324 from being processed redundantly a plurality of times, the file name of the text data acquired at SB1 is recorded in the recording unit 32, and only text data whose file name is not recorded in the recording unit 32 is acquired. Here, for example, the text data corresponding to the "request utterance regarding drug administration" having the file name "tFile3" is acquired.
  • Next, in SB2, the processing unit 333 determines whether to output information. Specifically, although the method is arbitrary, for example, it is determined whether the text data acquired at SB1 includes predetermined keywords, and whether to output information is determined based on the determination result. The "predetermined keywords" are keywords used to determine whether or not to output information; they relate to, for example, information contributing to the business of a drug provider (for example, a pharmaceutical company), and are, for example, keywords set by the provider side.
  • When the text data acquired in SB1 does not include all of the predetermined keywords, it is determined that the information is not to be output (NO in SB2), and the process ends. On the other hand, when the text data acquired in SB1 includes the predetermined keywords, it is determined that the information is to be output (YES in SB2), and the process proceeds to SB3.
  • Here, for example, the text data acquired at SB1, "The patient P1 was administered the drug M1 and developed a complication of compression pleural effusion on July 1, and a side effect of anorexia also occurred. Please administer the drug M2 to the patient P1 by 9:00 a.m. on July 3.", contains all of the keywords "drug", "side effect", and "complication", so it is determined that the information is to be output.
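The SB2 determination reduces to an all-keywords check. The keyword tuple below mirrors the example; in practice the keywords would be set by the provider side.

```python
# Example keywords set by the drug provider side (from the embodiment's example)
PREDETERMINED_KEYWORDS = ("drug", "side effect", "complication")

def should_output_information(text_data, keywords=PREDETERMINED_KEYWORDS):
    """SB2 sketch: output information only if the text data includes
    all of the predetermined keywords."""
    return all(keyword in text_data for keyword in keywords)
```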
  • Next, the processing unit 333 outputs information at SB3 in FIG. 6. Specifically, although the method is arbitrary, for example, assuming that the management server 3 is communicably connected to a terminal device (for example, a smartphone or a stationary computer) on the drug provider side, the text data acquired at SB1 is output by transmitting it to the terminal device on the drug provider side.
  • Here, for example, the text data (information related to the uttered voice) corresponding to the "request utterance regarding drug administration" is transmitted, and the drug provider side that has received this text data can use the received text data to change the usage of an existing drug, improve an existing drug, or develop a new drug. This completes the drug information output process.
  • As described above, according to the present embodiment, the uttered voice related processing (specifically, the data recognition processing and the usage information output processing), which is processing related to the uttered voice, is performed based on at least one of the voice data corresponding to the uttered voice received by the reception unit 331 and the text data converted by the conversion unit 332. Therefore, for example, the uttered voice related processing can be performed without the first user or the second user manually inputting the content of the utterance, so that the content of the utterance can be reliably reflected.
  • In particular, the content of an utterance at a medical site can be reflected, so that activities at the medical site can be facilitated, or the content of the utterance at the medical site can be used effectively.
  • Further, by the data recognition processing, the first user or the second user can be made to confirm the voice data or the text data, so that the content of the utterance can be reflected to the first user or the second user. Also, for example, it becomes possible to remind the first user or the second user that an utterance was made using the first call terminal 1 or the second call terminal 2.
  • Further, by the usage information output processing, for example, information regarding the use of an object (specifically, a drug) can be output, so that the content of the utterance can be used for improving the object or the like; that is, the content of the utterance can be effectively utilized.
  • Each of the above-described electrical components is functionally conceptual and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of each part is not limited to the illustrated one; all or a part thereof can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
  • the “system” in the present application is not limited to one configured by a plurality of devices, but includes one configured by a single device.
  • the “device” in the present application is not limited to one configured by a single device, but includes one configured by a plurality of devices.
  • the data structure of each piece of information (including DB) described in the above embodiment may be arbitrarily changed.
  • the management server 3 may be omitted by distributing the function of the management server 3 to the first call terminal 1 or the second call terminal 2.
  • each process described in the above embodiment may be arbitrarily rearranged, omitted, changed, or a new process may be added.
  • For example, in SA4 of FIG. 5, the conversion unit 332 of the management server 3 may convert the uttered voice transmitted to the management server 3 into text data each time it is received, without waiting for the storage of the voice data at SA3. As for the uttered voice related processing, the processing unit 333 may be configured to perform each process using only the text data, using only the voice data, or using both the voice data and the text data (the same applies to the uttered voice related processing of the modifications).
  • Further, the "predetermined keywords" in SB2 of FIG. 6 may be keywords related to information contributing to the business of a provider of medical devices (for example, a medical device maker); for example, "stomach cancer surgery" and "endoscopic surgery" may be set as keywords by the provider side, and the management server 3 may output the information to the terminal device of the provider of the medical devices at SB3. In this case, for example, the medical device maker can give the doctor a manual for operating an endoscopic surgery device used in stomach cancer surgery before the operation, or ask the doctor about the feeling of use of the device after the operation, and can thus use the information for sales activities or product development (improvement) activities.
  • Further, the text data displayed when the source-side text display button G13 or the like of FIG. 1 of the above-described embodiment is pressed may be displayed in association with the person who spoke. In this case, for example, each utterance content may be converted into text data in association with the speaker, and the text data may then be displayed in association with the speaker.
  • Further, instead of (or in addition to) the display output, the information corresponding to the display content may be output as voice from the speaker of each device (for example, the speaker 14 of the first call terminal 1), may be printed on paper by providing a printing means such as a printer, or may be output by communication via the communication unit of each device (for example, the communication unit 11 of the first call terminal 1).
  • Further, in the above-described embodiment, the case where the processing unit 333 of the management server 3 in FIG. 2 performs the data recognition processing and the usage information output processing as the uttered voice related processing has been described, but the present invention is not limited to this. For example, the processing unit 333 may be configured to perform the request execution support processing described below (specifically, request recognition processing and request information request processing) or the state information output processing, based on the voice data of the voice data DB 323 and the text data of the text data DB 324. These processes may be performed together with the process of SA6 of FIG. 5 (or the processes of SB1 to SB3 of FIG. 6), or in place of them.
  • As for the request execution support processing, the reception unit 331 receives an utterance related to a request from the first user to the second user as the uttered voice, and the voice data and the text data are stored in the same manner as in the embodiment. Then, the processing unit 333 performs a reminder as the request recognition processing, outputs a ToDo list as the request recognition processing, or performs the request information request processing, based on the voice data of the voice data DB 323 and the text data of the text data DB 324. In this process, it is necessary for the management server 3 side to identify that the uttered voice received by the reception unit 331 is related to a request, and to process the identified uttered voice; for example, each process may be performed after identifying that the utterance is related to a request by a predetermined operation by the user (for example, inputting "11" via the operation unit 12 or the like when making a call) or by a predetermined utterance by the user (for example, uttering a predetermined keyword such as "request item").
  • As for the reminder, the processing unit 333 sets the second call terminal 2 to perform the following operation: until the second user performs a predetermined operation (an operation to end the output of the reminder information) at the second call terminal 2, a message indicating that there is a request (for example, a message such as "A request exists; please check.") is displayed on the display 23 at predetermined time intervals (for example, every 1 to 2 hours, or every 12 to 24 hours) or at predetermined times (for example, 9 a.m., 1 p.m., and 5 p.m.), or the message is output as voice via the speaker 24, or, if a vibrator function is installed, vibration is performed; a reminder is thereby set.
  • As for the ToDo list, the processing unit 333 sets the first call terminal 1 and the second call terminal 2 to perform the following operation: the request content and a check box are associated with each other and displayed in a list on the display 23 of the second call terminal 2, and when the second user performs a predetermined operation at the second call terminal 2 (an operation to check the check box to show that the request has been executed), a check is displayed in the displayed check box. In this case, the second call terminal 2 transmits the fact that the box has been checked to the first call terminal 1, which is the request source side; when the first call terminal 1 receives this check, a message indicating that the request has been executed (for example, a message such as "The request has been executed.") is displayed on the display 13 of the first call terminal 1, or the message is output as voice via the speaker 14, or vibration is performed when a vibrator function is installed.
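The ToDo-list interaction can be sketched as follows. The callback-based notification is an assumption standing in for the transmission from the second call terminal 2 back to the first call terminal 1.

```python
class RequestTodoList:
    """Sketch of the ToDo-list variant of the request execution support
    processing: list requests with check boxes and notify the request
    source when one is checked off."""

    def __init__(self, notify_source):
        # notify_source stands in for the message delivered to the first
        # call terminal 1 (display, voice output, or vibration).
        self.notify_source = notify_source
        self.items = []  # each item: {"content": ..., "done": ...}

    def add_request(self, content):
        self.items.append({"content": content, "done": False})

    def check_off(self, index):
        # The second user's check operation at the second call terminal 2.
        self.items[index]["done"] = True
        self.notify_source("The request has been executed: "
                           + self.items[index]["content"])
```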
  • By the request execution support processing, for example, it becomes possible to support the second user in executing the request corresponding to the uttered voice, and thereby to facilitate the activities of the first user and the second user. Further, by the request recognition processing, for example, the second user can be made to recognize the request, so that the second user can be reminded that there is a request. In particular, when the second user is made to recognize a request at a medical site, activities at the medical site can be facilitated.
  • In the request information request processing, the processing unit 333 first acquires the text data of the text data DB 324 and determines whether there is insufficient information in the request corresponding to the acquired text data. The "insufficient information" is information lacking in the request of the uttered voice; specifically, it is information lacking with respect to predetermined information. For example, in a case where the execution deadline is the predetermined information, if the execution deadline is not included in the request, the execution deadline corresponds to the insufficient information. In the following, this execution deadline is taken as an example.
  • When the processing unit 333 determines that there is insufficient information in the request corresponding to the acquired text data, a message indicating that there is insufficient information (for example, "There is no mention of the deadline; please call again to notify the deadline.") is displayed on the display 13 of the first call terminal 1, and the first user is requested to make up for the missing information. On the other hand, when the processing unit 333 determines that there is no insufficient information in the request corresponding to the acquired text data, the above-described message need not be output, since there is no need to compensate for insufficient information; in this case, a message indicating that the request has been made normally (for example, a message such as "The request has been made without excess or deficiency.") may be displayed on the display 13 of the first call terminal 1 to make the first user recognize that the request has been performed normally.
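A minimal sketch of the insufficient-information check, assuming the execution deadline is the predetermined item and that a crude textual pattern is enough to detect it; a real system would use a proper date/time parser or the known conversion results.

```python
import re

def find_insufficient_info(request_text):
    """Request information request processing sketch: return the list of
    predetermined items missing from the request text. Only the execution
    deadline is checked here, detected by a simple 'by <time/date>' pattern."""
    missing = []
    # e.g. "by 9:00 a.m. on July 3" -- a hypothetical heuristic, not a spec
    if not re.search(r"\bby\b.*\d", request_text):
        missing.append("execution deadline")
    return missing
```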
  • As for the state information output processing, the reception unit 331 receives an utterance relating to the state of a patient of the first user or the second user as the uttered voice, and the voice data and the text data are stored in the same manner as described in the embodiment. Then, the processing unit 333 performs processing for outputting information related to the state of the patient based on the voice data of the voice data DB 323 and the text data of the text data DB 324. In this process, it is necessary for the management server 3 side to identify that the uttered voice received by the reception unit 331 is related to the state of the patient, and to process the identified uttered voice; for example, each process may be performed after identifying that the utterance is related to the state of the patient by a predetermined operation of the user (for example, inputting "99" via the operation unit 12 when making a call) or by a predetermined utterance of the user (for example, uttering a predetermined keyword such as "the patient's condition").
  • Specifically, the reception unit 331 receives and stores, as the uttered voice, the phase of treatment, that is, the state of the patient (for example, an inquiry result or an examination result) and the treatment performed for the disease (for example, an administered drug or an operation performed); the processing unit 333 then acquires the text data of the text data DB 324 and, based on the acquired text data, displays and outputs the stored phase of treatment on the display 13 of the first call terminal 1, on the display 23 of the second call terminal 2, or on a terminal device (not shown) (for example, a smartphone or a stationary computer) of a managing doctor who manages the doctor using the first call terminal 1 or the second call terminal 2.
  • In this case, a predetermined standard path (a model flow of treatment phases, that is, of the patient's condition and the treatment performed for the disease, until the patient is cured) may be stored, and the actually performed treatment may be evaluated by comparing the standard path with the actually performed treatment, with the evaluation result displayed.
  • Further, the control unit 33 of the management server 3 may be configured to perform the following automatic extraction processing, assistant processing, or dictionary processing via each call terminal based on the uttered voice or on the stored voice data or text data.
  • The "automatic extraction processing" is processing of automatically extracting information that is particularly important from the medical point of view and displaying and managing it so as to prevent oversight; it is a concept including, for example, processing of highlighting the corresponding part of the text based on the utterance when a word such as "lifesaving", "emergency", "medication", "contraindication", or "allergy" is uttered, or of issuing a warning when a correspondence confirmation button is not pressed within a predetermined time.
  • The "assistant processing" is processing to support the business activities of the doctor; it is a concept including, for example, processing of recording, when the doctor gives informed consent to the patient or the family, the contents explained and the conversation with the patient and the like, and keeping them as a record of the informed consent.
  • The "dictionary processing" is processing of outputting an answer to an utterance as voice according to dictionary information stored in advance.
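The highlighting side of the automatic extraction processing can be sketched as a keyword pass over the converted text. The marker syntax and the keyword tuple are illustrative only.

```python
# Medically important words taken from the embodiment's example list
IMPORTANT_WORDS = ("lifesaving", "emergency", "medication", "allergy")

def highlight_important(text, keywords=IMPORTANT_WORDS):
    """Automatic extraction sketch: mark important words in the text so
    the display side can highlight them and prevent oversight."""
    for keyword in keywords:
        text = text.replace(keyword, "[[" + keyword + "]]")
    return text
```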
  • The information processing system according to appendix 1 comprises: receiving means for receiving an uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal; converting means for converting voice data corresponding to the uttered voice received by the receiving means into text data corresponding to the voice data; and processing means for performing uttered voice related processing, which is processing related to the uttered voice received by the receiving means, based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the converting means.
  • In the information processing system according to appendix 2, in the information processing system according to appendix 1, the processing means performs, as the uttered voice related processing, data recognition processing for causing the first user or the second user to confirm the voice data or the text data.
  • In the information processing system according to appendix 3, in the information processing system according to appendix 1 or 2, the receiving means receives, as the uttered voice, an utterance relating to a request from the first user to the second user, and the processing means performs, as the uttered voice related processing, request execution support processing for causing the second user to execute the request corresponding to the uttered voice received by the receiving means.
  • In the information processing system according to appendix 4, in the information processing system according to appendix 3, the processing means performs, as the request execution support processing, request recognition processing for causing the second user to recognize the request corresponding to the uttered voice received by the receiving means.
  • the processing means determines whether or not there is insufficient information in a request corresponding to the speech received by the reception means, When it is determined that there is the shortage information, request information request processing for requesting the first user to compensate for the deficiency information is performed as the request execution support processing.
  • the information processing system in the information processing system according to any one of appendixes 1 to 5, wherein the receiving means receives an utterance relating to use of an object as the speech, and the processing means Usage information output processing for outputting information on usage of the object is performed as the uttered voice related processing.
  • the information processing system in the information processing system according to any one of appendixes 1 to 6, further comprising: speech storage means for storing the text data converted by the conversion means, the first use The person or the second user is a medical worker, and the receiving means receives an utterance relating to the state of the patient of the first user or the second user as the speech, and the processing means at least A state information output process for outputting information on the state of the patient is performed as the speech voice related process based on the text data stored in the speech storage means.
  • The information processing program according to appendix 8 causes a computer to function as: receiving means for receiving the uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being made using the first call terminal or the second call terminal; conversion means for converting voice data corresponding to the uttered voice into text data; and processing means for performing uttered-voice-related processing, which is processing related to the uttered voice received by the receiving means.
  • By performing uttered-voice-related processing based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the conversion means, the uttered-voice-related processing can be carried out without the first user or the second user manually entering the content of the utterance, so the content of the utterance can be reliably reflected.
  • In particular, when applied to speech at a medical site, the content of the speech can be reflected, which makes it possible to facilitate activities at the medical site or to put the content of the speech to effective use.
  • By performing the data recognition processing, the first user or the second user can, for example, be made to confirm the voice data or the text data, so the content of the utterance can be reflected to that user. It also becomes possible, for example, to remind the first user or the second user that an utterance was made using the first call terminal or the second call terminal.
  • By performing the request recognition processing, the second user can, for example, be made to recognize the request, and can thus be reminded that a request was made. In particular, when the second user is made to recognize a request at a medical site, activities at the medical site can be facilitated.

Abstract

[Purpose] To provide an information processing system and an information processing program for allowing the contents of utterances to be reflected. [Solution] An information processing system is provided with: a reception unit 331 that, in relation to the utterance between a first user using a first telephone communication terminal 1 and a second user using a second telephone communication terminal 2, receives the utterance voices of the utterance performed by use of the first telephone communication terminal 1 or the second telephone communication terminal 2; a conversion unit 332 that converts voice data corresponding to the utterance voices received by the reception unit 331 to text data corresponding to the voice data; and a processing unit 333 that, on the basis of at least one of the voice data corresponding to the utterance voices received by the reception unit 331 or the text data as converted by the conversion unit 332, performs an utterance-voice-related processing that is a processing related to the utterance voices received by the reception unit 331.

Description

INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING PROGRAM
 The present invention relates to an information processing system and an information processing program.
 Conventionally, at medical sites, communication between medical workers (for example, doctors and nurses) has generally taken place as spoken conversation using portable terminals (for example, portable telephones). As a result, after a conversation on a portable terminal ends, a medical worker may forget the content of the conversation, or even forget that the conversation took place at all, so the content of the conversation is sometimes never reflected anywhere.
 To address this, a medical information input device that stores medical information and supports medical workers has been proposed (see Patent Document 1). With this medical information input device, a medical worker stores medical information (for example, data on treatment the medical worker performed on a patient), and that information is reflected at the medical site.
JP 2014-211828 A
 However, when the content of conversations between medical workers is to be reflected using a device such as the medical information input device of Patent Document 1, the medical workers must enter the information manually. Because medical workers at actual medical sites are extremely busy, it is difficult for them to set aside time to enter medical information by hand, and it may therefore be difficult to reflect the content of their conversations.
 The present invention has been made in view of the above, and an object of the invention is to provide an information processing system and an information processing program capable of reflecting the content of utterances.
 To solve the problems described above and achieve the object, the information processing system according to claim 1 comprises: receiving means for receiving the uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being made using the first call terminal or the second call terminal; conversion means for converting voice data corresponding to the uttered voice received by the receiving means into text data corresponding to that voice data; and processing means for performing uttered-voice-related processing, which is processing related to the uttered voice received by the receiving means, based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the conversion means.
 The information processing system according to claim 2 is the information processing system according to claim 1, wherein the processing means performs, as the uttered-voice-related processing, data recognition processing for causing the first user or the second user to confirm the voice data or the text data.
 The information processing system according to claim 3 is the information processing system according to claim 1 or 2, wherein the receiving means receives, as the uttered voice, an utterance concerning a request from the first user to the second user, and the processing means performs, as the uttered-voice-related processing, request execution support processing for causing the second user to execute the request corresponding to the uttered voice received by the receiving means.
 The information processing system according to claim 4 is the information processing system according to claim 3, wherein the processing means performs, as the request execution support processing, request recognition processing for causing the second user to recognize the request corresponding to the uttered voice received by the receiving means.
 The information processing system according to claim 5 is the information processing system according to claim 3 or 4, wherein the processing means determines whether information is missing from the request corresponding to the uttered voice received by the receiving means and, when it determines that information is missing, performs, as the request execution support processing, request information request processing that asks the first user to supply the missing information.
 The information processing system according to claim 6 is the information processing system according to any one of claims 1 to 5, wherein the receiving means receives, as the uttered voice, an utterance concerning the use of an object, and the processing means performs, as the uttered-voice-related processing, usage information output processing for outputting information about the use of the object.
 The information processing system according to claim 7 is the information processing system according to any one of claims 1 to 6, further comprising utterance storage means for storing at least the text data converted by the conversion means, wherein the first user or the second user is a medical worker; the receiving means receives, as the uttered voice, an utterance concerning the condition of a patient of the first user or the second user; and the processing means performs, as the uttered-voice-related processing, condition information output processing for outputting information about the patient's condition based on at least the text data stored in the utterance storage means.
 The information processing program according to claim 8 causes a computer to function as: receiving means for receiving the uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being made using the first call terminal or the second call terminal; conversion means for converting voice data corresponding to the uttered voice received by the receiving means into text data corresponding to that voice data; and processing means for performing uttered-voice-related processing, which is processing related to the uttered voice received by the receiving means, based on at least one of the voice data and the text data.
 According to the information processing system of claim 1, uttered-voice-related processing, which is processing related to the uttered voice, is performed based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the conversion means. The uttered-voice-related processing can therefore be performed without the first user or the second user manually entering the content of the utterance, so the content of the utterance can be reliably reflected. In particular, when applied to speech at a medical site, the content of the speech can be reflected, which makes it possible to facilitate activities at the medical site or to put the content of the speech to effective use.
 According to the information processing system of claim 2, performing the data recognition processing allows, for example, the first user or the second user to confirm the voice data or the text data, so the content of the utterance can be reflected to that user. It also becomes possible, for example, to remind the first user or the second user that an utterance was made using the first call terminal or the second call terminal.
 According to the information processing system of claim 3, performing the request execution support processing makes it possible, for example, to help the second user execute the request corresponding to the uttered voice, thereby facilitating the activities of the first user and the second user.
 According to the information processing system of claim 4, performing the request recognition processing allows, for example, the second user to be made aware of the request, so the second user can be reminded that a request was made. In particular, when the second user is made to recognize a request at a medical site, activities at the medical site can be facilitated.
 According to the information processing system of claim 5, performing the request information request processing makes it possible, for example, to supply information missing from the uttered request, so an appropriate request can be made and the activities of the first user and the second user can be facilitated further.
 According to the information processing system of claim 6, performing the usage information output processing makes it possible, for example, to output information about the use of an object, so the content of the utterance can be put to use in improving the object and can thus be used effectively.
 According to the information processing system of claim 7, performing the condition information output processing makes it possible, for example, to output information about a patient's condition, so the content of the utterance can be put to use in treating the patient and can thus be used effectively.
 According to the information processing program of claim 8, uttered-voice-related processing, which is processing related to the uttered voice, is performed based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the conversion means. The uttered-voice-related processing can therefore be performed without the first user or the second user manually entering the content of the utterance, so the content of the utterance can be reliably reflected. In particular, when applied to speech at a medical site, the content of the speech can be reflected, which makes it possible to facilitate activities at the medical site or to put the content of the speech to effective use.
FIG. 1 is a diagram showing a usage example of the medical system according to the present embodiment. FIG. 2 is a block diagram of the medical system. FIG. 3 is a diagram illustrating terminal information. FIG. 4 is a diagram illustrating utterance record information. FIG. 5 is a flowchart of storage notification processing. FIG. 6 is a flowchart of drug information output processing.
 Hereinafter, embodiments of an information processing system and an information processing program according to the present invention will be described in detail with reference to the drawings. The present invention, however, is not limited by these embodiments.
[Basic concept of the embodiment]
 First, the basic concept of the embodiment will be described. The embodiment relates generally to an information processing system and an information processing program. Here, an "information processing system" is a system that processes arbitrary information, and specifically a system that processes utterance information; this concept includes, for example, a dedicated system for processing utterance information, or a system realized by adding an utterance-information processing function to a general-purpose system. The "information processing system" also includes, for example, a system realized by a single consolidated computer, or a system realized by a plurality of distributed computers that can communicate with one another; as one example, it comprises receiving means, conversion means, and processing means.
 "Receiving means" is means for receiving the uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being made using the first call terminal or the second call terminal.
 An "utterance between the first user using the first call terminal and the second user using the second call terminal" is an utterance from the first user to the second user or from the second user to the first user; specifically, this concept includes a two-way dialogue in which both users converse at the same time, as well as a one-way utterance made by one user to the other at the same time or at a different time.
 The "first call terminal" is a call terminal used by the first user; this concept includes, for example, a portable terminal such as a mobile phone like a smartphone, a PHS telephone, or a tablet device, or a fixed terminal such as a desk telephone or a personal computer. The "second call terminal" is another call terminal, distinct from the first call terminal, used by the second user; this concept likewise includes, for example, the portable terminals or fixed terminals described above.
 The "first user" is a person who uses the first call terminal, for example a person registered as the user of the first call terminal; as one example, this concept includes a medical worker such as a doctor or a nurse, or any person other than a medical worker (for example, a clerk or a driver at a shipping company). The "second user" is a person other than the first user who uses the second call terminal, for example a person registered as the user of the second call terminal; as one example, this concept likewise includes the medical workers described above, or any person other than a medical worker (for example, a clerk or a driver at a shipping company).
 The "uttered voice" is the voice produced by an utterance, for example a voice that the information processing system can receive via the first call terminal or the second call terminal.
 "Conversion means" is means for converting voice data corresponding to the uttered voice received by the receiving means into text data corresponding to that voice data. "Voice data" is, for example, information specifying the waveform corresponding to the air vibrations of the voice, and "text data" is, for example, character information.
 "Processing means" is means for performing uttered-voice-related processing, which is processing related to the uttered voice received by the receiving means, based on at least one of the voice data corresponding to the uttered voice received by the receiving means and the text data converted by the conversion means.
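 Taken together, the receiving means, conversion means, and processing means form a receive → convert → process pipeline. The sketch below is only an illustration of that flow; every function name is hypothetical, and the stub transcription stands in for a real speech recognition engine, which the disclosure does not specify.

```python
# Hypothetical sketch of the receive -> convert -> process pipeline.
# None of these names come from the disclosure itself.

def transcribe(voice_data: bytes) -> str:
    """Conversion means: turn voice data into text data (stubbed here)."""
    # A real system would call a speech recognition engine at this point.
    return "please take an X-ray of patient A"

def data_recognition_process(text: str) -> str:
    """One uttered-voice-related process: ask a user to confirm the text."""
    return f"Please confirm this utterance: {text!r}"

def handle_utterance(voice_data: bytes) -> str:
    """Receiving means hands the received voice data through the pipeline."""
    text = transcribe(voice_data)           # conversion means
    return data_recognition_process(text)   # processing means

print(handle_utterance(b"\x00\x01"))
```

 In the embodiment, such a flow would run whenever an utterance made via the first or second call terminal is received, without either user typing anything by hand.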
 "Uttered-voice-related processing" is processing related to the uttered voice received by the receiving means; specifically, it is processing performed based on at least one of the voice data and the text data, and this concept includes, for example, data recognition processing, request execution support processing, usage information output processing, and condition information output processing.
 "Data recognition processing" is processing that outputs information for causing the first user or the second user to confirm the voice data or the text data; this concept includes, for example, processing that notifies a user of the voice data or the text data. "Request execution support processing" is processing that outputs information for causing the second user to execute the request corresponding to the uttered voice; this concept includes, for example, request recognition processing and request information request processing. "Request recognition processing" is processing that outputs information for causing the second user to recognize the request corresponding to the uttered voice; this concept includes, for example, processing that notifies the user of the request content as a memorandum (for example, a so-called reminder or to-do list). "Request information request processing" is processing that determines whether information is missing from the request corresponding to the uttered voice and, when it determines that information is missing, outputs information asking the first user to supply the missing information; this concept includes, for example, processing that determines whether any given piece of information, such as a deadline for carrying out the request, is present and requests it as appropriate. "Usage information output processing" is processing for outputting information about the use of an object, for example information about the use of a drug, information about the use of a medical device, or information about the use of some other thing; as one example, this concept includes information on a drug's side effects, or information identifying that a medical device may be used (specifically, information identifying the content of a planned surgical operation). The "object" is the thing the utterance is about; this concept includes, for example, things related to medical care, such as drugs and medical devices, or any thing unrelated to medical care. "Condition information output processing" is processing for outputting information about a patient's condition; this concept includes, for example, processing for outputting information related to the state of the patient's treatment.
 In the embodiment described below, the "first call terminal" and "second call terminal" are smartphones, the "first user" and "second user" are doctors, and data recognition processing and usage information output processing are performed as the "uttered-voice-related processing".
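 Of the two processes adopted in this embodiment, the usage information output processing can also be pictured concretely. In the sketch below the drug names and side-effect notes are invented placeholders, and substring matching against the transcribed text is only one conceivable way of spotting a mention of a drug; a real system would draw on a medical database.

```python
# Toy dictionary of drug usage information (placeholder contents only).
DRUG_USAGE_INFO = {
    "drug-x": "Drug-X: may cause drowsiness; avoid co-prescribing with Drug-Y.",
    "drug-y": "Drug-Y: monitor liver function during use.",
}

def usage_information_output(transcribed_text: str) -> list:
    """Detect drugs mentioned in an utterance and return their usage info."""
    text = transcribed_text.lower()
    return [info for name, info in DRUG_USAGE_INFO.items() if name in text]

print(usage_information_output("I will prescribe Drug-X to the patient"))
```

 Output of this kind could, for example, be shown on the doctor's smartphone during or after the call, so that the content of the utterance is put to use rather than forgotten.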
(Configuration)
 First, the configuration of the medical system according to the embodiment will be described. FIG. 1 is a diagram showing a usage example of the medical system according to the present embodiment, and FIG. 2 is a block diagram of the medical system. Although the number of call terminals included in the medical system 100 of FIG. 2 is arbitrary, the first call terminal 1 and the second call terminal 2 are specifically illustrated and described here. The first call terminal 1 and the second call terminal 2 have the same configuration as each other, but for convenience they are described with "first" and "second" attached.
 The medical system 100 of FIG. 2 is schematically configured with a first call terminal 1, a second call terminal 2, and a management server 3, which are connected so as to be able to communicate with one another wirelessly via a network. The medical system 100 may be realized as a system separate from an existing medical system (for example, a system in which a plurality of doctors or a plurality of hospitals can exchange various kinds of information using call terminals or fixed terminals), may be realized by being incorporated into such an existing medical system, or may be realized as part of the functions of a so-called social networking service (SNS); here, it is assumed to be realized by, for example, being incorporated into an existing medical system. The installation location and implementation of the management server 3 are arbitrary: it may consist of a plurality of distributed servers, or of one or more servers installed at a specific location such as a hospital's server management room; here, for convenience of explanation, it is described as a single server. The first call terminal 1 and the second call terminal 2 are described below as being carried and used by doctors working at, for example, a general hospital, which is a relatively large hospital.
(Configuration-first call terminal)
 First, the configuration of the first call terminal 1 will be described. The first call terminal 1 is, for example, a smartphone assigned to and used by a doctor in the hospital (Dr. AA), who is the first user; as one example, it comprises a communication unit 11, an operation unit 12, a display 13, a speaker 14, a microphone 15, a recording unit 16, and a control unit 17.
(Configuration - First call terminal - Communication unit)
The communication unit 11 is communication means for communicating with each call terminal of the medical system 100; for example, it performs voice communication with the second call terminal 2 via the management server 3. The specific type and configuration of the communication unit 11 are arbitrary; for example, it can be configured with a known wireless communication circuit or the like.
(Configuration - First call terminal - Operation unit)
The operation unit 12 is operation means for receiving operation input from the user. The specific type and configuration of the operation unit 12 are arbitrary; for example, it is configured as a known touch pad that is formed to be transparent or translucent and provided on the front surface of the display 13 so as to overlap its display surface, thereby forming a touch panel.
(Configuration - First call terminal - Display)
The display 13 is display means for displaying various images under the control of the control unit 17. The specific type and configuration of the display 13 are arbitrary; for example, it can be configured using a flat panel display such as a known liquid crystal display or organic EL display.
(Configuration - First call terminal - Speaker)
The speaker 14 is audio output means for outputting various sounds under the control of the control unit 17. The specific type and configuration of the speaker 14 are arbitrary; for example, it can be configured using a known audio output circuit or the like.
(Configuration - First call terminal - Microphone)
The microphone 15 is sound collection means for receiving uttered speech. The specific type and configuration of the microphone 15 are arbitrary; for example, it can be configured using known microphone components (as an example, a diaphragm, a coil, and the like).
(Configuration - First call terminal - Recording unit)
The recording unit 16 is recording means for recording the programs and various data necessary for the operation of the first call terminal 1. The recording unit 16 is configured using, for example, a flash memory as an external recording device. However, instead of or together with the flash memory, any other recording medium may be used, including a magnetic recording medium such as a hard disk or an optical recording medium such as a DVD or a Blu-ray disc.
(Configuration - First call terminal - Control unit)
The control unit 17 is control means for controlling the first call terminal 1. Specifically, it is a computer comprising a CPU, various programs interpreted and executed on the CPU (including basic control programs such as an OS and application programs launched on the OS to realize specific functions), and an internal memory such as a RAM for storing the programs and various data. In particular, the information processing program according to the embodiment is installed on the first call terminal 1 via an arbitrary recording medium or network, and thereby substantially constitutes each part of the control unit 17 (the same applies to the control unit 33 of the management server 3 described later).
(Configuration - Second call terminal)
Next, the configuration of the second call terminal 2 will be described. The second call terminal 2 is, for example, a smartphone assigned to and used by a doctor in the hospital (Dr. BB), who is the second user, and includes, as an example, a communication unit 21, an operation unit 22, a display 23, a speaker 24, a microphone 25, a recording unit 26, and a control unit 27. Each of these components of the second call terminal 2 is configured in the same manner as the identically named component of the first call terminal 1.
(Configuration - Management server)
Next, the configuration of the management server 3 will be described. The management server 3 is an information processing system and includes, for example, a communication unit 31, a recording unit 32, and a control unit 33.
(Configuration - Management server - Communication unit)
The communication unit 31 is communication means for communicating with each call terminal of the medical system 100; for example, it relays voice communication between the call terminals of the medical system 100. The specific type and configuration of the communication unit 31 are arbitrary; for example, it can be configured with known wireless communication circuits, relay circuits, and the like.
(Configuration - Management server - Recording unit)
The recording unit 32 is recording means for recording the programs and various data necessary for the operation of the management server 3. The recording unit 32 is configured using, for example, a hard disk (not shown) as an external recording device. However, instead of or together with the hard disk, any other recording medium may be used, including a magnetic recording medium such as a magnetic disk or an optical recording medium such as a DVD or a Blu-ray disc.
The recording unit 32 also includes, for example, a terminal information database 321 (hereinafter, "database" is abbreviated as "DB"), a speech record information DB 322, a voice data DB 323, and a text data DB 324.
(Configuration - Management server - Recording unit - Terminal information DB)
The terminal information DB 321 is terminal information storage means for storing terminal information. "Terminal information" is information related to each call terminal of the medical system 100. FIG. 3 illustrates terminal information. As shown in FIG. 3, the terminal information associates, for example, the item "terminal ID" and the item "doctor name information" with the information corresponding to each item. The information corresponding to the item "terminal ID" is terminal identification information that uniquely identifies each call terminal of the medical system 100 (hereinafter, "identification information" is abbreviated as "ID") (in FIG. 3, "IDd1", the terminal ID of the first call terminal 1, "IDd2", the terminal ID of the second call terminal 2, and so on). The information corresponding to the item "doctor name information" is doctor name information specifying the name of the doctor using each call terminal of the medical system 100 (in FIG. 3, shown for convenience of explanation: "AA", the name of the doctor pictured on the left side of FIG. 1 who uses the first call terminal 1, and "BB", the name of the doctor pictured on the right side of FIG. 1 who uses the second call terminal 2, and so on). Such terminal information is stored, for example, by assigning (lending) each call terminal of the medical system 100 to a doctor in the hospital and then entering the assignment (lending) result via input means (not shown; for example, a keyboard and mouse) of the management server 3.
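As a non-authoritative sketch (the patent does not specify an implementation, and the variable and field names below are illustrative), the terminal information of FIG. 3 could be modeled as records associating a terminal ID with doctor name information:

```python
# Illustrative sketch of the terminal information DB 321 (FIG. 3).
# Field names are assumptions; the patent only specifies the two items
# "terminal ID" and "doctor name information".
terminal_info_db = {
    "IDd1": {"doctor_name": "AA"},  # first call terminal 1 (Dr. AA)
    "IDd2": {"doctor_name": "BB"},  # second call terminal 2 (Dr. BB)
}

def doctor_name_for_terminal(terminal_id: str) -> str:
    """Look up the doctor name information for a given terminal ID."""
    return terminal_info_db[terminal_id]["doctor_name"]
```

Such a lookup is the kind of association the notification step described later would need when turning a terminal ID into a doctor's name for display.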
(Configuration - Management server - Recording unit - Speech record information DB)
The speech record information DB 322 is speech record information storage means for storing speech record information. "Speech record information" is information recording utterances; specifically, it records information about calls (communications) between the call terminals of the medical system 100. FIG. 4 illustrates speech record information. As shown in FIG. 4, the speech record information associates, for example, the items "source information", "destination information", "date-and-time information", "voice data identification information", and "text data identification information" with the information corresponding to each item. The information corresponding to the item "source information" is source information specifying the source of the call (communication) (in FIG. 4, the terminal ID of the originating call terminal, such as "IDd2"). The information corresponding to the item "destination information" is destination information specifying the destination of the call (communication) (in FIG. 4, the terminal ID of the destination call terminal, such as "IDd1"). The information corresponding to the item "date-and-time information" is date-and-time information specifying when the call (communication) took place (in FIG. 4, an 8-digit number specifying month, day, hour, and minute; for example, "06151401" specifies 14:01 on June 15). The information corresponding to the item "voice data identification information" is voice data identification information identifying voice data stored in the voice data DB 323 (in FIG. 4, the file name of the voice data, such as "vFile1").
The information corresponding to the item "text data identification information" is text data identification information identifying text data stored in the text data DB 324 (in FIG. 4, the file name of the text data, such as "tFile1"). Such speech record information is stored by executing the storage notification process described later.
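A minimal sketch of one row of FIG. 4 as a record type (the field names are assumptions; the patent only names the five items):

```python
from dataclasses import dataclass

# Illustrative sketch of one row of the speech record information DB 322
# (FIG. 4). All field names are assumed for readability.
@dataclass
class SpeechRecord:
    source_id: str       # source information (originating terminal ID)
    destination_id: str  # destination information (destination terminal ID)
    datetime_code: str   # 8-digit MMDDHHMM code, e.g. "06151401"
    voice_file: str      # voice data identification information, e.g. "vFile1"
    text_file: str       # text data identification information, e.g. "tFile1"

# Example row corresponding to the first entry described for FIG. 4.
record = SpeechRecord("IDd2", "IDd1", "06151401", "vFile1", "tFile1")
```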
(Configuration - Management server - Recording unit - Voice data DB)
The voice data DB 323 is voice data storage means for storing voice data. As described above, "voice data" is, for example, information specifying a waveform corresponding to the air vibrations of speech; more specifically, it is data specifying the speech of conversations between the call terminals of the medical system 100 and is stored, as an example, as voice data files compressed by a known compression method. Such voice data is stored by executing the storage notification process described later.
(Configuration - Management server - Recording unit - Text data DB)
The text data DB 324 is text data storage means for storing text data, and is also utterance storage means for storing text data converted by the conversion unit 332 described later. As described above, "text data" is, for example, character information converted from voice data by the conversion unit 332 and is stored, as an example, as text data files. Such text data is stored by executing the storage notification process described later.
(Configuration - Management server - Control unit)
The control unit 33 of FIG. 2 is control means for controlling the management server 3 and, in functional concept, includes a reception unit 331, a conversion unit 332, and a processing unit 333. The reception unit 331 is reception means for receiving the uttered speech of utterances between users of the call terminals of the medical system 100, performed using those call terminals; specifically, it receives the uttered speech of utterances between the first user using the first call terminal 1 and the second user using the second call terminal 2, performed using the first call terminal 1 or the second call terminal 2. The conversion unit 332 is conversion means for converting voice data corresponding to the uttered speech received by the reception unit 331 into text data corresponding to that voice data. The processing unit 333 is processing means for performing speech-related processing, i.e., processing related to the uttered speech received by the reception unit 331, based on at least one of the voice data corresponding to the uttered speech received by the reception unit 331 and the text data converted by the conversion unit 332. The processing performed by each part of the control unit 33 will be described later.
(Processing)
Next, the processing executed by the medical system 100 configured as described above will be described. Here, for example, the storage notification process and the medicine information output process will be described.
(Processing - Storage notification process)
First, the storage notification process will be described. FIG. 5 is a flowchart of the storage notification process (in the following description of each process, "step" is abbreviated as "S"). The "storage notification process" is a process including a data recognition process; for example, it stores voice data and text data and allows the first user or the second user to check the voice data or text data. The timing for executing the storage notification process is arbitrary. For example, execution may start when a call placed from a call terminal of the medical system 100 is answered by the destination call terminal (that is, when communication of uttered speech between the call terminals of the medical system 100 via the management server 3 has started), or when a call placed from a call terminal of the medical system 100 is not answered within a predetermined time (for example, 30 seconds) and the destination switches to answering machine mode (that is, when communication of uttered speech between a call terminal of the medical system 100 and the management server 3 has started). Here, the storage notification process is assumed to be executed when the call to the destination call terminal is answered, and is described from the point after its execution has started.
It is further assumed that the recording unit of each call terminal of the medical system 100 records its own terminal ID and a list of the terminal IDs of candidate destination call terminals (for example, a list like a so-called address book) (for example, the recording unit 16 of the first call terminal 1 records its own terminal ID "IDd1" and a list of destination candidates including "IDd2", and the recording unit 26 of the second call terminal 2 records its own terminal ID "IDd2" and a list of destination candidates including "IDd1"), and that the storage notification process is started by the calling terminal transmitting its own terminal ID and the destination terminal ID to the management server 3.
As an example, consider the case where the requesting doctor, the first user in FIG. 1, uses the first call terminal 1 to call the second call terminal 2 of the requested doctor, the second user, and, once the call is connected, makes the following request: "When drug M1 was administered to patient P1, the patient developed a pleural effusion complication on July 1 and also suffered the side effect of loss of appetite. Please administer drug M2 to patient P1 by 9:00 a.m. on July 3." (hereinafter, the "drug administration request utterance"). Since known call processing can be applied to the call between the first call terminal 1 and the second call terminal 2, only the processing characteristic of the present application performed by the management server 3 is described here.
First, in SA1 of FIG. 5, the reception unit 331 starts recording. The specifics are arbitrary, but, for example, it starts a recording process that continuously receives the uttered speech from the first call terminal 1 and from the second call terminal 2 via the communication unit 31 and accumulates it in the voice data DB 323 as voice data. When the requesting doctor speaks into the first call terminal 1, the uttered speech is collected by the microphone 15 of the first call terminal 1, transmitted to the management server 3 via the communication unit 11, relayed by the management server 3, and transmitted to the second call terminal 2; once the above recording process has started, the reception unit 331 of the management server 3 receives and records the uttered speech transmitted from the first call terminal 1 to the management server 3. Recording by the management server 3 when the requested doctor speaks into the second call terminal 2 is performed in the same manner as when the requesting doctor speaks into the first call terminal 1.
Here, for example, when the requesting doctor speaks the "drug administration request utterance" into the first call terminal 1, the uttered speech of the "drug administration request utterance" is collected by the microphone 15 of the first call terminal 1, transmitted to the management server 3 via the communication unit 11, relayed by the management server 3, and transmitted to the second call terminal 2. Once the above recording process has started, the reception unit 331 of the management server 3 receives the uttered speech transmitted from the first call terminal 1, generates voice data corresponding to the received "drug administration request utterance", and stores it in the voice data DB 323 in a file format compressed by a known compression method.
In SA2 of FIG. 5, the reception unit 331 determines whether the call has ended. The specifics are arbitrary, but, for example, it determines by a known method whether the first call terminal 1 or the second call terminal 2 has disconnected the telephone communication, and determines whether the call has ended based on that result. When neither the first call terminal 1 nor the second call terminal 2 has disconnected (that is, telephone communication between the first call terminal 1 and the second call terminal 2 is ongoing), it determines that the call has not ended (NO in SA2) and repeats SA2 until it determines that the call has ended. When the first call terminal 1 or the second call terminal 2 has disconnected, it determines that the call has ended (YES in SA2) and proceeds to SA3. Here, for example, when the requesting doctor performs an operation on the first call terminal 1 to disconnect the call, the telephone communication is disconnected and the call is determined to have ended.
In SA3 of FIG. 5, the reception unit 331 ends the recording. The specifics are arbitrary, but, for example, it ends the above recording process, combines the voice data stored between the start of recording in SA1 and the end of recording in SA3 into one file, attaches to this file a file name generated according to a predetermined algorithm (a file name that can be uniquely identified within the voice data DB 323), and stores the voice data in the voice data DB 323 under that file name. Here, for example, the single voice data file corresponding to the "drug administration request utterance" is given the file name "vFile3" and stored in the voice data DB 323.
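The patent leaves the file-naming scheme unspecified ("a predetermined algorithm"). As one hypothetical sketch, a counter-based scheme would produce names such as "vFile3" once two files already exist (the scheme itself is an assumption, not something the patent defines):

```python
# Hypothetical sketch of a "predetermined algorithm" for assigning file
# names that are unique within the voice data DB 323.
# The counter-based scheme below is an assumption for illustration only.
def next_file_name(db: dict, prefix: str) -> str:
    """Return prefix followed by (number of stored files + 1), e.g. 'vFile3'."""
    return f"{prefix}{len(db) + 1}"

voice_data_db = {"vFile1": b"...", "vFile2": b"..."}  # placeholder audio bytes
name = next_file_name(voice_data_db, "vFile")
voice_data_db[name] = b"compressed-audio"  # store the combined recording
```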
In SA4 of FIG. 5, the conversion unit 332 converts the voice data corresponding to the uttered speech received by the reception unit 331 into text data corresponding to that voice data and stores it. The specifics are arbitrary, but, for example, it acquires the voice data stored in SA3, converts the acquired voice data into text data using a known conversion algorithm, attaches to the converted text data a file name generated according to a predetermined algorithm (a file name that can be uniquely identified within the text data DB 324), and stores the text data in the text data DB 324 under that file name. Since a known method can be used for converting voice data into text data, a detailed description is omitted. Here, for example, "vFile3" is acquired from the voice data DB 323, converted into a single text data file, the converted text data is given the file name "tFile3", and it is stored in the text data DB 324.
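SA4 can be sketched as follows. The speech-to-text function is a stand-in (the patent only says a known conversion algorithm is used), and the vFile→tFile naming rule and all names are illustrative assumptions:

```python
def transcribe(audio: bytes) -> str:
    """Stand-in for the known speech-to-text conversion algorithm.

    A real system would call an actual recognizer here; this placeholder
    simply returns fixed text so the surrounding flow can be illustrated.
    """
    return "request utterance text"

def store_text_for_voice(voice_db: dict, text_db: dict, voice_name: str) -> str:
    """Convert a stored voice file to text and store it under a matching,
    uniquely identifiable text file name (assumed rule: vFile3 -> tFile3)."""
    audio = voice_db[voice_name]
    text = transcribe(audio)
    text_name = voice_name.replace("vFile", "tFile")  # assumed naming rule
    text_db[text_name] = text
    return text_name

text_data_db: dict = {}
stored = store_text_for_voice({"vFile3": b"audio"}, text_data_db, "vFile3")
```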
In SA5 of FIG. 5, the processing unit 333 stores the speech record information. The specifics are arbitrary, but, for example, each item of information in FIG. 4 (that is, the source information, destination information, date-and-time information, voice data identification information, and text data identification information) is specified as follows and then stored.
First, for the source information and destination information, for example, the terminal ID of the calling terminal and the terminal ID of the destination, which were transmitted from the call terminal of the medical system 100 to the management server 3 when the storage notification process was started, are acquired; the acquired terminal ID of the calling terminal is specified as the source information, and the acquired terminal ID of the destination is specified as the destination information. Here, for example, if the first call terminal 1 making the call transmitted its own terminal ID "IDd1" and the destination terminal ID "IDd2" to the management server 3 when the storage notification process was started, "IDd1" is specified as the source information and "IDd2" as the destination information.
For the date-and-time information, a timekeeping means (not shown; for example, a clock circuit such as a known timer circuit) is accessed to acquire the date and time at which the storage notification process was started (that is, for example, when the call to the destination call terminal was answered), and the date-and-time information corresponding to the acquired date and time is specified. Here, for example, if 17:14 on July 2 is acquired as the date and time at which the call to the destination call terminal was answered, "07021714" is specified.
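The 8-digit date-and-time code can be produced, for example, with standard date formatting (a sketch only; the patent does not prescribe an implementation):

```python
from datetime import datetime

def datetime_code(dt: datetime) -> str:
    """Format a date and time as the 8-digit MMDDHHMM code used in FIG. 4,
    e.g. 17:14 on July 2 -> '07021714'."""
    return dt.strftime("%m%d%H%M")

# The year 2017 below is arbitrary; the code only encodes month/day/hour/minute.
code = datetime_code(datetime(2017, 7, 2, 17, 14))
```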
For the voice data identification information, the file name of the voice data stored in SA3 is specified, and for the text data identification information, the file name of the text data stored in SA4 is specified. Here, for example, "vFile3" is specified as the voice data identification information and "tFile3" as the text data identification information.
Then, after specifying each item of information in FIG. 4 as described above, the speech record information is stored by storing the specified information in the speech record information DB 322. Here, for example, the information in the third row from the top of FIG. 4 is stored.
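Putting SA5 together, a minimal sketch of assembling the five items of FIG. 4 and appending them to the speech record information DB 322 (variable and field names are illustrative, not from the patent):

```python
# Sketch of SA5: assemble the five items of FIG. 4 and append them to
# an in-memory stand-in for the speech record information DB 322.
speech_record_db: list = []

def store_speech_record(source_id: str, destination_id: str,
                        datetime_code: str, voice_file: str,
                        text_file: str) -> None:
    speech_record_db.append({
        "source": source_id,            # e.g. "IDd1"
        "destination": destination_id,  # e.g. "IDd2"
        "datetime": datetime_code,      # e.g. "07021714"
        "voice": voice_file,            # e.g. "vFile3"
        "text": text_file,              # e.g. "tFile3"
    })

# The values below correspond to the running example in the text.
store_speech_record("IDd1", "IDd2", "07021714", "vFile3", "tFile3")
```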
In SA6 of FIG. 5, the processing unit 333 issues a notification so that the first user or the second user can check the voice data or text data. The specifics are arbitrary, but, for example, to make the first user and the second user aware that a call took place, as shown in FIG. 1, the notification is performed by displaying a source-side notification image G1 on the display 13 of the first call terminal 1 and a destination-side notification image G2 on the display 23 of the second call terminal 2.
Here, the "source-side notification image" G1 is information related to the uttered speech and is an image displayed on the call terminal that placed the call in the medical system 100; for example, it includes source-side message information G11, a source-side voice playback button G12, and a source-side text display button G13. The source-side message information G11 specifies that the first user made a request; specifically, it is text information specifying the date and time of the request and the requested doctor. The source-side voice playback button G12 is a button for playing back the stored voice data, and the source-side text display button G13 is a button for displaying the stored text data.
 Likewise, the "callee-side notification image" G2 is information related to the uttered speech, namely an image displayed on the call terminal that received the call (that is, the called party) in the medical system 100. It includes, for example, callee-side message information G21, a callee-side voice playback button G22, and a callee-side text display button G23. The callee-side message information G21 identifies that a request was made to the second user; specifically, it is text identifying the date and time of the request and the doctor who made it. The callee-side voice playback button G22 is a button for playing back the stored voice data, and the callee-side text display button G23 is a button for displaying the stored text data.
 More specifically regarding SA6 of FIG. 5, the processing unit 333 displays the caller-side notification image G1 and the callee-side notification image G2 of FIG. 1 based on the terminal information of FIG. 3 and the speech record information of FIG. 4. In detail, for the caller-side notification image G1 of FIG. 1, the processing unit first acquires the date/time information of FIG. 4 and displays the corresponding date and time to the left of the ":" in the caller-side message information G11 of FIG. 1. It then acquires the destination information of FIG. 4, refers to FIG. 3 to acquire the doctor name corresponding to the terminal ID in that destination information, displays the acquired doctor name to the right of the ":", followed by the suffix meaning "to Dr. —", thereby forming the caller-side message information G11. It also acquires the voice data identification information of FIG. 4, generates a link to the voice data it identifies among the voice data stored in the voice data DB 323 of FIG. 2, associates the generated link with the caller-side voice playback button G12, and displays that button. Similarly, it acquires the text data identification information of FIG. 4, generates a link to the text data it identifies among the text data stored in the text data DB 324 of FIG. 2, associates the generated link with the caller-side text display button G13, and displays that button.
 Here, for example, in the third row from the top of FIG. 4, the processing unit first acquires the date/time information "07021714" and displays the corresponding date and time, "July 2, 17:14", to the left of the ":" in the caller-side message information G11 of FIG. 1. It then acquires the destination information "IDd2", refers to FIG. 3 to acquire the corresponding doctor name "BB", displays it to the right of the ":", followed by the suffix meaning "to Dr. —", completing the caller-side message information G11. It also acquires the voice data identification information "vFile3", generates a link to the voice data it identifies in the voice data DB 323 of FIG. 2 (the voice data corresponding to the "request utterance concerning drug administration"), associates the link with the caller-side voice playback button G12, and displays that button. Similarly, it acquires the text data identification information "tFile3", generates a link to the text data it identifies in the text data DB 324 of FIG. 2 (the text data corresponding to the "request utterance concerning drug administration"), associates the link with the caller-side text display button G13, and displays that button.
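As a minimal sketch of the date/time rendering used in the example above — assuming the stored value is a fixed-width MMDDHHMM string, which the "07021714" → "July 2, 17:14" example suggests but the text does not state — the conversion could look like:

```python
def format_datetime_info(date_time_info: str) -> str:
    """Render an assumed MMDDHHMM date/time string (e.g. "07021714")
    as a short label for the notification message information."""
    month = int(date_time_info[0:2])
    day = int(date_time_info[2:4])
    hour = int(date_time_info[4:6])
    minute = int(date_time_info[6:8])
    # Zero-padded time, unpadded month/day, as in "7/2 17:14"
    return f"{month}/{day} {hour:02d}:{minute:02d}"
```

The exact output wording (e.g. "July 2, 17:14") is a presentation choice; only the field layout matters here.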
 In detail, for the callee-side notification image G2 of FIG. 1, the processing unit first acquires the date/time information of FIG. 4 and displays the corresponding date and time to the left of the ":" in the callee-side message information G21 of FIG. 1. It then acquires the caller information of FIG. 4, refers to FIG. 3 to acquire the doctor name corresponding to the terminal ID in that information, displays the acquired doctor name to the right of the ":", followed by the suffix meaning "from Dr. —", completing the callee-side message information G21. It also acquires the voice data identification information of FIG. 4, generates a link to the voice data it identifies in the voice data DB 323 of FIG. 2, associates the link with the callee-side voice playback button G22, and displays that button. Similarly, it acquires the text data identification information of FIG. 4, generates a link to the text data it identifies in the text data DB 324 of FIG. 2, associates the link with the callee-side text display button G23, and displays that button.
 Here, for example, from the third row from the top of FIG. 4, the processing unit acquires the date/time information "07021714", the caller information "IDd2", the voice data identification information "vFile3", and the text data identification information "tFile3", and then displays the callee-side notification image G2 of FIG. 1. The callee-side voice playback button G22 of this notification image is associated with a link to the voice data corresponding to the "request utterance concerning drug administration", and the callee-side text display button G23 with a link to the corresponding text data. The storage notification process then ends.
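The assembly of the two notification images from one speech record can be sketched as follows. The record fields, the doctor-name mapping, and the link paths are illustrative assumptions, not taken from the figures:

```python
def build_notifications(record: dict, doctor_names: dict) -> dict:
    """Build the message text and data links for the caller-side (G1)
    and callee-side (G2) notification images from one speech record.
    The field names of `record` and the `doctor_names` mapping are
    assumed shapes standing in for FIG. 3 / FIG. 4."""
    when = record["date_time"]                       # e.g. "07021714"
    caller = doctor_names[record["source_id"]]       # requesting doctor
    callee = doctor_names[record["destination_id"]]  # requested doctor
    voice_link = f"/voice/{record['voice_file']}"    # target of G12 / G22
    text_link = f"/text/{record['text_file']}"       # target of G13 / G23
    return {
        "source": {  # shown on the caller's terminal (image G1)
            "message": f"{when}: to Dr. {callee}",
            "voice_link": voice_link,
            "text_link": text_link,
        },
        "destination": {  # shown on the callee's terminal (image G2)
            "message": f"{when}: from Dr. {caller}",
            "voice_link": voice_link,
            "text_link": text_link,
        },
    }
```

Both images link to the same stored voice and text data; only the message text differs by viewpoint.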
(Process - Storage notification process - Data confirmation)
 As described above, in SA6 of FIG. 5 the caller-side notification image G1 is displayed on the display 13 of the first call terminal 1 and the callee-side notification image G2 on the display 23 of the second call terminal 2, as shown in FIG. 1. The first user or the second user can therefore recognize that a call took place and confirm the voice data or the text data.
 For example, the requesting doctor, who is the first user, can recognize that he or she made a request by telephone by viewing the caller-side message information G11 of the caller-side notification image G1. When the caller-side voice playback button G12 is pressed, the control unit 17 of the first call terminal 1 plays back the voice data (information related to the uttered speech) associated with the pressed button through the speaker 14, so the requesting doctor can confirm the content of the "request utterance concerning drug administration" by voice. When the caller-side text display button G13 is pressed, the control unit 17 of the first call terminal 1 displays the text data (information related to the uttered speech) associated with the pressed button on the display 13, so the requesting doctor can confirm the content of the "request utterance concerning drug administration" as text.
 Likewise, the requested doctor, who is the second user, can recognize that a request was made to him or her by telephone by viewing the callee-side message information G21 of the callee-side notification image G2. When the callee-side voice playback button G22 is pressed, the control unit 27 of the second call terminal 2 plays back the voice data (information related to the uttered speech) associated with the pressed button through the speaker 24, so the requested doctor can confirm the content of the "request utterance concerning drug administration" by voice. When the callee-side text display button G23 is pressed, the control unit 27 of the second call terminal 2 displays the text data (information related to the uttered speech) associated with the pressed button on the display 23, so the requested doctor can confirm the content of the "request utterance concerning drug administration" as text.
(Process - Drug information output process)
 Next, the drug information output process will be described. FIG. 6 is a flowchart of the drug information output process. The "drug information output process" is a process that includes the usage information output process; for example, it outputs information on the use of a drug, such as information on a drug's side effects. The timing of this process is arbitrary; here it is assumed to be executed repeatedly at a predetermined interval (for example, every 12 to 24 hours), and the description starts from the point at which its execution has begun.
 First, in SB1 of FIG. 6, the processing unit 333 acquires data. The specific method is arbitrary; for example, it acquires the text data in the text data DB 324 of the recording unit 32 of FIG. 2. Since the drug information output process is, as noted above, executed repeatedly at a predetermined interval, the file names of the text data acquired in SB1 are recorded in the recording unit 32, and only text data whose file names are not yet recorded there are acquired; this prevents the same text data in the text data DB 324 from being processed more than once. Here, for example, the text data corresponding to the "request utterance concerning drug administration" with the file name "tFile3" is acquired.
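The duplicate-avoidance step in SB1 can be sketched as follows; the text data DB and the processed-file-name record are modeled here as a plain dict and set purely for illustration:

```python
def acquire_unprocessed(text_db: dict, processed: set) -> dict:
    """SB1: return only the text files that have not yet been handled,
    and record their names so that the periodically repeated drug
    information output process never handles the same file twice."""
    fresh = {name: data for name, data in text_db.items()
             if name not in processed}
    processed.update(fresh)  # remember the file names just taken
    return fresh
```

On a second run over an unchanged DB the function returns nothing, matching the behavior described above.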
 In SB2 of FIG. 6, the processing unit 333 determines whether to output information. The specific method is arbitrary; for example, it determines whether the text data acquired in SB1 contains predetermined keywords and decides on that basis. A "predetermined keyword" is a keyword used to determine whether to output information — for example, a keyword related to information useful to the business of the drug's provider (for example, a pharmaceutical company), such as a keyword set by that provider. Here, for example, "drug", "side effect", and "complication" are assumed to be set as the predetermined keywords. Specifically, in SB2, if the text data acquired in SB1 does not contain all of the predetermined keywords, the processing unit determines that no information is to be output (NO in SB2) and ends the process. If the text data does contain the predetermined keywords, it determines that information is to be output (YES in SB2) and proceeds to SB3. Here, for example, the text data acquired in SB1 — "When drug M1 was administered to patient P1, a pleural effusion complication developed on July 1, and a side effect of loss of appetite also occurred. Please administer drug M2 to patient P1 by 9:00 a.m. on July 3." — contains all of the keywords "drug", "side effect", and "complication", so the processing unit determines that information is to be output.
 In SB3 of FIG. 6, the processing unit 333 outputs the information. The specific method is arbitrary; for example, assuming that the management server 3 is communicably connected to a terminal device on the drug provider's side (for example, a smartphone or a desktop computer), the text data acquired in SB1 is output by transmitting it to that terminal device. Here, for example, the text data (information related to the uttered speech) corresponding to the "request utterance concerning drug administration" with the file name "tFile3" is transmitted. By accumulating the received text data, the drug provider can put it to use, for example, in changing how an existing drug is used, improving an existing drug, or developing a new drug. The information output process then ends.
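The SB2 keyword determination can be sketched as follows. The English keywords stand in for the Japanese terms in the example, and the all-keywords-present rule follows the worked example above:

```python
# Provider-set keywords; illustrative English stand-ins for the
# terms "drug", "side effect", and "complication" in the example.
KEYWORDS = ("drug", "side effect", "complication")

def should_output(text: str, keywords=KEYWORDS) -> bool:
    """SB2: decide to output (YES) only when every provider-set
    keyword appears in the transcribed text; otherwise NO."""
    return all(keyword in text for keyword in keywords)
```

A text mentioning only some of the keywords is skipped, which keeps the output restricted to utterances relevant to the provider's business.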
(Effects of the embodiment)
 As described above, according to the embodiment, speech-related processing (specifically, the data recognition process and the usage information output process) is performed based on at least one of the voice data corresponding to the uttered speech received by the reception unit 331 and the text data converted by the conversion unit 332. Because this processing can be performed without the first user or the second user entering the content of the utterance by hand, the content of the utterance can be reliably reflected. In particular, when applied to speech at a medical site, the content of utterances made there can be reflected, so that activities at the medical site can be made smoother and the content of those utterances put to effective use.
 In addition, by performing the data recognition process, the first user or the second user can, for example, be prompted to confirm the voice data or the text data, so the content of the utterance can be reflected to the first user or the second user. It also becomes possible, for example, to remind the first user or the second user that an utterance was made using the first call terminal 1 or the second call terminal 2.
 Furthermore, by performing the usage information output process, information on the use of the target object (specifically, a drug) can be output, so the content of the utterance can contribute to improving the object and be put to effective use.
[Modifications]
 Embodiments of the present invention have been described above, but the specific configuration and means of the invention can be modified and improved arbitrarily within the scope of the technical idea of each invention set forth in the claims. Such modifications are described below.
(Problems to be solved and effects of the invention)
 First, the problems to be solved by the invention and its effects are not limited to those described above. The invention may solve problems not described above or produce effects not described above, and it may solve only some of the described problems or produce only some of the described effects.
(Distribution and integration)
 The electrical components described above are functional and conceptual, and need not be physically configured as illustrated. That is, the specific form of distribution or integration of each part is not limited to what is shown; all or part of it can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and so on. In this application, a "system" is not limited to one made up of multiple devices and includes one made up of a single device; likewise, a "device" is not limited to a single device and includes one made up of multiple devices. The data structure of each piece of information described in the embodiment (including the DBs) may be changed arbitrarily. The management server 3 may also be omitted by distributing its functions to the first call terminal 1 or the second call terminal 2.
(Shapes, numerical values, structures, and time series)
 With regard to the components illustrated in the embodiment and the drawings, the shapes, numerical values, and the structural or chronological interrelations of multiple components can be modified and improved arbitrarily within the scope of the technical idea of the invention.
(Each process (1))
 The processes described in the embodiment may be rearranged, omitted, changed, or supplemented with new processes as desired. Specifically, for example, in SA4 of FIG. 5, the conversion unit 332 of the management server 3 may convert the uttered speech into text data each time the reception unit 331 receives speech transmitted to the management server 3, without waiting for the voice data to be stored in SA3.
(Each process (2))
 Also, while SB1 to SB3 of FIG. 6 were described as processing using text data only, this is not limiting: each process may be performed using voice data only, or using both voice data and text data. In other words, the processing unit 333 may be configured to perform the speech-related processing using text data only, voice data only, or both voice data and text data (the same applies to the speech-related processing in the modifications).
(Each process (3))
 Also, for example, as the "predetermined keywords" in SB2 of FIG. 6, keywords related to information useful to the business of a medical device provider (for example, a medical device manufacturer) — for instance, keywords set by that provider, such as "gastric cancer surgery" and "endoscopic surgery" — may be set, and in SB3 the management server 3 may output the information to a terminal device on the medical device provider's side. In that case, the medical device manufacturer can, for example, explain to the doctor before surgery how to handle an endoscopic surgery device for use in gastric cancer, or ask the doctor after surgery about the usability of the device, thereby supporting its sales activities or its product development (improvement) activities.
(Output format of text data)
 The text data displayed when, for example, the caller-side text display button G13 of FIG. 1 in the embodiment is pressed may be displayed in association with the person who spoke. Specifically, the speech may be recorded in association with each speaker at recording time, each utterance may be converted into text data while keeping that association, and the text data may then be displayed together with the speaker.
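A possible rendering of such speaker-associated text data, sketched under the assumption that the transcript is stored as (speaker, utterance) pairs — the text leaves the concrete format open:

```python
def format_transcript(segments) -> str:
    """Render speaker-associated text data for display: each
    utterance is prefixed with its speaker. The (speaker, utterance)
    pair structure is an assumed storage shape, not the patent's."""
    return "\n".join(f"{speaker}: {utterance}"
                     for speaker, utterance in segments)
```

Each line of the displayed text then makes clear who said what during the call.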
(Output of information)
 In the embodiment and the modifications, the information described as being shown on a display may instead be output in other ways: the corresponding content may be output as audio from each device's speaker (for example, the speaker 14 of the first call terminal 1), printed on paper by providing printing means such as a printer, or output over communication via each device's communication unit (for example, the communication unit 11 of the first call terminal 1).
(Speech-related processing)
 In the embodiment, the case where the processing unit 333 of the management server 3 of FIG. 2 performs the data recognition process and the usage information output process as the speech-related processing was described, but this is not limiting. The processing unit 333 may instead be configured to perform the request execution support process described above (specifically, the request recognition process and the request information request process) or the status information output process, based on the voice data in the voice data DB 323 and the text data in the text data DB 324. Each of these processes may be performed together with the process of SA6 in FIG. 5 (or the processes of SB1 to SB3 in FIG. 6), or in place of them.
(Speech-related processing - Request execution support process)
 As for the request execution support process, for example, when the reception unit 331 receives, as uttered speech, an utterance related to a request from the first user to the second user, and the voice data and text data are stored in the same way as described in the embodiment, the processing unit 333, based on the voice data in the voice data DB 323 and the text data in the text data DB 324, issues a reminder as the request recognition process, outputs a to-do list as the request recognition process, or performs the request information request process. For these processes, the management server 3 needs to identify that the uttered speech received by the reception unit 331 relates to a request and then process the identified speech. Each process may therefore be performed after the speech has been identified as request-related, for example by a predetermined user operation (for example, entering "11" via the operation unit 12 when making a call) or by a predetermined utterance by the user (for example, speaking a predetermined keyword such as "this is a request").
(Speech-related processing - Request execution support process - Request recognition process)
 For the reminder, the processing unit 333 configures the second call terminal 2 to operate as follows. For example, until the second user performs a predetermined operation on the second call terminal 2 (an operation that ends the output of the reminder information), the terminal issues the reminder at predetermined intervals (for example, every 1 to 2 hours, or every 12 to 24 hours) or at predetermined times (for example, 9:00 a.m., 1:00 p.m., and 5:00 p.m.) by showing a message on the display 23 indicating that there is a request (for example, "You have a request; please check"), by outputting that message as audio through the speaker 24, or by vibrating if a vibrator function is provided.
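The reminder behavior can be sketched as follows; the fixed schedule follows the example times above, and the dismiss operation corresponds to the predetermined operation that ends the reminders:

```python
class Reminder:
    """Repeat a 'you have a request' notice on the callee's terminal
    at fixed hours (9:00, 13:00, 17:00, as in the text's example)
    until the user performs the dismiss operation."""

    def __init__(self, schedule=(9, 13, 17)):
        self.schedule = schedule
        self.dismissed = False

    def dismiss(self):
        """The predetermined operation ending reminder output."""
        self.dismissed = True

    def should_notify(self, hour: int) -> bool:
        """Whether to show/speak/vibrate the reminder at this hour."""
        return not self.dismissed and hour in self.schedule
```

An interval-based variant (every 1-2 or 12-24 hours) would only change the schedule test, not the dismiss logic.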
 For the to-do list, the processing unit 333 configures the first call terminal 1 and the second call terminal 2 to operate as follows. For example, the request contents and checkboxes are displayed as a list, associated with one another, on the display 23 of the second call terminal 2. When the second user performs a predetermined operation on the second call terminal 2 (checking a checkbox to indicate that the request has been carried out), a check mark is shown in the corresponding checkbox, and the second call terminal 2 transmits notice of the check to the first call terminal 1 on the requesting side. When the first call terminal 1 receives this notice, it shows a message indicating that the request was carried out (for example, "The request has been carried out") on its display 13, outputs the message as audio through the speaker 14, or vibrates if a vibrator function is provided. The communication between the first call terminal 1 and the second call terminal 2 may be realized using known techniques.
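The to-do list behavior can be sketched as follows; the notify callback stands in for the terminal-to-terminal message, whose concrete protocol the text leaves to known techniques:

```python
class ToDoList:
    """Request items shown with checkboxes on the callee's terminal;
    checking an item notifies the caller's terminal that the request
    was carried out. `notify_caller` is a stand-in for the actual
    terminal-to-terminal communication."""

    def __init__(self, notify_caller):
        self.items = {}  # request text -> done flag (checkbox state)
        self.notify_caller = notify_caller

    def add(self, request: str):
        """List a new request with an unchecked checkbox."""
        self.items[request] = False

    def check(self, request: str):
        """The callee checks the box; the caller side is notified."""
        self.items[request] = True
        self.notify_caller(f"Request completed: {request}")
```

On the caller's side, the received notice would then be shown, spoken, or signaled by vibration, as described above.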
With this configuration, performing the request execution support processing makes it possible, for example, to help the second user execute the request corresponding to the uttered voice, thereby facilitating the activities of the first user and the second user. In particular, performing the request recognition processing allows the second user to recognize the request, so the second user can be reminded that a request was made. Furthermore, when the second user is made to recognize a request at a medical site, for example, activities at the medical site can proceed more smoothly.
(Uttered-voice-related processing: Request information request processing)
In the request information request processing, the processing unit 333 first acquires the text data from the text data DB 324 and determines whether the request corresponding to the acquired text data lacks any information. Here, "missing information" is information that is absent from the request in the uttered voice; specifically, it is information that falls short of predetermined required information. For example, when the execution deadline of a request is predetermined required information and the request contains no execution deadline, the execution deadline corresponds to missing information. The execution deadline is used as the example below. If, in the above determination, the processing unit 333 determines that the request corresponding to the acquired text data lacks information, it displays a message to that effect on the display 13 of the first call terminal 1 (for example, "No execution deadline was stated. Please call again and state the execution deadline.") and requests the first user to supply the missing information. If the processing unit 333 determines that no information is missing from the request corresponding to the acquired text data, there is no need to supplement anything, so the above message is not output. In this case, to let the first user know that the request was made properly, a message to that effect (for example, "The request was made without excess or deficiency.") may be displayed on the display 13 of the first call terminal 1.
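The missing-information check of the request information request processing can be sketched as follows; the keyword patterns standing in for "an execution deadline was stated" are illustrative assumptions, not the patent's actual rules:

```python
import re

# Hypothetical rule set: each required item maps to a pattern whose presence
# in the transcribed request text indicates the item was stated.
REQUIRED_INFO = {
    "execution deadline": re.compile(r"\b(by|until|deadline|before)\b", re.I),
}

def missing_info(request_text):
    """Return the names of predetermined required items absent from the text."""
    return [name for name, pattern in REQUIRED_INFO.items()
            if not pattern.search(request_text)]

def check_request(request_text):
    """Mimic the determination made on the text data acquired from the DB."""
    lacking = missing_info(request_text)
    if lacking:
        return f"No {lacking[0]} was stated. Please call again and state it."
    return "The request was made without excess or deficiency."
```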
With this configuration, performing the request execution support processing makes it possible, for example, to help the second user execute the request corresponding to the uttered voice, thereby facilitating the activities of the first user and the second user. In particular, performing the request information request processing makes it possible to supply information missing from the uttered voice, so an appropriate request can be made and the activities of the first user and the second user can proceed even more smoothly.
(Uttered-voice-related processing: Status information output processing)
In the status information output processing, for example, when the reception unit 331 receives, as an uttered voice, an utterance by the first user or the second user concerning the condition of a patient, and the voice data and the text data are stored in the same manner as described in the embodiment, the processing unit 333 performs processing for outputting information on the condition of the patient based on the voice data in the voice data DB 323 and the text data in the text data DB 324. For this processing, the management server 3 must identify that the uttered voice received by the reception unit 331 relates to the condition of a patient and then process the identified uttered voice. As in the request execution support processing described above, each process may be performed after identifying the relevant utterance by a predetermined operation by the user (for example, entering "99" via the operation unit 12 when making a call) or by a predetermined utterance by the user (for example, speaking a predetermined keyword such as "This is the patient's condition").
Specifically, the reception unit 331 receives and stores a treatment phase (the condition of the patient, the treatment performed for the disease, and the like) as an uttered voice, and the processing unit 333 acquires the text data from the text data DB 324 and, based on the acquired text data, displays the stored treatment phase on the display 13 of the first call terminal 1, the display 23 of the second call terminal 2, or a terminal device (not shown; for example, a smartphone or a stationary computer) of a supervising doctor who manages the doctor using the first call terminal 1 or the second call terminal 2. For example, the condition of the patient (such as interview results and examination results) and the treatment performed (such as drugs administered and operations performed) may be received and displayed in association with each other. From the viewpoint of improving the quality of medical practice, a predetermined reference path (a model treatment flow up to the patient's cure, comprising treatment phases such as the patient's condition and the treatment performed for the disease) may also be displayed, or the reference path may be compared with the treatment actually performed so as to evaluate that treatment and display the evaluation result.
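The comparison between a predetermined reference path and the actually performed treatment could be sketched as below; this is a naive illustration under assumed names (`evaluate_treatment`, plain phase strings), not the patent's evaluation method:

```python
def evaluate_treatment(reference_path, actual_phases):
    """Compare actually performed treatment phases against a reference path.

    Reports phases missing from the actual treatment, extra phases not in
    the reference, and whether the shared phases followed the model order.
    """
    missing = [p for p in reference_path if p not in actual_phases]
    extra = [p for p in actual_phases if p not in reference_path]
    shared_actual = [p for p in actual_phases if p in reference_path]
    shared_reference = [p for p in reference_path if p in actual_phases]
    return {
        "missing": missing,
        "extra": extra,
        "order_followed": shared_actual == shared_reference,
    }
```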
With this configuration, performing the status information output processing makes it possible, for example, to output information on the condition of the patient, so the contents of the utterance can be put to use in the patient's treatment and exploited effectively.
(Other processing)
The control unit 33 of the management server 3 may also be configured to perform the following automatic extraction processing, assist processing, or dictionary processing via each call terminal, based on the uttered voice or on the stored voice data or text data. The "automatic extraction processing" automatically extracts information that is particularly important from a medical point of view and displays and manages it so as to prevent oversights; for example, when a word such as "lifesaving", "first aid", "medication", "avoidance", or "allergy" is uttered, this concept includes highlighting the word in the text based on the utterance, or issuing a warning when a response confirmation button is not pressed within a predetermined time. The "assist processing" supports a doctor's work; for example, when a doctor explains informed consent to a patient or family members, this concept includes recording the explanation and the conversation with the patient and keeping it as a record of the informed consent. The "dictionary processing" outputs, by voice or otherwise, an answer to an utterance in accordance with dictionary information stored in advance; for example, when a doctor utters "drugs for disease A", the answer "The drugs for disease A are a1, a2, and a3" is given by voice, and when a doctor utters "drug X and drug Y, contraindicated combination", an answer on whether the combined use of drug X and drug Y is contraindicated is given by voice.
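The keyword highlighting and overdue-confirmation warning of the automatic extraction processing can be sketched as follows; the marker syntax, the keyword list, and the time limit are illustrative assumptions:

```python
IMPORTANT_WORDS = ("lifesaving", "first aid", "medication", "avoidance", "allergy")

def highlight_important(text, words=IMPORTANT_WORDS):
    """Wrap medically important words in the transcript with markers so that
    the display layer can render them highlighted."""
    for word in words:
        text = text.replace(word, f"**{word}**")
    return text

def needs_warning(confirmed, elapsed_seconds, limit_seconds=300):
    """Warn when the response confirmation button was not pressed in time."""
    return (not confirmed) and elapsed_seconds > limit_seconds
```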
(Utterances by multiple people)
In the above embodiment, the case where the present application is applied to utterances between two parties has been described; however, the application is not limited to this and is also applicable, for example, to utterances among three or more parties.
(Features)
The features of the above embodiment and the features of the modifications may be selected and combined arbitrarily.
(Supplementary Notes)
The information processing system of supplementary note 1 comprises: reception means for receiving an uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal; conversion means for converting voice data corresponding to the uttered voice received by the reception means into text data corresponding to the voice data; and processing means for performing, based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means, uttered-voice-related processing, which is processing related to the uttered voice received by the reception means.
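The reception, conversion, and processing means of supplementary note 1 can be sketched as a small pipeline; the class and callback names are assumptions, and the speech recognizer is stubbed with a plain callable:

```python
class InformationProcessingSystem:
    """Sketch of the supplementary-note-1 pipeline:
    reception means -> conversion means -> processing means."""

    def __init__(self, recognizer, handler):
        self.recognizer = recognizer  # converts voice data to text data
        self.handler = handler        # the uttered-voice-related processing

    def receive(self, voice_data):
        # Reception means: accept the uttered voice, then convert and process.
        text_data = self.recognizer(voice_data)
        return self.handler(voice_data, text_data)
```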
The information processing system of supplementary note 2 is the information processing system according to supplementary note 1, wherein the processing means performs, as the uttered-voice-related processing, data recognition processing for causing the first user or the second user to confirm the voice data or the text data.
The information processing system of supplementary note 3 is the information processing system according to supplementary note 1 or 2, wherein the reception means receives, as the uttered voice, an utterance concerning a request from the first user to the second user, and the processing means performs, as the uttered-voice-related processing, request execution support processing for causing the second user to execute the request corresponding to the uttered voice received by the reception means.
The information processing system of supplementary note 4 is the information processing system according to supplementary note 3, wherein the processing means performs, as the request execution support processing, request recognition processing for causing the second user to recognize the request corresponding to the uttered voice received by the reception means.
The information processing system of supplementary note 5 is the information processing system according to supplementary note 3 or 4, wherein the processing means determines whether the request corresponding to the uttered voice received by the reception means lacks any information and, when determining that information is missing, performs, as the request execution support processing, request information request processing for requesting the first user to supply the missing information.
The information processing system of supplementary note 6 is the information processing system according to any one of supplementary notes 1 to 5, wherein the reception means receives, as the uttered voice, an utterance concerning the use of an object, and the processing means performs, as the uttered-voice-related processing, usage information output processing for outputting information on the use of the object.
The information processing system of supplementary note 7 is the information processing system according to any one of supplementary notes 1 to 6, further comprising at least utterance storage means for storing the text data converted by the conversion means, wherein the first user or the second user is a medical worker, the reception means receives, as the uttered voice, an utterance concerning the condition of a patient of the first user or the second user, and the processing means performs, as the uttered-voice-related processing and based on at least the text data stored in the utterance storage means, status information output processing for outputting information on the condition of the patient.
The information processing program of supplementary note 8 causes a computer to function as: reception means for receiving an uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal; conversion means for converting voice data corresponding to the uttered voice received by the reception means into text data corresponding to the voice data; and processing means for performing, based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means, uttered-voice-related processing, which is processing related to the uttered voice received by the reception means.
(Effects of the Supplementary Notes)
According to the information processing system of supplementary note 1, performing uttered-voice-related processing based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means makes it possible, for example, to carry out that processing without the first user or the second user manually inputting the contents of the utterance, so the contents of the utterance can be reliably reflected. In particular, when applied to uttered voices at a medical site, for example, the contents of utterances at the medical site can be reflected, activities at the medical site can be facilitated, and the contents of the utterances can be exploited effectively.
According to the information processing system of supplementary note 2, performing the data recognition processing allows, for example, the first user or the second user to confirm the voice data or the text data, so the contents of the utterance can be reflected to the first user or the second user. It also becomes possible, for example, to remind the first user or the second user that an utterance was made using the first call terminal or the second call terminal.
According to the information processing system of supplementary note 3, performing the request execution support processing makes it possible, for example, to help the second user execute the request corresponding to the uttered voice, thereby facilitating the activities of the first user and the second user.
According to the information processing system of supplementary note 4, performing the request recognition processing allows, for example, the second user to recognize the request, so the second user can be reminded that a request was made. In particular, when the second user is made to recognize a request at a medical site, for example, activities at the medical site can proceed more smoothly.
According to the information processing system of supplementary note 5, performing the request information request processing makes it possible, for example, to supply information missing from the uttered voice, so an appropriate request can be made and the activities of the first user and the second user can proceed even more smoothly.
According to the information processing system of supplementary note 6, performing the usage information output processing makes it possible, for example, to output information on the use of an object, so the contents of the utterance can be put to use in improving the object and exploited effectively.
According to the information processing system of supplementary note 7, performing the status information output processing makes it possible, for example, to output information on the condition of the patient, so the contents of the utterance can be put to use in the patient's treatment and exploited effectively.
According to the information processing program of supplementary note 8, performing uttered-voice-related processing based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means makes it possible, for example, to carry out that processing without the first user or the second user manually inputting the contents of the utterance, so the contents of the utterance can be reliably reflected. In particular, when applied to uttered voices at a medical site, for example, the contents of utterances at the medical site can be reflected, activities at the medical site can be facilitated, and the contents of the utterances can be exploited effectively.
Reference Signs List
1 first call terminal
2 second call terminal
3 management server
11 communication unit
12 operation unit
13 display
14 speaker
15 microphone
16 recording unit
17 control unit
21 communication unit
22 operation unit
23 display
24 speaker
25 microphone
26 recording unit
27 control unit
31 communication unit
32 recording unit
33 control unit
100 medical system
321 terminal information DB
322 utterance record information DB
323 voice data DB
324 text data DB
331 reception unit
332 conversion unit
333 processing unit
G1 caller-side notification image
G11 caller-side message information
G12 caller-side voice playback button
G13 caller-side text display button
G2 callee-side notification image
G21 callee-side message information
G22 callee-side voice playback button
G23 callee-side text display button

Claims (8)

1.  An information processing system comprising:
    reception means for receiving an uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal;
    conversion means for converting voice data corresponding to the uttered voice received by the reception means into text data corresponding to the voice data; and
    processing means for performing, based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means, uttered-voice-related processing, which is processing related to the uttered voice received by the reception means.
2.  The information processing system according to claim 1, wherein the processing means performs, as the uttered-voice-related processing, data recognition processing for causing the first user or the second user to confirm the voice data or the text data.
3.  The information processing system according to claim 1 or 2, wherein the reception means receives, as the uttered voice, an utterance concerning a request from the first user to the second user, and
    the processing means performs, as the uttered-voice-related processing, request execution support processing for causing the second user to execute the request corresponding to the uttered voice received by the reception means.
4.  The information processing system according to claim 3, wherein the processing means performs, as the request execution support processing, request recognition processing for causing the second user to recognize the request corresponding to the uttered voice received by the reception means.
5.  The information processing system according to claim 3 or 4, wherein the processing means determines whether the request corresponding to the uttered voice received by the reception means lacks any information and, when determining that information is missing, performs, as the request execution support processing, request information request processing for requesting the first user to supply the missing information.
6.  The information processing system according to any one of claims 1 to 5, wherein the reception means receives, as the uttered voice, an utterance concerning the use of an object, and
    the processing means performs, as the uttered-voice-related processing, usage information output processing for outputting information on the use of the object.
7.  The information processing system according to any one of claims 1 to 6, further comprising at least utterance storage means for storing the text data converted by the conversion means, wherein
    the first user or the second user is a medical worker,
    the reception means receives, as the uttered voice, an utterance concerning the condition of a patient of the first user or the second user, and
    the processing means performs, as the uttered-voice-related processing and based on at least the text data stored in the utterance storage means, status information output processing for outputting information on the condition of the patient.
8.  An information processing program causing a computer to function as:
    reception means for receiving an uttered voice of an utterance between a first user using a first call terminal and a second user using a second call terminal, the utterance being performed using the first call terminal or the second call terminal;
    conversion means for converting voice data corresponding to the uttered voice received by the reception means into text data corresponding to the voice data; and
    processing means for performing, based on at least one of the voice data corresponding to the uttered voice received by the reception means and the text data converted by the conversion means, uttered-voice-related processing, which is processing related to the uttered voice received by the reception means.
PCT/JP2017/029794 2017-08-21 2017-08-21 Information processing system and information processing program WO2019038807A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019537438A JPWO2019038807A1 (en) 2017-08-21 2017-08-21 Information processing system and information processing program
PCT/JP2017/029794 WO2019038807A1 (en) 2017-08-21 2017-08-21 Information processing system and information processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/029794 WO2019038807A1 (en) 2017-08-21 2017-08-21 Information processing system and information processing program

Publications (1)

Publication Number Publication Date
WO2019038807A1 true WO2019038807A1 (en) 2019-02-28

Family

ID=65438557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/029794 WO2019038807A1 (en) 2017-08-21 2017-08-21 Information processing system and information processing program

Country Status (2)

Country Link
JP (1) JPWO2019038807A1 (en)
WO (1) WO2019038807A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005283972A (en) * 2004-03-30 2005-10-13 Advanced Media Inc Speech recognition method, and information presentation method and information presentation device using the speech recognition method
JP2010041676A (en) * 2008-08-08 2010-02-18 Hitachi Building Systems Co Ltd Remote monitoring center apparatus
JP2013152613A (en) * 2012-01-25 2013-08-08 Shimane Univ Information communication network system in emergency medical care

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021025074A1 (en) * 2019-08-05 2021-02-11 株式会社Bonx Group calling system, group calling method, and program
JP6842227B1 (en) * 2019-08-05 2021-03-17 株式会社Bonx Group calling system, group calling method and program

Also Published As

Publication number Publication date
JPWO2019038807A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
US20060253281A1 (en) Healthcare communications and documentation system
US10165113B2 (en) System and method for providing healthcare related services
US20090089100A1 (en) Clinical information system
US8451101B2 (en) Speech-driven patient care system with wearable devices
US20100188230A1 (en) Dynamic reminder system, method and apparatus for individuals suffering from diminishing cognitive skills
WO2017172963A1 (en) System and method for initiating an emergency response
US20110010087A1 (en) Home Health Point-of-Care and Administration System
US20080180213A1 (en) Digital Intercom Based Data Management System
US10057732B2 (en) Content specific ring tones for clinician alerts
WO2013065113A1 (en) Emergency support system
US20160042623A1 (en) Patient Monitoring System
US20070265838A1 (en) Voice Messaging Systems
US20070214011A1 (en) Patient Discharge System and Associated Methods
US9524717B2 (en) System, method, and computer program for integrating voice-to-text capability into call systems
CN111324468B (en) Message transmission method, device, system and computing equipment
JP2007235292A (en) Recording device
JP2017207809A (en) Voice recording device, information communication system, voice recording control method, and program
WO2019038807A1 (en) Information processing system and information processing program
US20120221353A1 (en) Medical Workflow Queue For Distributed Data Entry
JP6534171B2 (en) Call support system
WO2019104411A1 (en) System and method for voice-enabled disease management
JP2019029918A (en) Emergency report system
JP6336657B1 (en) Reservation notification device, notification device, notification method, computer program
CN113555134A (en) Medical information processing system and method, storage medium and electronic equipment
Mwesigwa An e-Health tele-media application for patient management

Legal Events

Date Code Title Description

121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 17922742; Country of ref document: EP; Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 2019537438; Country of ref document: JP; Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE

122 EP: PCT application non-entry in European phase
    Ref document number: 17922742; Country of ref document: EP; Kind code of ref document: A1