WO2023058461A1 - Notification program, notification device, and notification method - Google Patents

Notification program, notification device, and notification method

Info

Publication number
WO2023058461A1
Authority
WO
WIPO (PCT)
Prior art keywords
notification
responder
target
state
information
Prior art date
Application number
PCT/JP2022/035243
Other languages
French (fr)
Japanese (ja)
Inventor
智 坂口
真衣香 髙橋
達也 宗
Original Assignee
ユニ・チャーム株式会社 (Unicharm Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ユニ・チャーム株式会社 (Unicharm Corporation)
Priority to CN202280067832.8A (publication CN118077005A)
Publication of WO2023058461A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00
    • G10L 25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques specially adapted for particular use for estimating an emotional state

Definitions

  • the present invention relates to a notification program, a notification device, and a notification method.
  • With the conventional technology described above, the child-rearing person cannot accumulate successful experiences, such as the child stopping crying, by deducing the cause of the child's crying, thinking of a coping method, and taking action. For this reason, it cannot be said that the conventional technology enhances the self-affirmation of the responder regarding the response to the response target.
  • This application was made in view of the above, and aims to increase the self-affirmation of the responder regarding the response to the response target.
  • a notification program causes a computer to execute: an acquisition procedure for acquiring voice information indicating a voice uttered by a response target handled by a responder; an estimation procedure for estimating, based on the voice information acquired by the acquisition procedure, the state of the response target and the certainty of that state; and a notification procedure for providing a notification of the state of the response target estimated by the estimation procedure as a desire or emotion of the response target, together with the certainty and a recommended action of the responder for the state.
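The three procedures in this claim can be sketched as plain functions. Everything below (the function names, the fixed estimates, the action strings) is a hypothetical illustration of the acquisition/estimation/notification structure, not the patented implementation.

```python
def acquire_voice(sensor_samples):
    # Acquisition procedure: return the voice information (raw samples here).
    return list(sensor_samples)

def estimate_states(voice_info):
    # Estimation procedure: return (state, certainty) pairs. A real system
    # would run a trained model on the voice information; fixed values here.
    return [("breastfeeding", 0.6), ("sleepy", 0.4)]

def build_notification(estimates, recommended_actions):
    # Notification procedure: pair each estimated state with its certainty
    # and a recommended action, ordered by descending certainty.
    ordered = sorted(estimates, key=lambda e: e[1], reverse=True)
    return [
        {"state": s, "certainty": c,
         "action": recommended_actions.get(s, "comfort")}
        for s, c in ordered
    ]

voice = acquire_voice([0.1, -0.2, 0.3])
notification = build_notification(
    estimate_states(voice),
    {"breastfeeding": "offer a feed", "sleepy": "put down for a nap"},
)
```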
  • FIG. 1 is a diagram (1) illustrating an example of notification processing according to the embodiment.
  • FIG. 2 is a diagram (1) showing an example of a screen of the terminal device according to the embodiment.
  • FIG. 3 is a diagram (2) showing an example of a screen of the terminal device according to the embodiment.
  • FIG. 4 is a diagram (3) showing an example of a screen of the terminal device according to the embodiment.
  • FIG. 5 is a diagram (4) showing an example of a screen of the terminal device according to the embodiment.
  • FIG. 6 is a diagram (2) illustrating an example of notification processing according to the embodiment.
  • FIG. 7 is a diagram illustrating a configuration example of an information processing apparatus according to the embodiment.
  • FIG. 8 is a diagram illustrating an example of a correspondence target information database according to the embodiment.
  • FIG. 9 is a diagram illustrating an example of a responder information database according to the embodiment.
  • FIG. 10 is a diagram illustrating a configuration example of a detection device according to the embodiment.
  • FIG. 11 is a flowchart illustrating an example of the flow of notification processing executed by the information processing apparatus according to the embodiment.
  • FIG. 12 is a diagram illustrating an example of a hardware configuration.
  • a notification program characterized in that it is executed by a computer.
  • According to such a notification program, for example, on the basis of a cry or other sound uttered by a response target such as an infant, the state causing the target to cry can be identified, the certainty of each identified state can be estimated, and the result can be notified to the responder. As a result, the responder can think about how to deal with each estimated state of the response target and act accordingly, building up successful experiences such as the infant stopping crying. It is thus possible to increase the responder's self-affirmation regarding the response to the response target. In addition, for example, since a notification indicating the degree of certainty of desires such as eating, excretion, or sleep can be provided, the responder can respond appropriately to the response target.
  • the notification program provides a notification indicating that the response target is not in any of the states when the response target is not estimated to be in any of the states.
  • According to such a notification program, for example, when the response target is crying without a desire or emotion as the main cause, as occurs during a mental leap, the responder is notified that the crying is caused by the mental leap. Therefore, it is possible to reduce the effort the responder spends investigating the cause.
  • the notification program estimates the certainty based on the history of the state of the response target. Further, when the certainty cannot be estimated based on the voice information, the certainty may be estimated based on the history of the state of the response target input by the responder.
  • According to such a notification program, the state of the response target can be estimated even when estimation from the voice information fails. This prevents the responder from being left without an estimation result and lets the responder know what action to take, so that the responder can respond positively to the response target.
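The history-based fallback described above can be illustrated as a relative-frequency calculation over previously recorded states. The state names and the function are invented for this sketch; the patent does not specify how the history is turned into a certainty.

```python
from collections import Counter

def certainty_from_history(history):
    # Relative frequency of each previously recorded state serves as a
    # stand-in certainty when voice-based estimation is unavailable.
    counts = Counter(history)
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

probs = certainty_from_history(
    ["breastfeeding", "sleepy", "breastfeeding", "diaper"]
)
```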
  • the notification program provides the responder with a message according to the responder's response to the state of the response target based on the voice uttered by the response target.
  • the notification program provides a message according to the time from when the notification is provided until the voice uttered by the response target satisfies a predetermined condition.
  • the notification program provides the responder with a message according to the response of the responder to the state of the response target based on the voice uttered by the responder.
  • the notification program provides the responder with a message according to the use of the content regarding the status of the response target.
  • According to such a notification program, when the responder browses content on the flow of coping when the response target cries, the responder can be evaluated and provided with a positive message. Therefore, the self-affirmation of the responder can be enhanced.
  • the notification program provides the responder with content that poses a question regarding the correspondence relationship between the voice of the response target and the state of the response target.
  • According to such a notification program, the responder can deepen his or her understanding of the correspondence between the accumulated cries of the response target and the states of the response target, thereby relieving the responder's anxiety when the response target cries.
  • According to such a notification program, when the responder answers correctly regarding the accumulated cries of the response target and the corresponding states, the responder can be evaluated and provided with a positive message, which can increase the responder's self-affirmation.
  • the notification program provides a user having a predetermined relationship with the responder with a message according to the responder's response to the state of the response target.
  • According to such a notification program, for example, a message evaluating the responder who is the mother can be provided to the user who is the father, so that the responder can obtain social persuasion.
  • the notification program provides the responder with content indicating the history of the status of the response target based on the estimation result.
  • According to such a notification program, the estimation results can be provided to the responder as a childcare record, improving convenience.
  • the notification program provides a predetermined message to the responder if the state of the response target is not resolved for a predetermined time or longer after the notification is provided.
  • According to such a notification program, a message can be provided to the effect that the estimation of the response target's state was incorrect, thereby reducing the responder's dissatisfaction.
  • the notification program provides the responder with a message posted by another responder different from the responder.
  • the notification program provides the message with an amount of information according to the timing at which it is provided to the responder.
  • According to such a notification program, for example, a long message can be provided when the responder has spare time, so a message with an amount of information appropriate to the responder's situation can be provided.
  • the notification program provides the message with an amount of information according to the state of the response target.
  • According to such a notification program, for example, a short message can be provided while the response target is crying, and a longer message can be provided once the response target has calmed down.
  • the notification program provides the message posted by the responder to another responder different from the responder.
  • According to such a notification program, for example, the responder can provide other responders with stories of his or her own experiences in childcare, so that the responder's experiences can be passed on.
  • the notification program estimates the certainty using a model that has learned the features of voices uttered by the response target, and causes the model to further learn the information regarding the state of the response target after the notification is provided together with the voice features indicated by the voice information.
  • According to such a notification program, the model can be patterned and personalized by learning the characteristics of the voices uttered by the response target, and using the model can improve the accuracy of estimating the state of the response target.
  • the notification program receives information about the state of the response target from the responder, and causes the model to learn the received information and the characteristics of the voice indicated by the voice information.
  • According to such a notification program, model learning can be performed using information about the actual state of the response target, so the accuracy of estimation can be improved.
  • FIG. 1 is a diagram (1) illustrating an example of notification processing according to an embodiment.
  • the information processing apparatus 10 according to the present embodiment implements the notification processing and the like according to the embodiment.
  • the notification system 1 includes an information processing device 10, a terminal device 100, and a detection device 200. They are communicably connected to each other by wire or wirelessly via a network N (see, for example, FIG. 6).
  • Network N is, for example, a WAN (Wide Area Network) such as the Internet.
  • the notification system 1 shown in FIG. 1 may include a plurality of information processing devices 10, a plurality of terminal devices 100, and a plurality of detection devices 200.
  • the information processing device 10 shown in FIG. 1 is an information processing device that performs notification processing, and is realized by, for example, a server device, a cloud system, or the like.
  • When the infant B1 to be dealt with is crying, the information processing apparatus 10 notifies the user U1, who is raising the infant B1 and is therefore the responder, of various types of information regarding responses such as holding, feeding, diaper changing, and comforting the infant B1.
  • a terminal device 100 shown in FIG. 1 is an information processing device used by a user who is a responder.
  • the terminal device 100 is implemented by, for example, a smartphone, tablet terminal, notebook PC (Personal Computer), desktop PC, mobile phone, PDA (Personal Digital Assistant), and the like.
  • the terminal device 100 displays information delivered by the information processing device 10 or the like using a web browser or an application.
  • the terminal device 100 is a smart phone used by the user U1.
  • The detection device 200 shown in FIG. 1 is a detection device that detects voice information indicating the state of the infant to be addressed, and is realized by various IoT (Internet of Things) devices that collect information, such as devices including various sensors or smart speakers.
  • the detection device 200 is implemented by a sound sensor such as a microphone installed near the infant and capable of detecting at least the sound uttered by the infant.
  • the terminal device 100 may be identified with the user U1. That is, the user U1 can be read as the terminal device 100 below.
  • the detection device 200 detects crying of the infant B1 (step S1).
  • the detection device 200 uses a sound sensor to collect sounds emitted by the infant B1, and determines whether or not crying of the infant B1 has been detected.
  • the detection device 200 transmits audio information indicating the cry of the infant B1 to the terminal device 100 registered as a transmission destination (step S2). For example, the detection device 200 transmits, to the terminal device 100, voice information indicating the voice determined that the infant B1 is crying.
  • the information processing device 10 acquires voice information transmitted from the terminal device 100 (step S3). Subsequently, the information processing device 10 estimates the degree of certainty of each state of the infant B1 based on the acquired voice information (step S4). For example, based on the voice information, the information processing device 10 estimates the certainty of each desire of the infant B1 (for example, "hold", "breastfeeding", "change diaper", and "sleepy") and the certainty of each emotion of the infant B1 (for example, "lonely", "anger", "sad", and "tired"). As a specific example, the information processing device 10 estimates the certainty of each state of the infant B1 using model #1, which has been trained to output the certainty of states such as desires and emotions of an infant when voice information is input.
  • model learning may be performed using various conventional techniques related to machine learning (for example, techniques related to supervised machine learning such as SVM (Support Vector Machine)).
  • deep learning technology may be used for model learning.
  • various deep learning techniques such as RNN (Recurrent Neural Network) and CNN (Convolutional Neural Network) may be used for model learning.
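As a rough illustration of a model that outputs a certainty per state from voice features, the toy classifier below computes per-state centroids from invented feature vectors and converts distances into probability-like certainties with a softmax. The state labels, feature values, and function names are all hypothetical; a real implementation would use a trained SVM or neural network as described above.

```python
import math

# Invented "voice feature" vectors per state (e.g. pitch- and energy-like
# values); purely illustrative training data.
TRAIN = {
    "hungry": [[0.9, 0.1], [0.8, 0.2]],
    "sleepy": [[0.1, 0.9], [0.2, 0.8]],
}

def centroids(train):
    # Mean feature vector per state.
    out = {}
    for state, rows in train.items():
        n = len(rows)
        out[state] = [sum(col) / n for col in zip(*rows)]
    return out

def certainties(features, cents):
    # Softmax over negative Euclidean distance: the closer a state's
    # centroid, the higher its certainty; values sum to 1.
    scores = {s: -math.dist(features, c) for s, c in cents.items()}
    m = max(scores.values())
    exps = {s: math.exp(v - m) for s, v in scores.items()}
    z = sum(exps.values())
    return {s: e / z for s, e in exps.items()}

probs = certainties([0.85, 0.15], centroids(TRAIN))
```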
  • the information processing device 10 provides a notification indicating the degree of certainty of each state of the infant B1 (step S5).
  • the terminal device 100 displays the notification provided from the information processing device 10 on the screen (step S6).
  • user U1 responds to infant B1 based on the notification displayed on the screen of terminal device 100 (step S7).
  • the user U1 performs actions such as hugging, breastfeeding, and changing diapers.
  • User U1's responses include not only direct contact and communication with the infant B1, such as hugging, breastfeeding, and changing diapers, but also responses that involve taking no direct action, such as leaving the infant B1 alone.
  • In such a case, the terminal device 100 may notify the user U1 of a response that involves taking no direct action, such as leaving the infant B1 alone.
  • the information processing device 10 receives information about the condition of the infant B1 from the terminal device 100 (step S8). For example, the information processing device 10 receives information indicating whether or not the infant B1 has stopped crying (in other words, whether or not the estimation result matches the actual state of the infant B1) from the terminal device 100.
  • FIG. 2 is a diagram (1) showing an example of a screen of the terminal device according to the embodiment.
  • Assume that the information processing device 10 estimates the certainty of the infant B1's desire "breastfeeding" at 60% and the certainty of the desire "sleepy" at 40%, and provides the terminal device 100 with a notification of the estimation results.
  • the terminal device 100 displays a screen C11 for displaying the estimation result of the state of the infant B1 by the information processing device 10.
  • For example, the terminal device 100 displays a screen C11 including an area AR11 for displaying the estimation results, an area AR12 for displaying the specific content of the response, a button BT11 for notifying the information processing apparatus 10 that the infant B1 has stopped crying, and a button BT12 for notifying the information processing apparatus 10 that the infant B1 will not stop crying.
  • For example, the terminal device 100 displays in the area AR11 the text "I'm hungry", which voices the feeling of the infant B1 having the desire "breastfeeding", in association with the 60% certainty of that desire.
  • Similarly, the terminal device 100 displays in the area AR11 the text "I'm sleepy", which voices the feeling of the infant B1 having the desire "sleepy", in association with the 40% certainty of that desire. That is, the terminal device 100 displays the states estimated by the information processing device 10 in descending order of certainty.
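The descending-certainty display can be sketched as a small formatting step. The mapping and strings mirror the example above, but the code itself is a hypothetical illustration, not the terminal device's actual rendering logic.

```python
# Invented mapping from estimated state to the interpretive text shown
# on screen in area AR11.
INTERPRET = {"breastfeeding": "I'm hungry", "sleepy": "I'm sleepy"}

def render_estimates(estimates):
    # Sort (state, certainty) pairs by descending certainty and attach
    # the text that voices the infant's feeling.
    lines = []
    for state, certainty in sorted(estimates, key=lambda e: e[1],
                                   reverse=True):
        lines.append(f"{INTERPRET.get(state, state)} ({certainty:.0%})")
    return lines

lines = render_estimates([("sleepy", 0.4), ("breastfeeding", 0.6)])
```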
  • By displaying text that speaks for the infant, as in the area AR11 of FIG. 2, the responder can estimate the desire of the response target and follow the flow of taking an appropriate response, so that the responder's sense of self-affirmation can be improved.
  • When the button BT11 is pressed, the terminal device 100 transmits information indicating that the infant B1 has stopped crying to the information processing device 10, and transitions the screen C11 to a screen C12 displaying a predetermined message.
  • For example, the terminal device 100 displays a screen C12 including an area AR13 that displays a message for changing the user U1's perception of the infant B1's crying, a button BT13 for instructing display of content that deepens the user U1's understanding of the crying of the infant B1, and a button BT14 for closing the notification provided from the information processing apparatus 10.
  • As a specific example, the terminal device 100 displays in the area AR13 a message to the effect that the infant B1 cries not because of an emotion such as "sad" but as communication with the user U1.
  • When the button BT11 is pressed, the terminal device 100 may accept input of information indicating the action taken by the user U1 and the condition of the infant B1 as estimated by the user U1, and transmit it to the information processing device 10. Further, when the button BT12 is pressed, the terminal device 100 may accept input of information indicating the condition of the infant B1 as estimated by the user U1 and transmit it to the information processing apparatus 10 together with information indicating that the infant B1 does not stop crying. That is, the terminal device 100 may transmit to the information processing device 10 information indicating whether the content of the notification provided by the information processing device 10 matches the actual state of the infant B1, as well as information indicating the actual state of the infant B1.
  • the terminal device 100 may transmit information detected by the detection device 200 to the information processing device 10.
  • For example, when the detection device 200 no longer detects the crying of the infant B1, the terminal device 100 transmits information indicating that the infant B1 has stopped crying to the information processing device 10. Conversely, when the detection device 200 continues to detect the crying of the infant B1 for a certain period of time (for example, 30 minutes or 1 hour) after the screen C11 is displayed, the terminal device 100 transmits information indicating that the infant B1 will not stop crying to the information processing device 10.
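The timeout rule above can be sketched as follows, using the 30-minute example from the text. The function, data shapes, and outcome labels are invented for illustration.

```python
def crying_outcome(shown_at, detection_times, threshold_s=30 * 60):
    # If any crying detection occurs at or after the threshold past the
    # moment screen C11 was shown, report that the infant will not stop.
    still_crying = any(t >= shown_at + threshold_s for t in detection_times)
    return "will_not_stop" if still_crying else "stopped"

out_quiet = crying_outcome(0, [100, 500])        # last cry ~8 minutes in
out_persistent = crying_outcome(0, [100, 1900])  # still crying past 30 min
```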
  • FIG. 3 is a diagram (2) showing an example of a screen of the terminal device according to the embodiment.
  • When the information processing device 10 estimates that the infant B1 may be crying without a desire or emotion as the main cause, as occurs during a mental leap, it provides a notification indicating that possibility to the terminal device 100, and the terminal device 100 displays a screen C21 indicating the notification.
  • For example, the terminal device 100 displays a screen C21 including an area AR21 that displays the voice waveform of the infant B1 together with a message indicating that the infant B1 is crying without a desire or emotion as the main cause, as occurs during a mental leap, and a button BT21 for instructing display of content with detailed information on the state (for example, information on mental leaps).
  • the terminal device 100 causes the screen C11 shown in FIG. 2 to transition to the screen C22.
  • the terminal device 100 displays a screen C22 including an area AR22 displaying an apology for misestimating the condition of the infant B1 and a message to calm the user U1.
  • the information processing device 10 causes the model #1 to learn the information received from the terminal device 100 and the voice features indicated by the voice information (step S9). For example, the information processing apparatus 10 trains model #1 using, as learning data, the information indicating whether the infant B1 stopped crying and the voice information corresponding to the estimation result. In addition, the information processing apparatus 10 trains model #1 using, as learning data, the response the user U1 took to stop the infant B1 from crying and the voice information indicating the crying. As a result, the information processing apparatus 10 patterns and personalizes model #1 as a model for estimating the state of the infant B1.
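One way the personalization step could work, under the assumption that the model keeps per-state feature centroids (an invented representation, not the patent's model #1), is an incremental mean update: when the responder reports the actual state, the new voice-feature vector is folded into that state's running centroid.

```python
def update_centroid(centroid, count, features):
    # Incremental mean: fold one new feature vector into the running
    # centroid for the reported state. Returns (new_centroid, new_count).
    new_count = count + 1
    new_centroid = [c + (f - c) / new_count
                    for c, f in zip(centroid, features)]
    return new_centroid, new_count

# After two observations averaging to [0.8, 0.2], fold in a third.
cent, n = update_centroid([0.8, 0.2], 2, [0.9, 0.3])
```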
  • the information processing device 10 provides messages and content to the user U1 based on the accumulated voice information of the infant B1 (step S10).
  • the information processing device 10 provides content for the user U1 to deepen his understanding of the infant B1, a message for caring for the user U1, and the like.
  • FIG. 4 is a diagram (3) showing an example of a screen of the terminal device according to the embodiment;
  • the information processing device 10 provides content (quiz content) that asks a quiz about the correspondence relationship between the cry of the infant B1 and the state of the infant B1.
  • the terminal device 100 displays a screen C31 showing a quiz about the correspondence relationship between the cry of the infant B1 and the state of the infant B1.
  • the terminal device 100 displays a screen C31 that includes an area AR31 that displays the waveform of infant B1's crying, an area AR32 that displays a quiz, and buttons BT31 to BT33 that respectively correspond to options for answering the quiz.
  • the crying voice of the infant B1 corresponding to the waveform displayed in the area AR31 is output.
  • When the user U1 selects any one of the buttons BT31 to BT33 on the screen C31, the terminal device 100 causes the screen C31 to transition to a screen C32 displaying the user U1's answer result. For example, the terminal device 100 displays a screen C32 including an area AR33 that displays the answer result in place of the area AR32, and a button BT34 for instructing display of the crying waveform of the infant B1 corresponding to each option.
  • When the user U1 selects the button BT34 on the screen C32, the terminal device 100 causes the screen C32 to transition to a screen C33 displaying the waveforms of the infant B1's cries. For example, the terminal device 100 displays a screen C33 including an area AR33 that displays the crying waveform of the infant B1 corresponding to the option "I'm hungry" and an area AR34 that displays the crying waveform of the infant B1 corresponding to the option "Cuddle me".
  • The quiz content described above may be provided not only to the user U1 but also to the terminal device 101 used by a user U2 who has a predetermined relationship with the user U1 (for example, if the user U1 is the mother of the infant B1, the user U2 who is the father of the infant B1). In such a case, the terminal device 101 first displays the screen C31, as the terminal device 100 does. Then, when the user U2 selects any one of the buttons BT31 to BT33, the terminal device 101 causes the screen C31 to transition to a screen C34 displaying the user U2's answer result.
  • For example, the terminal device 101 displays a screen C34 including an area AR34 that displays a comparison between the user U2's answer result and the user U1's answer result, and a button BT35 for instructing display of the crying waveform of the infant B1 corresponding to each option.
  • When the user U2 selects the button BT35, the terminal device 101 causes the screen C34 to transition to the screen C33.
  • FIG. 5 is a diagram (4) showing an example of a screen of the terminal device according to the embodiment.
  • The information processing apparatus 10 provides messages (such as stories of experiences) posted by other users who are raising infants.
  • the information processing device 10 provides the terminal device 100 with a message according to the situation of the user U1 and causes it to be displayed on the screen. For example, when the number of times the information processing apparatus 10 has performed the process of estimating the state of the infant B1 (in other words, the number of times the infant B1 has cried) is equal to or greater than a predetermined threshold, when it takes more than a predetermined time for the infant B1 to stop crying after the notification is provided, or when the message is provided within a predetermined time after the process of estimating the condition of the infant B1, in other words, when it is estimated that the user U1 is tired, the terminal device 100 displays a screen C41. As a specific example, the terminal device 100 displays a screen C41 including an area AR41 that displays, from among messages posted by other users, short messages with a predetermined number of characters or fewer.
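The tiredness heuristic above maps naturally to a small selector: any of the three signals (many cries, a long time to stop, a message sent soon after estimation) yields the short message. The thresholds, names, and message strings are invented placeholders.

```python
def pick_message(short_msg, long_msg, cry_count, secs_to_stop,
                 secs_since_estimation, cry_threshold=5,
                 stop_threshold=30 * 60, recency_threshold=10 * 60):
    # Any signal of a tired responder selects the short message.
    tired = (cry_count >= cry_threshold
             or secs_to_stop >= stop_threshold
             or secs_since_estimation <= recency_threshold)
    return short_msg if tired else long_msg

msg_tired = pick_message("Hang in there!", "A longer experience story...",
                         cry_count=6, secs_to_stop=300,
                         secs_since_estimation=7200)
msg_rested = pick_message("Hang in there!", "A longer experience story...",
                          cry_count=1, secs_to_stop=300,
                          secs_since_estimation=7200)
```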
  • When the user U1 is not estimated to be tired, the terminal device 100 displays a screen C42 including an area AR42 that displays the preamble of a message with more than the predetermined number of characters from among messages posted by other users, and a button BT41 for instructing display of the rest of the message.
  • the information processing device 10 may provide the terminal device 100 with content for accepting a message posted by the user U1 for other users, and cause the content to be displayed on the screen. For example, the terminal device 100 displays a screen C43 including an area AR43 for inputting a short message with a predetermined number of characters or fewer and an area AR44 for inputting a message with more than the predetermined number of characters. The terminal device 100 then transmits the messages input to the areas AR43 and AR44 to the information processing apparatus 10.
  • As described above, the information processing apparatus 10 according to the embodiment estimates a certainty for each state, such as the desires and emotions of the infant, based on the infant's cries, and notifies the user who is raising the infant.
  • The information processing apparatus 10 according to the embodiment thereby provides the user with a sense of security that the infant can be made to stop crying by addressing the estimated states in descending order of certainty, and lets the responder accumulate successful experiences, such as predicting the infant's desires and emotions, acting on that prediction, and seeing the infant stop crying as a result. That is, the information processing apparatus 10 according to the embodiment can enhance the responder's self-affirmation regarding the response to the response target.
  • The connection relationship among the information processing device 10, the terminal device 100, and the detection device 200 in the notification system 1 is not limited to that described above.
  • the detection device 200 may be connected to the information processing device 10 and the terminal device 100 may not be connected to the information processing device 10 .
  • In that case, the detection device 200 transmits voice information detected by itself to the information processing device 10, receives a notification indicating the estimation result transmitted from the information processing device 10, and provides the notification to the terminal device 100.
  • the terminal device 100 and the detection device 200 may be individually connected to the information processing device 10.
  • the detection device 200 transmits voice information detected by itself to the information processing device 10.
  • the terminal device 100 receives a notification indicating an estimation result transmitted from the information processing device 10.
  • the notification system according to the embodiment may be composed of the terminal device 100 and the detection device 200, as shown in FIG. 6. In that case, the detection device 200 executes the processes corresponding to steps S3 to S5 and steps S8 to S10 in FIG. 1.
  • the notification system 2 configured by the terminal device 100 and the detection device 200 will be described below using FIG. 6.
  • FIG. 6 is a diagram (2) illustrating an example of notification processing according to the embodiment.
  • Since the terminal device 100 and the detection device 200 shown in FIG. 6 have the same configurations as in the notification system 1 shown in FIG. 1, description thereof is omitted.
  • the detection device 200 detects (acquires) the cry of the infant B1 (step Sa1), and estimates the accuracy of each state of the infant B1 based on the audio information indicating the cry of the infant B1 (step Sa2). Subsequently, the detection device 200 provides the terminal device 100 with a notification indicating the accuracy of each state of the infant B1 (step Sa3). Note that the processing of steps Sa1 to Sa3 is the same as that of steps S1, S4, and S5 in FIG. 1, respectively, so description thereof is omitted.
  • the terminal device 100 displays the notification provided by the detection device 200 on the screen (step Sa4).
  • user U1 responds to infant B1 based on the notification displayed on the screen of terminal device 100 (step Sa5).
  • the detection device 200 receives information about the state of the infant B1 from the terminal device 100 (step Sa6), and causes the model used for estimating each state of the infant to learn the received information and the features of the voice indicated by the voice information (step Sa7). Subsequently, the detection device 200 provides messages and contents to the user U1 based on the accumulated voice information of the infant B1 (step Sa8). Note that the processing of steps Sa6 to Sa8 is the same as that of steps S8 to S10 in FIG. 1, respectively, so description thereof is omitted.
  • FIG. 7 is a diagram illustrating a configuration example of an information processing apparatus according to the embodiment.
  • the information processing device 10 has a communication section 20 , a storage section 30 and a control section 40 .
  • the communication unit 20 is realized by, for example, a NIC (Network Interface Card) or the like.
  • the communication unit 20 is connected to the network N by wire or wirelessly, and transmits and receives information to and from the terminal device 100, the detection device 200, and the like.
  • the storage unit 30 is implemented by, for example, a semiconductor memory device such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. As shown in FIG. 7 , the storage unit 30 has a correspondence target information database 31 , a correspondence person information database 32 and a model database 33 .
  • the correspondence target information database 31 stores various kinds of information related to a correspondence target (infants) to be dealt with (raised) by a responder (parent or the like).
  • FIG. 8 is a diagram illustrating an example of a correspondence target information database according to the embodiment;
  • the correspondence target information database 31 has items such as "correspondence target ID", "responder ID", "detection device ID", "month age", "voice information", "detection date and time", "estimation result", and "correct/incorrect information".
  • The "correspondence target ID" indicates identification information for identifying the response target.
  • the “responder ID” indicates identification information for identifying the responder who handles the response target.
  • the “detection device ID” indicates identification information for identifying the detection device 200 that detects the audio to be handled.
  • The "month age" indicates the age in months of the response target.
  • “Voice information” indicates the voice to be handled.
  • the “detection date and time” indicates the date and time when the audio information was detected.
  • The "estimation result" indicates information about the state of the response target estimated based on the voice information.
  • the “correctness information” indicates information transmitted from the responder regarding the estimation result (for example, information indicating whether or not the response target has stopped crying).
  • For example, FIG. 8 shows that the response target identified by the response target ID "BID#1" is handled by the responder identified by the responder ID "UID#1", the voice of the response target is detected by the detection device identified by the detection device ID "DID#1", the age of the response target is "3 months", the voice information detected for the response target at the detection date and time "detection date and time #1" is "voice information #1", the estimation result based on that voice information is "estimation result #1", and the correct/incorrect information is "correct/incorrect information #1".
  • the responder information database 32 stores various types of information about the responders who respond to the response targets.
  • an example of information stored in the responder information database 32 will be described with reference to FIG.
  • FIG. 9 is a diagram illustrating an example of a responder information database according to the embodiment;
  • the responder information database 32 has items such as "responder ID", "related party information", and "posted message".
  • Responder ID indicates identification information for identifying the responder who will respond to the response target.
  • The "related party information" indicates information about a user who has a predetermined relationship with the responder.
  • The "posted message" indicates a message posted by the responder.
  • For example, FIG. 9 shows that the information of the user who has a predetermined relationship with the responder identified by the responder ID "UID#1" is "related party information #1", and that the message posted by that responder is "message #1".
  • the model database 33 stores models that have learned the features of the speech uttered by the corresponding object.
  • the model database 33 stores, for each corresponding target, a model that has been trained to output the degree of certainty of the state of the corresponding target, such as desires and emotions, when voice information is input.
  • the control unit 40 is a controller, and is implemented by, for example, a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) executing various programs stored in a storage device inside the information processing apparatus 10, using the RAM as a work area. Alternatively, the control unit 40 may be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the control unit 40 according to the embodiment includes an acquisition unit 41, an estimation unit 42, a notification unit 43, a provision unit 44, a reception unit 45, and a learning unit 46, as shown in FIG. 7, and implements or executes the functions and operations of the information processing described below.
  • The acquisition unit 41 acquires voice information indicating a voice uttered by the response target handled by the responder. For example, in the example of FIG. 1, the acquisition unit 41 acquires voice information indicating the cry of the infant B1 and stores it in the correspondence target information database 31.
  • the estimation unit 42 estimates the state of the correspondence target and the probability of being in the state. For example, in the example of FIG. 1, the estimation unit 42 estimates the certainty for each state of the infant B1 based on the voice information.
  • the estimation unit 42 may estimate the certainty of a predetermined desire desired by the corresponding target for each desire. For example, in the example of FIG. 1, the estimation unit 42 estimates the accuracy for each desire of the infant B1 such as "hold”, “breastfeed”, “change diaper”, and “sleepy”.
  • the estimation unit 42 may estimate the degree of certainty of the emotion felt by the corresponding target for each emotion. For example, in the example of FIG. 1, the estimation unit 42 estimates the accuracy for each emotion of the infant B1, such as "lonely”, “anger”, “sad”, and "tired”.
  • the estimation unit 42 may estimate the accuracy based on the history of the state of the response target. For example, in the example of FIG. 1, when the estimation unit 42 acquires the voice information of the infant B1, it refers to the childcare record of the infant B1. When the childcare record shows that breastfeeding was performed as a response within a predetermined time, and the average breastfeeding interval in the record so far indicates that it is not yet time for the next breastfeeding, the estimation unit 42 judges that the infant B1's desire "breastfeed" is low and that the infant is crying because of another desire.
  • As a method of estimating, based on the cry of the infant B1, that the response target is crying without a specific desire or emotion as the main cause (for example, when the infant B1 is in a mental leap), the estimation unit 42 may, for example, check the degree of similarity between the current voice information and the history of voice information in cases where the infant B1 was not estimated to be in any state, or in cases where the crying did not stop even after the responder responded, based on the history of previous responses.
  • the estimation unit 42 may estimate the accuracy based on the history of the state of the response target input by the responder. For example, the estimation unit 42 estimates the accuracy based on the history of the state of the response target input by the responder in a predetermined application for recording childcare (a childcare diary application, a breastfeeding record application, or the like). As a specific example, when the estimation unit 42 determines, based on the average breastfeeding interval in the childcare record input to the childcare diary application, that the timing at which the voice information was acquired is the timing for breastfeeding, it estimates the accuracy of the response target's desire "breastfeed" higher than the other desires.
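  • The description does not give a concrete algorithm for this interval-based estimation; the following Python sketch (the function name, the neutral confidence, and the 30-minute window are illustrative assumptions) shows one way to raise the confidence of the "breastfeed" desire when the current time is close to the next expected feeding, computed from the average interval in a childcare record:

```python
from datetime import datetime, timedelta

def feeding_confidence(feed_times, now, window=timedelta(minutes=30)):
    """Hypothetical sketch: confidence of the 'breastfeed' desire based on
    the average interval between past feedings in a childcare record."""
    if len(feed_times) < 2:
        return 0.5  # not enough history: neutral confidence
    intervals = [b - a for a, b in zip(feed_times, feed_times[1:])]
    avg = sum(intervals, timedelta()) / len(intervals)
    expected_next = feed_times[-1] + avg
    # Near the expected feeding time -> estimate this desire higher than others.
    return 0.9 if abs(now - expected_next) <= window else 0.2
```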
  • the estimation unit 42 may estimate the accuracy of each state of the response target based on the history of the voice information and the history of the state of the response target input by the responder in a predetermined application for recording childcare. For example, when voice information is newly acquired, the estimation unit 42 estimates the probability of the state of the response target based on the degree of similarity between that voice information and the history of the voice information of the response target acquired within a predetermined period from the time when the state of the response target was input to the childcare diary application.
  • the estimation unit 42 may estimate the accuracy using a model that has learned the features of the voice uttered by the response target. For example, in the example of FIG. 1, the estimation unit 42 estimates the probability of each state of the infant B1 using the model #1, which has been trained to output the probability of the infant's desires, emotions, and the like when voice information is input.
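  • The internal form of model #1 is left open by the description. As one illustrative assumption, a trained model could map voice features to a normalized confidence per state with a linear layer followed by a softmax (state names, weights, and features below are placeholders):

```python
import math

STATES = ["hold", "breastfeed", "change diaper", "sleepy"]

def estimate_state_confidences(features, weights, biases):
    """Per-state confidence from voice features: linear layer + softmax.
    `weights` holds one weight vector per state, `biases` one bias per state."""
    logits = [sum(w * x for w, x in zip(ws, features)) + b
              for ws, b in zip(weights, biases)]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return {s: e / total for s, e in zip(STATES, exps)}
```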
  • the notification unit 43 notifies the state of the correspondence target estimated by the estimation unit 42 as the desire or emotion of the correspondence target, together with the degree of certainty and the recommended behavior of the respondent for the state.
  • the notification unit 43 provides the terminal device 100 with a notification indicating the accuracy of each state of the infant B1 shown in FIG.
  • the notification unit 43 may provide the correspondent with a notification indicating the degree of certainty for each desire.
  • the notification unit 43 provides a notification indicating an estimation result in which the accuracy of the infant B1's desire "breastfeed" is estimated to be 60% and the accuracy of the desire "sleepy" is estimated to be 40%.
  • the notification unit 43 may provide the responder with a notification indicating the accuracy of each emotion.
  • the notification unit 43 provides notification indicating the degree of certainty for each emotion to be handled, such as “lonely”, “anger”, “sad”, and “tired”.
  • the notification provided by the notification unit 43 may include the accuracy of the desire desired by the corresponding target and the accuracy of the emotion felt by the corresponding target.
  • For example, the notification unit 43 may provide a notification indicating that the accuracy of the response target's desire "hold" is 60% and that of the desire "sleepy" is 40%, together with a notification indicating that the accuracy of the infant's emotion "lonely" is 60% and that of the emotion "sad" is 40%.
  • the notification unit 43 may provide a notification including information (recommended action) for resolving the state of the response target based on the degree of accuracy.
  • the notification unit 43 provides a notification including information (for example, information displayed in the area AR12 of FIG. 2) indicating specific contents of the response to the condition of the infant B1.
  • the notification unit 43 may provide a notification indicating that the correspondence target is in neither state.
  • the notification unit 43 provides a notification (for example, the screen C21 in FIG. 3) including a message indicating that the infant B1 is in a twilight crying state and detailed information about twilight crying.
  • the providing unit 44 provides the responder with a message according to the responder's response to the state of the response target based on the voice uttered by the response target.
  • the provision unit 44 provides the responder with a message corresponding to the response by the responder based on the voice of the response target detected by the detection device 200 after the notification is provided by the notification unit 43 .
  • the providing unit 44 provides a message that enhances self-affirmation of the respondent.
  • the providing unit 44 may provide a message according to the time from when the notification is provided until the voice uttered by the response target satisfies a predetermined condition. For example, the providing unit 44 provides a message that enhances the responder's self-affirmation when the time from the notification provided by the notification unit 43 until the response target stops crying is equal to or less than a predetermined threshold. Conversely, when that time exceeds the predetermined threshold, the providing unit 44 presumes that the response was difficult and provides a sympathetic message to the responder.
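  • A minimal sketch of this threshold rule follows; the message texts and the five-minute threshold are illustrative assumptions, not taken from the description:

```python
def select_message(seconds_until_quiet, threshold=300):
    """Choose a message from the time between the notification and the
    moment the crying stopped; None means it has not stopped yet."""
    if seconds_until_quiet is None:
        return "Hang in there. Some cries have no single cause."
    if seconds_until_quiet <= threshold:
        return "Well done! Your response worked quickly."
    return "That was a hard one. You stayed with it, and that matters."
```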
  • the providing unit 44 may provide the responder with a message corresponding to the responder's response to the state of the response target, based on the voice uttered by the responder. For example, after the notification is provided by the notification unit 43, when the detection device 200 detects a predetermined voice of the responder (that is, a call to the response target), the providing unit 44 provides a message that affirms the responder's response.
  • the providing unit 44 may provide the responder with a message according to the use of content regarding the state of the response target. For example, in the example of FIG. 1, when the user U1 browses content for deepening the understanding of the infant B1's crying (the content displayed when the button BT13 in FIG. 2 is pressed), browses content including detailed information about twilight crying (the content displayed when the button BT21 in FIG. 3 is pressed), or uses quiz content, the providing unit 44 provides a message that affirms the user U1's use of each content. Note that the providing unit 44 may also provide the responder with a positive message regarding the action when the responder browses content related to the flow of coping when the response target cries.
  • the providing unit 44 may provide the responder with content that poses a question regarding the correspondence relationship between the voice of the response target and the state of the response target. For example, in the example of FIG. 1, the providing unit 44 provides the user U1 with the quiz content shown in FIG.
  • the providing unit 44 may provide a user having a predetermined relationship with the responder with a message according to the responder's response to the condition to be addressed. For example, the providing unit 44 provides the user, who is the father, with a message that evaluates the response of the respondent, who is the mother, to the target.
  • the provision unit 44 may provide the responder with content indicating the history of the state of the response target based on the estimation result by the estimation unit 42 .
  • the providing unit 44 refers to the response target information database 31 and provides the responder with content (that is, a childcare record) indicating the state of the response target estimated by the estimation unit 42, the estimated date and time, and the response performed by the responder with respect to that state (for example, information received by the reception unit 45, which will be described later).
  • the providing unit 44 may provide a predetermined message to the responder if the state of the response target has not been resolved for a predetermined period of time after the notification was provided. For example, in the example of FIG. 1, when the button BT11 is not pressed even after a certain period of time has passed since the screen C11 was displayed, or when it is not detected that the infant B1 has stopped crying, the providing unit 44 provides the message shown on the screen C22.
  • the providing unit 44 may provide the responder with a message posted by another responder different from the responder.
  • the providing unit 44 provides the user U1 with messages shown on screens C41 and C42 in FIG.
  • the providing unit 44 may provide a message whose amount of information corresponds to the timing of provision to the responder. For example, in the example of FIG. 1, when the providing unit 44 provides a message within a predetermined time after performing the process of estimating the state of the infant B1, it provides, from among the messages posted by other users, a short message with no more than a set number of characters.
  • On the other hand, when the user U1 is using a predetermined application such as a nursing application, or when it is estimated that the infant is asleep, the providing unit 44 may provide, from among the messages posted by other users, a message having more than the predetermined number of characters.
  • the providing unit 44 may provide a message whose amount of information corresponds to the state of the response target. For example, in the example of FIG. 1, when the number of times the process of estimating the state of the infant B1 has been performed is equal to or greater than a predetermined threshold, or when it takes longer than a predetermined time for the infant B1 to stop crying after the notification is provided, the providing unit 44 provides, from among the messages posted by other users, a short message with no more than a predetermined number of characters. Conversely, when the number of times the estimation process has been performed is less than the predetermined threshold, the providing unit 44 provides a message having more than the predetermined number of characters.
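  • One way to realize this information-amount rule is sketched below; the character limit and the "busy" heuristic are assumptions for illustration:

```python
def pick_posted_message(messages, estimation_count, seconds_to_quiet,
                        count_threshold=5, time_threshold=600, short_len=40):
    """Pick a message posted by other users whose length suits the situation:
    a short one when the responder seems busy, a longer one otherwise."""
    busy = (estimation_count >= count_threshold
            or seconds_to_quiet > time_threshold)
    if busy:
        pool = [m for m in messages if len(m) <= short_len]
    else:
        pool = [m for m in messages if len(m) > short_len]
    return pool[0] if pool else None
```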
  • the providing unit 44 may provide the message posted by the responder to another responder other than the responder.
  • the providing unit 44 provides the message input on the screen C43 shown in FIG. 5 to another user other than the user U1.
  • The reception unit 45 receives information about the state of the response target from the responder. For example, in the example of FIG. 1, the reception unit 45 receives information indicating whether or not the infant B1 has stopped crying from the terminal device 100 and stores the information in the correspondence target information database 31. As a specific example, the reception unit 45 receives, in addition to information indicating whether the infant B1 has stopped crying, information indicating the response taken by the user U1, the state of the infant B1 estimated by the user U1, and the like.
  • the learning unit 46 causes the model to learn information about the state of the response target after the notification is provided and the features of the voice indicated by the voice information.
  • For example, in the example of FIG. 1, the learning unit 46 refers to the correspondence target information database 31 and causes the model #1 to learn information about the state of the infant B1 after the notification is provided by the notification unit 43 (for example, whether or not the infant B1 has stopped crying) and the features of the voice indicated by the corresponding voice information.
  • the learning unit 46 may cause the model to learn the information received by the reception unit 45 and the features of the voice indicated by the voice information. For example, in the example of FIG. 1, the learning unit 46 trains the model #1 using, as learning data, information indicating whether the infant B1 has stopped crying and the voice information corresponding to the estimation result. Further, the learning unit 46 trains the model #1 using, as learning data, the response taken by the user U1 to stop the infant B1 from crying and the voice information indicating the cry.
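  • The description leaves the training procedure open; the sketch below shows one assumed way the reception unit's feedback could be turned into labeled examples for retraining (the tuple layout and labels are hypothetical):

```python
def feedback_to_examples(voice_features, stopped_crying, action_taken,
                         estimated_state):
    """Build (features, state, label) training examples from responder feedback.
    If the action stopped the crying, that action's state is treated as a
    positive label; otherwise the estimated state is kept as a negative case."""
    examples = []
    if stopped_crying and action_taken is not None:
        examples.append((voice_features, action_taken, 1))    # positive example
    else:
        examples.append((voice_features, estimated_state, 0))  # negative example
    return examples
```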
  • FIG. 10 is a diagram illustrating a configuration example of a detection device according to the embodiment.
  • the detection device 200 has a communication section 210 , a storage section 220 and a control section 230 .
  • the communication unit 210 is implemented by, for example, a NIC.
  • the communication unit 210 is connected to the network N by wire or wirelessly, and transmits and receives information to and from the terminal device 100 and the like.
  • the storage unit 220 is realized by, for example, a semiconductor memory device such as a RAM or flash memory, or a storage device such as a hard disk or an optical disk. As shown in FIG. 10 , the storage unit 220 has a correspondence target information database 221 , a correspondence person information database 222 and a model database 223 .
  • the correspondence object information database 221, the correspondence person information database 222, and the model database 223 have the same configurations as the correspondence object information database 31, the correspondence person information database 32, and the model database 33, respectively, and thus the description thereof is omitted.
  • the control unit 230 is a controller, and is implemented, for example, by a CPU, MPU, or the like executing various programs stored in a storage device inside the detection device 200, using the RAM as a work area. Alternatively, the control unit 230 may be implemented by an integrated circuit such as an ASIC or FPGA.
  • the control unit 230 according to the embodiment includes an acquisition unit 231, an estimation unit 232, a notification unit 233, a provision unit 234, a reception unit 235, and a learning unit 236, as shown in FIG. 10, and implements or executes the functions and operations of the information processing in the notification system 2 shown in FIG. 6.
  • the acquisition unit 231, the estimation unit 232, the notification unit 233, the provision unit 234, the reception unit 235, and the learning unit 236 have the same configurations as the acquisition unit 41, the estimation unit 42, the notification unit 43, the provision unit 44, the reception unit 45, and the learning unit 46, respectively, so description thereof is omitted.
  • FIG. 11 is a flowchart illustrating an example of the flow of notification processing executed by the information processing apparatus according to the embodiment.
  • the information processing device 10 determines whether or not the voice information indicating the voice uttered by the corresponding target has been acquired (step S101). If the voice information has not been acquired (step S101; No), the information processing apparatus 10 waits until the voice information is acquired.
  • If the voice information has been acquired (step S101; Yes), the information processing apparatus 10 estimates, based on the voice information, the probability that the response target is in a predetermined state, for each state (step S102). Subsequently, the information processing device 10 provides the responder with a notification indicating the probability (step S103). Subsequently, the information processing device 10 provides a message corresponding to the response of the responder (step S104). Subsequently, the information processing apparatus 10 receives information about the state of the response target from the responder (step S105). Subsequently, the information processing apparatus 10 causes the model to learn the information received from the responder and the features of the voice indicated by the voice information (step S106), and ends the process.
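  • The flow of steps S101 to S106 can be sketched as a single cycle; the callables below are placeholders for the processing units described above, not an actual API:

```python
def notification_cycle(get_voice, estimate, notify, provide_message,
                       receive_feedback, learn):
    """One pass of the flow in FIG. 11 (S101-S106), with each step
    delegated to a placeholder callable."""
    voice = get_voice()               # S101: wait for voice information
    confidences = estimate(voice)     # S102: probability per state
    notify(confidences)               # S103: notify the responder
    provide_message(confidences)      # S104: message matching the response
    feedback = receive_feedback()     # S105: state info from the responder
    learn(voice, feedback)            # S106: update the model
    return confidences, feedback
```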
  • In the above embodiment, the providing unit 44 provides a message according to the responder's response to the response target; however, the function of the providing unit 44 is not limited to such an example.
  • the providing unit 44 may provide a message indicating the degree to which emotion is included in the state of the correspondence target estimated based on the voice information (in other words, the degree of development of the correspondence target).
  • For example, based on the voice information, the providing unit 44 estimates the degree to which emotions such as "sadness" and "anger" are included in addition to desires such as "hold", "breastfeed", "change diaper", and "sleepy", and provides a message indicating that degree.
  • As a result, even when there is no person experienced in child-rearing around the responder and the responder cannot confirm the degree of development of the response target, the information processing apparatus 10 can convey that the response target is developing smoothly, and can therefore enhance the responder's self-affirmation.
  • the detection device 200 may output a sound indicating a notification, message, comment, or the like provided by the information processing device 10 .
  • the detection device 200 may display notifications, messages, comments, etc. provided by the information processing device 10 on the display unit.
  • In the above embodiment, the providing unit 44 provides a message to the responder according to the estimation result of the estimation unit 42 and the use of the provided content; however, the function of the providing unit 44 is not limited to such an example. For example, when the frequency with which the responder uses the application implementing the notification process according to the above embodiment is equal to or less than a predetermined threshold, in other words, when it is estimated that the responder can stop the response target from crying even without using the application, the providing unit 44 may provide a message praising the responder. Further, when the detection device 200 detects that the response target has stopped crying before the notification unit 43 provides the notification, the providing unit 44 may provide a message praising the responder.
  • In the above embodiment, the reception unit 45 receives information about the state of the response target from the responder on the screen displaying the notification provided by the notification unit 43; however, the function of the reception unit 45 is not limited to such an example.
  • For example, the reception unit 45 may receive information about the state of the response target via content provided to the responder at the timing when a predetermined time has passed since it was detected that the response target stopped crying, or at the timing when the user U1 is using a predetermined application such as a nursing application (in other words, at the timing when it is estimated that the responder is free).
  • Further, the reception unit 45 may receive, through content provided to the responder every predetermined period (for example, content provided every night or every morning), an input of the degree to which the notification provided by the notification unit 43 within that period matched the actual state of the response target.
  • each component of each device illustrated is functionally conceptual and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution/integration of each device is not limited to the illustrated one. Further, all or part of each component may be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions. Further, each of the processes described above may be executed in combination as long as there is no contradiction.
  • FIG. 12 is a diagram illustrating an example of the hardware configuration.
  • the computer 1000 is connected to an output device 1010 and an input device 1020 , and a bus 1090 connects an arithmetic device 1030 , a cache 1040 , a memory 1050 , an output IF (Interface) 1060 , an input IF 1070 and a network IF 1080 .
  • the arithmetic device 1030 operates based on programs stored in the cache 1040 and memory 1050, programs read from the input device 1020, and the like, and executes various processes.
  • the cache 1040 is a cache such as a RAM that temporarily stores data used by the arithmetic device 1030 for various arithmetic operations.
  • the memory 1050 is a storage device in which data used for various calculations by the arithmetic device 1030 and various databases are registered, and is realized by a ROM (Read Only Memory), an HDD (Hard Disk Drive), a flash memory, or the like.
  • the output IF 1060 is an interface for transmitting information to be output to the output device 1010, which outputs various kinds of information, such as a monitor or a printer, and may be realized by a connector of a standard such as HDMI (registered trademark) (High Definition Multimedia Interface).
  • the input IF 1070 is an interface for receiving information from various input devices 1020 such as a mouse, keyboard, scanner, etc., and is realized by, for example, USB.
  • the input device 1020 includes optical recording media such as CD (Compact Disc), DVD (Digital Versatile Disc), PD (Phase change rewritable disk), magneto-optical recording media such as MO (Magneto-Optical disk), tape media, It may be implemented by a device that reads information from a magnetic recording medium, a semiconductor memory, or the like. Also, the input device 1020 may be realized by an external storage medium such as a USB memory.
  • the network IF 1080 has a function of receiving data from other devices via the network N and sending it to the arithmetic device 1030, and a function of transmitting data generated by the arithmetic device 1030 to other devices via the network N.
  • the arithmetic device 1030 controls the output device 1010 and the input device 1020 via the output IF 1060 and the input IF 1070.
  • arithmetic device 1030 loads a program from input device 1020 or memory 1050 onto cache 1040 and executes the loaded program.
  • the arithmetic device 1030 of the computer 1000 implements the functions of the control unit 230 by executing the program loaded on the cache 1040 .


Abstract

A notification program according to the present application causes a computer to execute: an acquisition step of acquiring voice information indicating a voice uttered by a response target attended to by a responder; an estimation step of estimating, on the basis of the voice information acquired in the acquisition step, the state of the response target and the certainty of being in that state; and a notification step of notifying the state of the response target estimated in the estimation step, as a desire or emotion of the response target, together with the certainty and a recommended action for the responder with respect to that state.

Description

Notification program, notification device, and notification method
The present invention relates to a notification program, a notification device, and a notification method.
Conventionally, techniques are known that estimate, based on the crying of a response target (such as an infant), the cause of the crying. As one example of such techniques, a technique is known that estimates the cause of an infant's crying based on a signal output from a sensor unit attached to the infant after the infant has started crying. There is also a known technique in which, when it is determined from an infant's cry that the infant is crying because its diaper is wet, a wet-diaper determination screen indicating that the infant is crying because the diaper is wet is displayed on a display unit.
JP 2016-126367 A
JP 2019-095931 A
However, the conventional techniques described above cannot always be said to enhance the responder's sense of self-affirmation regarding responding to the response target.
For example, the conventional techniques described above merely assume that the cause of an infant's crying is identified and that the caregiver takes a measure corresponding to that cause. Consequently, they do not let the caregiver accumulate successful experiences such as the infant stopping crying because the caregiver himself or herself inferred the cause of the crying, thought of a remedy, and acted on it. For this reason, the conventional techniques described above cannot be said to enhance the responder's sense of self-affirmation regarding responding to the response target.
The present application has been made in view of the above, and aims to enhance the responder's sense of self-affirmation regarding responding to the response target.
A notification program according to the present application causes a computer to execute: an acquisition procedure of acquiring voice information indicating a voice uttered by a response target attended to by a responder; an estimation procedure of estimating, based on the voice information acquired by the acquisition procedure, the state of the response target and the certainty of being in that state; and a notification procedure of notifying the state of the response target estimated by the estimation procedure, as a desire or emotion of the response target, together with the certainty and a recommended action for the responder with respect to that state.
According to one aspect of the embodiment, it is possible to enhance the responder's sense of self-affirmation regarding responding to the response target.
FIG. 1 is a diagram (1) illustrating an example of notification processing according to the embodiment.
FIG. 2 is a diagram (1) illustrating an example of a screen of the terminal device according to the embodiment.
FIG. 3 is a diagram (2) illustrating an example of a screen of the terminal device according to the embodiment.
FIG. 4 is a diagram (3) illustrating an example of a screen of the terminal device according to the embodiment.
FIG. 5 is a diagram (4) illustrating an example of a screen of the terminal device according to the embodiment.
FIG. 6 is a diagram (2) illustrating an example of notification processing according to the embodiment.
FIG. 7 is a diagram illustrating a configuration example of the information processing device according to the embodiment.
FIG. 8 is a diagram illustrating an example of a response-target information database according to the embodiment.
FIG. 9 is a diagram illustrating an example of a responder information database according to the embodiment.
FIG. 10 is a diagram illustrating a configuration example of the detection device according to the embodiment.
FIG. 11 is a flowchart illustrating an example of the flow of notification processing executed by the information processing device according to the embodiment.
FIG. 12 is a diagram illustrating an example of a hardware configuration.
At least the following matters will become clear from the description of this specification and the attached drawings.
A notification program that causes a computer to execute: an acquisition procedure of acquiring voice information indicating a voice uttered by a response target attended to by a responder; an estimation procedure of estimating, based on the voice information acquired by the acquisition procedure, the state of the response target and the certainty of being in that state; and a notification procedure of notifying the state of the response target estimated by the estimation procedure, as a desire or emotion of the response target, together with the certainty and a recommended action for the responder with respect to that state.
According to such a notification program, it is possible, for example, to identify from a sound such as a cry uttered by a response target such as an infant which state is causing the target to cry, to estimate the certainty of each identified state, and to notify the responder of the result. This lets the responder accumulate successful experiences, such as the infant stopping crying because the responder considered a remedy for each estimated state and acted on it, thereby enhancing the responder's sense of self-affirmation regarding responding to the response target. In addition, for example, a notification indicating the certainty of the target's desires such as feeding, excretion, or sleep can be provided, so the responder can respond to the target appropriately. Moreover, for example, the responder can be notified of the target's emotional development, which can relieve the responder's anxiety about developmental delay. Furthermore, for example, the responder can be presented with the action to take in response to the target, which can reduce anxiety about the target's crying.
Further, when the response target is not estimated to be in any of the states, the notification program provides a notification indicating that the response target is in none of the states.
According to such a notification program, in a situation where the target is crying without a desire or emotion as the main cause, such as during a mental leap, the responder can be notified that the crying is attributable to a mental leap, which reduces the responder's effort to investigate the cause.
Further, when the certainty cannot be estimated based on the voice information, the notification program estimates the certainty based on the history of the states of the response target. When the certainty cannot be estimated based on the voice information, the certainty may also be estimated based on a history of the states of the response target entered by the responder.
According to such a notification program, the state of the response target can be estimated even when, for example, the target's voice cannot be read because of background noise. This prevents the responder from being left without an estimation result while the target is crying and lets the responder know what action to take, so that the responder can face the response target positively.
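As a sketch of the fallback described above, certainty could be taken from the audio model when usable and otherwise from the relative frequency of past states. This is an illustrative reading, not the application's implementation; the function name, data shapes, and threshold are assumptions.

```python
from collections import Counter

def estimate_certainties(audio_features, state_history, model, min_conf=0.2):
    """Estimate the certainty of each state; fall back to the state history
    (possibly entered manually by the responder) when the audio-based
    estimate is unusable, e.g. because of background noise."""
    probs = model(audio_features) if audio_features is not None else None
    if probs and max(probs.values()) >= min_conf:
        return probs
    # Fallback: the frequency of each past state serves as its certainty.
    counts = Counter(state_history)
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}
```

With a history of three "feeding" entries and one "sleepy" entry and no readable audio, this would report feeding at 75% and sleepy at 25%.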
Further, the notification program provides the responder with a message corresponding to the responder's handling of the state of the response target, based on the voice uttered by the response target.
According to such a notification program, for example, when the response target stops crying, a positive message about the responder's action can be provided, which enhances the responder's sense of self-affirmation.
Further, the notification program provides a message corresponding to the time from when the notification is provided until the voice uttered by the response target satisfies a predetermined condition.
According to such a notification program, for example, when the response target stops crying quickly, a positive message about the responder's action can be provided, which enhances the responder's sense of self-affirmation.
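The time-dependent message selection above can be sketched as a simple threshold check. The threshold value and the message wording here are assumptions for illustration only.

```python
def message_for_quiet_time(seconds_until_quiet, quick_threshold=120):
    """Pick a message based on how soon after the notification the
    target's voice satisfied the condition (e.g. crying stopped)."""
    if seconds_until_quiet <= quick_threshold:
        return "That worked quickly - nice response!"
    return "Some cries take a while to settle; keep going."
```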
Further, the notification program provides the responder with a message corresponding to the responder's handling of the state of the response target, based on the voice uttered by the responder.
According to such a notification program, for example, the responder's speaking to the response target can be evaluated and a positive message can be provided, which enhances the responder's sense of self-affirmation.
Further, the notification program provides the responder with a message corresponding to the use of content related to the state of the response target.
According to such a notification program, for example, when the responder views content such as a flow of how to cope when the target cries, the responder can be credited and given a positive message, which enhances the responder's sense of self-affirmation.
Further, the notification program provides the responder with content that poses questions about the correspondence between the voice of the response target and the state of the response target.
According to such a notification program, the responder can, for example, deepen his or her understanding of the correspondence between accumulated cries and the target's states, which relieves the responder's anxiety when the target cries. In addition, when the responder correctly answers the state corresponding to an accumulated cry, the responder can be credited and given a positive message, which enhances the responder's sense of self-affirmation.
Further, the notification program provides a user having a predetermined relationship with the responder with a message corresponding to the responder's handling of the state of the response target.
According to such a notification program, for example, a message crediting a responder who is the mother can be provided to a user who is the father, so the responder can obtain social persuasion. In particular, providing the father with a message that encourages and praises the mother enables the father to view the mother favorably, which gives the responder social persuasion.
Further, the notification program provides the responder with content indicating the history of the states of the response target, based on the estimation results.
According to such a notification program, the estimation results can be provided to the responder as, for example, a childcare record, which improves convenience.
Further, when the state of the response target is not resolved for a predetermined time or longer after the notification is provided, the notification program provides the responder with a predetermined message.
According to such a notification program, for example, when the target does not stop crying for a certain time, a message to the effect that the estimation of the target's state was wrong can be provided, which reduces the responder's dissatisfaction.
Further, the notification program provides the responder with messages posted by other responders different from the responder.
According to such a notification program, for example, accounts of the experiences of other caregivers can be provided, so the responder can obtain vicarious experience.
Further, the notification program provides the message with an amount of information corresponding to the timing at which it is provided to the responder.
According to such a notification program, for example, a long message can be provided when the responder has time to spare, so a message with an amount of information appropriate to the responder's situation can be provided.
Further, the notification program provides the message with an amount of information corresponding to the state of the response target.
According to such a notification program, for example, a short message can be provided when the responder is busy attending to the response target, so a message with an amount of information appropriate to the responder's situation can be provided.
Further, the notification program provides messages posted by the responder to other responders different from the responder.
According to such a notification program, the responder can, for example, share accounts of his or her own childcare experiences with other responders, so vicarious experience can be passed on.
Further, the notification program estimates the certainty using a model trained on the features of voices uttered by the response target, and causes the model to learn information about the state of the response target after the notification is provided together with the voice features indicated by the voice information.
According to such a notification program, for example, the model can be patterned and personalized by learning the features of the voices uttered by the response target, and the state of the response target can be estimated using that model, which improves the accuracy of estimation.
Further, after the notification is provided, the notification program receives information about the state of the response target from the responder, and causes the model to learn the received information together with the voice features indicated by the voice information.
According to such a notification program, the model can be trained using, for example, information about the actual state of the response target, which improves the accuracy of estimation.
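The feedback loop described above — pairing the responder's report of the actual state with the voice features and feeding the pairs back into training — might be organized as follows. The class and method names are assumptions, and the learner itself is left abstract.

```python
class FeedbackStore:
    """Accumulates (voice_features, actual_state) pairs reported by the
    responder after each notification, for periodic retraining."""

    def __init__(self):
        self.samples = []

    def report(self, voice_features, actual_state):
        # Called when the responder confirms or corrects the estimate.
        self.samples.append((voice_features, actual_state))

    def training_batch(self, min_size=10):
        # Hand the accumulated pairs to whatever learner is in use
        # (SVM, RNN, CNN, ...) once enough feedback has built up.
        return list(self.samples) if len(self.samples) >= min_size else []
```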
Hereinafter, an example of a mode for carrying out the notification program, notification device, and notification method (hereinafter referred to as the "embodiment") will be described in detail with reference to the drawings. Note that the notification program, notification device, and notification method are not limited by this embodiment. In the following embodiments, the same parts are denoted by the same reference numerals, and redundant description is omitted.
[Embodiment]
[1. An example of notification processing]
The notification processing realized by the notification program and the like of this embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram (1) illustrating an example of notification processing according to the embodiment. In FIG. 1, it is assumed that the information processing device 10 according to this embodiment implements the notification processing and the like according to the embodiment.
As shown in FIG. 1, the notification system 1 according to the embodiment includes an information processing device 10, a terminal device 100, and a detection device 200. These devices are communicably connected to one another, by wire or wirelessly, via a network N (see, for example, FIG. 6). The network N is, for example, a WAN (Wide Area Network) such as the Internet. Note that the notification system 1 shown in FIG. 1 may include a plurality of information processing devices 10, a plurality of terminal devices 100, and a plurality of detection devices 200.
The information processing device 10 shown in FIG. 1 is an information processing device that performs notification processing, and is realized by, for example, a server device or a cloud system. In the example of FIG. 1, the information processing device 10 notifies various kinds of information to a user U1 who is raising an infant B1 — in other words, a responder who, when the infant B1 to be attended to is crying, responds to the infant B1 by holding, feeding, changing a diaper, soothing, and the like.
The terminal device 100 shown in FIG. 1 is an information processing device used by a user who is a responder. The terminal device 100 is realized by, for example, a smartphone, a tablet terminal, a notebook PC (Personal Computer), a desktop PC, a mobile phone, or a PDA (Personal Digital Assistant). The terminal device 100 also displays information delivered by the information processing device 10 or the like using a web browser or an application. In the example shown in FIG. 1, the terminal device 100 is a smartphone used by the user U1.
The detection device 200 shown in FIG. 1 is a detection device that detects voice information indicating the state of the infant to be attended to, and is realized by various sensing devices, or by various IoT (Internet of Things) devices that collect information from sensing devices including smart speakers. For example, the detection device 200 is realized by a sound sensor, such as a microphone, installed near the infant and capable of detecting at least sounds uttered by the infant.
Hereinafter, the notification processing realized by the information processing device 10 will be described with reference to FIG. 1. In the following description, the terminal device 100 may be identified with the user U1; that is, "the user U1" below can also be read as "the terminal device 100".
First, the detection device 200 detects crying of the infant B1 (step S1). For example, the detection device 200 collects sounds uttered by the infant B1 using a sound sensor and determines whether crying of the infant B1 has been detected.
Subsequently, the detection device 200 transmits voice information indicating the cry of the infant B1 to the terminal device 100 registered as a transmission destination (step S2). For example, the detection device 200 transmits to the terminal device 100 voice information indicating the sound from which it determined that the infant B1 is crying.
Subsequently, the information processing device 10 acquires the voice information transmitted from the terminal device 100 (step S3). The information processing device 10 then estimates the certainty of each state of the infant B1 based on the acquired voice information (step S4). For example, based on the voice information, the information processing device 10 estimates the certainty of each of the infant B1's desires (for example, "hold me", "feeding", "diaper change", and "sleepy") and the certainty of each of the emotions the infant B1 is feeling (for example, "lonely", "angry", "sad", and "tired"). As a specific example, the information processing device 10 estimates the certainty of each state of the infant B1 using model #1, which has been trained to output the certainty of states such as an infant's desires and emotions when voice information is input.
Here, any known technique can be applied to the training of the model, and a learning method appropriately selected according to the acquired information may be used. For example, the model may be trained using various conventional machine-learning techniques (for example, supervised machine-learning techniques such as an SVM (Support Vector Machine)). Deep-learning techniques may also be used; for example, various deep-learning techniques such as an RNN (Recurrent Neural Network) or a CNN (Convolutional Neural Network) may be used to train the model.
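To make the supervised setup concrete, the sketch below trains per-state mean feature vectors and converts distances into per-state certainties. A trivial nearest-centroid classifier deliberately stands in for the SVM, RNN, or CNN learners named above; the function names and feature representation are assumptions, not taken from the application.

```python
import math

def train_centroids(samples):
    """samples: list of (feature_vector, state_label) pairs.
    Returns one mean feature vector per state - a simple stand-in
    for the SVM / RNN / CNN learners mentioned above."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict_certainties(centroids, vec):
    """Normalize inverse distances to each centroid into per-state certainties."""
    inv = {label: 1.0 / (1e-9 + math.dist(vec, c)) for label, c in centroids.items()}
    total = sum(inv.values())
    return {label: w / total for label, w in inv.items()}
```

A feature vector near the "feeding" training examples would then receive a higher certainty for "feeding" than for the other states, and the certainties sum to 1.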
Subsequently, the information processing device 10 provides a notification indicating the certainty of each state of the infant B1 (step S5). The terminal device 100 then displays the notification provided by the information processing device 10 on its screen (step S6). The user U1 then responds to the infant B1 based on the notification displayed on the screen of the terminal device 100 (step S7). For example, the user U1 holds, feeds, or changes the diaper of the infant. Note that the user U1's response includes not only actions involving direct contact or communication with the infant B1, such as holding, feeding, or changing a diaper, but also responses that involve no direct action at all, such as leaving the infant alone. In particular, when it is estimated that the response target is not asking for direct action, as with crying during sleep, the terminal device 100 notifies the user of a response that involves no direct action, such as leaving the infant alone.
Subsequently, the information processing device 10 receives information about the state of the infant B1 from the terminal device 100 (step S8). For example, the information processing device 10 receives from the terminal device 100 information indicating whether the infant B1 has stopped crying (in other words, whether the estimation result matches the actual state of the infant B1).
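Steps S1 through S8 above can be summarized in a short control-flow sketch. All names are assumptions; the four callables stand in for the detection device, model #1, the terminal device's display, and the responder's feedback respectively.

```python
def notification_flow(detect_cry, estimate_states, notify, collect_feedback):
    """End-to-end flow of FIG. 1: detect (S1-S2), estimate (S3-S4),
    notify (S5-S6), then collect the responder's feedback (S7-S8)."""
    audio = detect_cry()                   # S1-S2: sound sensor + transmission
    if audio is None:
        return None                        # no crying detected
    certainties = estimate_states(audio)   # S3-S4: model #1
    notify(certainties)                    # S5-S6: push to the terminal device
    return collect_feedback()              # S8: e.g. "did the infant stop crying?"
```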
Here, an example of the notification that the terminal device 100 displays on its screen in the above-described embodiment will be described with reference to FIG. 2. FIG. 2 is a diagram (1) illustrating an example of a screen of the terminal device according to the embodiment. In the example shown in FIG. 2, it is assumed that the information processing device 10 has estimated the certainty of the infant B1's desire "feeding" to be 60% and the certainty of the infant B1's desire "sleepy" to be 40%, and has provided the terminal device 100 with a notification indicating the estimation results.
First, the terminal device 100 displays a screen C11 for presenting the result of the estimation of the state of the infant B1 by the information processing device 10. For example, the terminal device 100 displays the screen C11 including an area AR11 for displaying the estimation results, an area AR12 for displaying the specific content of the response, a button BT11 for notifying the information processing device 10 that the infant B1 has stopped crying, and a button BT12 for notifying the information processing device 10 that the infant B1 will not stop crying. As a specific example, the terminal device 100 associates the text "I'm hungry", which interprets the feeling of the infant B1 having the desire "feeding", with the 60% certainty of the desire "feeding", and displays them in the area AR11. Next, the terminal device 100 associates the text "I'm sleepy", which interprets the feeling of the infant B1 having the desire "sleepy", with the 40% certainty of the desire "sleepy", and displays them in the area AR11. That is, the terminal device 100 displays the states estimated by the information processing device 10 in descending order of certainty.
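The ordering behavior described for the area AR11 can be sketched as below. The state labels and the state-to-text mapping are illustrative assumptions modeled on the "I'm hungry" / "I'm sleepy" examples, not an exhaustive list from the application.

```python
INTERPRETATION = {  # hypothetical mapping from state to "infant's voice" text
    "feeding": "I'm hungry",
    "sleepy": "I'm sleepy",
}

def notification_rows(certainties):
    """Return (interpretation text, percentage) rows sorted in descending
    order of certainty, matching the ordering on screen C11."""
    ordered = sorted(certainties.items(), key=lambda kv: kv[1], reverse=True)
    return [(INTERPRETATION.get(s, s), round(p * 100)) for s, p in ordered]
```

For the FIG. 2 example, `notification_rows({"sleepy": 0.4, "feeding": 0.6})` yields the "I'm hungry" row at 60% first, then the "I'm sleepy" row at 40%.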
In this way, by not only reporting the estimated emotions and predicted states of the response target but also presenting the infant as a metaphor, as in the area AR11 of FIG. 2, and displaying content that reads like a message from a third party and is easy for the responder to act on concretely, as in the area AR12, it is possible to support the user U1, who is the responder, in inferring the desires of the infant B1 and to raise the probability that the user U1's response to the infant B1 succeeds. This allows the responder to trace the process of inferring the target's desires and taking appropriate action, which improves the responder's sense of self-affirmation.
 Here, in the example of FIG. 2, assume that the button BT11 has been pressed. In this case, the terminal device 100 transmits information indicating that the infant B1 has stopped crying to the information processing device 10 and transitions the screen C11 to a screen C12 that displays a predetermined message. For example, the terminal device 100 displays a screen C12 that includes an area AR13 for displaying a message intended to change how the user U1 perceives the infant B1's crying, a button BT13 for requesting content that deepens the user U1's understanding of the infant B1's crying, and a button BT14 for closing the notification provided by the information processing device 10. As a specific example, the terminal device 100 displays in the area AR13 a message to the effect that the infant B1 is crying not because of an emotion such as sadness, but in order to communicate with the user U1.
 Note that when the button BT11 is pressed on the screen C11, the terminal device 100 may accept input of information indicating the response taken by the user U1, the state of the infant B1 as estimated by the user U1, and the like, and transmit it to the information processing device 10. Also, when the button BT12 is pressed, the terminal device 100 may accept input of information indicating the state of the infant B1 as estimated by the user U1 and transmit it to the information processing device 10 together with information indicating that the infant B1 will not stop crying. That is, the terminal device 100 may transmit to the information processing device 10 information indicating whether the content of the notification provided by the information processing device 10 matches the actual state of the infant B1, as well as information indicating that actual state.
 The terminal device 100 may also transmit information detected by the detection device 200 to the information processing device 10. For example, when the detection device 200 detects that the infant B1 has stopped crying, the terminal device 100 transmits information indicating that the infant B1 has stopped crying to the information processing device 10. Also, if the detection device 200 continues to detect the infant B1's crying for a certain period of time (for example, 30 minutes or one hour) after the screen C11 is displayed, the terminal device 100 transmits information indicating that the infant B1 will not stop crying to the information processing device 10.
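The "crying continued for a certain period" check above can be sketched as below. The gap tolerance and the function name are illustrative assumptions; the embodiment only specifies a continuous-detection window such as 30 minutes or one hour.

```python
from datetime import datetime, timedelta

# Decide whether to report "will not stop crying": true when crying has
# been detected continuously since the screen was shown, for at least
# the configured window, with no silent gap longer than max_gap.
def crying_persists(shown_at: datetime, detections: list,
                    window: timedelta = timedelta(minutes=30),
                    max_gap: timedelta = timedelta(minutes=2)) -> bool:
    since = sorted(t for t in detections if t >= shown_at)
    if not since or since[-1] - shown_at < window:
        return False
    # A gap longer than max_gap means the infant stopped crying in between.
    return all(b - a <= max_gap for a, b in zip(since, since[1:]))
```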
 The result of the information processing device 10's estimation of the state of the infant B1 and the screens displayed by the terminal device 100 are not limited to the example of FIG. 2. Here, another example of a screen displayed by the terminal device 100 in the above-described embodiment will be described with reference to FIG. 3. FIG. 3 is a diagram (2) showing an example of a screen of the terminal device according to the embodiment.
 As an example, when the infant B1 is not estimated to be in any of the defined states, the information processing device 10 provides the terminal device 100 with a notification indicating that the infant B1 may be crying for reasons not primarily caused by a desire or an emotion, as occurs during a mental leap, and causes the terminal device 100 to display a screen C21 showing the notification. For example, the terminal device 100 displays a screen C21 that includes an area AR21 for displaying the waveform of the infant B1's voice together with a message indicating that the infant B1 may be crying for reasons not primarily caused by a desire or an emotion, as occurs during a mental leap, and a button BT21 for requesting content containing detailed information about that state (for example, information about mental leaps).
 Also, when the button BT12 on the screen C11 shown in FIG. 2 is pressed, or when the button BT11 is not pressed even after a certain period of time (for example, 30 minutes or one hour) has elapsed since the screen C11 was displayed (in other words, when the infant B1 has not stopped crying after that period), the terminal device 100 transitions the screen C11 shown in FIG. 2 to a screen C22. For example, the terminal device 100 displays a screen C22 that includes an area AR22 displaying an apology for the incorrect estimation of the infant B1's state together with a message intended to calm the user U1.
 Returning to FIG. 1, the description continues. The information processing device 10 causes the model #1 to learn the information received from the terminal device 100 and the features of the voice indicated by the voice information (step S9). For example, the information processing device 10 trains the model #1 using, as training data, information indicating whether the infant B1 stopped crying and the voice information corresponding to the estimation result. The information processing device 10 also trains the model #1 using, as training data, the response taken by the user U1 to stop the infant B1 from crying and the voice information indicating that crying. In this way, the information processing device 10 patterns and personalizes the model #1 as a model for estimating the state of the infant B1.
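The assembly of this feedback into training data for step S9 might look like the following sketch. The field names and the labeling rule are assumptions for illustration; the actual structure of model #1 and its training procedure are not specified here.

```python
from typing import Optional

# Build one training example from the responder's feedback (step S9).
# If the infant stopped crying, the estimated state is treated as the
# correct label for the recorded voice features; otherwise the example
# records that the estimate missed (label None).
def make_training_example(audio_features, estimated_state: str,
                          stopped_crying: bool,
                          action_taken: Optional[str]) -> dict:
    label = estimated_state if stopped_crying else None
    return {
        "features": audio_features,   # features of the cry (hypothetical form)
        "label": label,
        "action": action_taken,       # what user U1 actually did, if reported
        "correct": stopped_crying,
    }
```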
 Subsequently, the information processing device 10 provides a message and content to the user U1 based on the accumulated voice information of the infant B1 (step S10). For example, the information processing device 10 provides content that helps the user U1 deepen his or her understanding of the infant B1, a message that shows appreciation for the user U1's efforts, and the like.
 Here, an example of content provided by the information processing device 10 in the above-described embodiment will be described with reference to FIG. 4. FIG. 4 is a diagram (3) showing an example of a screen of the terminal device according to the embodiment. In the example of FIG. 4, the information processing device 10 provides content that poses a quiz about the correspondence between the infant B1's cries and the infant B1's states (quiz content).
 First, the terminal device 100 displays a screen C31 showing a quiz about the correspondence between the infant B1's cries and the infant B1's states. For example, the terminal device 100 displays a screen C31 that includes an area AR31 for displaying the waveform of the infant B1's cry, an area AR32 for displaying the quiz, and buttons BT31 to BT33 corresponding respectively to the answer options, and outputs the infant B1's cry corresponding to the waveform displayed in the area AR31.
 When the user U1 selects any of the buttons BT31 to BT33 on the screen C31, the terminal device 100 transitions the screen C31 to a screen C32 that displays the result of the user U1's answer. For example, the terminal device 100 displays a screen C32 that includes, in place of the area AR32, an area AR33 for displaying the result of the answer, and a button BT34 for requesting display of the cry waveforms of the infant B1 corresponding to the individual options.
 When the user U1 selects the button BT34 on the screen C32, the terminal device 100 transitions the screen C32 to a screen C33 that displays waveforms of the infant B1's cries. For example, the terminal device 100 displays a screen C33 that includes an area AR33 for displaying the waveform of the infant B1's cry corresponding to the option "I'm hungry", an area AR34 for displaying the waveform corresponding to the option "Hold me", and so on.
 Note that the above quiz content may be provided not only to the user U1 but also to a terminal device 101 used by a user U2 who has a predetermined relationship with the user U1 (for example, when the user U1 is the mother of the infant B1, a user U2 who is the father of the infant B1). In such a case, the terminal device 101 first displays the screen C31 in the same way as the terminal device 100. Then, when the user U2 selects any of the buttons BT31 to BT33, the terminal device 101 transitions the screen C31 to a screen C34 that displays the result of the user U2's answer. For example, the terminal device 101 displays a screen C34 that includes, in place of the area AR32, an area AR34 for displaying a comparison between the user U2's answer and the user U1's answer, and a button BT35 for requesting display of the cry waveforms of the infant B1 corresponding to the individual options. Here, when the button BT35 is pressed, the terminal device 101 transitions the screen C34 to the screen C33.
 Next, an example of a message provided by the information processing device 10 in the above-described embodiment will be described with reference to FIG. 5. FIG. 5 is a diagram (4) showing an example of a screen of the terminal device according to the embodiment. In the example of FIG. 5, the information processing device 10 provides messages (accounts of personal experiences and the like) posted by other users (users raising infants).
 In the example shown in FIG. 5, the information processing device 10 provides the terminal device 100 with a message according to the situation of the user U1 and causes the terminal device 100 to display it on a screen. For example, when the number of times the information processing device 10 has performed the process of estimating the state of the infant B1 (in other words, the number of times the infant B1 has cried) is equal to or greater than a predetermined threshold, when it took a predetermined time or longer for the infant B1 to stop crying after a notification was provided, or when the message is provided within a predetermined time after the process of estimating the state of the infant B1 was performed, in other words, when the user U1 is estimated to be tired, the terminal device 100 displays a screen C41. As a specific example, the terminal device 100 displays a screen C41 that includes an area AR41 for displaying, among the messages posted by other users, short messages whose character count is equal to or less than a predetermined number.
 Also, when the number of times the information processing device 10 has performed the process of estimating the state of the infant B1 is less than the predetermined threshold, when the user U1 is using a predetermined application such as a feeding app, or when the infant is estimated to be asleep, in other words, when the user U1 is estimated to have time to spare, the terminal device 100 displays a screen C42. As a specific example, the terminal device 100 displays a screen C42 that includes an area AR42 for displaying, among the messages posted by other users, a message whose character count exceeds the predetermined number, and a button BT41 for requesting display of the full text of that message.
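The choice between the short-message screen C41 and the long-message screen C42 described above can be sketched as a simple heuristic. The thresholds below are illustrative assumptions, not values fixed by the embodiment.

```python
# Choose between a short message (screen C41) and a long message
# (screen C42) based on a rough fatigue estimate for the responder.
# cry_threshold and slow_soothe_min are illustrative assumptions.
def pick_message_kind(estimations_today: int, minutes_to_soothe: float,
                      infant_asleep: bool,
                      cry_threshold: int = 5,
                      slow_soothe_min: float = 15.0) -> str:
    tired = (estimations_today >= cry_threshold
             or minutes_to_soothe >= slow_soothe_min)
    if tired and not infant_asleep:
        return "short"   # user U1 is likely tired: show a brief message
    return "long"        # user U1 likely has time: show a longer story

print(pick_message_kind(8, 20.0, False))  # short
print(pick_message_kind(2, 5.0, True))    # long
```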
 Note that the information processing device 10 may provide the terminal device 100 with content for accepting, from the user U1, messages to be posted for other users, and cause the terminal device 100 to display it on a screen. For example, the terminal device 100 displays a screen C43 that includes an area AR43 for inputting a short message whose character count is equal to or less than a predetermined number and an area AR44 for inputting a message whose character count exceeds the predetermined number. The terminal device 100 then transmits the messages input in the areas AR43 and AR44 to the information processing device 10.
 As described above, the information processing device 10 according to the embodiment estimates, based on an infant's cries, a probability for each state, such as a desire the infant has or an emotion the infant is feeling, and notifies the user raising the infant. This gives the user a sense of security that the infant can be soothed by addressing the estimated states in descending order of probability, and lets the responder accumulate successful experiences in which the user predicts the baby's desires and emotions, acts on that prediction, and sees the infant stop crying as a result. That is, the information processing device 10 according to the embodiment can enhance the responder's sense of self-affirmation regarding the response to the correspondence target.
 Note that the notification system 1 according to the embodiment is not limited to the connection relationship among the information processing device 10, the terminal device 100, and the detection device 200 described above. For example, the detection device 200 may be connected to the information processing device 10 while the terminal device 100 is not. As a specific example, the detection device 200 transmits the voice information it has detected to the information processing device 10, receives a notification indicating the estimation result transmitted from the information processing device 10, and provides it to the terminal device 100.
 The terminal device 100 and the detection device 200 may also each be connected to the information processing device 10 individually. As a specific example, the detection device 200 transmits the voice information it has detected to the information processing device 10, and the terminal device 100 receives a notification indicating the estimation result transmitted from the information processing device 10.
 Also, the notification system according to the embodiment may be composed of the terminal device 100 and the detection device 200, as shown in FIG. 6. In that case, the detection device 200 executes the processing corresponding to steps S3 to S5 and steps S8 to S10 in FIG. 1. Here, a notification system 2 composed of the terminal device 100 and the detection device 200 will be described with reference to FIG. 6. FIG. 6 is a diagram (2) showing an example of notification processing according to the embodiment. Since the terminal device 100 and the detection device 200 shown in FIG. 6 have the same configurations as in the notification system 1 shown in FIG. 1, their description is omitted.
 As shown in FIG. 6, the detection device 200 detects (acquires) the infant B1's cry (step Sa1) and estimates the probability of each state of the infant B1 based on the voice information indicating the cry (step Sa2). Subsequently, the detection device 200 provides the terminal device 100 with a notification indicating the probability of each state of the infant B1 (step Sa3). Since the processing of steps Sa1 to Sa3 is the same as that of steps S1, S4, and S5 in FIG. 1, respectively, its description is omitted.
 Subsequently, the terminal device 100 displays the notification provided by the detection device 200 on a screen (step Sa4). The user U1 then responds to the infant B1 based on the notification displayed on the screen of the terminal device 100 (step Sa5).
 Subsequently, the detection device 200 receives information about the state of the infant B1 from the terminal device 100 (step Sa6), and causes the model used for estimating each state of the infant to learn the received information and the features of the voice indicated by the voice information (step Sa7). Subsequently, the detection device 200 provides a message and content to the user U1 based on the accumulated voice information of the infant B1 (step Sa8). Since the processing of steps Sa6 to Sa8 is the same as that of steps S8 to S10 in FIG. 1, respectively, its description is omitted.
[2. Configuration of Information Processing Device]
 Next, the configuration of the information processing device 10 in the notification system 1 shown in FIG. 1 will be described with reference to FIG. 7. FIG. 7 is a diagram showing a configuration example of the information processing device according to the embodiment. As shown in FIG. 7, the information processing device 10 has a communication unit 20, a storage unit 30, and a control unit 40.
(Regarding the communication unit 20)
 The communication unit 20 is implemented by, for example, a NIC (Network Interface Card) or the like. The communication unit 20 is connected to the network N by wire or wirelessly and transmits and receives information to and from the terminal device 100, the detection device 200, and the like.
(Regarding the storage unit 30)
 The storage unit 30 is implemented by, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. As shown in FIG. 7, the storage unit 30 has a correspondence target information database 31, a responder information database 32, and a model database 33.
(Regarding the correspondence target information database 31)
 The correspondence target information database 31 stores various kinds of information about correspondence targets (infants) who are attended to (raised) by responders (parents and the like). Here, an example of the information stored in the correspondence target information database 31 will be described with reference to FIG. 8. FIG. 8 is a diagram showing an example of the correspondence target information database according to the embodiment. In the example of FIG. 8, the correspondence target information database 31 has items such as "correspondence target ID", "responder ID", "detection device ID", "age in months", "voice information", "detection date and time", "estimation result", and "correctness information".
 "Correspondence target ID" indicates identification information for identifying a correspondence target. "Responder ID" indicates identification information for identifying the responder who attends to the correspondence target. "Detection device ID" indicates identification information for identifying the detection device 200 that detects the correspondence target's voice. "Age in months" indicates the correspondence target's age in months. "Voice information" indicates the correspondence target's voice. "Detection date and time" indicates the date and time when the voice information was detected. "Estimation result" indicates information about the correspondence target's state estimated based on the voice information. "Correctness information" indicates information transmitted from the responder regarding the estimation result (for example, information indicating whether the correspondence target stopped crying).
 That is, FIG. 8 shows an example in which the correspondence target identified by the correspondence target ID "BID#1" is attended to by the responder identified by the responder ID "UID#1", the correspondence target's voice is detected by the detection device 200 identified by the detection device ID "DID#1", the correspondence target's age is "3 months", the voice information detected for the correspondence target at the detection date and time "detection date and time #1" is "voice information #1", the estimation result based on that voice information is "estimation result #1", and the correctness information is "correctness information #1".
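A row of the correspondence target information database 31 shown in FIG. 8 can be modeled as a simple record. The field types are assumptions for illustration; for example, the voice information might in practice be a file reference or a feature vector rather than raw bytes.

```python
from dataclasses import dataclass
from datetime import datetime

# One row of the correspondence target information database 31 (FIG. 8).
@dataclass
class TargetRecord:
    target_id: str        # e.g. "BID#1"
    responder_id: str     # e.g. "UID#1"
    device_id: str        # e.g. "DID#1"
    age_months: int       # e.g. 3
    voice_info: bytes     # detected cry (hypothetical representation)
    detected_at: datetime
    estimation: str       # estimation result
    correctness: bool     # e.g. whether the target stopped crying

row = TargetRecord("BID#1", "UID#1", "DID#1", 3, b"",
                   datetime(2022, 9, 1), "feeding", True)
print(row.target_id, row.age_months)
```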
(Regarding the responder information database 32)
 The responder information database 32 stores various kinds of information about responders who attend to correspondence targets. Here, an example of the information stored in the responder information database 32 will be described with reference to FIG. 9. FIG. 9 is a diagram showing an example of the responder information database according to the embodiment. In the example of FIG. 9, the responder information database 32 has items such as "responder ID", "related party information", and "posted message".
 "Responder ID" indicates identification information for identifying the responder who attends to a correspondence target. "Related party information" indicates information about users who have a predetermined relationship with the responder. "Posted message" indicates a message posted by the responder.
 That is, FIG. 9 shows an example in which the information about a user who has a predetermined relationship with the responder identified by the responder ID "UID#1" is "related party information #1", and the message posted by that responder is "message #1".
(Regarding the model database 33)
 The model database 33 stores models that have learned the features of the voices uttered by correspondence targets. For example, the model database 33 stores, for each correspondence target, a model trained to output, when voice information is input, the probability of each state of the correspondence target, such as a desire or an emotion.
(Regarding the control unit 40)
 The control unit 40 is a controller and is implemented by, for example, a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) executing various programs stored in a storage device inside the information processing device 10, using a RAM as a work area. The control unit 40 may also be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). As shown in FIG. 7, the control unit 40 according to the embodiment has an acquisition unit 41, an estimation unit 42, a notification unit 43, a provision unit 44, a reception unit 45, and a learning unit 46, and implements or executes the functions and operations of the information processing described below.
(Regarding the acquisition unit 41)
 The acquisition unit 41 acquires voice information indicating a voice uttered by a correspondence target attended to by a responder. For example, in the example of FIG. 1, the acquisition unit 41 acquires voice information indicating the infant B1's cry and stores it in the correspondence target information database 31.
(Regarding the estimation unit 42)
 The estimation unit 42 estimates, based on the voice information acquired by the acquisition unit 41, the states of the correspondence target and the probability of each state. For example, in the example of FIG. 1, the estimation unit 42 estimates the probability of each state of the infant B1 based on the voice information.
 The estimation unit 42 may also estimate, for each predetermined desire, the probability that the correspondence target has that desire. For example, in the example of FIG. 1, the estimation unit 42 estimates the probability of each desire of the infant B1, such as "hold me", "feeding", "diaper change", and "sleepy".
 The estimation unit 42 may also estimate, for each emotion, the probability that the correspondence target is feeling that emotion. For example, in the example of FIG. 1, the estimation unit 42 estimates the probability of each emotion of the infant B1, such as "lonely", "angry", "sad", and "tired".
 When the probability cannot be estimated based on the voice information, the estimation unit 42 may estimate the probability based on the history of the correspondence target's states. For example, in the example of FIG. 1, when the voice information of the infant B1 is acquired, the estimation unit 42 refers to the childcare record of the infant B1. When the record shows that feeding was performed as a response within a predetermined time, and it is determined from the average interval between feedings in the record so far that it is not yet time for the next feeding, the estimation unit 42 may determine that the infant B1's desire for "feeding" is low and estimate the probabilities of the correspondence target's states from the cry on the assumption that some other desire is the cause. In this way, an estimation result can be provided even when estimation from the cry alone is impossible, which prevents the responder from being left without an estimation result while the correspondence target is crying, lets the responder know what action to take toward the correspondence target, and enables the responder to respond to the correspondence target positively.
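The feeding-interval fallback described above can be sketched as follows. The down-weighting factor and the uniform prior over the remaining desires are illustrative assumptions, not part of the embodiment.

```python
from datetime import datetime, timedelta

# Fallback when the cry alone is inconclusive: down-weight "feeding" if,
# judging from the recorded feeding history, it is not yet time for the
# next feeding, then normalize the weights into probabilities.
def adjust_for_feeding_history(states: list, last_fed: datetime,
                               avg_interval: timedelta,
                               now: datetime) -> dict:
    due = now - last_fed >= avg_interval
    weights = {s: 1.0 for s in states}
    if "feeding" in weights and not due:
        weights["feeding"] = 0.1  # fed recently: feeding is an unlikely cause
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

now = datetime(2022, 9, 1, 12, 0)
probs = adjust_for_feeding_history(["feeding", "sleepy", "hold me"],
                                   last_fed=now - timedelta(hours=1),
                                   avg_interval=timedelta(hours=3), now=now)
```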
 The estimation unit 42 may also estimate from the cry of the infant B1 that the response target is crying without a desire or emotion as the main cause, as occurs during a mental leap. For example, the estimation unit 42 may make this estimation when the infant B1 is not estimated to be in any state, or by checking the similarity between the current cry and the history of voice information from occasions when, according to the record of past responses, the response target did not stop crying despite the responder's attention. Alternatively, when the infant B1 is not estimated to have any desire such as "hold", "feeding", "diaper change", or "sleepy", the estimation unit 42 may estimate that the response target is crying without a desire or emotion as the main cause, as occurs during a mental leap.
 When the certainty cannot be estimated from the voice information, the estimation unit 42 may also estimate it from the history of the response target's states entered by the responder. For example, the estimation unit 42 estimates the certainty from the state history entered by the responder in a predetermined childcare-recording application (a childcare diary application, a feeding log application, or the like). As a specific example, when the average feeding interval recorded in the childcare diary application indicates that the voice information was acquired at a feeding time, the estimation unit 42 estimates the certainty of the response target's "feeding" desire to be higher than that of the other desires.
 Note that the estimation unit 42 may estimate the certainty of each state of the response target from both the history of voice information and the state history entered by the responder in the predetermined childcare-recording application. For example, when voice information is newly acquired, the estimation unit 42 estimates the certainty of each state from the similarity between that voice information and the history of the response target's voice information acquired within a predetermined period of the time at which a state was entered in the childcare diary application.
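One way to realize this similarity-based estimation is sketched below, assuming cries have already been reduced to fixed-length feature vectors (e.g. pitch and rhythm statistics). The labeling scheme and function names are illustrative, not taken from the embodiment.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def estimate_from_history(new_feat, labeled_history):
    """labeled_history: (feature_vector, state_label) pairs taken from
    cries recorded within a fixed window of a diary-app entry.
    Averages the similarity per label to score each state."""
    scores, counts = {}, {}
    for feat, label in labeled_history:
        s = cosine(new_feat, feat)
        scores[label] = scores.get(label, 0.0) + s
        counts[label] = counts.get(label, 0) + 1
    return {label: scores[label] / counts[label] for label in scores}
```

The state whose historical cries are most similar to the new cry receives the highest score.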
 The estimation unit 42 may also estimate the certainty using a model trained on the features of the voices uttered by the response target. For example, in the example of FIG. 1, the estimation unit 42 estimates the certainty of each state of the infant B1 using model #1, which has been trained to output the certainty of states such as an infant's desires and emotions when voice information is input.
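A toy version of such a model's inference step is shown below. The feature names and weight values are invented for illustration; an actual model #1 would be obtained by the training described for the learning unit 46.

```python
import math

# Hypothetical per-state weight vectors over acoustic features
# (e.g. pitch, cry-burst length, pause ratio) - assumed values only.
WEIGHTS = {
    "hold":    [0.8, 0.1, 0.2],
    "feeding": [0.2, 0.9, 0.1],
    "sleepy":  [0.1, 0.2, 0.7],
}

def softmax(scores):
    """Convert raw per-state scores into certainties summing to 1."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def predict_state_probs(features):
    """Return a certainty per state for one acoustic feature vector."""
    scores = {state: sum(w * f for w, f in zip(ws, features))
              for state, ws in WEIGHTS.items()}
    return softmax(scores)
```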
(Regarding the notification unit 43)
 The notification unit 43 notifies the responder of the state of the response target estimated by the estimation unit 42 as the target's desire or emotion, together with its certainty and a recommended action for the responder to take for that state. For example, in the example of FIG. 1, the notification unit 43 provides the terminal device 100 with a notification indicating the certainty of each state of the infant B1 shown in FIG. 2.
 The notification unit 43 may also provide the responder with a notification indicating the certainty of each desire. For example, in the example of FIG. 1, the notification unit 43 provides a notification indicating an estimation result in which the certainty of the infant B1's desire "feeding" is 60% and the certainty of the desire "sleepy" is 40%.
 A notification indicating the certainty of each emotion may also be provided to the responder. For example, the notification unit 43 provides a notification indicating the certainty of each emotion of the response target, such as "lonely", "angry", "sad", and "tired".
 Note that the notification provided by the notification unit 43 may include both the certainty of the desires of the response target and the certainty of the emotions it is feeling. For example, the notification unit 43 may provide a notification indicating that the certainty of the response target's desire "hold" is 60% and that of the desire "sleepy" is 40%, together with an indication that, as the infant's emotions, the certainty of "lonely" is 60% and that of "sad" is 40%.
 The notification unit 43 may also provide, based on the certainty, a notification that includes information for resolving the response target's state (a recommended action). For example, in the example of FIG. 1, the notification unit 43 provides a notification that includes information indicating a specific response to the infant B1's state (for example, the information displayed in the area AR12 of FIG. 2).
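Assembling such a notification from the per-state certainties might look like the following minimal sketch; the action texts merely stand in for the kind of information shown in the area AR12 and are assumptions for the example.

```python
# Hypothetical mapping from the highest-certainty state to a suggestion.
RECOMMENDED_ACTIONS = {
    "feeding": "Try offering a feeding.",
    "sleepy":  "Dim the lights and rock gently.",
    "hold":    "Pick the baby up and hold them close.",
}

def build_notification(probs):
    """Format the certainties, highest first, plus a recommended action."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_state, _ = ranked[0]
    lines = [f"{state}: {p:.0%}" for state, p in ranked]
    lines.append("Suggestion: "
                 + RECOMMENDED_ACTIONS.get(top_state, "Observe and comfort."))
    return "\n".join(lines)
```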
 When the response target is not estimated to be in any of the states, the notification unit 43 may provide a notification indicating that the response target is in none of them. For example, in the example of FIG. 1, when the infant B1 is estimated to be in a state of twilight crying, the notification unit 43 provides a notification (for example, the screen C21 of FIG. 3) containing a message to that effect and detailed information about twilight crying.
(Regarding the providing unit 44)
 The providing unit 44 provides the responder with a message that matches the responder's handling of the response target's state, based on the voice uttered by the response target. For example, the providing unit 44 provides such a message based on the response target's voice detected by the detection device 200 after the notification unit 43 has provided the notification. As a specific example, the providing unit 44 provides a message that boosts the responder's self-affirmation.
 The providing unit 44 may also provide a message that depends on the time from when the notification was provided until the voice uttered by the response target satisfies a predetermined condition. For example, when the time from the notification until the response target stops crying is at or below a predetermined threshold, the providing unit 44 provides a message that boosts the responder's self-affirmation. When that time exceeds the threshold, the providing unit 44 presumes that the responder struggled with the response, or that the responder was busy, and provides a message of appreciation for the responder.
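The time-based message selection above reduces to a single threshold comparison; a sketch follows, with the 5-minute threshold and message texts as placeholders rather than values from the embodiment.

```python
def followup_message(seconds_to_calm, threshold_s=300):
    """Pick a post-notification message from how long the crying lasted.

    seconds_to_calm: elapsed seconds until the cry stopped, or None if
    it never stopped within the observation window.
    """
    if seconds_to_calm is None:
        return "You're doing your best. Take a breath."
    if seconds_to_calm <= threshold_s:
        return "That worked quickly. Great job reading the cry!"
    return "That was a long one. You handled it with patience."
```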
 The providing unit 44 may also provide the responder with a message that matches the responder's handling of the response target's state, based on the voice uttered by the responder. For example, when the detection device 200 detects a predetermined voice of the responder (that is, soothing speech) after the notification unit 43 has provided the notification, the providing unit 44 provides a message affirming the responder's handling.
 The providing unit 44 may also provide the responder with a message that matches the responder's use of content about the response target's state. For example, in the example of FIG. 1, when the user U1 views content for deepening understanding of why the infant B1 cries (the content displayed when the button BT13 of FIG. 2 is pressed), views content containing detailed information about twilight crying (the content displayed when the button BT21 of FIG. 3 is pressed), or uses quiz content, the providing unit 44 provides a message affirming the user U1's use of each piece of content. The providing unit 44 may also provide a message affirming the responder's behavior when the responder views content on, for example, the flow of steps to take when the response target cries.
 The providing unit 44 may also provide the responder with content that poses questions about the correspondence between the response target's voices and its states. For example, in the example of FIG. 1, the providing unit 44 provides the user U1 with the quiz content shown in FIG. 4.
 The providing unit 44 may also provide a message that matches the responder's handling of the response target's state to a user who has a predetermined relationship with the responder. For example, the providing unit 44 provides the user who is the father with a message praising the handling of the response target by the responder who is the mother.
 The providing unit 44 may also provide the responder with content showing the history of the response target's states, based on the estimation results of the estimation unit 42. For example, the providing unit 44 refers to the response target information database 31 and provides the responder with content (that is, a childcare record) showing the states estimated by the estimation unit 42, the dates and times of the estimations, and the responder's handling of those states (for example, the information received by the reception unit 45 described later).
 When the response target's state is not resolved within a predetermined time after the notification was provided, the providing unit 44 may provide the responder with a predetermined message. For example, in the example of FIG. 1, when the button BT11 is not pressed within a certain time after the screen C11 of FIG. 2 is displayed, or when the detection device 200 does not detect that the infant B1 has stopped crying within a certain time, the providing unit 44 provides the message shown on the screen C22 of FIG. 3.
 The providing unit 44 may also provide the responder with messages posted by other responders. For example, in the example of FIG. 1, the providing unit 44 provides the user U1 with the messages shown on the screens C41 and C42 of FIG. 5.
 The providing unit 44 may also provide a message whose amount of information matches the timing at which it is provided to the responder. For example, in the example of FIG. 1, when providing a message within a predetermined time after the process of estimating the infant B1's state, the providing unit 44 provides, from among the messages posted by other users, a short message whose character count is at or below a predetermined number. When the user U1 is using a predetermined application such as a feeding application, or when the infant is estimated to be asleep, the providing unit 44 provides, from among the messages posted by other users, a message whose character count exceeds the predetermined number.
 The providing unit 44 may also provide a message whose amount of information matches the response target's state. For example, in the example of FIG. 1, when the process of estimating the infant B1's state has been performed a number of times at or above a predetermined threshold, or when a predetermined time or more elapsed between the notification and the infant B1 stopping crying, the providing unit 44 provides, from among the messages posted by other users, a short message whose character count is at or below a predetermined number. When the estimation process has been performed fewer times than the threshold, the providing unit 44 provides, from among the messages posted by other users, a message whose character count exceeds the predetermined number.
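Both length-selection rules (by timing and by state) reduce to choosing a short or a long candidate message once a "busy" judgment has been made; a hedged sketch, with the 50-character limit as an assumed threshold:

```python
def pick_message(candidates, busy):
    """Choose among messages posted by other caregivers.

    busy: True when the responder is likely occupied (e.g. right after an
    estimation, or after many recent estimations); prefer a short message
    then, and a longer one otherwise.
    """
    SHORT_LIMIT = 50  # hypothetical character threshold
    short = [m for m in candidates if len(m) <= SHORT_LIMIT]
    long_ = [m for m in candidates if len(m) > SHORT_LIMIT]
    if busy and short:
        return short[0]
    if not busy and long_:
        return long_[0]
    return candidates[0] if candidates else ""
```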
 The providing unit 44 may also provide a message posted by the responder to other responders. For example, in the example of FIG. 1, the providing unit 44 provides the message entered on the screen C43 of FIG. 5 to users other than the user U1.
(Regarding the reception unit 45)
 After the notification has been provided, the reception unit 45 receives information about the response target's state from the responder. For example, in the example of FIG. 1, the reception unit 45 receives information indicating whether the infant B1 has stopped crying from the terminal device 100 and stores it in the response target information database 31. As a specific example, along with the information indicating whether the infant B1 has stopped crying, the reception unit 45 receives information indicating the handling performed by the user U1, the state of the infant B1 as estimated by the user U1, and the like.
(Regarding the learning unit 46)
 The learning unit 46 causes the model to learn the information about the response target's state after the notification was provided and the voice features indicated by the voice information. For example, in the example of FIG. 1, the learning unit 46 refers to the response target information database 31 and causes model #1 to learn the state of the infant B1 after the notification unit 43 provided the notification (for example, whether the infant stopped crying) and the voice features indicated by the infant B1's voice information.
 The learning unit 46 may also cause the model to learn the information received by the reception unit 45 and the voice features indicated by the voice information. For example, in the example of FIG. 1, the learning unit 46 trains model #1 using, as learning data, the information indicating whether the infant B1 stopped crying and the voice information corresponding to the estimation result. The learning unit 46 also trains model #1 using, as learning data, the handling performed by the user U1 to stop the infant B1 from crying and the voice information representing that cry.
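Turning the responder feedback into supervised examples might look like the following sketch; the action-to-state mapping and record layout are assumptions for the example, not the embodiment's data model.

```python
def build_training_examples(records):
    """records: dicts with 'features' (acoustic features of the cry),
    'stopped_crying' (the responder's feedback), and the 'action' taken.

    A cry that stopped after an action is treated as a positive example
    for the state that action addresses.
    """
    ACTION_TO_STATE = {"fed": "feeding", "rocked": "sleepy", "held": "hold"}
    examples = []
    for r in records:
        if r["stopped_crying"] and r["action"] in ACTION_TO_STATE:
            examples.append((r["features"], ACTION_TO_STATE[r["action"]]))
    return examples
```

The resulting (features, label) pairs are what a classifier such as model #1 would be retrained on.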
[3. Configuration of the detection device]
 Next, the configuration of the detection device 200 in the notification system 2 shown in FIG. 6 will be described with reference to FIG. 10. FIG. 10 is a diagram illustrating a configuration example of the detection device according to the embodiment. As shown in FIG. 10, the detection device 200 has a communication unit 210, a storage unit 220, and a control unit 230.
(Regarding the communication unit 210)
 The communication unit 210 is implemented by, for example, a NIC. The communication unit 210 is connected to the network N by wire or wirelessly and transmits and receives information to and from the terminal device 100 and the like.
(Regarding the storage unit 220)
 The storage unit 220 is implemented by, for example, a semiconductor memory element such as a RAM or flash memory, or a storage device such as a hard disk or optical disk. As shown in FIG. 10, the storage unit 220 has a response target information database 221, a responder information database 222, and a model database 223. Since the response target information database 221, the responder information database 222, and the model database 223 have the same configurations as the response target information database 31, the responder information database 32, and the model database 33, respectively, their description is omitted.
(Regarding the control unit 230)
 The control unit 230 is a controller, implemented, for example, by a CPU, MPU, or the like executing various programs stored in a storage device inside the detection device 200, using a RAM as a work area. Alternatively, the control unit 230 is a controller implemented by an integrated circuit such as an ASIC or FPGA. As shown in FIG. 10, the control unit 230 according to the embodiment has an acquisition unit 231, an estimation unit 232, a notification unit 233, a providing unit 234, a reception unit 235, and a learning unit 236, and realizes or executes the information-processing functions and operations of the notification system 2 shown in FIG. 6. Since the acquisition unit 231, the estimation unit 232, the notification unit 233, the providing unit 234, the reception unit 235, and the learning unit 236 have the same configurations as the acquisition unit 41, the estimation unit 42, the notification unit 43, the providing unit 44, the reception unit 45, and the learning unit 46, respectively, their description is omitted.
[4. Notification processing procedure]
 Next, the procedure of the notification process executed by the information processing device 10 according to the embodiment will be described with reference to FIG. 11. FIG. 11 is a flowchart illustrating an example of the flow of the notification process executed by the information processing device according to the embodiment.
 As shown in FIG. 11, the information processing device 10 determines whether voice information indicating a voice uttered by the response target has been acquired (step S101). When voice information has not been acquired (step S101; No), the information processing device 10 waits until it is acquired.
 When it determines that voice information has been acquired (step S101; Yes), the information processing device 10 estimates, for each predetermined state, the certainty that the response target is in that state, based on the voice information (step S102). The information processing device 10 then provides the responder with a notification indicating the certainties (step S103). Next, it provides a message that matches the responder's handling (step S104), and receives information about the response target's state from the responder (step S105). Finally, the information processing device 10 causes the model to learn the information received from the responder and the voice features indicated by the voice information (step S106), and ends the process.
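The flow of steps S101 to S106 can be sketched as a loop over injected callables. This is a structural illustration only; in the embodiment the corresponding units 41 to 46 are distributed across the devices.

```python
def notification_loop(get_audio, estimate, notify, send_followup,
                      receive_feedback, train, max_iters=1):
    """One pass over the flow of FIG. 11 (steps S101-S106)."""
    for _ in range(max_iters):
        audio = get_audio()                # S101: acquire cry audio
        if audio is None:                  # S101; No: keep waiting
            continue
        certainties = estimate(audio)      # S102: certainty per state
        notify(certainties)                # S103: notify the responder
        send_followup()                    # S104: message matched to the response
        feedback = receive_feedback()      # S105: responder's report
        train(audio, feedback)             # S106: update the model
```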
[5. Modifications]
 The information processing device 10 described above may be embodied in various forms other than the above embodiment.
[5-1. Degree of development of the response target]
 In the embodiment described above, the providing unit 44 provides a message that matches the responder's handling of the response target, but the function of the providing unit 44 is not limited to this example. For example, the providing unit 44 may provide a message indicating the degree to which emotions are included in the states estimated from the voice information (in other words, the degree of the response target's development). As a specific example, when the proportion of estimations in which an emotion such as "lonely", "angry", "sad", or "tired" is estimated, rather than a desire such as "hold", "feeding", "diaper change", or "sleepy", reaches or exceeds a predetermined threshold, the providing unit 44 provides the responder with a message to the effect that the response target is developing well. In this way, even when no one with childcare experience is around the responder and the responder cannot otherwise confirm the response target's degree of development, the information processing device 10 can convey that the response target is developing well, which can boost the responder's self-affirmation.
[5-2. Regarding the detection device 200]
 In the embodiment described above, the detection device 200 detects the response target's voice information, but the function of the detection device 200 is not limited to this example. For example, the detection device 200 may output audio representing the notifications, messages, comments, and so on provided by the information processing device 10. When the detection device 200 has a display unit, it may also display those notifications, messages, comments, and so on on the display unit.
[5-3. Regarding messages]
 In the embodiment described above, the providing unit 44 provides the responder with messages according to the estimation results of the estimation unit 42 and the use of provided content, but the function of the providing unit 44 is not limited to this example. For example, when the frequency with which the responder uses the application implementing the notification process according to the above embodiment falls to or below a predetermined threshold, in other words, when the responder has become able to estimate the response target's state from its cry without using the application, the providing unit 44 may provide a message praising the response target. Also, when the detection device 200 detects that the response target stopped crying before the notification unit 43 provided the notification, the providing unit 44 may provide a message praising the response target.
[5-4. Regarding the reception unit 45]
 In the embodiment described above, the reception unit 45 receives information about the response target's state from the responder on the screen displaying the notification provided by the notification unit 43, but the function of the reception unit 45 is not limited to this example. For example, the reception unit 45 may receive information about the response target's state via content provided to the responder at a timing such as when a predetermined time has elapsed since the response target was detected to have stopped crying, or when the user U1 is using a predetermined application such as a feeding application (in other words, at a timing when the responder is presumed to be free). The reception unit 45 may also receive, via content provided to the responder at predetermined intervals (for example, content provided every night or every morning), input on the degree of agreement between the notifications provided by the notification unit 43 during that period and the response target's actual states.
[5-5. Others]
 Of the processes described above, all or part of those described as being performed automatically may be performed manually, and all or part of those described as being performed manually may be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above description and drawings may be changed arbitrarily unless otherwise specified. For example, the various information shown in the drawings is not limited to the illustrated information.
 Each component of each illustrated device is a functional concept and need not be physically configured as illustrated. That is, the specific form of distribution and integration of the devices is not limited to the illustrated one; all or part of each component may be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like. The processes described above may also be combined as appropriate to the extent that no contradiction arises.
[6. Hardware configuration]
 The information processing device 10, the terminal device 100, and the detection device 200 according to the embodiment described above are implemented by, for example, a computer 1000 configured as shown in FIG. 12. FIG. 12 is a diagram illustrating an example of the hardware configuration. The computer 1000 is connected to an output device 1010 and an input device 1020; an arithmetic device 1030, a cache 1040, a memory 1050, an output IF (interface) 1060, an input IF 1070, and a network IF 1080 are connected by a bus 1090.
The arithmetic device 1030 operates based on programs stored in the cache 1040 and the memory 1050, programs read from the input device 1020, and the like, and executes various processes. The cache 1040 is a cache, such as a RAM, that temporarily stores data used by the arithmetic device 1030 for various operations. The memory 1050 is a storage device in which data used by the arithmetic device 1030 for various operations and various databases are registered, and is implemented by a ROM (Read Only Memory), an HDD (Hard Disk Drive), flash memory, or the like.
The output IF 1060 is an interface for transmitting information to be output to the output device 1010, which outputs various kinds of information via, for example, a monitor or a printer, and may be implemented by a connector conforming to a standard such as USB (Universal Serial Bus), DVI (Digital Visual Interface), or HDMI (registered trademark) (High-Definition Multimedia Interface). The input IF 1070, on the other hand, is an interface for receiving information from various input devices 1020 such as a mouse, a keyboard, or a scanner, and is implemented by, for example, USB.
For example, the input device 1020 may be implemented by a device that reads information from an optical recording medium such as a CD (Compact Disc), a DVD (Digital Versatile Disc), or a PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, a semiconductor memory, or the like. The input device 1020 may also be implemented by an external storage medium such as a USB memory.
The network IF 1080 has a function of receiving data from other devices via the network N and sending it to the arithmetic device 1030, and of transmitting data generated by the arithmetic device 1030 to other devices via the network N.
Here, the arithmetic device 1030 controls the output device 1010 and the input device 1020 via the output IF 1060 and the input IF 1070. For example, the arithmetic device 1030 loads a program from the input device 1020 or the memory 1050 onto the cache 1040 and executes the loaded program. For example, when the computer 1000 functions as the information processing device 10, the arithmetic device 1030 of the computer 1000 implements the functions of the control unit 230 by executing the program loaded on the cache 1040.
The embodiments of the present application have been described in detail above with reference to the drawings. These are, however, examples; the embodiments of the present application can be carried out in other forms with various modifications and improvements based on the knowledge of those skilled in the art, including the aspects described in the disclosure of the invention. The term "unit" (section, module, unit) used above can also be read as "means" or "circuit".
 10 Information processing device
 20 Communication unit
 30 Storage unit
 31 Response-target information database
 32 Responder information database
 33 Model database
 40 Control unit
 41 Acquisition unit
 42 Estimation unit
 43 Notification unit
 44 Provision unit
 45 Reception unit
 46 Learning unit
100 Terminal device
200 Detection device
210 Communication unit
220 Storage unit
221 Response-target information database
222 Responder information database
223 Model database
230 Control unit
231 Acquisition unit
232 Estimation unit
233 Notification unit
234 Provision unit
235 Reception unit
236 Learning unit

Claims (20)

  1.  A notification program that causes a computer to execute:
      an acquisition procedure of acquiring voice information indicating a voice uttered by a response target attended to by a responder;
      an estimation procedure of estimating, based on the voice information acquired by the acquisition procedure, a state of the response target and a confidence that the response target is in that state; and
      a notification procedure of notifying the state of the response target estimated by the estimation procedure as a desire or emotion of the response target, together with the confidence and a recommended action for the responder with respect to that state.
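As a concrete illustration, the acquire → estimate → notify pipeline of claim 1 might look like the following sketch. Everything here — the state labels, the threshold-based stand-in for the trained model, and the recommended actions — is an invented assumption, not the patent's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical actions per estimated state; not from the patent.
RECOMMENDED_ACTIONS = {
    "hungry": "Offer a feeding.",
    "sleepy": "Dim the lights and rock gently.",
    "uncomfortable": "Check the diaper and clothing.",
}

@dataclass
class Estimate:
    state: str          # estimated desire/emotion of the response target
    confidence: float   # confidence that the target is in this state

def estimate_state(audio_features: dict) -> Estimate:
    """Stand-in for the trained model of the estimation procedure.
    A real system would run a classifier over acoustic features;
    here a single pitch feature fakes the decision for illustration."""
    pitch = audio_features.get("mean_pitch_hz", 0.0)
    if pitch > 400:
        return Estimate("uncomfortable", 0.82)
    if pitch > 300:
        return Estimate("hungry", 0.67)
    return Estimate("sleepy", 0.55)

def notify(est: Estimate) -> str:
    """Notification procedure: state + confidence + recommended action."""
    action = RECOMMENDED_ACTIONS[est.state]
    return (f"The child seems {est.state} "
            f"(confidence {est.confidence:.0%}). {action}")

message = notify(estimate_state({"mean_pitch_hz": 420}))
```

The key point the claim makes is that the notification bundles all three pieces — state, confidence, and recommended action — into one message to the responder.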
  2.  The notification program according to claim 1, wherein the notification procedure provides, when the response target is not estimated to be in any of the states, a notification indicating that the response target is in none of the states.
  3.  The notification program according to claim 1 or 2, wherein the estimation procedure estimates, when the confidence cannot be estimated based on the voice information, the confidence based on a history of states of the response target.
  4.  The notification program according to any one of claims 1 to 3, wherein the estimation procedure estimates, when the confidence cannot be estimated based on the voice information, the confidence based on a history of states of the response target input by the responder.
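Claims 3 and 4 describe falling back to the state history when no confidence can be derived from the audio itself. A minimal sketch of that fallback — the frequency-as-confidence heuristic is our assumption, not the patent's method:

```python
from collections import Counter

def estimate_with_fallback(model_output, state_history):
    """If the model produced no (state, confidence) pair from the audio,
    fall back to the recorded state history (claims 3-4 style)."""
    if model_output is not None:
        return model_output
    if not state_history:
        return None  # nothing to fall back on: report "no state" (claim 2)
    counts = Counter(state_history)
    state, n = counts.most_common(1)[0]
    # Use the historical relative frequency as a surrogate confidence.
    return state, n / len(state_history)

# e.g. the audio was too noisy, but the responder logged past states:
history = ["hungry", "hungry", "sleepy", "hungry"]
state, conf = estimate_with_fallback(None, history)
```

In claim 4 the history would specifically be the states the responder entered by hand, but the lookup logic is the same.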
  5.  The notification program according to any one of claims 1 to 4, further causing the computer to execute a first provision procedure of providing the responder with a message corresponding to the responder's response to the state of the response target, based on a voice uttered by the response target.
  6.  The notification program according to claim 5, wherein the first provision procedure provides a message corresponding to the time from when the notification is provided until the voice uttered by the response target satisfies a predetermined condition.
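Claim 6 keys the follow-up message to how long it takes, after the notification, for the target's voice to satisfy a condition such as "has calmed down". A toy sketch of that mapping; the thresholds and wording are invented for illustration:

```python
def feedback_message(seconds_until_calm: float) -> str:
    """Pick a message from the elapsed time between the notification
    and the moment the target's voice met the 'calmed down' condition."""
    if seconds_until_calm <= 60:
        return "That worked quickly - nice response!"
    if seconds_until_calm <= 300:
        return "The crying settled after a few minutes. Good job."
    return "That was a long one. The suggested action may not have matched."
```

A detection device could timestamp the notification and the first audio frame below a loudness threshold, then call this with the difference.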
  7.  The notification program according to any one of claims 1 to 6, further causing the computer to execute a second provision procedure of providing the responder with a message corresponding to the responder's response to the state of the response target, based on a voice uttered by the responder.
  8.  The notification program according to any one of claims 1 to 7, further causing the computer to execute a third provision procedure of providing the responder with a message corresponding to the use of content relating to the state of the response target.
  9.  The notification program according to any one of claims 1 to 8, further causing the computer to execute a fourth provision procedure of providing the responder with content that poses questions about the correspondence between voices of the response target and states of the response target.
  10.  The notification program according to any one of claims 1 to 9, further causing the computer to execute a fifth provision procedure of providing a message corresponding to the responder's response to the state of the response target to a user having a predetermined relationship with the responder.
  11.  The notification program according to any one of claims 1 to 10, further causing the computer to execute a sixth provision procedure of providing the responder with content indicating a history of states of the response target, based on the estimation results of the estimation procedure.
  12.  The notification program according to any one of claims 1 to 11, further causing the computer to execute a seventh provision procedure of providing the responder with a predetermined message when the state of the response target is not resolved within a predetermined time after the notification is provided.
  13.  The notification program according to any one of claims 1 to 12, further causing the computer to execute an eighth provision procedure of providing the responder with a message posted by another responder different from the responder.
  14.  The notification program according to claim 13, wherein the eighth provision procedure provides the message with an amount of information corresponding to the timing at which it is provided to the responder.
  15.  The notification program according to claim 13 or 14, wherein the eighth provision procedure provides the message with an amount of information corresponding to the state of the response target.
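Claims 14 and 15 vary the amount of information in the delivered message by timing and by the target's state. One way such trimming could look — the night-time and urgency rules, and the state labels, are purely illustrative assumptions:

```python
def trimmed_message(full_message: str, hour: int, state: str) -> str:
    """Adjust the amount of information in a posted message to the
    delivery timing (claim 14) and the target's state (claim 15)."""
    urgent = state in {"uncomfortable", "crying_hard"}
    late_night = hour >= 22 or hour < 6
    if urgent or late_night:
        # Keep only the first sentence when the responder is likely
        # busy with the child or it is late at night.
        return full_message.split(". ")[0] + "."
    return full_message

msg = ("Try a gentle rocking motion. Many parents also dim the lights. "
       "It can take a few minutes.")
short = trimmed_message(msg, hour=23, state="sleepy")
```

The same posted message is stored once; only the rendered amount of information changes per delivery.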
  16.  The notification program according to any one of claims 1 to 15, further causing the computer to execute a ninth provision procedure of providing a message posted by the responder to another responder different from the responder.
  17.  The notification program according to any one of claims 1 to 16, wherein the estimation procedure estimates the confidence using a model trained on features of voices uttered by the response target, the notification program further causing the computer to execute a learning procedure of causing the model to learn information about the state of the response target after the notification is provided, together with the voice features indicated by the voice information.
  18.  The notification program according to claim 17, further causing the computer to execute a reception procedure of receiving, from the responder, information about the state of the response target after the notification is provided, wherein the learning procedure causes the model to learn the information received by the reception procedure together with the voice features indicated by the voice information.
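Claims 17 and 18 close the loop: after a notification, the responder's report of the actual state becomes a new labeled example for the model. The nearest-centroid classifier below is a toy stand-in for whatever model the patent contemplates; the feature layout and confidence formula are our assumptions:

```python
from collections import defaultdict
import math

class FeedbackModel:
    """Toy updatable model in the spirit of claims 17-18: the
    responder-reported state plus the audio features of the episode
    are accepted as a new training example."""

    def __init__(self):
        self._examples = defaultdict(list)  # state -> feature vectors

    def learn(self, features, reported_state):
        # The state reported by the responder is the training label.
        self._examples[reported_state].append(features)

    def estimate(self, features):
        """Return (state, confidence) by distance to per-state centroids.
        The inverse-distance confidence is crude, purely for illustration."""
        best_state, best_dist = None, math.inf
        for state, vecs in self._examples.items():
            centroid = [sum(coord) / len(vecs) for coord in zip(*vecs)]
            dist = math.dist(features, centroid)
            if dist < best_dist:
                best_state, best_dist = state, dist
        return best_state, 1.0 / (1.0 + best_dist)

model = FeedbackModel()
model.learn([400.0, 0.9], "uncomfortable")  # features: e.g. pitch, energy
model.learn([250.0, 0.4], "sleepy")
state, conf = model.estimate([390.0, 0.8])
```

In deployment, `learn` would be called from the reception procedure of claim 18 each time the responder confirms or corrects a notification.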
  19.  A notification device comprising:
      an acquisition unit that acquires voice information indicating a voice uttered by a response target attended to by a responder;
      an estimation unit that estimates, based on the voice information acquired by the acquisition unit, a state of the response target and a confidence that the response target is in that state; and
      a notification unit that notifies the state of the response target estimated by the estimation unit as a desire or emotion of the response target, together with the confidence and a recommended action for the responder with respect to that state.
  20.  A notification method executed by a computer, comprising:
      an acquisition step of acquiring voice information indicating a voice uttered by a response target attended to by a responder;
      an estimation step of estimating, based on the voice information acquired by the acquisition step, a state of the response target and a confidence that the response target is in that state; and
      a notification step of notifying the state of the response target estimated by the estimation step as a desire or emotion of the response target, together with the confidence and a recommended action for the responder with respect to that state.
PCT/JP2022/035243 2021-10-08 2022-09-21 Notification program, notification device, and notification method WO2023058461A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280067832.8A CN118077005A (en) 2021-10-08 2022-09-21 Notification program, notification device, and notification method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-166339 2021-10-08
JP2021166339A JP2023056870A (en) 2021-10-08 2021-10-08 Report program, report device, and report method

Publications (1)

Publication Number Publication Date
WO2023058461A1 true WO2023058461A1 (en) 2023-04-13

Family

ID=85804186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/035243 WO2023058461A1 (en) 2021-10-08 2022-09-21 Notification program, notification device, and notification method

Country Status (3)

Country Link
JP (1) JP2023056870A (en)
CN (1) CN118077005A (en)
WO (1) WO2023058461A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019095931A (en) * 2017-11-20 2019-06-20 ユニ・チャーム株式会社 Program and method for supporting childcare
WO2019130488A1 (en) * 2017-12-27 2019-07-04 ユニ・チャーム株式会社 Program used in assisting caregiver, caregiver assistance method, and caregiver assistance system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019095931A (en) * 2017-11-20 2019-06-20 ユニ・チャーム株式会社 Program and method for supporting childcare
WO2019130488A1 (en) * 2017-12-27 2019-07-04 ユニ・チャーム株式会社 Program used in assisting caregiver, caregiver assistance method, and caregiver assistance system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Received the CES2021 Innovation Award for "identifying the reason why babies cry with an accuracy of 80% or more." What are the attention-grabbing baby tech companies from Japan?", DIGITAL SHIFT TIMES, 12 May 2021 (2021-05-12), XP093056375, Retrieved from the Internet <URL:https://digital-shift.jp/startup_technology/210512> [retrieved on 20230621] *
ANONYMOUS: "The Future Changed by Baby Tech -From Japan! Challenge to the world. "ベビーテックで変わる未来‐日本発!世界への挑戦‐" ", JETRO, 24 December 2020 (2020-12-24), XP093056355, Retrieved from the Internet <URL:https://www.jetro.go.jp/tv/internet/2020/12/85018b13b11f64f3.html> [retrieved on 20230621] *
DUNSTAN BABIES: "Japanese App Demo 2021", YOUTUBE, 4 May 2021 (2021-05-04), XP093056359, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=3krV0-OcRnA> *
FIRST ASCENT: "Cry Analyzer [iPad]", APPLION, 4 June 2021 (2021-06-04), XP093056364, Retrieved from the Internet <URL:https://applion.jp/ipad/app/1303091708/> *

Also Published As

Publication number Publication date
CN118077005A (en) 2024-05-24
JP2023056870A (en) 2023-04-20

Similar Documents

Publication Publication Date Title
Vaira et al. MamaBot: a System based on ML and NLP for supporting Women and Families during Pregnancy
Roulstone et al. Investigating the role of language in children's early educational outcomes
Radesky et al. Infant self-regulation and early childhood media exposure
Vallotton Do infants influence their quality of care? Infants’ communicative gestures predict caregivers’ responsiveness
Planalp et al. Trajectories of regulatory behaviors in early infancy: Determinants of infant self‐distraction and self‐comforting
Boman et al. Users’ and professionals’ contributions in the process of designing an easy-to-use videophone for people with dementia
Bergquist et al. App-based self-administrable clinical tests of physical function: development and usability study
Baker et al. A closer examination of aggressive subtypes in early childhood: Contributions of executive function and single-parent status
Portz et al. “Call a teenager… that’s what i do!”-Grandchildren help older adults use new technologies: Qualitative study
Spinelli et al. The regulation of infant negative emotions: The role of maternal sensitivity and infant‐directed speech prosody
Kersner et al. How to Manage Communication Problems in Young Children
Lieberman et al. Parents’ contingent responses in communication with 10-month-old children in a clinical group with typical or late babbling
Shin et al. Identifying opportunities and challenges: how children use technologies for managing diabetes
Ghio et al. Parents’ concerns and understandings around excessive infant crying: Qualitative study of discussions in online forums
WO2023058461A1 (en) Notification program, notification device, and notification method
JP7347414B2 (en) Information processing system, information processing method, and recording medium
Jahng The moderating effect of children’s language abilities on the relation between their shyness and play behavior during peer play
JP2022015330A (en) Information providing device, information providing method, and information providing program
Carey Teaching parents about infant temperament
Clemente et al. Alexa, how do i feel today? Smart speakers for healthcare and wellbeing: an analysis about uses and challenges
Johnson et al. Usability and acceptability of a text message-based developmental screening tool for young children: pilot study
KR102138973B1 (en) Collaborative child development tracking method and system
Moeller Language development: New insights and persistent puzzles
JP7398213B2 (en) Estimation device, estimation method, estimation program and system
Ben-Sasson et al. The feasibility of a crowd-based early developmental milestone tracking application

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22878332

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE