WO2022196042A1 - Support device, system, and method, and computer-readable medium - Google Patents


Info

Publication number
WO2022196042A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
support
target person
captured image
subject
Application number
PCT/JP2022/000279
Other languages
French (fr)
Japanese (ja)
Inventor
久美子 高塚
真由美 伊藤
哲也 冬野
Original Assignee
NEC Corporation (日本電気株式会社)
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2023506775A (JPWO2022196042A5)
Publication of WO2022196042A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services

Definitions

  • The present invention relates to a support device, system, method, and program, and more particularly to a support device, system, method, and program with which a supporter supports a support target person.
  • Patent Literature 1 discloses a technology related to a monitoring system.
  • In that system, a camera having a face recognition function photographs the person being watched over and transmits the captured video to an information processing device via the Internet.
  • There has been a demand for a supporter who lives apart from the support target person to be able to grasp the target person's potential interests and to provide appropriate support according to those interests.
  • The present disclosure has been made to solve such problems, and its purpose is to provide a support device, system, method, and program for providing appropriate support according to the potential interests of the support target person.
  • A support device according to one aspect includes: a registration unit that registers a support target person and a supporter in association with each other; an acquisition unit that acquires a first captured image taken by the target person; a specifying unit that specifies the target person's interest based on the first captured image; a determination unit that determines one or more support information candidates based on the specified interest; and a presentation unit that presents the determined support information candidates to the supporter associated with the target person.
  • A support system according to one aspect includes a first terminal of a support target person, a second terminal of a supporter, and a support device.
  • The support device includes: a registration unit that registers the target person and the supporter in association with each other; an acquisition unit that acquires, from the first terminal, a first captured image taken by the target person; a specifying unit that specifies the target person's interest based on the first captured image; a determination unit that determines one or more support information candidates based on the specified interest; and a presentation unit that presents the determined support information candidates to the second terminal of the supporter associated with the target person.
  • In a support method according to one aspect, a computer registers a support target person and a supporter in association with each other; acquires a first captured image taken by the target person; specifies the target person's interest based on the first captured image; determines one or more support information candidates based on the specified interest; and presents the determined support information candidates to the supporter associated with the target person.
  • A support program according to one aspect causes a computer to execute: registration processing that registers a support target person and a supporter in association with each other; acquisition processing that acquires a first captured image taken by the target person; specifying processing that specifies the target person's interest based on the first captured image; determination processing that determines one or more support information candidates based on the specified interest; and presentation processing that presents the determined support information candidates to the supporter associated with the target person.
  • FIG. 1 is a block diagram showing the configuration of a support device according to the first embodiment;
  • FIG. 2 is a flow chart showing the flow of a support method according to the first embodiment;
  • FIG. 3 is a block diagram showing the overall configuration of a support system according to the second embodiment;
  • FIG. 4 is a block diagram showing the configuration of a tablet terminal according to the second embodiment;
  • FIG. 5 is a block diagram showing the configuration of an authentication device according to the second embodiment;
  • FIG. 6 is a block diagram showing the configuration of a support device according to the second embodiment;
  • FIG. 7 is a block diagram showing the configuration of a support device according to the third embodiment;
  • FIG. 8 is a sequence diagram showing the flow of inquiry processing for a captured image according to the third embodiment;
  • FIG. 9 is a diagram showing an example of a selection screen for favorites among captured images according to the third embodiment;
  • FIG. 10 is a sequence diagram showing another example of the flow of inquiry processing for a captured image according to the third embodiment;
  • FIG. 11 is a diagram showing an example of a tag candidate selection screen in quiz format according to the third embodiment;
  • FIG. 12 is a diagram showing an example of an input screen for attribute information of a captured image according to the third embodiment;
  • FIG. 13 is a sequence diagram showing the flow of storytelling processing according to the fourth embodiment.
  • FIG. 1 is a block diagram showing the configuration of a support device 1 according to the first embodiment.
  • The support device 1 is an information processing device for enabling a supporter to assist a support target person.
  • the support device 1 presents the supporter with support information candidates that meet at least the needs (direct or indirect requests) of the subject.
  • the support device 1 can also be said to be a watching support device that assists a watcher (related person of the target person) to remotely watch over the person being watched (the target person).
  • the target of support is a target of watching over, such as a child or the elderly. For example, if the target person is a child of elementary school age or younger, the supporter is the target person's grandparents or the like.
  • the supporter may be an elderly person living alone, or an elderly person who wishes to provide assistance to any subject even if there is no blood relationship.
  • Alternatively, the target of support may be an elderly person or a person requiring nursing care.
  • In that case, the supporter may be a relative of the target person (such as the target person's children and their spouses), a care worker, an adult guardian, or the like.
  • The support device 1 is connected to the target person's terminal and the supporter's terminal via a communication network (not shown; hereinafter simply referred to as a network) or via predetermined wireless communication. The network may be wired or wireless, and the type of communication protocol does not matter.
  • the terminal may be an information processing terminal equipped with a camera, a microphone, a speaker, a touch panel, etc., such as a tablet terminal.
  • The support device 1 includes a registration unit 11, an acquisition unit 12, a specifying unit 13, a determination unit 14, and a presentation unit 15.
  • The registration unit 11 registers a support target person and a supporter in association with each other.
  • Acquisition unit 12 acquires a first captured image captured by a subject. It is assumed that the first photographed image includes an object, scenery, or the like in which the subject is interested.
  • the specifying unit 13 specifies the subject's interest based on the first captured image.
  • the determination unit 14 determines one or more support information candidates based on the specified interest.
  • the presentation unit 15 presents the determined support information candidate to the supporter associated with the target person.
  • the identifying unit 13 may analyze the first captured image to extract attribute information (keywords), and identify interests using a model that inputs keywords and outputs interests.
  • the specifying unit 13 and the determining unit 14 may determine support information candidates using a model that inputs the first captured image, attribute information, and the like and outputs support information candidates.
  • a known technique such as an AI (Artificial Intelligence) model can be applied to the model.
  • The model can be trained by machine learning, using stored captured images or attribute information as training data and interests or support information candidates as correct answers.
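As a concrete illustration of this identification step, a simple keyword-to-interest lookup can stand in for the learned model. This is a minimal sketch: the table contents and the names `INTEREST_MODEL` and `identify_interests` are invented for illustration and are not part of the disclosure.

```python
# Hypothetical stand-in for the learned interest model: keywords
# extracted from the first captured image are mapped to interest labels.
INTEREST_MODEL = {
    "train": "railways",
    "flower": "gardening",
    "insect": "nature observation",
}

def identify_interests(keywords):
    """Return the interest labels matched by the extracted keywords."""
    return [INTEREST_MODEL[k] for k in keywords if k in INTEREST_MODEL]

print(identify_interests(["train", "cloud"]))  # prints ['railways']
```

In a real system the lookup would be replaced by the trained model's inference call, but the input/output contract (keywords in, interests out) stays the same.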
  • FIG. 2 is a flow chart showing the flow of the support method according to the first embodiment.
  • the registration unit 11 associates and registers a support target person and a supporter (S11).
  • the registration unit 11 acquires the identification information of the target person and the supporter from the terminal of the target person, the target person's guardian, or the supporter, and stores the identification information in the storage device in association with each other.
  • The storage device may be internal to the support device 1, or may be external to and connected to the support device 1.
  • the acquisition unit 12 acquires the first captured image captured by the subject (S12). For example, it is assumed that a subject uses a terminal to photograph an object of interest to the subject, and the terminal transmits the photographed image of the object to the support device 1 .
  • the identifying unit 13 identifies the subject's interest based on the first captured image (S13).
  • the determination unit 14 determines one or more support information candidates based on the specified interest (S14).
  • the presentation unit 15 presents the determined support information candidate to the supporter associated with the target person (S15). For example, the presentation unit 15 transmits the determined support information candidate to the terminal possessed by the supporter, and the terminal displays the received support information candidate. As a result, the supporter can visually recognize the recommended support information candidate through the terminal.
  • In this way, the supporter can select support information candidates that match the target person's interest and provide the support information to the target person. As a result, the target person can be supported appropriately and effectively.
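The S11–S15 flow described above can be sketched as follows. The class and method names are illustrative assumptions, and the `identify` and `decide` callables stand in for the specifying and determination units; this is not the actual implementation of the support device 1.

```python
class SupportDevice:
    """Minimal sketch of the S11-S15 support flow (illustrative only)."""

    def __init__(self, identify, decide):
        self.registry = {}        # subject_id -> supporter_id
        self.identify = identify  # stand-in for the specifying unit
        self.decide = decide      # stand-in for the determination unit

    def register(self, subject_id, supporter_id):
        # S11: register the target person and the supporter in association
        self.registry[subject_id] = supporter_id

    def handle_image(self, subject_id, captured_image):
        # S12: acquire the first captured image (passed in here)
        interest = self.identify(captured_image)    # S13: specify interest
        candidates = self.decide(interest)          # S14: determine candidates
        supporter_id = self.registry[subject_id]    # association from S11
        return supporter_id, candidates             # S15: present to supporter

dev = SupportDevice(identify=lambda img: "railways",
                    decide=lambda i: [f"{i} picture book"])
dev.register("U1", "U3")
print(dev.handle_image("U1", b"..."))  # prints ('U3', ['railways picture book'])
```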
  • The support device 1 includes a processor, a memory, and a storage device (not shown). The storage device stores a computer program in which the processing of the support method according to the present embodiment is implemented. The processor loads the computer program from the storage device into the memory and executes it, thereby implementing the functions of the registration unit 11, the acquisition unit 12, the specifying unit 13, the determination unit 14, and the presentation unit 15.
  • the registration unit 11, the acquisition unit 12, the identification unit 13, the determination unit 14, and the presentation unit 15 may each be realized by dedicated hardware.
  • part or all of each component of each device may be implemented by general-purpose or dedicated circuitry, processors, etc., or combinations thereof. These may be composed of a single chip, or may be composed of multiple chips connected via a bus.
  • a part or all of each component of each device may be implemented by a combination of the above-described circuits and the like and programs.
  • As the processor, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), a quantum processor (quantum computer control chip), or the like can be used.
  • When some or all of the components of the support device 1 are realized by a plurality of information processing devices, circuits, and the like, these information processing devices, circuits, and the like may be arranged in a centralized or distributed manner.
  • the information processing device, circuits, and the like may be implemented as a form in which each is connected via a communication network, such as a client-server system, a cloud computing system, or the like.
  • the functions of the support device 1 may be provided in a SaaS (Software as a Service) format.
  • FIG. 3 is a block diagram showing the overall configuration of a support system 1000 according to the second embodiment.
  • the support system 1000 is an information system for supporting a target person (grandchild) U1 by a supporter (grandparent) U3 or a target person's parent U2.
  • In the following, one group consisting of a target person (grandchild) U1, the target person's parent U2, and a supporter (grandparent) U3 is described as an example, but the present invention is not limited to this. That is, the support system 1000 may target two or more groups, each consisting of one or more target persons and one or more supporters.
  • In the second embodiment, face authentication, which is an example of biometric authentication, is used, and facial feature information, which is an example of biometric information, is used as personal identification information.
  • However, the biometric authentication and biometric information may be replaced with other techniques that use captured images.
  • For example, the biometric information may be data (feature amounts) calculated from physical features unique to an individual, such as a fingerprint, voiceprint, veins, retina, or iris pattern.
  • the support system 1000 includes a tablet terminal 101 (first terminal), a smartphone 102, a tablet terminal 103 (second terminal), an authentication device 200, and a support device 300.
  • The tablet terminal 101, the smartphone 102, the tablet terminal 103, the authentication device 200, and the support device 300 are connected via a network N.
  • the network N is a wired or wireless communication line such as the Internet.
  • the tablet terminal 101 is operated by the target person (grandchild) U1 and the target person's parent U2, and the smartphone 102 is possessed and operated by the target person's parent U2.
  • the tablet terminal 103 is operated by a supporter (grandparent) U3. It is assumed that the tablet terminals 101 and 103 are installed at least in different places (dwellings, etc.).
  • the tablet terminal 101 is a terminal used by the target person (grandchild) U1 and the target person's parent U2, and is installed in the residence of the target person (grandchild) U1 and the target person's parent U2.
  • the tablet terminal 101 may display a character imitating an animal or a robot by means of installed software, and the character may operate as a user interface with which the target person (grandchild) U1 interacts.
  • the tablet terminal 101 analyzes the content control information from the support device 300, displays the character, and performs voice output or text display as the character's words.
  • the tablet terminal 101 also collects the speech of the target person (grandchild) U1 and transmits the collected voice information or the voice recognition result to the support device 300 .
  • the target person (grandchild) U1 can interact (converse) with the character displayed on the tablet terminal 101 .
  • the tablet terminal 101 may transmit the health information of the subject U1 acquired from a health information measuring device (not shown) to the support device 300 via the network N.
  • FIG. 4 is a block diagram showing the configuration of the tablet terminal 101 according to the second embodiment.
  • The tablet terminal 101 includes a camera 110, a microphone 120, a speaker 130, a touch panel 140, a storage unit 150, a communication unit 160, a memory 170, and a control unit 180.
  • the camera 110 is an imaging device that performs imaging under the control of the control unit 180 .
  • the microphone 120 is a sound pickup device that picks up the voice uttered by the target person (grandchild) U1 or the like.
  • the speaker 130 is a device that outputs sound under the control of the controller 180 .
  • Touch panel 140 includes a display device (display unit) such as a screen and an input device.
  • the storage unit 150 is a storage device that stores a program 151 for realizing each function of the tablet terminal 101 .
  • the program 151 implements processing including software that operates as a user interface.
  • the communication unit 160 is a communication interface with the network N.
  • the memory 170 is a volatile storage device such as a RAM (Random Access Memory), and is a storage area for temporarily holding information when the control unit 180 operates.
  • the control unit 180 is a processor that controls hardware of the tablet terminal 101 .
  • the control unit 180 loads the program 151 from the storage unit 150 into the memory 170 and executes it. Thereby, the control unit 180 implements the functions of the registration unit 181 and the content processing unit 182 .
  • the registration unit 181 transmits a pre-registration request including the target person information and the supporter information to the support device 300 via the network N.
  • the target person information includes at least a face image (first biometric information) or identification information of the target person (grandchild) U1, and may further include personal information of the target person (grandchild) U1.
  • the target person information also includes the terminal ID of the tablet terminal 101 .
  • the face image of the subject (grandchild) U1 may be captured by the camera 110 or acquired from an external device such as the smartphone 102 or the like.
  • the target person information may include voice information obtained by picking up the voice of the target person (grandchild) U1, voice recognition results, and health information.
  • the registration unit 181 performs voice recognition on voice information obtained by collecting the utterance of the target person (grandchild) U1 by the microphone 120, and detects the physical condition and desires of the target person (grandchild) U1 from the voice recognition result.
  • the physical condition and requests may be included in the target person information. For example, when the subject (grandchild) U1 utters "I have a headache," the registration unit 181 detects that the subject (grandchild) U1 is in poor physical condition. Further, when the target person (grandchild) U1 utters "I want to go play", the registration unit 181 detects that the target person (grandchild) U1 wants to go to a park, an amusement park, or the like.
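The detection step in this passage can be pictured as simple phrase spotting on the speech-recognition transcript. The phrase tables and the function name are hypothetical, since the patent does not specify how the recognition result is analyzed.

```python
# Hypothetical phrase tables for detecting physical condition and
# requests from a speech-recognition transcript.
CONDITION_PHRASES = {"headache": "poor physical condition"}
REQUEST_PHRASES = {"want to go play": "outing (park, amusement park, etc.)"}

def detect_state(transcript):
    """Return (conditions, requests) detected in the transcript."""
    conditions = [v for k, v in CONDITION_PHRASES.items() if k in transcript]
    requests = [v for k, v in REQUEST_PHRASES.items() if k in transcript]
    return conditions, requests

print(detect_state("I have a headache"))
# prints (['poor physical condition'], [])
```

A production system would more likely use intent classification over the transcript, but the output (detected condition and requests, to be included in the target person information) is the same.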
  • the supporter information includes at least identification information of the supporters (grandparents) U3, and may further include personal information of the supporters (grandparents) U3, especially payment information. Also, the supporter information may include the terminal ID of the tablet terminal 103 . As the identification information (supporter ID) of the supporter (grandparent) U3, information previously issued in the support device 300 or the like may be used.
  • The registration unit 181 acquires a captured image (first captured image) taken by the subject (grandchild) U1 with the camera 110, and transmits a registration request including the first captured image to the support device 300 via the network N.
  • the target person (grandchild) U1 may use the camera 110 to photograph an object or scenery that he or she is interested in.
  • the registration unit 181 acquires the first captured image from the camera 110 .
  • the subject (grandchild) U1 may capture the first captured image using a digital camera or the like. In this case, it is assumed that the subject's parent U2 transfers the first captured image from the digital camera to the tablet terminal 101 . Then, the registration unit 181 acquires the first captured image from the digital camera and transmits a registration request.
  • The registration unit 181 may include part or all of a captured moving image in the registration request as a first captured image group. Note that, when the first captured image is acquired, the registration unit 181 may automatically transmit a registration request including the first captured image to the support device 300 without requiring an operation by the subject (grandchild) U1 or the like. Further, the registration unit 181 may acquire position information of the shooting location together with the first captured image. In that case, the registration unit 181 further includes the acquired position information in the registration request.
  • the registration unit 181 may include the face image (second biometric information) of the target person (grandchild) U1 as the photographer together with the first captured image in the registration request.
  • The registration unit 181 may include, in the registration request, the terminal ID of the tablet terminal 101 or the identification information of the subject (grandchild) U1 together with the first captured image.
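The registration request assembled over the last few passages can be pictured as a payload builder with one mandatory field and several optional ones. The field names and the function name are illustrative assumptions, not the patent's actual wire format.

```python
def build_registration_request(first_image, face_image=None, location=None,
                               terminal_id=None, subject_id=None):
    """Assemble the registration request fields described above.
    Only the first captured image is mandatory; the rest are optional."""
    request = {"first_captured_image": first_image}
    if face_image is not None:     # photographer's face (second biometric info)
        request["photographer_face_image"] = face_image
    if location is not None:       # position info of the shooting location
        request["location"] = location
    if terminal_id is not None:    # terminal ID of the tablet terminal 101
        request["terminal_id"] = terminal_id
    if subject_id is not None:     # identification info of the subject
        request["subject_id"] = subject_id
    return request

req = build_registration_request(b"jpeg-bytes", terminal_id="tablet-101")
print(sorted(req))  # prints ['first_captured_image', 'terminal_id']
```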
  • the content processing unit 182 analyzes content control information from the support device 300, displays characters on the touch panel 140, and provides the user interface described above. In particular, the content processing unit 182 displays the support information on the touch panel 140 when the support information is received from the support device 300 .
  • the support information may be display content of electronic data, service information, a product shipping notification message, or the like. Note that the content processing unit 182 may display the support information through a character.
  • the content processing unit 182 transmits thank-you information for the support information to the support device 300 via the network N in response to the operation of the target person (grandchild) U1 or the target person's parent U2.
  • The thank-you information includes text information such as a message input by the target person (grandchild) U1 or the target person's parent U2 via the touch panel 140, and may also include a captured image (second captured image) of the target person (grandchild) U1 taken when the support information is displayed or when the product is received.
  • It is assumed that the second captured image includes the face area of the subject (grandchild) U1 captured by the user-side camera (camera 110 or the like) of the tablet terminal 101.
  • the subject's parent U2 may capture the second captured image using the smartphone 102 or the like, and transfer the second captured image from the smartphone 102 or the like to the tablet terminal 101 . Then, the content processing unit 182 transmits thank-you information including the second captured image acquired from the smartphone 102 or the like to the support device 300 .
  • the smart phone 102 is an example of a terminal possessed and used by the parent U2 of the subject.
  • the smartphone 102 may be a tablet terminal, a PC (Personal Computer) equipped with or connected to a camera, or the like.
  • the smartphone 102 exchanges text data with the support device 300 via a general-purpose SNS (Social Network Service) application.
  • For example, the SNS application is a message application in which the above-described character is registered as a user, and the character's account exchanges messages with the account of the target person's parent U2, thereby realizing message exchange with the supporter (grandparent) U3 via the character.
  • the smartphone 102 may perform the above-described pre-registration request, registration request for the first captured image, display of support information, and transmission of thank-you information.
  • the tablet terminal 103 is a terminal used by the supporter (grandparent) U3, and is installed in the residence of the supporter (grandparent) U3. Note that the configuration of the tablet terminal 103 is the same as that of the tablet terminal 101 described above, and illustration thereof is omitted.
  • The tablet terminal 103, like the tablet terminal 101, may operate as a user interface that displays a character serving as a dialogue partner for the supporter (grandparent) U3.
  • the tablet terminal 103 displays the support information candidates received from the support device 300 on the touch panel. Then, the tablet terminal 103 receives support information selection operation by the support person (grandparent) U3 from among the support information candidates, and transmits a support instruction including the selected support information to the support device 300 .
  • the tablet terminal 103 displays the thank-you information on the touch panel.
  • the tablet terminal 103 may register supporter information regarding the supporter (grandparent) U3 in the support device 300 .
  • the supporter information may include identification information (supporter ID) of the supporter (grandparent) U3, personal information, payment information, and terminal ID of the tablet terminal 103 .
  • the tablet terminal 103 may include the face image of the supporter (grandparent) U3 in the supporter information. In that case, the tablet terminal 103 may acquire the supporter ID issued by the support device 300 . Further, the tablet terminal 103 may notify the tablet terminal 101 or the smartphone 102 of the supporter ID and the terminal ID of the tablet terminal 103 .
  • The authentication device 200 is an information processing device that stores facial feature information of users. In response to a face authentication request received from the outside, the authentication device 200 collates the face image or facial feature information included in the request with the facial feature information of each user, and returns the collation result (authentication result) to the requester.
  • FIG. 5 is a block diagram showing the configuration of the authentication device 200 according to the second embodiment.
  • The authentication device 200 includes a face information DB (DataBase) 210, a face detection unit 220, a feature point extraction unit 230, a registration unit 240, and an authentication unit 250.
  • the face information DB 210 associates and stores a user ID 211 and face feature information 212 of the user ID.
  • the facial feature information 212 is a set of feature points extracted from the facial image.
  • the face detection unit 220 detects a face area included in a registered image for registering face information and outputs it to the feature point extraction unit 230 .
  • the feature point extraction section 230 extracts feature points from the face area detected by the face detection section 220 and outputs facial feature information to the registration section 240 . Further, the feature point extraction unit 230 extracts feature points included in the facial image received from the support device 300 or the like, and outputs facial feature information to the authentication unit 250 .
  • the registration unit 240 newly issues a user ID 211 when registering facial feature information.
  • the registration unit 240 associates the issued user ID 211 with the facial feature information 212 extracted from the registered image and registers them in the facial information DB 210 .
  • Authentication unit 250 performs face authentication using facial feature information 212 . Specifically, the authentication unit 250 collates the facial feature information extracted from the facial image with the facial feature information 212 in the facial information DB 210 . If the verification is successful, the authentication unit 250 identifies the user ID 211 associated with the verified facial feature information 212 .
  • The authentication unit 250 returns to the requester a face authentication result indicating whether or not the facial feature information matched. Whether or not the facial feature information matched corresponds to the success or failure of the authentication. Note that a match of facial feature information means that the degree of matching is equal to or greater than a threshold. When face authentication succeeds, the face authentication result includes the identified user ID.
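The thresholded matching described here can be sketched as a similarity comparison over feature vectors. The similarity measure (cosine), the threshold value, and the function names are assumptions; the patent does not fix a particular matching algorithm.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(query_features, face_db, threshold=0.9):
    """face_db maps user_id -> stored facial feature vector.
    A match (authentication success) requires similarity >= threshold."""
    for user_id, stored in face_db.items():
        if cosine_similarity(query_features, stored) >= threshold:
            return True, user_id   # success: include the identified user ID
    return False, None             # failure: no user ID

db = {"user-211": [1.0, 0.0, 0.0]}
print(authenticate([1.0, 0.0, 0.0], db))  # prints (True, 'user-211')
print(authenticate([0.0, 1.0, 0.0], db))  # prints (False, None)
```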
  • the support device 300 is an example of the support device 1 described above.
  • the support device 300 is an information processing device that performs pre-registration processing, collection processing, support information provision processing, thank-you information registration and presentation processing, and the like.
  • the support device 300 may be made redundant by a plurality of servers, and each functional block may be realized by a plurality of computers. Note that the support device 300 may generate content control information for controlling characters displayed on the tablet terminals 101 and 103 and transmit the content control information to the tablet terminals 101 and 103 .
  • FIG. 6 is a block diagram showing the configuration of the support device 300 according to the second embodiment.
  • The support device 300 includes a storage unit 310, a memory 320, a communication unit 330, and a control unit 340.
  • the storage unit 310 is an example of a storage device such as a hard disk or flash memory.
  • The storage unit 310 stores a program 311, subject management information 312, supporter information 313, and a support information DB 314.
  • the program 311 is a computer program in which processing including pre-registration processing, collection processing, support information provision processing, thank-you information registration and presentation processing, etc. according to the second embodiment is implemented.
  • the subject management information 312 is information for managing the subject (grandchild) U1.
  • The subject management information 312 is information in which a subject ID 3121, a supporter ID 3122, a terminal ID 3123, subject information 3124, an object image 3125, and a subject image 3126 are associated with each other.
  • the subject ID 3121 is identification information of the subject (grandchild) U1.
  • The subject ID 3121 is identical to, or uniquely corresponds to, the user ID 211 that is managed in association with the facial feature information 212 in the face information DB 210 of the authentication device 200.
  • the target person ID 3121 may be identification information of the target person (grandchild) U1 included in the above-described pre-registration request.
  • the supporter ID 3122 is identification information of the supporter (grandparent) U3.
  • the terminal ID 3123 is identification information of the tablet terminal 101 used by the subject (grandchild) U1.
  • the target person information 3124 is at least part of the target person information included in the pre-registration request described above.
  • the target person information 3124 includes, for example, personal information, health information, physical condition, and requests of the target person (grandchild) U1, but is not limited to these.
  • the object image 3125 is an example of the first captured image included in the registration request described above.
  • the target object image 3125 is an image including the target object photographed by the target person (grandchild) U1.
  • a first photographed image including a landscape image taken by the target person (grandchild) U1 may also be registered in the target person management information 312 in association with the target person ID 3121 .
  • the target person image 3126 is the second captured image included in the thank-you information described above. That is, the target person image 3126 is an image including the target person (grandchild) U1 photographed when the support information is displayed or when the product is received.
  • the supporter information 313 is information about supporters (grandparents) U3.
  • the supporter information 313 is information in which a supporter ID 3131, personal information 3132, and terminal ID 3133 are associated with each other.
  • the supporter ID 3131 is identification information of the supporter (grandparent) U3, and is information identical to or uniquely corresponding to the supporter ID 3122 described above.
  • the personal information 3132 includes personal information of the supporters (grandparents) U3, such as payment information.
  • the terminal ID 3133 is identification information of the tablet terminal 103 used by the supporter (grandparent) U3.
  • the support information DB 314 is a database that manages a plurality of support information candidates 3141 to 314n (n is a natural number of 2 or more).
  • Support information candidates 3141 and the like are pieces of information serving as candidates for the support information for the target person (grandchild) U1.
  • the support information candidates 3141 and the like are presents (products), services (educational contents), travel proposal information, electronic data display contents (electronic books, etc.), and the like.
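The records held in the storage unit 310 can be sketched as simple data structures. The following is a minimal illustration only; the field names mirror the reference numerals in the description above, but the concrete layout (dataclasses, dictionary keys, candidate IDs) is an assumption, not something specified by this embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class SubjectManagementInfo:          # subject management information 312
    subject_id: str                   # subject ID 3121 (user ID issued by the authentication device 200)
    supporter_id: str                 # supporter ID 3122
    terminal_id: str                  # terminal ID 3123 of the tablet terminal 101
    subject_info: dict                # subject information 3124 (personal info, health info, requests, ...)
    object_images: list = field(default_factory=list)   # object images 3125 (first captured images)
    subject_images: list = field(default_factory=list)  # subject images 3126 (second captured images)

@dataclass
class SupporterInfo:                  # supporter information 313
    supporter_id: str                 # supporter ID 3131
    personal_info: dict               # personal information 3132 (e.g. payment information)
    terminal_id: str                  # terminal ID 3133 of the tablet terminal 103

# support information DB 314: candidates keyed by a hypothetical candidate ID
support_info_db = {
    "3141": {"kind": "product", "name": "plant illustrated book"},
    "3142": {"kind": "service", "name": "flower arrangement experience course"},
}
```

The supporter ID is the join key: the subject management information 312 and the supporter information 313 are linked through it, which is what allows the support device 300 to resolve a presentation request from the supporter's terminal down to a specific subject.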
  • the memory 320 is a volatile storage device such as RAM (Random Access Memory), and is a storage area for temporarily holding information when the control unit 340 operates.
  • a communication unit 330 is a communication interface with the network N.
  • the control unit 340 is a processor that controls each component of the support device 300, that is, a control device.
  • the control unit 340 loads the program 311 from the storage unit 310 into the memory 320 and executes the program 311 .
  • the control unit 340 realizes the functions of the registration unit 341 , the acquisition unit 342 , the authentication control unit 343 , the identification unit 344 , the determination unit 345 , the presentation unit 346 and the processing unit 347 .
  • the registration unit 341 is an example of the registration unit 11 described above. Registration unit 341 receives a pre-registration request from tablet terminal 101 or smartphone 102 and transmits a face information registration request including a face image (of subject (grandchild) U1) included in the pre-registration request to authentication device 200 . Then, the registration unit 341 acquires the user ID issued by the authentication device 200 in accordance with the registration of the face information. Then, the registration unit 341 generates the target person management information 312 by associating the acquired user ID (subject ID 3121) with the target person information and the supporter information included in the pre-registration request. Then, the registration unit 341 registers the generated subject management information 312 in the storage unit 310 .
  • the registration unit 341 registers the facial feature information 212 (first biometric information) of the subject (grandchild) U1 in the authentication device 200 in advance. Further, the registration unit 341 can be said to register the supporter ID 3122 , the terminal ID 3123 , the target person information 3124 , etc., and the facial feature information 212 of the target person (grandchild) U1 in association with each other via the target person ID 3121 .
  • the registration unit 341 sets the photographer of the first captured image specified by the specifying unit 344, which will be described later, as the subject ID 3121, and sets the first captured image as the target object image 3125. As such, the registration unit 341 registers (updates) the subject management information 312 by associating the subject ID 3121 with the subject image 3125 . The registration unit 341 also registers the position information of the shooting location acquired together with the first captured image in association with the object image 3125 .
  • the registration unit 341 registers the thank-you information for the support information in the target person management information 312 in association with the target person ID 3121 .
  • the registration unit 341 registers the target person ID 3121, the display content, and the target person image 3126 (second captured image) in the subject management information 312 in association with each other.
  • the registration unit 341 may also register the target person ID 3121, the display content, the content position, and the target person image 3126 (second captured image) in the subject management information 312 in association with each other.
  • the registration unit 341 may register the second captured image, the display content, and the content position in association with each other when the satisfaction level of the target person (grandchild) U1, measured from the second captured image by the processing unit 347 (described later), is equal to or greater than a predetermined value.
  • the acquisition unit 342 is an example of the acquisition unit 12 described above. Acquisition unit 342 acquires a registration request including the first captured image from tablet terminal 101 . Acquisition unit 342 also acquires a second captured image of target person (grandchild) U1 captured when the display content is displayed on tablet terminal 101 (first terminal). The acquisition unit 342 may further acquire the content position (display position) of the display content displayed on the tablet terminal 101 at the time of capturing together with the second captured image. The acquisition unit 342 may acquire the second biometric information (face image, etc.) of the photographer from the tablet terminal 101 together with the first captured image. Alternatively, the acquisition unit 342 may acquire the identification information of the photographer or the terminal ID of the tablet terminal 101 together with the first captured image. Furthermore, the acquisition unit 342 may acquire the position information of the shooting location together with the first captured image.
  • the authentication control unit 343 controls face authentication for the face image included in the registration request acquired by the acquisition unit 342. Specifically, authentication control unit 343 transmits a face authentication request including a face image to authentication device 200 and receives a face authentication result from authentication device 200 . Note that the authentication control unit 343 may detect the user's face area from the face image and include the image of the face area in the face authentication request. Alternatively, the authentication control unit 343 may extract facial feature information from the face area and include the facial feature information in the face authentication request.
  • the acquisition unit 342 performs collection processing for collecting the other target person information described above.
  • the other target person information may include the face image of the target person (grandchild) U1, as well as the above-described voice information, voice recognition result, health information, physical condition, requests, and the like.
  • the authentication control unit 343 performs face authentication on the face image included in the target person information and identifies the target person ID 3121 .
  • the registration unit 341 associates the specified target person ID 3121 with the voice information, voice recognition result, health information, physical condition, requests, etc. included in the target person information, and registers them in the target person management information 312 .
  • the identification unit 344 is an example of the identification unit 13 described above.
  • the identifying unit 344 identifies the photographer of the first captured image. For example, when a face image is included in the registration request for the first captured image, the specifying unit 344 acquires the face authentication result from the authentication control unit 343 and, if the face authentication is successful, identifies the user ID included in the face authentication result. The identification unit 344 treats that user ID as the photographer and identifies it as the subject ID 3121 . Note that when the identification information of the photographer is included in the registration request for the first captured image, the identifying unit 344 identifies the photographer using the identification information as the subject ID 3121 . Further, when the terminal ID of the tablet terminal 101 is included in the registration request for the first captured image, the identifying unit 344 identifies the subject ID 3121 associated with the terminal ID 3123 as the photographer.
  • the identifying unit 344 identifies the interest of the subject based on the subject information 3124 and the object image 3125 associated with the identified subject ID 3121 . Further, the specifying unit 344 may identify the interest of the target person based on the thank-you information, the target person image 3126, the display content, the content position, the position information of the shooting location of the first captured image, and the like that are associated with the identified target person ID 3121. As a result, the accuracy of specifying the interest can be improved.
  • the content position is the display position of the display content displayed on the tablet terminal 101 when the second captured image was captured as described above.
  • the content position is the displayed page number or the like.
  • the second captured image is an image of the target person (grandchild) U1 captured when the tablet terminal 101 displays (reproduces) a predetermined page (content position) of the display content, which is the support information.
  • when the target person's parent U2 confirms that the target person (grandchild) U1 is interested in a specific page of the display content, the parent U2 may photograph the target person (grandchild) U1 with the inner camera of the tablet terminal 101.
  • the tablet terminal 101 may set the captured image at this time as the second captured image, and may set the page displayed at this time as the content position.
  • the displayed content and the content position may be registered when the above-described degree of satisfaction is equal to or higher than a predetermined value.
  • the specifying unit 344 can specify the interest of the target person with high accuracy by considering the display content and the content position.
  • the specifying unit 344 may exclude the face-recognized area from the first captured image and specify the subject's interest based on the excluded image. For example, when the target person (grandchild) U1 takes a picture, there is a case where privacy-related information such as the face of a third party is captured unintentionally. In such a case, the third person's face and the like have nothing to do with the interest of the target person (grandchild) U1. Therefore, the identification unit 344 can improve the accuracy of identification by excluding unintentionally reflected information from the identification target of interest. Also, the specifying unit 344 preferably excludes areas other than the target person from the areas in which the face is recognized in the first captured image.
  • the registration unit 341 described above may register the excluded image as the target object image 3125 .
  • the registration unit 341 may exclude the face-recognized area from the first captured image and register the excluded image as the target object image 3125 . These can protect the privacy of third parties. Therefore, the target person's parent U2 can allow the target person (grandchild) U1 to use the support system 1000 without worrying about the privacy of a third party.
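The exclusion of face-recognized areas described above can be illustrated with a small sketch. This is an assumption-laden toy: the image is modeled as a nested list of pixel values, `face_boxes` stands in for the output of any face detector (bounding boxes as `(x, y, w, h)`), and the hypothetical `keep_box` parameter preserves the target person's own face while erasing third-party faces.

```python
def exclude_face_regions(image, face_boxes, keep_box=None):
    """Zero out all detected face areas except, optionally, the
    target person's own face (keep_box). Returns a new image."""
    masked = [row[:] for row in image]           # copy so the original is untouched
    for (x, y, w, h) in face_boxes:
        if keep_box is not None and (x, y, w, h) == keep_box:
            continue                             # keep the target person's face
        for r in range(y, min(y + h, len(masked))):
            for c in range(x, min(x + w, len(masked[0]))):
                masked[r][c] = 0                 # erase the third-party face area
    return masked
```

In a real implementation the masking (or blurring) would operate on an image array from a face detector, but the flow is the same: detect faces, exclude every region that does not belong to the target person, then use the excluded image for interest analysis and registration as the object image 3125.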
  • the determination unit 345 is an example of the determination unit 14 described above.
  • the determination unit 345 determines one or more support information candidates 3141 to 314n from the support information DB 314 based on the interest identified by the identification unit 344 .
  • the presentation unit 346 is an example of the presentation unit 15 described above.
  • the presentation unit 346 presents the support information candidate to the supporter (grandparent) U3 by transmitting the determined support information candidate to the tablet terminal 103 .
  • the processing unit 347 receives support instructions including support information selected by the supporter (grandparents) U3 from among the presented support information candidates, and performs processing according to the support information. Specifically, the processing unit 347 performs processing for providing the support information to the target person (grandchild) U1. For example, when the support information is a product, the processing unit 347 performs the purchase and shipping procedures for the product as “processing for providing”. For example, the processing unit 347 transmits a purchase and shipping request to the sales system of the selected product, with the destination being the residence of the target person (grandchild) U1. Further, when the support information is travel, the processing unit 347 performs reservation processing for the selected travel as “processing for providing”.
  • the processing unit 347 sends a request to the server of the travel agency for the selected trip for a trip on a predetermined schedule with the target person (grandchild) U1, the target person's parent U2, and the supporter (grandparent) U3 as participants. Submit a reservation request for Also, in these cases, the processing unit 347 notifies the tablet terminal 101 of the content of the shipment and the content of the reservation.
  • when the support information is an electronic book (display content), the processing unit 347 transmits an electronic book purchase request using the payment information of the supporter (grandparent) U3 to the electronic book sales system. After the purchase, the processing unit 347 notifies the tablet terminal 101 of the download destination of the electronic book.
  • the processing unit 347 performs these purchase requests and notifications as "processing for providing”. In these cases, the processing unit 347 makes a payment using the payment information of the supporter (grandparent) U3.
  • the processing unit 347 measures the degree of satisfaction of the target person (grandchild) U1 from the second captured image and, if the degree of satisfaction is equal to or higher than a predetermined value, notifies the tablet terminal 103 of the supporter (grandparent) U3 of the second captured image.
  • the processing unit 347 may identify the facial region of the subject (grandchild) U1 included in the second captured image, and measure the degree of satisfaction from the degree of smile using a predetermined smile analysis technique.
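The satisfaction check can be sketched as follows. The embodiment leaves the smile analysis technique open, so this sketch assumes a hypothetical upstream analyzer that yields smile scores in [0, 1] for the target person's face region(s); the threshold value 0.7 is likewise an illustrative assumption for the "predetermined value".

```python
def measure_satisfaction(smile_scores):
    """Return satisfaction in [0, 1] as the mean smile degree over the
    target person's face regions found in the second captured image."""
    if not smile_scores:
        return 0.0
    return sum(smile_scores) / len(smile_scores)

def should_notify_supporter(smile_scores, threshold=0.7):
    # Notify the supporter's tablet terminal 103 only when the measured
    # satisfaction is equal to or higher than the predetermined value.
    return measure_satisfaction(smile_scores) >= threshold
```

The same predicate would gate the optional registration described earlier, where the second captured image, display content, and content position are stored only for satisfying moments.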
  • FIG. 7 is a sequence diagram showing the flow of pre-registration processing according to the second embodiment.
  • the target person's parent U2 performs pre-registration to associate the target person (grandchild) U1 with the supporter (grandparent) U3.
  • the smartphone 102 photographs the face of the target person (grandchild) U1 with the camera according to the operation of the target person's parent U2.
  • the smartphone 102 receives the target person information of the target person (grandchild) U1 and the supporter information of the supporter (grandparent) U3 from the input of the target person's parent U2.
  • the smartphone 102 transmits a pre-registration request including the target person information and the supporter information to the support device 300 via the network N according to the operation of the target person's parent U2 (S101).
  • the pre-registration request includes the face image of the target person (grandchild) U1.
  • the registration unit 341 of the support device 300 receives a pre-registration request from the smartphone 102 via the network N. Then, the registration unit 341 transmits a face information registration request including the face image of the target person (grandchild) U1 included in the pre-registration request to the authentication device 200 (S102). In response, the authentication device 200 performs face information registration processing (S103).
  • FIG. 8 is a flow chart showing the flow of face information registration processing by the authentication device according to the second embodiment.
  • the information registration terminal (not shown) photographs the user's body including the face, and transmits a face information registration request including the photographed image (registered image) to the authentication device 200 via the network N.
  • the information registration terminal is, for example, an information processing device such as a personal computer, a smart phone, or a tablet terminal.
  • the information registration terminals may be the tablet terminal 101, the smart phone 102, the tablet terminal 103, and the like.
  • in the present embodiment, the information registration terminal is the support device 300 that has received a pre-registration request from the smartphone 102 or the like.
  • the authentication device 200 receives a face information registration request (S201). For example, the authentication device 200 receives a face information registration request via the network N from the support device 300 .
  • the face detection unit 220 detects a face area from the face image included in the face information registration request (S202). Then, the feature point extraction unit 230 extracts feature points (facial feature information) from the face area detected in step S202 (S203). Then, the registration unit 240 issues the user ID 211 (S204). Then, the registration unit 240 associates the extracted facial feature information 212 with the issued user ID 211 and registers them in the facial information DB 210 (S205). After that, the registration unit 240 returns the issued user ID 211 to the request source (the information registration terminal, for example, the support device 300) (S206).
  • the registration unit 341 of the support device 300 acquires the issued user ID from the authentication device 200 via the network N (S104). Then, the registration unit 341 sets the acquired user ID as the subject ID 3121 . Further, the registration unit 341 extracts information (personal information, terminal ID of the tablet terminal 101) other than the face image included in the target person information included in the pre-registration request. The registration unit 341 also extracts the supporter ID included in the supporter information included in the pre-registration request. Then, the registration unit 341 generates the target person management information 312 by associating the target person ID 3121, the supporter ID 3122, the terminal ID 3123, and the target person information 3124 (excluding the face image). Then, the registration unit 341 registers the generated subject management information 312 in the storage unit 310 (S105).
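Steps S101 to S105 can be condensed into a short sketch of the registration unit 341. The request/record keys and the `FakeAuthDevice` stub are illustrative assumptions; the essential point is that the user ID issued by the authentication device (S204) becomes the subject ID 3121 that links the subject management information 312 to the registered facial feature information.

```python
def pre_register(request, authentication_device, storage):
    """Sketch of S101-S105: register the face image with the
    authentication device, then store the subject management info 312."""
    user_id = authentication_device.register_face(request["face_image"])  # S102-S104
    record = {
        "subject_id": user_id,                     # subject ID 3121
        "supporter_id": request["supporter_id"],   # supporter ID 3122
        "terminal_id": request["terminal_id"],     # terminal ID 3123
        "subject_info": request["subject_info"],   # subject information 3124 (face image excluded)
    }
    storage[user_id] = record                      # S105
    return user_id

class FakeAuthDevice:
    """Stand-in for the authentication device 200 (face info registration)."""
    def __init__(self):
        self._next = 0
    def register_face(self, face_image):
        self._next += 1
        return f"user-{self._next}"                # user ID 211 issued in S204
```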
  • FIG. 9 is a flowchart showing the flow of collection processing according to the second embodiment.
  • the tablet terminal 101 takes an image of an object with the camera 110 according to the operation of the object person (grandchild) U1 (S111).
  • the tablet terminal 101 transmits a registration request including the first captured image to the support device 300 via the network N (S112).
  • the registration request includes the face image and identification information of the target person (grandchild) U1 or the terminal ID of the tablet terminal 101 .
  • the registration request includes the face image of the subject (grandchild) U1.
  • the acquisition unit 342 of the support device 300 acquires a registration request including the first captured image (and face image) from the tablet terminal 101 via the network N.
  • the authentication control unit 343 then transmits the face authentication request including the face image included in the registration request to the authentication device 200 via the network N (S113).
  • the authentication device 200 performs face authentication processing (S114).
  • FIG. 10 is a flow chart showing the flow of face authentication processing by the authentication device according to the second embodiment.
  • the authentication device 200 receives a face authentication request from the support device 300 via the network N (S211). Note that the authentication device 200 may receive a face authentication request from the tablet terminal 101 or the like.
  • the authentication device 200 extracts facial feature information from the face image included in the face authentication request, as in steps S202 and S203 described above.
  • the authentication unit 250 of the authentication device 200 collates the facial feature information extracted from the face image included in the face authentication request with the facial feature information 212 of the face information DB 210 (S212), and calculates the matching degree.
  • the authentication unit 250 determines whether or not the degree of matching is equal to or greater than the threshold (S213).
  • if the degree of matching is equal to or greater than the threshold, the authentication unit 250 identifies the user ID 211 associated with the facial feature information 212 (S214). Then, the authentication unit 250 returns the face authentication result, including an indication of success and the specified user ID 211, to the support device 300 via the network N (S215). If the degree of matching is less than the threshold in step S213, the authentication unit 250 returns a face authentication result indicating failure to the support device 300 via the network N (S216).
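Steps S212 to S216 amount to a 1:N match against the face information DB 210 with a threshold test. The sketch below assumes facial feature information is a numeric vector and uses cosine similarity as the matching degree; the actual feature representation, similarity measure, and threshold are not fixed by this embodiment.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def authenticate(query_features, face_info_db, threshold=0.8):
    """Collate query facial features against every entry of the face
    information DB 210 (S212) and apply the threshold test (S213)."""
    best_id, best_score = None, -1.0
    for user_id, features in face_info_db.items():
        score = cosine_similarity(query_features, features)
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score >= threshold:
        return {"success": True, "user_id": best_id}   # S214-S215
    return {"success": False}                          # S216
```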
  • the authentication control unit 343 of the support device 300 receives the face authentication result from the authentication device 200 via the network N (S115).
  • it is assumed here that the face authentication is successful and that the face authentication result includes an indication of success and the user ID.
  • the specifying unit 344 determines whether or not the face authentication is successful from the received face authentication results.
  • the specifying unit 344 specifies the user ID included in the face authentication result, and specifies the user ID as the photographer (S116).
  • if the face authentication fails, the support device 300 may reply to that effect to the tablet terminal 101 .
  • the registration unit 341 registers the specified photographer as a target person in association with the first captured image (S117). Specifically, the registration unit 341 sets the user ID identified in step S116 as the subject ID 3121 and sets the first captured image included in the registration request as the subject image 3125 . Then, the registration unit 341 updates the subject management information 312 by associating the subject ID 3121 with the subject image 3125 .
  • FIG. 9 shows an example in which the photographer of the first captured image is identified by face authentication requested from the support device 300 to the authentication device 200; however, user identification information or the terminal ID of the tablet terminal 101 may instead be used to identify the photographer in the support device 300.
  • the tablet terminal 101 may capture a face image of the target person (grandchild) U1 with an inner camera (for the user) and transmit a face authentication request including the face image to the authentication device 200 via the network N. Then, the tablet terminal 101 receives the face authentication result from the authentication device 200 via the network N.
  • the tablet terminal 101 can specify (acquire) the user ID (identification information) of the target person (grandchild) U1 from the face authentication result.
  • the tablet terminal 101 may identify the person using the ID and password (passcode) entered by the target person (grandchild) U1 when logging into the terminal.
  • the ID used for the successful login may be the same as the user ID managed by the authentication device 200, or may uniquely correspond to it.
  • the tablet terminal 101 may store the face image and user ID of the target person (grandchild) U1 in the storage unit 150 in advance. In this case, when the target person (grandchild) U1 captures the first captured image, the tablet terminal 101 may capture the face image of the target person (grandchild) U1 with the inner camera (for the user) and perform face authentication by matching the captured face image against the face image stored in the storage unit 150.
  • the tablet terminal 101 can identify the user ID when face authentication within the terminal is successful. Accordingly, in step S112, the tablet terminal 101 can transmit a registration request including the specified user ID (of the target person (grandchild) U1) together with the first captured image.
  • the support device 300 can identify the photographer from the user ID included in the received registration request instead of steps S113 and S115. Alternatively, the tablet terminal 101 may transmit the registration request including the terminal ID together with the first captured image.
  • in this case, instead of steps S113 and S115, the support device 300 can identify the subject ID 3121 associated with the terminal ID 3123 included in the received registration request as the photographer. Alternatively, identification of the photographer by the support device 300 may be performed by biometric authentication other than face authentication.
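The three ways of identifying the photographer described above (face authentication result, user ID included in the request, terminal ID lookup) can be sketched as a fallback chain. The dictionary shapes are illustrative assumptions; the priority order shown here is one plausible reading of the alternatives, not something the embodiment mandates.

```python
def identify_photographer(registration_request, face_auth_result, subject_mgmt):
    """Resolve the subject ID 3121 for a registration request:
    1) a successful face authentication result,
    2) an explicit user ID carried in the request,
    3) a lookup of the terminal ID against the subject management info 312."""
    if face_auth_result and face_auth_result.get("success"):
        return face_auth_result["user_id"]
    if "user_id" in registration_request:
        return registration_request["user_id"]
    terminal_id = registration_request.get("terminal_id")
    for record in subject_mgmt.values():
        if record["terminal_id"] == terminal_id:
            return record["subject_id"]
    return None   # photographer could not be identified
```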
  • FIG. 11 is a flowchart showing the flow of support information provision processing according to the second embodiment.
  • the support device 300 starts support information provision processing at a predetermined timing in response to a presentation request from the smartphone 102 or the tablet terminal 103 . In the following description, it is assumed that there is a presentation request from the tablet terminal 103 .
  • the tablet terminal 103 transmits a request for presentation of support information candidates for the target person (grandchild) U1 to the support device 300 via the network N in accordance with the operation of the supporter (grandparent) U3 (S121).
  • the presentation request includes the terminal ID of the tablet terminal 103 .
  • the presentation request may include the supporter ID and face image of the supporter (grandparent) U3.
  • the acquisition unit 342 of the support device 300 acquires the presentation request from the tablet terminal 103 via the network N.
  • the identifying unit 344 identifies, from the supporter information 313, the supporter ID 3131 associated with the terminal ID 3133 included in the acquired presentation request, and then specifies the target person ID 3121 associated with the identified supporter ID 3131 (supporter ID 3122) (S122).
  • the acquiring unit 342 acquires the target person information 3124 and the target object image 3125 associated with the identified target person ID 3121 from the target person management information 312 (S123).
  • the specifying unit 344 analyzes the target object image 3125 and specifies the interest of the target person (grandchild) U1 based on the analysis result and the target person information 3124 (S124). Then, the determination unit 345 determines one or more support information candidates 3141 and the like from the support information DB 314 based on the specified interest (S125).
  • An AI model may be used for part or all of steps S124 and S125.
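As a stand-in for the analysis in steps S124 and S125, interest matching can be sketched as tag overlap scoring. This is a deliberately simple assumption: the candidates carry hypothetical `tags`, the identified interest is a list of keywords, and an AI model could replace this scoring entirely as the text notes.

```python
def determine_candidates(interests, support_info_db, limit=3):
    """Score each support information candidate by overlap with the
    identified interests and return the top `limit` candidate IDs."""
    scored = []
    for candidate_id, candidate in support_info_db.items():
        overlap = len(set(candidate["tags"]) & set(interests))
        if overlap:
            scored.append((overlap, candidate_id))
    scored.sort(reverse=True)                    # best-matching candidates first
    return [cid for _, cid in scored[:limit]]
```

With interests identified as flower-related, this kind of scoring is what would surface the plant illustrated book, flower arrangement course, and sunflower cultivation set shown in FIG. 12.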
  • the presentation unit 346 transmits the determined support information candidate to the tablet terminal 103 via the network N (S126).
  • the tablet terminal 103 displays the received support information candidate (group).
  • FIG. 12 is a diagram showing an example of a presentation screen 51 of support information candidates for supporters according to the second embodiment.
  • the presentation screen 51 includes selection columns 5111 to 5113 , support information candidate display columns 5121 to 5123 , other candidate display button 5131 , purchase button 5132 and cancel button 5133 .
  • Selection columns 5111 to 5113 are columns for receiving a selection operation by a supporter (grandparent) U3.
  • Support information candidate display columns 5121 to 5123 are display columns for support information candidates determined in step S125. For example, it is assumed that the object images 3125 (group) include many flowers as objects. Therefore, in the example of FIG. 12, the support information candidate display column 5121 presents (recommends) a plant illustrated book, the column 5122 presents (recommends) a flower arrangement experience course, and the column 5123 presents (recommends) a sunflower cultivation set.
  • Another candidate display button 5131 is a button for displaying support information candidates other than the support information candidate display columns 5121 to 5123 .
  • when the other candidate display button 5131 is pressed, the tablet terminal 103 transmits a request for presentation of other candidates to the support device 300 and, upon receiving the other candidates, displays them in the support information candidate display columns 5121 to 5123.
  • a purchase button 5132 is a button for accepting purchase, as support information, of the candidate selected in the selection columns 5111 to 5113 .
  • the cancel button 5133 is a button for canceling or ending the presentation of the support information candidate (closing the presentation screen 51).
  • FIG. 12 shows that the supporter (grandparent) U3 has selected the selection column 5111, that is, the plant illustrated book, as the support information (S127) and pressed the purchase button 5132.
  • the tablet terminal 103 transmits a support instruction including the ID (support information) of the support information candidate corresponding to the support information candidate display column 5121 (plant illustrated book) to the support device 300 via the network N (S128).
  • the processing unit 347 of the support device 300 receives a support instruction from the tablet terminal 103 via the network N.
  • the processing unit 347 performs processing for providing support information included in the support instruction (S129).
  • the processing unit 347 notifies the tablet terminal 101 of information regarding the support information via the network N (S130).
  • the processing unit 347 identifies the terminal ID 3123 associated with the subject ID 3121 identified in step S122, and notifies the provision of the support information with the terminal ID 3123 (tablet terminal 101) as the destination.
  • when the support information is an e-book plant illustrated book, the processing unit 347 notifies the tablet terminal 101 of the download destination of the e-book.
  • step S130 may be omitted when presenting (delivering) a surprise gift (product) to the target person (grandchild) U1.
  • the target person (grandchild) U1 receives support information from the supporter (grandparent) U3 (S131). Then, the tablet terminal 101 transmits the thank-you information input by the target person (grandchild) U1 or the target person's parent U2 to the support device 300 via the network N (S132). At this time, the thank-you information includes the text information of the thank-you message and the second photographed image of the target person (grandchild) U1 at the time of receiving the support information.
  • the thank-you information includes a facial image for authentication of the target person (grandchild) U1, the target person ID, the terminal ID of the tablet terminal 101, and the like.
  • the acquisition unit 342 of the support device 300 acquires thank-you information from the tablet terminal 101 via the network N.
  • the specifying unit 344 specifies the target person (grandchild) U1 from the face image, the target person ID, the terminal ID of the tablet terminal 101, and the like included in the thank-you information.
  • the authentication control unit 343 may perform face authentication on the face image included in the thank-you information.
  • the registration unit 341 associates the target person ID 3121 of the target person (grandchild) U1 with the thank-you information, and performs registration processing in the target person management information 312 (S133).
  • the processing unit 347 notifies the obtained thank-you information to the tablet terminal 103 via the network N (S134).
  • the tablet terminal 103 displays the notified thank-you information (S135).
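The thank-you flow (S131 to S135) can be sketched as a handler that registers the thank-you information under the subject ID and forwards it to the supporter's terminal; the dictionary record shapes are assumptions for illustration.

```python
# Sketch of the thank-you flow (S131-S135), with hypothetical data shapes.
def handle_thanks(thanks, registry, notify_supporter):
    """thanks: dict with 'subject_id', 'message', 'second_image'."""
    subject_id = thanks["subject_id"]   # specified from face image / IDs (cf. S133)
    # S133: registration in the subject management information 312
    registry.setdefault(subject_id, []).append(
        {"message": thanks["message"], "image": thanks["second_image"]})
    # S134: notify the supporter's tablet terminal 103
    notify_supporter({"message": thanks["message"],
                      "image": thanks["second_image"]})

registry = {}
shown = []
handle_thanks({"subject_id": "U1", "message": "Thank you!",
               "second_image": "img-002.jpg"},
              registry, shown.append)
```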
  • FIG. 13 is a diagram for explaining the concept of presentation of support information candidates, selective purchase, delivery, and notification of thank-you comments according to the second embodiment.
  • the tablet terminal 103 displays the presentation screen 51a, and the supporter (grandparent) U3 has selected and purchased the illustrated book of plants as support information.
  • the support device 300 performs the process of purchasing and delivering the pictorial book of plants.
  • the plant illustrated book is delivered to the target person (grandchild) U1.
  • the target person's parent U2 uses the smartphone 102 to capture a second captured image including the target person (grandchild) U1 who is pleased with the plant illustrated book.
  • the second captured image may be captured by the tablet terminal 101 .
• the tablet terminal 101 or the smartphone 102 transmits thank-you information, including the second captured image and the thank-you comment of the target person (grandchild) U1, to the support device 300 in accordance with an operation by the target person (grandchild) U1 or the target person's parent U2. The support device 300 then notifies the tablet terminal 103 of the thank-you information.
  • the tablet terminal 103 displays the thank-you information display screen 51b in response to the thank-you information notification.
  • the supporter (grandparent) U3 can visually recognize the thank-you comment of the target person (grandchild) U1 and the picture of the target person (grandchild) U1 looking at the botanical illustrated book and enjoying it through the thank-you information display screen 51b.
  • the second embodiment has the following effects in addition to the effects of the first embodiment described above.
• since the support information is provided to the target person according to the support instruction from the supporter, support information that matches the interest of the target person can be provided.
  • the target person can know the provision from the supporter by the notification of the support information.
• since the request for registration of the first photographed image includes the target person's identification information (face image, identification information, terminal ID), the target person or their relatives need not input the identification information at each registration, and the photographer of the first captured image can be specified accurately and without effort. In particular, by performing biometric authentication using biometric information as the personal identification information, the photographer can be identified with high accuracy.
• since the photographing location is registered together with the first photographed image, the accuracy of specifying the interest is improved.
  • the tablet terminal 101 may transmit the first captured image to the storage area allocated to the target person (grandchild) U1 on the network N.
  • the identification unit 344 of the support device 300 can detect that the first captured image has been saved in the storage area and identify the user ID assigned to the storage area as the photographer.
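The storage-area approach can be sketched as a lookup from the saved image's directory to the user ID assigned to that area; the path layout and the area-to-owner table are assumptions for illustration.

```python
# Sketch of photographer identification via per-subject storage areas:
# the user ID assigned to the area where the first captured image was
# saved is treated as the photographer.
from pathlib import PurePosixPath

AREA_OWNER = {"/storage/u1": "U1", "/storage/u2": "U2"}  # area -> user ID (assumed)

def photographer_for(saved_path):
    """Return the user ID assigned to the storage area, or None if unknown."""
    area = str(PurePosixPath(saved_path).parent)
    return AREA_OWNER.get(area)

print(photographer_for("/storage/u1/flower.jpg"))
```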
• the support device 300 may analyze the photographed images in which the target person (grandchild) U1 appears among the photographs taken by the target person's parent U2, identify the interest of the target person (grandchild) U1, and present it to the supporter (grandparent) U3.
  • the smartphone 102 transmits an image including the target person (grandchild) U1 and the object photographed by the target person's parent U2 to the support device 300 via the network N.
• the support device 300 analyzes the received image. For example, if there are many "pictures of a child and an animal (or a specific character) together" among the images taken by the parent, the support device 300 identifies that the child is interested in animals (or that character) and presents an animal picture book (or a picture book of that character) to the grandparents. As a result, it is possible to appropriately identify the interest of the target person and present support information candidates in line with that interest.
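The co-occurrence example above can be sketched as a label count: among the parent's photos, count which labels appear together with the child and take the most frequent one as the interest. The per-image label sets stand in for the output of an image analysis engine.

```python
# Sketch of interest inference from co-occurrence in the parent's photos.
from collections import Counter

def infer_interest(images, subject_label="child"):
    """images: iterable of per-image label sets; returns the label most
    frequently co-occurring with the subject, or None."""
    counts = Counter()
    for labels in images:
        if subject_label in labels:
            counts.update(l for l in labels if l != subject_label)
    return counts.most_common(1)[0][0] if counts else None

photos = [{"child", "dog"}, {"child", "dog", "park"}, {"flower"},
          {"child", "train"}]
print(infer_interest(photos))  # -> dog
```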
  • Embodiment 3 is a modification of Embodiment 2 described above. Note that the schematic configuration of the support system according to the third embodiment is the same as that of FIG. 3, so the differences will be mainly described below, and illustration and description of overlapping configurations will be omitted.
  • FIG. 14 is a block diagram showing the configuration of a support device 300a according to the third embodiment.
  • the support device 300a has a program 311a changed from that of FIG. 6, and an inquiry unit 348 is added.
  • the program 311a is a computer program in which, in addition to the program 311, the query processing according to the third embodiment is implemented.
  • the inquiry unit 348 inquires of the subject about the attribute information of the first captured image (object image 3125).
  • the acquisition unit 342 acquires attribute information in response to an inquiry from a subject.
  • the specifying unit 344 specifies the subject's interest based on the first captured image and the attribute information.
  • FIG. 15 is a sequence diagram showing the flow of inquiry processing for a captured image according to the third embodiment.
  • the support device 300a starts inquiry processing at a predetermined timing.
  • the support device 300a may start inquiry processing after the first captured image is registered in the collection processing of FIG. 9 .
  • the inquiry unit 348 reads the target object images 3125 (group) of a specific target person (for example, target person (grandchild) U1) (S301). Then, the inquiry unit 348 generates inquiry content (S302). Here, the inquiry unit 348 generates an inquiry sentence for selecting a favorite image of the subject (grandchild) U1 from the object image group. Then, the inquiry unit 348 transmits the generated inquiry content and the object image group to the tablet terminal 101 via the network N (S303). In response, the tablet terminal 101 displays the content of the inquiry and the object image group received (S304).
  • FIG. 16 is a diagram showing an example of a selection screen 52 for favorites of captured images according to the third embodiment.
  • the selection screen 52 includes an inquiry message 521 and a captured image group 522 .
  • the inquiry message 521 is a display field for the contents of the inquiry generated in step S302.
• the photographed image group 522 is a display field for images photographed in the past by the subject (grandchild) U1 and their tags (labels).
  • the tag is information specified as attribute information of the image when the specifying unit 344 analyzes the first captured image. Also, the tag may be information that can be used as teaching data for machine learning of an image recognition engine, which will be described later, by being associated with the image.
  • the identification unit 344 may analyze the image using a predetermined image recognition engine to identify the tag.
  • the tablet terminal 101 accepts a selection operation by the subject (grandchild) U1 for one or more images from the captured image group 522 (S305). The tablet terminal 101 then transmits the selected image or tag (selection information) to the support device 300a via the network N (S306).
• the acquisition unit 342 of the support device 300a acquires the selection information from the tablet terminal 101 via the network N. Then, the identifying unit 344 identifies the interest of the target person (grandchild) U1 based on the selected image indicated by the selection information (S307). Specifically, the identifying unit 344 identifies the tag (attribute information) of the selected image as the interest. Then, the determination unit 345 registers the specified interest in the target person information 3124 (S308). As a result, the determination unit 345 can thereafter determine support information candidates for the target person (grandchild) U1 on the assumption that the selected images are of greater interest than the other images in the group taken by the target person (grandchild) U1.
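Steps S306 to S308 can be sketched as follows, assuming each captured image carries a single tag; the structures and names are hypothetical.

```python
# Sketch of interest registration from the subject's favorite images
# (S306-S308): the tags of the selected images become registered interests.
def register_interests(selected, image_tags, subject_info):
    """selected: IDs of favorite images; image_tags: image ID -> tag;
    subject_info: record corresponding to target person information 3124."""
    interests = {image_tags[i] for i in selected if i in image_tags}
    subject_info.setdefault("interests", set()).update(interests)
    return interests

tags = {"img1": "dog", "img2": "gerbera", "img3": "train"}
info = {}
register_interests(["img1", "img2"], tags, info)
```

Candidate determination can then weight images whose tags appear in `info["interests"]` above the rest, as the text describes.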
• in this way, it is possible to present support information candidates that match the interest of the target person (grandchild) U1. That is, having the target person (grandchild) U1 select a favorite image increases the accuracy of identifying the interest: the group of images taken by the target person (grandchild) U1 includes photos that are not necessarily favorites, so the selection narrows the images down to those of genuine interest.
  • FIG. 17 is a sequence diagram showing another example of the flow of inquiry processing for a captured image according to the third embodiment. Here, it is assumed that one photographed image photographed by the subject person (grandchild) U1 is processed.
  • the inquiry unit 348 reads an arbitrary target object image 3125 of a specific target person (for example, target person (grandchild) U1) (S301a). For example, the inquiry unit 348 reads out an image that is not associated with a tag among the object images 3125 .
  • the identifying unit 344 analyzes the read object image 3125 and estimates tag candidates (S301b). For example, when the target object image 3125 is analyzed by the image recognition engine, the specifying unit 344 estimates a plurality of tags with the highest calculated scores as tag candidates (answer options). Then, the inquiry unit 348 generates inquiry content for selecting a tag candidate (S302a).
  • the inquiry unit 348 generates an inquiry sentence for making the target person (grandchild) U1 select the correct tag from a plurality of tag candidates.
  • the question sentence can be said to be in a quiz format.
  • the inquiry unit 348 transmits the generated inquiry contents, tag candidates, and object images to the tablet terminal 101 via the network N (S303a).
  • the tablet terminal 101 displays the content of the inquiry, the tag candidates, and the object image received (S304a).
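The candidate estimation in steps S301b to S302a can be sketched as taking the top-scoring labels from a recognition result; the score dictionary is a stand-in for the image recognition engine's output.

```python
# Sketch of quiz-option generation (S301b-S302a): the labels with the
# highest scores become the answer options presented to the subject.
def quiz_options(scores, k=3):
    """Return the top-k labels as answer options, best score first."""
    return [label for label, _ in
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]]

scores = {"gerbera": 0.81, "chrysanthemum": 0.64, "sunflower": 0.55,
          "rose": 0.12}
print(quiz_options(scores))  # -> ['gerbera', 'chrysanthemum', 'sunflower']
```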
  • FIG. 18 is a diagram showing an example of the tag candidate selection screen 53 in the quiz format according to the third embodiment.
  • the selection screen 53 has an inquiry message 531 and options 532 .
  • the inquiry message 531 is a display field for the contents of the inquiry generated in step S302a.
• here, the message "I will display the registered photo. Tell me what kind of photo it is! Do you understand?" is displayed.
• the options 532 present the tag candidates, namely candidate names for the object appearing in the object image. Here, "gerbera", "chrysanthemum" and "sunflower" are shown as the choices.
  • the tablet terminal 101 accepts a selection operation by the target person (grandchild) U1 for one option from the options 532 (S305a). The tablet terminal 101 then transmits the selected tag (selected tag) to the support device 300a via the network N (S306a).
  • the acquisition unit 342 of the support device 300a acquires the selection tag from the tablet terminal 101 via the network N. Then, the processing unit 347 notifies the tablet terminal 103 of the object image and the selection tag (S309).
  • the notification destination may include the smartphone 102 .
  • the tablet terminal 103 displays the notified object image and tag (S310).
• the supporter (grandparent) U3 and the target person's parent U2 can share the tag answers of the target person (grandchild) U1, and can thereby grasp the growth of the target person (grandchild) U1. Further, the supporter (grandparent) U3 or the like may confirm whether the tag for the object image is correct and input the correct tag to the tablet terminal 103.
  • the tablet terminal 103 transmits the input correct tag to the support device 300a via the network N (S311). Then, the support device 300a notifies the tablet terminal 101 of the received correct tag (S312). Then, the tablet terminal 101 displays the notified correct tag together with the object image (S313). Thereby, the subject (grandchild) U1 can know the correct answer to the quiz, and learning is promoted. In other words, it is possible to support the education of the target person (grandchild) U1. In addition, it is possible to support interaction between the target person (grandchild) U1 and the supporter (grandparent) U3.
  • the registration unit 341 of the support device 300a registers the correct tag in association with the object image 3125 (S314). This makes it possible to accumulate accurate tags (attribute information) for the captured image (object image) of the subject (grandchild) U1. Then, the support device 300a may provide a set of the target object image and the correct tag as teacher data for machine learning of the image recognition engine. This improves the accuracy, creation efficiency, and collection efficiency of training data for machine learning. Therefore, the recognition accuracy of the image recognition engine can be efficiently improved.
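The accumulation of (object image, correct tag) pairs as teacher data (step S314) can be sketched as appending JSON Lines records; the file format and field names are assumptions.

```python
# Sketch of teacher-data accumulation (S314): each confirmed pair of
# object image and correct tag is appended as one JSON Lines record,
# ready for later machine-learning use by an image recognition engine.
import io
import json

def append_training_pair(stream, image_id, correct_tag):
    stream.write(json.dumps({"image": image_id, "label": correct_tag}) + "\n")

buf = io.StringIO()  # stands in for an opened dataset file
append_training_pair(buf, "obj-0007.jpg", "gerbera")
append_training_pair(buf, "obj-0008.jpg", "sunflower")
```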
  • the target person (grandchild) U1 may be prompted to enter a tag.
  • the inquiry unit 348 may generate an inquiry that prompts the user to enter a tag (attribute information). In that case, the inquiry unit 348 transmits the generated inquiry content and the object image to the tablet terminal 101 via the network N.
  • the tablet terminal 101 displays the content of the inquiry received, the entry field for the tag, and the object image.
  • the supporter does not necessarily have to input the correct tag.
  • the support device 300 may determine whether or not the selection tag acquired in step S306a is correct, and notify the tablet terminal 103 and the smartphone 102 of the determination result. This also allows the supporters (grandparents) U3 and the target person's parent U2 to grasp whether the target person (grandchild) U1 is right or wrong with respect to the content of the question. In addition, it is possible to support interaction between the target person (grandchild) U1 and the supporter (grandparent) U3.
  • FIG. 19 is a diagram showing an example of an input screen 54 for attribute information of a captured image according to the third embodiment.
  • the input screen 54 has an inquiry message 541 , a speech recognition result display field 542 and a character input field 543 .
• the inquiry message 541 is a display field for the contents of the inquiry generated in step S302a. Here, the message "Please enter the name of the visitor!" is displayed to indicate that the target person (grandchild) U1 is being asked.
  • the voice recognition result display column 542 is a column for displaying text information as a result of voice recognition, which is picked up by the microphone 120 when the target person (grandchild) U1 speaks the name of the target object. Here, it indicates that "gerbera" has been recognized by voice.
  • the character input column 543 is a column for displaying the result of character input by the target person (grandchild) U1.
  • the tablet terminal 101 transmits the character information input in the voice recognition result display field 542 or the character input field 543 as a tag to the support device 300a.
  • the subsequent steps are the same as steps S309 to S314.
• the support device 300a provides pairs of the target object image and the correct tag as training data for machine learning of the image recognition engine, thereby improving the accuracy, creation efficiency, and collection efficiency of the training data. Therefore, the recognition accuracy of the image recognition engine can be efficiently improved.
  • the inquiry unit 348 may target the image associated with the correct tag among the object images 3125 .
  • the identifying unit 344 may estimate an arbitrary tag candidate, and the inquiry unit 348 may generate inquiry content including the correct tag and the arbitrary tag candidate. This also serves as educational support for the target person (grandchild) U1, and promotes interaction with the supporter (grandparent) U3.
  • Embodiment 4 is a modification of Embodiment 2 or 3 described above.
• in the fourth embodiment, when text information such as a picture book is used as the support information, the supporter (grandparent) U3 further provides a reading voice to the target person (grandchild) U1.
  • the schematic configuration of the support system according to the fourth embodiment is the same as that shown in FIG. 3, so differences will be mainly described below, and illustrations and descriptions of overlapping configurations will be omitted.
  • the support information is text information that is an electronic book (display content), but it may be a real book.
  • the processing unit requests the supporter to read out the text information. Then, the acquisition unit acquires the reading voice for the text information from the supporter. Then, the processing unit notifies the first terminal of the subject as support information including the text information and the reading voice so that the reading voice is reproduced when the text information is displayed on the first terminal of the subject.
• the target person can also receive, as support information, the reading voice for the provided display content, which is particularly suitable for infants and the like.
• when the supporter is an elderly person, he or she is motivated to read a picture book aloud for grandchildren and the like as dementia-prevention training.
• the acquisition unit preferably acquires, as the second captured image, an image of the target person captured while the text information is displayed and the reading voice is played back on the first terminal.
  • FIG. 20 is a sequence diagram showing the flow of storytelling processing according to the fourth embodiment.
• assume that the supporter (grandparent) U3 has selected an electronic picture book as the support information and pressed the purchase button 5132.
  • the tablet terminal 103 transmits a support instruction including the ID (support information) of the selected text information (electronic book) to the support device 300 via the network N (S401).
  • the processing unit 347 transmits a read-aloud request for text information included in the support instruction to the tablet terminal 103 via the network N (S402).
  • the tablet terminal 103 displays text information (S403) and prompts the supporter (grandparent) U3 to read it.
  • the tablet terminal 103 inputs the voice information picked up by the microphone 120 as read-out voice (S404).
  • the tablet terminal 103 transmits the reading voice to the support device 300 via the network N (S405).
  • the acquisition unit 342 acquires the reading voice.
• the registration unit 341 associates the text information acquired in step S401 with the acquired reading voice and registers them as support information in association with the subject ID 3121 (S406).
  • the processing unit 347 transmits the support information including text information and reading voice to the tablet terminal 101 via the network N (S407). At this time, the processing unit 347 instructs the tablet terminal 101 to reproduce the reading voice when the text information is displayed.
  • the tablet terminal 101 displays the received text information and reproduces the reading voice (S408). At this time, the tablet terminal 101 photographs the face of the target person (grandchild) U1 with the camera 110, and specifies the display position (content position) of the text information at the time of photographing (S409).
  • the content position is, for example, the displayed page of the picture book.
  • the tablet terminal 101 transmits the second captured image captured in step S409 and the specified display position to the support device 300 via the network N (S410). Then, the obtaining unit 342 obtains the second captured image and the display position. Then, the registration unit 341 registers the support information in association with the second captured image and the display position (S411). As a result, as in the above-described second embodiment, the identification unit 344 can identify the interest of the target person by taking into account the content of the displayed page of the picture book. Therefore, the accuracy of specifying the interest is improved. Also, the processing unit 347 transmits the second captured image to the tablet terminal 103 via the network N (S412). In response, the tablet terminal 103 displays the received second captured image (S413).
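Steps S407 to S411 can be sketched as delivering the text-plus-voice support information and recording which page was displayed when each second captured image was taken; all structures are illustrative assumptions.

```python
# Sketch of read-aloud delivery and per-page capture registration
# (S407-S411): the display position (picture-book page) is stored with
# each second captured image so interest can later be specified per page.
def deliver_and_record(support_info, captures, registry):
    """support_info: text information plus reading voice (cf. S406);
    captures: list of (page, image_id) taken during playback (S409)."""
    registry["support"] = support_info
    for page, image_id in captures:            # S410-S411
        registry.setdefault("by_page", {})[page] = image_id
    return registry

reg = deliver_and_record(
    {"text": "picture-book-12", "voice": "voice-12.wav"},
    [(3, "img-101.jpg"), (7, "img-102.jpg")], {})
```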
  • the supporter (grandparent) U3 can visually recognize the picture book presented by him/herself and the state of the target person (grandchild) U1 at the time of reproducing the read-out voice, thereby improving satisfaction.
• when the grandchild wants to read a book, it can be read aloud in the grandmother's voice.
• since the grandparents can also check the images and videos taken during the reading, they are motivated to read again next time.
• in step S411, the processing unit 347 may measure the degree of satisfaction of the subject (grandchild) U1 from the second photographed image and, if the degree of satisfaction is equal to or higher than a predetermined value, notify the tablet terminal 103 of the supporter (grandparent) U3 of the second photographed image and the display position.
• as a result, the supporter (grandparent) U3 can grasp where in the picture book the target person (grandchild) U1 was pleased, and the satisfaction of the supporter (grandparent) U3 is further improved.
  • the processing unit 347 may associate and register the second captured image and the display position when the degree of satisfaction is equal to or higher than a predetermined value.
  • the specifying unit 344 can specify the interest of the target person (grandchild) U1 in consideration of the display position with high satisfaction. Therefore, it is possible to improve the accuracy of specifying the interest.
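The satisfaction gating described above can be sketched as a threshold check; the satisfaction score itself would come from analyzing the second captured image, and is passed in directly here as an assumption.

```python
# Sketch of satisfaction-gated registration and notification: only when
# the measured satisfaction reaches the threshold are the second captured
# image and its display position registered and sent to the supporter.
def gate_by_satisfaction(image_id, page, score, registry, notify,
                         threshold=0.7):
    if score >= threshold:
        record = {"image": image_id, "page": page}
        registry.append(record)   # associate image with display position
        notify(record)            # notify tablet terminal 103
        return True
    return False

kept, sent = [], []
gate_by_satisfaction("img-201.jpg", 5, 0.9, kept, sent.append)
gate_by_satisfaction("img-202.jpg", 8, 0.3, kept, sent.append)
```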
  • the tablet terminals 101 and 103 may perform video and audio communication, and the supporter (grandparents) U3 may read to the target person (grandchild) U1 in real time.
  • the support device 300 and the authentication device 200 are described as separate information processing devices, but they may be the same.
  • the support device 300 may register facial feature information in association with the subject ID 3121 of the subject management information 312 .
  • the control unit 340 may include the face detection unit 220, the feature point extraction unit 230, the registration unit 240, and the authentication unit 250 shown in FIG.
  • the support device 300 may generate an album image in which a plurality of target person images 3126 are aggregated, and provide the album image to the supporter (grandparent) U3. That is, the acquisition unit acquires a plurality of third captured images in which the target person is captured in response to provision of the support information to the target person. Then, the registration unit associates and registers the subject with the plurality of third captured images. Then, the processing unit may aggregate at least part of the plurality of third captured images associated with the subject to generate a composite image, and transmit the composite image to the supporter. At this time, the third captured image may include an object image 3125 captured by the target person (grandchild) U1 in addition to the target person image 3126 including the target person (grandchild) U1.
  • the support system 1000 described above may present support information candidates with a third party unrelated to the subject as a supporter. For example, it is also applicable when an elderly person wants to support a child regardless of the presence or absence of a blood relationship. For example, if the target person is interested in flowers, an elderly person familiar with flowers may be associated and registered as a supporter.
  • the hardware configuration has been described, but the configuration is not limited to this.
  • the present disclosure can also implement arbitrary processing by causing a CPU to execute a computer program.
  • Non-transitory computer readable media include various types of tangible storage media.
• Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, DVD (Digital Versatile Disc), and semiconductor memories (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)).
  • the program may also be delivered to the computer on various types of transitory computer readable medium.
  • Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves.
  • Transitory computer-readable media can deliver the program to the computer via wired channels, such as wires and optical fibers, or wireless channels.
• (Appendix A1) A support device comprising: a registration unit that associates and registers a support target person and a supporter; an acquisition unit that acquires a first captured image captured by the subject; a specifying unit that specifies the subject's interest based on the first captured image; a determination unit that determines one or more candidates for support information based on the identified interest; and a presentation unit that presents the determined support information candidates to the supporter associated with the target person.
• (Appendix A2) The support device according to appendix A1, further comprising a processing unit that receives a support instruction including support information selected by the supporter from the presented support information candidates, and performs processing according to the support information.
• The support device according to appendix A5, wherein the processing unit measures the satisfaction level of the subject from the second captured image, and notifies the terminal of the supporter of the second captured image when the satisfaction level is equal to or higher than a predetermined value.
• the acquisition unit further acquires, together with the second captured image, the content position of the display content displayed on the first terminal at the time of capturing, and the registration unit associates and registers the target person, the content position, and the second captured image.
• (Appendix A8) The support device according to any one of appendices A5 to A7, wherein the processing unit requests the supporter to read out the text information, the acquisition unit acquires a reading voice for the text information from the supporter, and the processing unit notifies the first terminal of the support information including the text information and the reading voice so that the reading voice is reproduced when the text information is displayed on the first terminal.
• (Appendix A9) The support device according to appendix A8, wherein the acquisition unit acquires, as the second captured image, an image captured by the target person during display of the text information and playback of the reading voice on the first terminal.
  • the acquisition unit acquires a plurality of third captured images in which the target person is captured in response to the provision of the support information to the target person,
  • the registration unit associates and registers the target person and the plurality of third captured images,
  • the processing unit aggregates at least part of the plurality of third captured images associated with the subject to generate a composite image, and transmits the composite image to the supporter.
  • the support device according to any one of Claims 1 to 3.
• (Appendix A11) The support device according to any one of appendices A1 to A10, wherein the specifying unit specifies a photographer of the first captured image, and the registration unit registers the specified photographer as the target person in association with the first captured image.
  • the registration unit pre-registers the first biometric information of the subject,
  • the acquisition unit acquires second biometric information of the photographer together with the first captured image,
• the support device according to appendix A11, wherein the specifying unit specifies the photographer as the target person when biometric authentication of the second biometric information using the first biometric information succeeds.
  • (Appendix A13) further comprising an inquiry unit that inquires of the subject about attribute information of the first captured image;
  • the acquisition unit acquires the attribute information in response to the inquiry from the subject,
  • the support device according to any one of Appendixes A1 to A12, wherein the specifying unit specifies the subject's interest based on the first captured image and the attribute information.
  • the acquisition unit acquires position information of a shooting location together with the first captured image,
  • the registration unit associates and registers the photographing location with the first photographed image,
  • the support device according to any one of Appendixes A1 to A13, wherein the specifying unit specifies the subject's interest based on the first captured image and the shooting location.
  • The support device according to any one of Appendixes A1 to A14, wherein the specifying unit excludes a face-recognized area from the first captured image and specifies the subject's interest based on the image after the exclusion.
  • the support device according to Appendix A15, wherein the specifying unit excludes a region other than the target person from among the face-recognized regions of the first captured image.
  • (Appendix B1) A support system comprising: a first terminal of a support target person; a second terminal of a supporter; and a support device,
  • wherein the support device includes: a registration unit that associates and registers the target person and the supporter; an acquisition unit that acquires, from the first terminal, a first captured image captured by the target person; a specifying unit that specifies the target person's interest based on the first captured image; a determination unit that determines one or more support information candidates based on the specified interest; and a presentation unit that presents the determined support information candidates to the second terminal of the supporter associated with the target person.
  • (Appendix B2) The support system according to Appendix B1, wherein the support device further comprises a processing unit that receives a support instruction including support information selected by the supporter from the presented support information candidates, and performs processing according to the support information.
  • (Appendix C1) A support method in which a computer: registers a support target person and a supporter in association with each other; acquires a first captured image captured by the target person; specifies the target person's interest based on the first captured image; determines one or more support information candidates based on the specified interest; and presents the determined support information candidates to the supporter associated with the target person.
  • (Appendix D1) A support program that causes a computer to execute: a registration process of registering a support target person and a supporter in association with each other; an acquisition process of acquiring a first captured image captured by the target person; a specifying process of specifying the target person's interest based on the first captured image; a determination process of determining one or more support information candidates based on the specified interest; and a presentation process of presenting the determined support information candidates to the supporter associated with the target person.

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides appropriate support according to the potential interests of a person to be supported. A support device (1) comprises a registration unit (11) that registers a person to be supported and a supporting person in association with each other, an acquisition unit (12) that acquires a first captured image taken by the person to be supported, a specification unit (13) that specifies the interests of the person to be supported on the basis of the first captured image, a determination unit (14) that determines one or more support information candidates based on the specified interests, and a presentation unit (15) that presents the determined support information candidate(s) to the supporting person associated with the person to be supported.

Description

SUPPORT DEVICE, SYSTEM AND METHOD, AND COMPUTER-READABLE MEDIUM
The present invention relates to a support device, system, method, and program, and more particularly to a support device, system, method, and program for supporting a target person by a supporter.
In recent years, with the declining birthrate and depopulation, the need to watch over children and the elderly has increased. Patent Literature 1 discloses a technology related to a watching system, in which a camera having a face recognition function photographs a person being watched over and transmits the captured moving image to an information processing device via the Internet.
JP 2017-111506 A
Here, there is a demand for a supporter who lives apart from a support target person (for example, a child of elementary school age or younger, or an elderly person) to grasp the target person's latent interests and provide appropriate support according to those interests.
The present disclosure has been made to solve such problems, and aims to provide a support device, system, method, and program for providing appropriate support according to the latent interests of a support target person.
A support device according to a first aspect of the present disclosure includes:
a registration unit that associates and registers a support target person and a supporter;
an acquisition unit that acquires a first captured image captured by the target person;
a specifying unit that specifies the target person's interest based on the first captured image;
a determination unit that determines one or more support information candidates based on the specified interest; and
a presentation unit that presents the determined support information candidates to the supporter associated with the target person.
A support system according to a second aspect of the present disclosure includes:
a first terminal of a support target person;
a second terminal of a supporter; and
a support device,
wherein the support device includes:
a registration unit that associates and registers the target person and the supporter;
an acquisition unit that acquires, from the first terminal, a first captured image captured by the target person;
a specifying unit that specifies the target person's interest based on the first captured image;
a determination unit that determines one or more support information candidates based on the specified interest; and
a presentation unit that presents the determined support information candidates to the second terminal of the supporter associated with the target person.
A support method according to a third aspect of the present disclosure includes, by a computer:
registering a support target person and a supporter in association with each other;
acquiring a first captured image captured by the target person;
specifying the target person's interest based on the first captured image;
determining one or more support information candidates based on the specified interest; and
presenting the determined support information candidates to the supporter associated with the target person.
A support program according to a fourth aspect of the present disclosure causes a computer to execute:
a registration process of registering a support target person and a supporter in association with each other;
an acquisition process of acquiring a first captured image captured by the target person;
a specifying process of specifying the target person's interest based on the first captured image;
a determination process of determining one or more support information candidates based on the specified interest; and
a presentation process of presenting the determined support information candidates to the supporter associated with the target person.
According to the present disclosure, it is possible to provide a support device, system, method, and program for providing appropriate support according to the latent interests of a support target person.
FIG. 1 is a block diagram showing the configuration of a support device according to a first embodiment.
FIG. 2 is a flowchart showing the flow of a support method according to the first embodiment.
FIG. 3 is a block diagram showing the overall configuration of a support system according to a second embodiment.
FIG. 4 is a block diagram showing the configuration of a tablet terminal according to the second embodiment.
FIG. 5 is a block diagram showing the configuration of an authentication device according to the second embodiment.
FIG. 6 is a block diagram showing the configuration of a support device according to the second embodiment.
FIG. 7 is a sequence diagram showing the flow of pre-registration processing according to the second embodiment.
FIG. 8 is a flowchart showing the flow of face information registration processing by the authentication device according to the second embodiment.
FIG. 9 is a flowchart showing the flow of collection processing according to the second embodiment.
FIG. 10 is a flowchart showing the flow of face authentication processing by the authentication device according to the second embodiment.
FIG. 11 is a flowchart showing the flow of support information provision processing according to the second embodiment.
FIG. 12 is a diagram showing an example of a screen for presenting support information candidates to a supporter according to the second embodiment.
FIG. 13 is a diagram for explaining the concept of presentation of support information candidates, selective purchase, delivery, and notification of a thank-you comment according to the second embodiment.
FIG. 14 is a block diagram showing the configuration of a support device according to a third embodiment.
FIG. 15 is a sequence diagram showing the flow of inquiry processing for a captured image according to the third embodiment.
FIG. 16 is a diagram showing an example of a selection screen for favorite captured images according to the third embodiment.
FIG. 17 is a sequence diagram showing another example of the flow of inquiry processing for a captured image according to the third embodiment.
FIG. 18 is a diagram showing an example of a tag candidate selection screen in quiz format according to the third embodiment.
FIG. 19 is a diagram showing an example of an input screen for attribute information of a captured image according to the third embodiment.
FIG. 20 is a sequence diagram showing the flow of storytelling processing according to a fourth embodiment.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the drawings, the same or corresponding elements are denoted by the same reference numerals, and redundant description is omitted as necessary for clarity.
<Embodiment 1>
FIG. 1 is a block diagram showing the configuration of the support device 1 according to the first embodiment. The support device 1 is an information processing device for enabling a supporter to support a support target person. The support device 1 presents the supporter with support information candidates that meet at least the needs (direct or indirect requests) of the target person. The support device 1 can also be regarded as a watching support device that helps a watcher (a person related to the target person) remotely watch over the person being watched (the target person). The target of support is a person to be watched over, such as a child or an elderly person. For example, if the target person is a child of elementary school age or younger, the supporter is the target person's grandparent or the like. However, the supporter may also be an elderly person living alone, or an elderly person who wishes to support an arbitrary target person even without a blood relationship. When the target of support is an elderly person or a person requiring nursing care, the supporter may be a relative of the target person (such as the target person's adult children), a care worker, an adult guardian, or the like. Here, it is assumed that the support device 1 is connected to the target person's terminal and the supporter's terminal via a communication network (not shown; hereinafter, the communication network is simply referred to as a network) or by predetermined wireless communication. The network may be wired or wireless, and any type of communication protocol may be used. Each terminal may be an information processing terminal equipped with a camera, a microphone, a speaker, a touch panel, and the like, such as a tablet terminal.
The support device 1 includes a registration unit 11, an acquisition unit 12, a specifying unit 13, a determination unit 14, and a presentation unit 15. The registration unit 11 associates and registers a support target person and a supporter. The acquisition unit 12 acquires a first captured image captured by the target person. It is assumed that the first captured image includes an object, scenery, or the like in which the target person is interested. The specifying unit 13 specifies the target person's interest based on the first captured image. The determination unit 14 determines one or more support information candidates based on the specified interest. The presentation unit 15 presents the determined support information candidates to the supporter associated with the target person. Note that the specifying unit 13 may analyze the first captured image to extract attribute information (keywords), and specify the interest using a model that takes keywords as input and outputs an interest. Alternatively, the specifying unit 13 and the determination unit 14 may determine the support information candidates using a model that takes the first captured image, attribute information, and the like as input and outputs support information candidates. A known technique such as an AI (Artificial Intelligence) model can be applied as the model. The model may be machine-learned using accumulated captured images or attribute information as training data, with interests or support information candidates as correct answers.
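As a concrete illustration of the keyword-based specification described above, the following is a minimal sketch of the specifying unit's processing. The keyword extractor stands in for an image-analysis model (here it is stubbed out), and the keyword-to-interest mapping is a hypothetical example, not part of the disclosure.

```python
# Hypothetical mapping from extracted keywords (attribute information)
# to interest categories; invented for illustration.
KEYWORD_TO_INTEREST = {
    "insect": "nature observation",
    "flower": "nature observation",
    "train": "railways",
    "ball": "sports",
}

def extract_keywords(image_bytes):
    """Stub for a model that analyzes a captured image and outputs keywords."""
    # A real system would run an AI image-analysis model here.
    return ["insect", "flower"]

def identify_interests(image_bytes):
    """Map the extracted keywords to interest categories."""
    keywords = extract_keywords(image_bytes)
    return {KEYWORD_TO_INTEREST[k] for k in keywords if k in KEYWORD_TO_INTEREST}

print(identify_interests(b""))  # {'nature observation'}
```

In practice the two stages could also be collapsed into a single learned model that maps the image directly to support information candidates, as the description notes.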
FIG. 2 is a flowchart showing the flow of the support method according to the first embodiment. First, the registration unit 11 associates and registers a support target person and a supporter (S11). For example, the registration unit 11 acquires identification information of the target person and the supporter from a terminal of the target person, the target person's guardian, or the supporter, and stores the pieces of identification information in a storage device in association with each other. Here, the storage device may be internal to the support device 1, or external to and connected with the support device 1.
Next, the acquisition unit 12 acquires a first captured image captured by the target person (S12). For example, suppose the target person uses a terminal to photograph an object of interest, and the terminal transmits the captured image of the object to the support device 1.
Then, the specifying unit 13 specifies the target person's interest based on the first captured image (S13). The determination unit 14 determines one or more support information candidates based on the specified interest (S14). After that, the presentation unit 15 presents the determined support information candidates to the supporter associated with the target person (S15). For example, the presentation unit 15 transmits the determined support information candidates to a terminal possessed by the supporter, and the terminal displays the received candidates. This allows the supporter to view the recommended support information candidates through the terminal.
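The S11–S15 flow can be sketched end to end as follows, with the storage device and the candidate catalogue reduced to in-memory dictionaries. All names and the catalogue contents are illustrative assumptions, not taken from the disclosure.

```python
class SupportDevice:
    """Minimal sketch of the S11-S15 flow of FIG. 2 (illustrative only)."""

    def __init__(self):
        self.pairs = {}    # target person ID -> supporter ID (registered in S11)
        self.images = {}   # target person ID -> captured images (acquired in S12)
        self.outbox = []   # candidates "sent" to supporter terminals (S15)

    def register(self, target_id, supporter_id):           # S11
        self.pairs[target_id] = supporter_id

    def acquire(self, target_id, image):                   # S12
        self.images.setdefault(target_id, []).append(image)

    def identify_interest(self, image):                    # S13
        # Placeholder for image analysis; assume the image carries a label.
        return image["label"]

    def determine_candidates(self, interest):              # S14
        catalogue = {"railways": ["picture book on trains", "model train"]}
        return catalogue.get(interest, [])

    def present(self, target_id):                          # S15
        latest = self.images[target_id][-1]
        candidates = self.determine_candidates(self.identify_interest(latest))
        self.outbox.append((self.pairs[target_id], candidates))
        return candidates

device = SupportDevice()
device.register("grandchild", "grandparent")
device.acquire("grandchild", {"label": "railways"})
print(device.present("grandchild"))  # ['picture book on trains', 'model train']
```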
As described above, in this embodiment, the latent interest of the support target person can be identified from a captured image taken by the target person, and support information candidates corresponding to that interest can be recommended to the supporter. The supporter can then select, from the support information candidates, one that matches the target person's interest and provide it to the target person. This makes it possible to support the target person appropriately and effectively.
The support device 1 includes a processor, a memory, and a storage device (not shown). The storage device stores a computer program in which the processing of the support method according to this embodiment is implemented. The processor loads the computer program from the storage device into the memory and executes it, thereby realizing the functions of the registration unit 11, the acquisition unit 12, the specifying unit 13, the determination unit 14, and the presentation unit 15.
Alternatively, each of the registration unit 11, the acquisition unit 12, the specifying unit 13, the determination unit 14, and the presentation unit 15 may be realized by dedicated hardware. Some or all of the components of each device may be realized by general-purpose or dedicated circuitry, processors, or a combination thereof. These may be configured by a single chip, or by a plurality of chips connected via a bus. Some or all of the components of each device may also be realized by a combination of the above-described circuitry and a program. As the processor, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (field-programmable gate array), a quantum processor (quantum computer control chip), or the like can be used.
When some or all of the components of the support device 1 are realized by a plurality of information processing devices, circuits, or the like, these may be arranged in a centralized or distributed manner. For example, the information processing devices, circuits, and the like may be realized in a form in which they are connected to one another via a communication network, such as a client-server system or a cloud computing system. The functions of the support device 1 may also be provided in a SaaS (Software as a Service) format.
<Embodiment 2>
The second embodiment is a specific example of the first embodiment described above. FIG. 3 is a block diagram showing the overall configuration of a support system 1000 according to the second embodiment. The support system 1000 is an information system through which a supporter (grandparents) U3 and the target person's parent U2 support a target person (grandchild) U1. In the following, for convenience of explanation, a set consisting of the target person (grandchild) U1, the target person's parent U2, and the supporters (grandparents) U3 is described, but the system is not limited to this. That is, the support system 1000 may handle two or more sets, each consisting of one or more target persons and one or more supporters.
In the following description, face authentication, which is an example of biometric authentication, is used as the authentication for identity verification, and facial feature information, which is an example of biometric information, is used as the identity verification information. However, other techniques that use captured images are applicable to biometric authentication and biometric information. For example, data (feature amounts) calculated from physical features unique to an individual, such as a fingerprint, voiceprint, vein, retina, or iris pattern, may be used as the biometric information.
The support system 1000 includes a tablet terminal 101 (first terminal), a smartphone 102, a tablet terminal 103 (second terminal), an authentication device 200, and a support device 300, which are connected via a network N. Here, the network N is a wired or wireless communication line such as the Internet. The tablet terminal 101 is operated by the target person (grandchild) U1 and the target person's parent U2; the smartphone 102 is carried and operated by the target person's parent U2; and the tablet terminal 103 is operated by the supporters (grandparents) U3. The tablet terminals 101 and 103 are assumed to be installed at least in different places (such as different residences).
The tablet terminal 101 is a terminal used by the target person (grandchild) U1 and the target person's parent U2, and is installed in their residence or the like. The tablet terminal 101 may display a character imitating an animal or a robot by means of installed software, and operate the character as a user interface serving as a conversation partner for the target person (grandchild) U1. In that case, the tablet terminal 101 analyzes content control information from the support device 300, displays the character, and performs voice output or text display as the character's lines. The tablet terminal 101 also picks up the speech of the target person (grandchild) U1 and transmits the collected voice information or its speech recognition result to the support device 300. This allows the target person (grandchild) U1 to interact (converse) with the character displayed on the tablet terminal 101. The tablet terminal 101 may also transmit health information of the target person U1, acquired from a health information measuring device (not shown), to the support device 300 via the network N.
FIG. 4 is a block diagram showing the configuration of the tablet terminal 101 according to the second embodiment. The tablet terminal 101 includes a camera 110, a microphone 120, a speaker 130, a touch panel 140, a storage unit 150, a communication unit 160, a memory 170, and a control unit 180. The camera 110 is an imaging device that performs imaging under the control of the control unit 180. The microphone 120 is a sound pickup device that picks up the voice uttered by the target person (grandchild) U1 and others. The speaker 130 is a device that outputs sound under the control of the control unit 180. The touch panel 140 includes a display device (display unit) such as a screen, and an input device. The storage unit 150 is a storage device that stores a program 151 for realizing each function of the tablet terminal 101. The program 151 implements processing including the software that operates as the user interface. The communication unit 160 is a communication interface with the network N. The memory 170 is a volatile storage device such as a RAM (Random Access Memory), and is a storage area for temporarily holding information during the operation of the control unit 180. The control unit 180 is a processor that controls the hardware of the tablet terminal 101. The control unit 180 loads the program 151 from the storage unit 150 into the memory 170 and executes it, thereby realizing the functions of a registration unit 181 and a content processing unit 182.
The registration unit 181 transmits a pre-registration request including target person information and supporter information to the support device 300 via the network N. The target person information includes at least a face image (first biometric information) or identification information of the target person (grandchild) U1, and may further include personal information of the target person (grandchild) U1. The target person information also includes the terminal ID of the tablet terminal 101. The face image of the target person (grandchild) U1 may be captured by the camera 110 or acquired from an external device such as the smartphone 102. The target person information may also include voice information in which the target person's (grandchild's) U1 speech has been recorded, speech recognition results, or health information. Furthermore, the registration unit 181 may perform speech recognition on voice information in which the target person's (grandchild's) U1 speech has been picked up by the microphone 120, detect the physical condition or requests of the target person (grandchild) U1 from the speech recognition result, and include the detected condition or requests in the target person information. For example, when the target person (grandchild) U1 says "I have a headache," the registration unit 181 detects that the target person (grandchild) U1 is in poor physical condition. When the target person (grandchild) U1 says "I want to go play," the registration unit 181 detects the target person's (grandchild's) U1 request to go to a park, an amusement park, or the like. The supporter information includes at least identification information of the supporters (grandparents) U3, and may further include personal information of the supporters (grandparents) U3, in particular payment information. The supporter information may also include the terminal ID of the tablet terminal 103. As the identification information (supporter ID) of the supporters (grandparents) U3, an ID issued in advance by the support device 300 or the like may be used.
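The condition/request detection described above could be sketched as a simple keyword scan over the speech-recognition transcript. This is a hypothetical illustration; the keyword tables and labels are invented, and a real system would use a more robust language-understanding model.

```python
# Hypothetical keyword tables mapping transcript phrases to detected
# conditions and requests (illustrative only).
CONDITION_KEYWORDS = {
    "headache": "poor physical condition",
    "stomachache": "poor physical condition",
}
REQUEST_KEYWORDS = {
    "go play": "wants an outing (park, amusement park, etc.)",
}

def detect_from_transcript(transcript):
    """Scan a speech-recognition transcript for condition/request keywords."""
    detected = {}
    for keyword, condition in CONDITION_KEYWORDS.items():
        if keyword in transcript:
            detected["condition"] = condition
    for keyword, request in REQUEST_KEYWORDS.items():
        if keyword in transcript:
            detected["request"] = request
    return detected

print(detect_from_transcript("I have a headache"))
# {'condition': 'poor physical condition'}
```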
Further, the registration unit 181 acquires a captured image (first captured image) taken by the target person (grandchild) U1 with the camera 110, and transmits a registration request including the first captured image to the support device 300 via the network N. Here, the target person (grandchild) U1 may use the camera 110 to photograph an object or scenery in which he or she is interested, in which case the registration unit 181 acquires the first captured image from the camera 110. Note that the target person (grandchild) U1 may instead capture the first captured image with a digital camera or the like; in that case, it is assumed that the target person's parent U2 transfers the first captured image from the digital camera to the tablet terminal 101, and the registration unit 181 acquires the transferred first captured image and transmits the registration request. The target person (grandchild) U1 may also shoot a moving image using the camera 110 or the like, in which case the registration unit 181 may include part or all of the captured moving image in the registration request as a first captured image group. When the first captured image is acquired, the registration unit 181 may automatically transmit the registration request including the first captured image to the support device 300 without requiring an operation by the target person (grandchild) U1 or the like. The registration unit 181 may also acquire, together with the first captured image, position information of the location where the image was captured; in that case, the registration unit 181 further includes the acquired position information in the registration request. The registration unit 181 may also include in the registration request, together with the first captured image, the face image (second biometric information) of the target person (grandchild) U1 as the photographer. Alternatively, the registration unit 181 may include the terminal ID of the tablet terminal 101 or the identification information of the target person (grandchild) U1 together with the first captured image.
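The optional fields of the registration request described above can be sketched as a simple payload builder. This is an illustrative assumption: the field names and the function itself are not defined in the patent text.

```python
# Hypothetical sketch of registration unit 181 assembling a registration
# request for a first captured image. Field names are assumptions.
def build_registration_request(image_bytes,
                               location=None,
                               photographer_face=None,
                               terminal_id=None):
    request = {"captured_image": image_bytes}
    if location is not None:
        # position information of the shooting location, if acquired
        request["location"] = location
    if photographer_face is not None:
        # face image of the photographer (second biometric information)
        request["photographer_face"] = photographer_face
    elif terminal_id is not None:
        # alternatively, identify the photographer via the terminal ID
        request["terminal_id"] = terminal_id
    return request
```

The request would then be transmitted to the support device 300 via the network N.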
The content processing unit 182 analyzes content control information from the support device 300, displays a character on the touch panel 140, and provides the user interface described above. In particular, when support information is received from the support device 300, the content processing unit 182 displays the support information on the touch panel 140. The support information may be display content of electronic data, service information, a product shipping notification message, or the like. Note that the content processing unit 182 may display the support information through the character.
In addition, the content processing unit 182 transmits thank-you information for the support information to the support device 300 via the network N in response to an operation by the target person (grandchild) U1 or the target person's parent U2. The thank-you information may include text information such as a message input via the touch panel 140 by the target person (grandchild) U1 or the target person's parent U2, and a captured image (second captured image) of the target person (grandchild) U1 taken when the support information is displayed or when a product is received. The second captured image is assumed to include the face region of the target person (grandchild) U1, captured by the user-facing camera of the tablet terminal 101 (the camera 110 or the like). Alternatively, the target person's parent U2 may capture the second captured image using the smartphone 102 or the like and transfer it to the tablet terminal 101; the content processing unit 182 then transmits thank-you information including the second captured image acquired from the smartphone 102 or the like to the support device 300.
Returning to FIG. 3, the description continues. The smartphone 102 is an example of a terminal possessed and used by the target person's parent U2. Note that the smartphone 102 may instead be a tablet terminal, a PC (Personal Computer) equipped with or connected to a camera, or the like. The smartphone 102 exchanges text data with the support device 300 via a general-purpose SNS (Social Network Service) application. For example, when the SNS application is a messaging application, the character described above acts as a user, and by exchanging messages between the character and the user who is the target person's parent U2, message exchange with the supporter (grandparent) U3 is realized via the character. In particular, the smartphone 102 may perform the pre-registration request, the registration request for the first captured image, the display of support information, and the transmission of thank-you information described above.
The tablet terminal 103 is a terminal used by the supporter (grandparent) U3 and is installed in, for example, the residence of the supporter (grandparent) U3. Note that the configuration of the tablet terminal 103 is equivalent to that of the tablet terminal 101 described above, and its illustration is omitted. Like the tablet terminal 101, the tablet terminal 103 may operate as a user interface that displays a character serving as a dialogue partner of the supporter (grandparent) U3. The tablet terminal 103 displays support information candidates received from the support device 300 on its touch panel. The tablet terminal 103 then accepts an operation by the supporter (grandparent) U3 to select support information from among the support information candidates, and transmits a support instruction including the selected support information to the support device 300. When the tablet terminal 103 receives thank-you information from the support device 300, it displays the thank-you information on the touch panel. Note that the tablet terminal 103 may register supporter information regarding the supporter (grandparent) U3 in the support device 300. The supporter information may include the identification information (supporter ID) of the supporter (grandparent) U3, personal information, payment information, and the terminal ID of the tablet terminal 103. Alternatively, if a supporter ID has not yet been issued, the tablet terminal 103 may include a face image of the supporter (grandparent) U3 in the supporter information; in that case, the tablet terminal 103 may acquire a supporter ID issued by the support device 300. Further, the tablet terminal 103 may notify the tablet terminal 101 or the smartphone 102 of the supporter ID and the terminal ID of the tablet terminal 103.
The authentication device 200 is an information processing device that stores facial feature information of users. In response to a face authentication request received from the outside, the authentication device 200 collates the face image or facial feature information included in the request against the facial feature information of each user, and returns the collation result (authentication result) to the requester.
FIG. 5 is a block diagram showing the configuration of the authentication device 200 according to the second embodiment. The authentication device 200 includes a face information DB (DataBase) 210, a face detection unit 220, a feature point extraction unit 230, a registration unit 240, and an authentication unit 250. The face information DB 210 stores a user ID 211 and facial feature information 212 of that user ID in association with each other. The facial feature information 212 is a set of feature points extracted from a face image.
The face detection unit 220 detects a face region included in a registration image for registering face information and outputs it to the feature point extraction unit 230. The feature point extraction unit 230 extracts feature points from the face region detected by the face detection unit 220 and outputs facial feature information to the registration unit 240. The feature point extraction unit 230 also extracts feature points included in a face image received from the support device 300 or the like and outputs facial feature information to the authentication unit 250.
The registration unit 240 newly issues a user ID 211 when registering facial feature information. The registration unit 240 registers the issued user ID 211 and the facial feature information 212 extracted from the registration image in the face information DB 210 in association with each other. The authentication unit 250 performs face authentication using the facial feature information 212. Specifically, the authentication unit 250 collates the facial feature information extracted from a face image against the facial feature information 212 in the face information DB 210. If the collation succeeds, the authentication unit 250 identifies the user ID 211 associated with the matched facial feature information 212. The authentication unit 250 returns to the requester, as the face authentication result, an indication of whether the facial feature information matched; the presence or absence of a match corresponds to the success or failure of the authentication. Note that the facial feature information is regarded as matching when the degree of match is equal to or greater than a threshold. When face authentication succeeds, the face authentication result includes the identified user ID.
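The threshold-based collation performed by the authentication unit 250 can be sketched as follows. This is a minimal illustration under assumptions not in the patent: facial feature information is modeled as a numeric vector, the degree of match as cosine similarity, and the threshold value is arbitrary.

```python
# Hypothetical sketch of authentication unit 250: collate a query feature
# vector against face information DB 210 and report a match only when the
# degree of match meets the threshold. Vectors and threshold are illustrative.
import math

FACE_DB = {  # face information DB 210: user ID 211 -> facial feature info 212
    "U1": [0.9, 0.1, 0.3],
    "U3": [0.2, 0.8, 0.5],
}
THRESHOLD = 0.95  # a degree of match at or above this counts as a match

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(query):
    """Return (success, user_id) for the best match at or above THRESHOLD."""
    best_id, best_score = None, -1.0
    for user_id, features in FACE_DB.items():
        score = cosine(query, features)
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score >= THRESHOLD:
        return True, best_id  # success: result includes the identified user ID
    return False, None
```

On success the identified user ID is returned to the requester, matching the behavior described above.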
Returning to FIG. 3, the description continues. The support device 300 is an example of the support device 1 described above. The support device 300 is an information processing device that performs pre-registration processing, collection processing, support information provision processing, thank-you information registration and presentation processing, and the like. The support device 300 may be made redundant across a plurality of servers, and each functional block may be realized by a plurality of computers. Note that the support device 300 may generate content control information for controlling the characters displayed on the tablet terminals 101 and 103 and transmit it to the tablet terminals 101 and 103.
Next, the support device 300 will be described in detail. FIG. 6 is a block diagram showing the configuration of the support device 300 according to the second embodiment. The support device 300 includes a storage unit 310, a memory 320, a communication unit 330, and a control unit 340. The storage unit 310 is an example of a storage device such as a hard disk or flash memory. The storage unit 310 stores a program 311, target person management information 312, supporter information 313, and a support information DB 314. The program 311 is a computer program in which processing according to the second embodiment, including the pre-registration processing, collection processing, support information provision processing, and thank-you information registration and presentation processing, is implemented.
The target person management information 312 is information for managing the target person (grandchild) U1. The target person management information 312 associates a target person ID 3121, a supporter ID 3122, a terminal ID 3123, target person information 3124, an object image 3125, and a target person image 3126 with one another. The target person ID 3121 is identification information of the target person (grandchild) U1, and is information identical or uniquely corresponding to the user ID 211 managed in association with the facial feature information 212 in the face information DB 210 of the authentication device 200. Note that the target person ID 3121 may be the identification information of the target person (grandchild) U1 included in the pre-registration request described above. The supporter ID 3122 is identification information of the supporter (grandparent) U3. The terminal ID 3123 is identification information of the tablet terminal 101 used by the target person (grandchild) U1.
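The record structure described above can be sketched as a plain data structure. The field names mirror the reference numerals in the text; everything else is an illustrative assumption.

```python
# Hypothetical sketch of one record of target person management info 312.
from dataclasses import dataclass, field

@dataclass
class SubjectManagementInfo:
    subject_id: str       # 3121: identical/uniquely mapped to user ID 211
    supporter_id: str     # 3122: supporter (grandparent) U3
    terminal_id: str      # 3123: tablet terminal 101
    subject_info: dict = field(default_factory=dict)    # 3124
    object_images: list = field(default_factory=list)   # 3125: first captured images
    subject_images: list = field(default_factory=list)  # 3126: second captured images
```

New first and second captured images would simply be appended to the corresponding lists as they are registered.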
The target person information 3124 is at least part of the target person information included in the pre-registration request described above. The target person information 3124 includes, for example, but is not limited to, the personal information, health information, physical condition, and requests of the target person (grandchild) U1.
The object image 3125 is an example of the first captured image included in the registration request described above. The object image 3125 is an image, taken by the target person (grandchild) U1, that includes an object. Note that, in addition to the object image 3125, the target person management information 312 may associate with the target person ID 3121, as a first captured image, a landscape image including scenery photographed by the target person (grandchild) U1.
The target person image 3126 is the second captured image included in the thank-you information described above. That is, the target person image 3126 is an image including the target person (grandchild) U1 photographed when the support information was displayed or when a product was received.
The supporter information 313 is information about the supporter (grandparent) U3. The supporter information 313 associates a supporter ID 3131, personal information 3132, and a terminal ID 3133 with one another. The supporter ID 3131 is identification information of the supporter (grandparent) U3, and is information identical or uniquely corresponding to the supporter ID 3122 described above. The personal information 3132 includes the personal information of the supporter (grandparent) U3, such as payment information. The terminal ID 3133 is identification information of the tablet terminal 103 used by the supporter (grandparent) U3.
The support information DB 314 is a database that manages a plurality of support information candidates 3141 to 314n (n is a natural number of 2 or more). The support information candidates 3141 to 314n are candidates for the support information to be provided to the target person (grandchild) U1, such as presents (products), services (educational content), travel proposal information, and display content of electronic data (electronic books and the like).
The memory 320 is a volatile storage device such as a RAM (Random Access Memory) and serves as a storage area for temporarily holding information during the operation of the control unit 340. The communication unit 330 is a communication interface with the network N.
The control unit 340 is a processor, that is, a control device, that controls each component of the support device 300. The control unit 340 loads the program 311 from the storage unit 310 into the memory 320 and executes it, thereby realizing the functions of a registration unit 341, an acquisition unit 342, an authentication control unit 343, a specifying unit 344, a determination unit 345, a presentation unit 346, and a processing unit 347.
The registration unit 341 is an example of the registration unit 11 described above. The registration unit 341 receives a pre-registration request from the tablet terminal 101 or the smartphone 102 and transmits to the authentication device 200 a face information registration request including the face image (of the target person (grandchild) U1) contained in the pre-registration request. The registration unit 341 then acquires the user ID issued by the authentication device 200 upon registration of the face information, generates the target person management information 312 by associating the acquired user ID (target person ID 3121) with the target person information and supporter information included in the pre-registration request, and registers the generated target person management information 312 in the storage unit 310. In other words, the registration unit 341 registers the facial feature information 212 (first biometric information) of the target person (grandchild) U1 in the authentication device 200 in advance, and registers the supporter ID 3122, the terminal ID 3123, the target person information 3124, and the like in association with the facial feature information 212 of the target person (grandchild) U1 via the target person ID 3121.
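The pre-registration flow described above can be sketched as follows. This is an illustration under assumptions: `issue_user_id` stands in for the authentication device 200, and the dictionary shapes are not defined in the patent.

```python
# Hypothetical sketch of the pre-registration flow in registration unit 341:
# register the face image with the authentication device, receive the issued
# user ID, and key the new management record by that ID.
import itertools

_counter = itertools.count(1)
FACE_FEATURE_STORE = {}  # stands in for face information DB 210

def issue_user_id(face_image):
    """Authentication device 200 role: store features, issue a new user ID."""
    user_id = "user-{}".format(next(_counter))
    FACE_FEATURE_STORE[user_id] = face_image
    return user_id

def pre_register(face_image, subject_info, supporter_info):
    """Registration unit 341 role: obtain a user ID, build management info."""
    subject_id = issue_user_id(face_image)  # acquired user ID = subject ID 3121
    return {
        "subject_id": subject_id,
        "subject_info": subject_info,
        "supporter_id": supporter_info["supporter_id"],
    }
```

The returned record corresponds to the target person management information 312 that is then stored in the storage unit 310.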
In addition, the registration unit 341 sets the photographer of the first captured image, identified by the specifying unit 344 described later, as the target person ID 3121, and sets the first captured image as the object image 3125; the registration unit 341 then registers (updates) the target person management information 312 by associating the target person ID 3121 with the object image 3125. The registration unit 341 also registers the position information of the shooting location acquired together with the first captured image in association with the object image 3125.
The registration unit 341 also registers thank-you information for the support information in the target person management information 312 in association with the target person ID 3121. For example, when the support information is display content and the thank-you information includes a second captured image of the target person (grandchild) U1, the registration unit 341 registers the target person ID 3121, the display content, and the target person image 3126 (second captured image) in the target person management information 312 in association with one another. Furthermore, when the content position of the display content displayed on the tablet terminal 101 at the time the second captured image was taken is also acquired, the registration unit 341 registers the target person ID 3121, the display content, the content position, and the target person image 3126 (second captured image) in the target person management information 312 in association with one another. Moreover, the registration unit 341 may register the second captured image, the display content, and the content position in association with one another only when the degree of satisfaction of the target person (grandchild) U1, measured from the second captured image by the processing unit 347 described later, is equal to or greater than a predetermined value.
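The satisfaction-gated registration described above can be sketched as follows. The threshold value and record shape are illustrative assumptions; the satisfaction score is taken as an input here, standing in for the measurement by the processing unit 347.

```python
# Hypothetical sketch of the conditional registration by registration unit 341:
# store the second captured image, display content, and content position only
# when the measured satisfaction meets a predetermined value.
SATISFACTION_THRESHOLD = 0.7  # illustrative predetermined value

def register_thanks(store, subject_id, image, content, content_pos, satisfaction):
    """Append a thank-you record keyed by subject ID if satisfaction qualifies."""
    if satisfaction < SATISFACTION_THRESHOLD:
        return False  # below the predetermined value: do not register
    store.setdefault(subject_id, []).append(
        {"image": image, "content": content, "content_position": content_pos}
    )
    return True
```

A record registered this way later feeds the interest identification, since its content position likely marks a page the target person showed interest in.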
The acquisition unit 342 is an example of the acquisition unit 12 described above. The acquisition unit 342 acquires a registration request including a first captured image from the tablet terminal 101. The acquisition unit 342 also acquires a second captured image of the target person (grandchild) U1 taken while display content was being displayed on the tablet terminal 101 (first terminal), and may further acquire, together with the second captured image, the content position (display position) of the display content being displayed on the tablet terminal 101 at the time of capture. The acquisition unit 342 may acquire from the tablet terminal 101, together with the first captured image, the photographer's second biometric information (a face image or the like), or alternatively the photographer's identification information or the terminal ID of the tablet terminal 101. Furthermore, the acquisition unit 342 may acquire position information of the shooting location together with the first captured image.
The authentication control unit 343 controls face authentication for the face image included in the registration request acquired by the acquisition unit 342. Specifically, the authentication control unit 343 transmits a face authentication request including the face image to the authentication device 200 and receives the face authentication result from the authentication device 200. Note that the authentication control unit 343 may detect the user's face region from the face image and include the image of the face region in the face authentication request, or may extract facial feature information from the face region and include the facial feature information in the face authentication request.
Note that the acquisition unit 342 also performs the collection processing for collecting the other target person information described above. The other target person information may include, together with a face image or the like of the target person (grandchild) U1, the voice information, voice recognition results, health information, physical condition, requests, and the like described above. The authentication control unit 343 performs face authentication on the face image included in the target person information and identifies the target person ID 3121. The registration unit 341 then registers the identified target person ID 3121 in the target person management information 312 in association with the voice information, voice recognition results, health information, physical condition, requests, and the like included in the target person information.
The specifying unit 344 is an example of the specifying unit 13 described above. The specifying unit 344 identifies the photographer of the first captured image. For example, when a face image is included in the registration request for the first captured image, the specifying unit 344 acquires the face authentication result from the authentication control unit 343 and, if the face authentication succeeded, identifies the user ID included in the face authentication result as the photographer, treating that user ID as the target person ID 3121. When the registration request for the first captured image includes the photographer's identification information, the specifying unit 344 identifies the photographer by treating that identification information as the target person ID 3121. When the registration request for the first captured image includes the terminal ID of the tablet terminal 101, the specifying unit 344 identifies as the photographer the target person ID 3121 associated with that terminal ID 3123.
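The fallback order the specifying unit 344 uses to identify the photographer can be sketched as follows. The request keys and the terminal lookup table are illustrative assumptions.

```python
# Hypothetical sketch of photographer identification by specifying unit 344:
# face authentication result first, then explicit identification information,
# then a terminal-ID lookup.
TERMINAL_TO_SUBJECT = {"T-101": "U1"}  # terminal ID 3123 -> subject ID 3121

def identify_photographer(request, face_auth_result=None):
    if face_auth_result and face_auth_result.get("success"):
        # user ID included in the face authentication result
        return face_auth_result["user_id"]
    if "photographer_id" in request:
        # identification information of the photographer in the request
        return request["photographer_id"]
    if "terminal_id" in request:
        # subject ID associated with the terminal ID
        return TERMINAL_TO_SUBJECT.get(request["terminal_id"])
    return None
```

Whichever branch succeeds yields the target person ID 3121 under which the first captured image is registered as the object image 3125.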
The specifying unit 344 also identifies the interests of the target person based on the target person information 3124 and the object image 3125 associated with a particular target person ID 3121. The specifying unit 344 may further identify the target person's interests based on the thank-you information, the target person image 3126, the display content, the content position, the position information of the shooting location of the first captured image, and the like associated with that target person ID 3121, which can improve the accuracy of interest identification. Here, the content position is, as described above, the display position of the display content being displayed on the tablet terminal 101 when the second captured image was taken. For example, when the display content includes a plurality of pages displayed page by page on the screen of the tablet terminal 101, the content position (display position) is the displayed page number or the like. The second captured image is an image of the target person (grandchild) U1 taken while the tablet terminal 101 was displaying (reproducing) a given page (content position) of the display content serving as the support information. For example, when the target person's parent U2 observes that the target person (grandchild) U1 is showing interest in a specific page of the display content, the parent U2 may photograph the target person (grandchild) U1 with the inner camera of the tablet terminal 101; the tablet terminal 101 may treat the image captured at that moment as the second captured image and the page displayed at that moment as the content position. Alternatively, the display content and content position may be registered only when the degree of satisfaction described above is equal to or greater than the predetermined value. In these cases, the content position is highly likely to indicate the page and content within the display content in which the target person (grandchild) U1 showed interest. Therefore, by taking the display content and content position into account, the specifying unit 344 can identify the target person's interests with high accuracy.
 The identifying unit 344 may also exclude face-recognized regions from the first captured image and identify the target person's interest based on the resulting image. For example, when the target person (grandchild) U1 takes a picture, privacy-related information such as a third party's face may be captured unintentionally. In such a case, the third party's face has nothing to do with the interest of the target person (grandchild) U1. By excluding unintentionally captured information from the targets of interest identification, the identifying unit 344 can therefore improve identification accuracy. Among the face-recognized regions in the first captured image, the identifying unit 344 preferably excludes only the regions of persons other than the target person. In this case, the registration unit 341 described above may register the image after exclusion as the object image 3125. Alternatively, the registration unit 341 may exclude all face-recognized regions from the first captured image and register the resulting image as the object image 3125. These measures protect the privacy of third parties, so the target person's parent U2 can let the target person (grandchild) U1 use the support system 1000 with peace of mind while respecting the privacy of third parties.
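The exclusion described above can be sketched as masking out face-recognized rectangles before the image is passed to interest analysis. The following is a minimal illustration in plain Python; the image representation, bounding-box format, and function name are assumptions for illustration, not part of the disclosed system:

```python
def mask_regions(image, face_boxes, keep_box=None):
    """Zero out face-recognized rectangles, optionally keeping the target
    person's own box (keep_box). image is a 2-D list of pixel values;
    each box is (top, left, bottom, right), exclusive on bottom/right."""
    masked = [row[:] for row in image]  # copy so the original image is untouched
    for box in face_boxes:
        if box == keep_box:
            continue  # the target person's own face region is not excluded
        top, left, bottom, right = box
        for y in range(top, bottom):
            for x in range(left, right):
                masked[y][x] = 0  # blank out a third party's face region
    return masked

# Example: a 4x4 image with one third-party face at rows 0-1, cols 0-1.
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
masked = mask_regions(image, face_boxes=[(0, 0, 2, 2)])
```

The masked image can then be registered as the object image 3125 or analyzed for interest, while the unmasked original is discarded.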
 The determination unit 345 is an example of the determination unit 14 described above. Based on the interest identified by the identifying unit 344, the determination unit 345 determines one or more support information candidates 3141 to 314n from the support information DB 314.
 The presentation unit 346 is an example of the presentation unit 15 described above. The presentation unit 346 presents the determined support information candidates to the supporter (grandparent) U3 by transmitting them to the tablet terminal 103.
 The processing unit 347 receives a support instruction including the support information selected by the supporter (grandparent) U3 from among the presented support information candidates, and performs processing according to the support information. Specifically, the processing unit 347 performs processing for providing the support information to the target person (grandchild) U1. For example, when the support information is a product, the processing unit 347 carries out the purchase and shipping procedures for the product as the "processing for providing"; for instance, it transmits a purchase and shipping request to the sales system for the selected product, with the residence of the target person (grandchild) U1 as the shipping destination. When the support information is a trip, the processing unit 347 performs reservation processing for the selected trip as the "processing for providing"; for instance, it transmits, to the server of the travel agency handling the selected trip, a reservation request for a trip on a predetermined schedule with the target person (grandchild) U1, the target person's parent U2, and the supporter (grandparent) U3 as participants. In these cases, the processing unit 347 also notifies the tablet terminal 101 of the shipping details or the reservation details. When the support information is an electronic book (display content), the processing unit 347 transmits a purchase request for the electronic book, using the payment information of the supporter (grandparent) U3, to the electronic book sales system. After the purchase, the processing unit 347 notifies the tablet terminal 101 of the download location of the electronic book. The processing unit 347 performs these purchase requests and notifications as the "processing for providing". In these cases, the processing unit 347 settles payment using the payment information of the supporter (grandparent) U3.
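The branching by support information type (product, trip, electronic book) described above can be sketched as a dispatch table. All endpoint and field names below are hypothetical stand-ins for the external sales, travel-agency, and e-book systems that the text only names abstractly:

```python
def provide(support_info):
    """Route the 'processing for providing' by the type of support information.
    Returns a (system, action, item) tuple standing in for the outgoing request."""
    handlers = {
        "product": lambda s: ("sales_system", "purchase_and_ship", s["item"]),
        "trip":    lambda s: ("travel_agency", "reserve", s["item"]),
        "ebook":   lambda s: ("ebook_store", "purchase_and_link", s["item"]),
    }
    handler = handlers.get(support_info["type"])
    if handler is None:
        raise ValueError(f"unknown support information type: {support_info['type']}")
    return handler(support_info)
```

In each case, a real processing unit 347 would follow the dispatched request with the corresponding notification to the tablet terminal 101 (shipping details, reservation details, or download location).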
 The processing unit 347 also measures the degree of satisfaction of the target person (grandchild) U1 from the second captured image and, when the degree of satisfaction is equal to or greater than a predetermined value, notifies the tablet terminal 103 of the supporter (grandparent) U3 of the second captured image. For example, the processing unit 347 may identify the face region of the target person (grandchild) U1 included in the second captured image and measure the degree of satisfaction from the degree of smiling or the like using a predetermined smile analysis technique.
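The satisfaction-based notification amounts to a threshold check. In the sketch below, the smile score stands in for the output of the "predetermined smile analysis technique", whose internals the text leaves open; the threshold value and payload fields are assumptions for illustration:

```python
def notify_if_satisfied(smile_score, threshold=0.7):
    """Return the notification payload for the supporter's terminal when the
    measured satisfaction meets the predetermined value; otherwise None."""
    if smile_score >= threshold:
        # Forward the second captured image to the supporter's tablet terminal
        return {"notify": "supporter_terminal", "attach": "second_captured_image"}
    return None  # below the threshold: no notification is sent

payload = notify_if_satisfied(0.85)
```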
 FIG. 7 is a sequence diagram showing the flow of the pre-registration processing according to the second embodiment. Here, it is assumed that the target person's parent U2 performs pre-registration to associate the target person (grandchild) U1 with the supporter (grandparent) U3. First, the smartphone 102 photographs the face of the target person (grandchild) U1 with its camera in response to an operation by the parent U2. The smartphone 102 also accepts, via input from the parent U2, the target person information of the target person (grandchild) U1 and the supporter information of the supporter (grandparent) U3. Then, in response to an operation by the parent U2, the smartphone 102 transmits a pre-registration request including the target person information and the supporter information to the support device 300 via the network N (S101). Here, the pre-registration request includes the face image of the target person (grandchild) U1.
 The registration unit 341 of the support device 300 receives the pre-registration request from the smartphone 102 via the network N. The registration unit 341 then transmits a face information registration request, including the face image of the target person (grandchild) U1 contained in the pre-registration request, to the authentication device 200 (S102). In response, the authentication device 200 performs face information registration processing (S103).
 FIG. 8 is a flowchart showing the flow of the face information registration processing performed by the authentication device according to the second embodiment. Here, an information registration terminal (not shown) photographs the body, including the face, of a user and transmits a face information registration request including the captured image (registration image) to the authentication device 200 via the network N. The information registration terminal is, for example, an information processing device such as a personal computer, smartphone, or tablet terminal; it may be the tablet terminal 101, the smartphone 102, the tablet terminal 103, or the like. Here, the information registration terminal is assumed to be the support device 300, which has received the pre-registration request from the smartphone 102 or the like.
 First, the authentication device 200 receives a face information registration request (S201); for example, it receives the request from the support device 300 via the network N. Next, the face detection unit 220 detects a face region from the face image included in the face information registration request (S202). The feature point extraction unit 230 then extracts feature points (facial feature information) from the face region detected in step S202 (S203). The registration unit 240 then issues a user ID 211 (S204), associates the extracted facial feature information 212 with the issued user ID 211, and registers them in the face information DB 210 (S205). After that, the registration unit 240 returns the issued user ID 211 to the request source (the information registration terminal, for example the support device 300) (S206).
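Steps S201 to S206 amount to: extract facial features, issue a fresh user ID, store the pair, and return the ID. A minimal in-memory sketch follows; the feature extractor is a stub, and the class, ID format, and storage layout are assumptions (a real system would run an actual face-detection and feature-extraction model):

```python
import itertools

def extract_features(face_image):
    # Hypothetical stub for S202-S203: a real implementation would detect
    # the face region and extract feature points from it.
    return tuple(face_image)

class FaceInfoDB:
    """In-memory stand-in for the face information DB 210."""
    def __init__(self):
        self._ids = itertools.count(1)   # user ID issuance counter (S204)
        self.records = {}                # user ID -> facial feature information

    def register(self, face_image):
        features = extract_features(face_image)  # S202-S203
        user_id = f"U{next(self._ids):04d}"      # S204: issue a user ID
        self.records[user_id] = features         # S205: store the association
        return user_id                           # S206: return to the requester

db = FaceInfoDB()
issued = db.register([0.1, 0.2, 0.3])
```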
 Returning to FIG. 7, the description continues. The registration unit 341 of the support device 300 acquires the issued user ID from the authentication device 200 via the network N (S104) and sets the acquired user ID as the target person ID 3121. The registration unit 341 also extracts, from the target person information included in the pre-registration request, the information other than the face image (the personal information and the terminal ID of the tablet terminal 101), and extracts the supporter ID included in the supporter information in the pre-registration request. The registration unit 341 then generates the target person management information 312 by associating the target person ID 3121, the supporter ID 3122, the terminal ID 3123, and the target person information 3124 (excluding the face image) with one another, and registers the generated target person management information 312 in the storage unit 310 (S105).
 FIG. 9 is a flowchart showing the flow of the collection processing according to the second embodiment. First, the tablet terminal 101 photographs an object with the camera 110 in response to an operation by the target person (grandchild) U1 (S111). Next, the tablet terminal 101 transmits a registration request including the first captured image to the support device 300 via the network N (S112). Here, the registration request includes the face image or identification information of the target person (grandchild) U1, or the terminal ID of the tablet terminal 101. In the following description, the registration request is assumed to include the face image of the target person (grandchild) U1.
 The acquisition unit 342 of the support device 300 acquires the registration request including the first captured image (and the face image) from the tablet terminal 101 via the network N. The authentication control unit 343 then transmits a face authentication request including the face image contained in the registration request to the authentication device 200 via the network N (S113). In response, the authentication device 200 performs face authentication processing (S114).
 FIG. 10 is a flowchart showing the flow of the face authentication processing performed by the authentication device according to the second embodiment. First, the authentication device 200 receives a face authentication request from the support device 300 via the network N (S211). Note that the authentication device 200 may instead receive a face authentication request from the tablet terminal 101 or the like. Next, the authentication device 200 extracts facial feature information from the face image included in the face authentication request, as in steps S202 and S203 described above. The authentication unit 250 of the authentication device 200 then collates the facial feature information extracted from the face image in the face authentication request against the facial feature information 212 in the face information DB 210 (S212) and calculates a degree of matching. The authentication unit 250 then determines whether the degree of matching is equal to or greater than a threshold (S213). If the facial feature information matches, that is, if the degree of matching is equal to or greater than the threshold, the authentication unit 250 identifies the user ID 211 associated with the facial feature information 212 (S214) and returns a face authentication result, including an indication that face authentication succeeded and the identified user ID 211, to the support device 300 via the network N (S215). If the degree of matching is less than the threshold in step S213, the authentication unit 250 returns a face authentication result indicating that face authentication failed to the support device 300 via the network N (S216).
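The collation in steps S212 to S216 can be sketched as a best-match search with a threshold test. The similarity measure (cosine similarity here) and the threshold value are assumptions for illustration; the document specifies only that a degree of matching is compared against a threshold:

```python
import math

def authenticate(query, db_records, threshold=0.9):
    """Collate query features against stored features (S212) and return
    (success, user_id) per the threshold test in S213-S216."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    best_id, best_score = None, 0.0
    for user_id, feats in db_records.items():
        score = cosine(query, feats)
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score >= threshold:
        return True, best_id   # S214-S215: success with the identified user ID
    return False, None         # S216: failure

db = {"U0001": (1.0, 0.0), "U0002": (0.0, 1.0)}
ok, who = authenticate((0.99, 0.05), db)
```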
 Returning to FIG. 9, the description continues. The authentication control unit 343 of the support device 300 receives the face authentication result from the authentication device 200 via the network N (S115). Here, it is assumed that face authentication succeeded and that the face authentication result includes an indication of success and the user ID.
 Next, the identifying unit 344 determines from the received face authentication result whether face authentication succeeded. When determining that face authentication succeeded, the identifying unit 344 extracts the user ID included in the face authentication result and identifies that user ID as the photographer (S116). When face authentication fails, the support device 300 may notify the tablet terminal 101 to that effect.
 After step S116, the registration unit 341 registers the identified photographer as the target person in association with the first captured image (S117). Specifically, the registration unit 341 sets the user ID identified in step S116 as the target person ID 3121 and the first captured image included in the registration request as the object image 3125, and updates the target person management information 312 by associating the target person ID 3121 with the object image 3125.
 Although FIG. 9 shows an example in which the photographer of the first captured image is identified through face authentication requested by the support device 300 from the authentication device 200, the present invention is not limited to this. The support device 300 may identify the photographer using the user's identification information or the terminal ID of the tablet terminal 101. For example, the tablet terminal 101 may capture a face image of the target person (grandchild) U1 with its inner (user-facing) camera and transmit a face authentication request including the face image to the authentication device 200 via the network N. The tablet terminal 101 then receives the face authentication result from the authentication device 200 via the network N. When face authentication succeeds, the tablet terminal 101 can identify (acquire) the user ID (identification information) of the target person (grandchild) U1 from the face authentication result. Alternatively, the tablet terminal 101 may identify the person using the ID and password (passcode) entered by the target person (grandchild) U1 when logging in to the terminal. In this case, it suffices that the ID used for the successful login is identical to, or can be uniquely associated with, the user ID managed by the authentication device 200. Alternatively, the tablet terminal 101 may store the face image and user ID of the target person (grandchild) U1 in the storage unit 150 in advance. In this case, when the target person (grandchild) U1 takes the first captured image, the inner (user-facing) camera may capture a face image of the target person (grandchild) U1, and face authentication may be performed by collating the captured face image against the face image in the storage unit 150. When this in-terminal face authentication succeeds, the tablet terminal 101 can identify the user ID. With any of these methods, in step S112 the tablet terminal 101 can transmit the registration request including the identified user ID (of the target person (grandchild) U1) together with the first captured image. In this case, instead of performing steps S113 and S115, the support device 300 can identify the photographer from the user ID included in the received registration request. Alternatively, the tablet terminal 101 may transmit the registration request including the terminal ID together with the first captured image. In this case, instead of performing steps S113 and S115, the support device 300 can identify the target person ID 3121 associated with the terminal ID 3123 included in the received registration request as the photographer. Alternatively, the support device 300 may identify the photographer by biometric authentication other than face authentication.
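The alternatives above (a user ID already resolved by face authentication or login, or a terminal ID mapped to a target person) can be folded into a single photographer-identification step on the support device. A hedged sketch; the request field names and mapping structure are invented for illustration:

```python
def identify_photographer(request, terminal_to_subject):
    """Resolve the photographer of the first captured image from whichever
    identifier the registration request carries, per the alternatives above."""
    if "user_id" in request:
        # Face authentication or login has already identified the user.
        return request["user_id"]
    if "terminal_id" in request:
        # Map terminal ID 3123 to the associated target person ID 3121.
        return terminal_to_subject.get(request["terminal_id"])
    return None  # fall back to server-side face authentication (S113-S115)

mapping = {"T-101": "U0001"}
```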
 FIG. 11 is a flowchart showing the flow of the support information provision processing according to the second embodiment. The support device 300 starts the support information provision processing at a predetermined timing or in response to a presentation request from the smartphone 102 or the tablet terminal 103. In the following description, it is assumed that a presentation request has been received from the tablet terminal 103.
 First, the tablet terminal 103 transmits a request for presentation of support information candidates for the target person (grandchild) U1 to the support device 300 via the network N in response to an operation by the supporter (grandparent) U3 (S121). Here, the presentation request includes the terminal ID of the tablet terminal 103. The presentation request may also include the supporter ID and face image of the supporter (grandparent) U3.
 The acquisition unit 342 of the support device 300 acquires the presentation request from the tablet terminal 103 via the network N. The identifying unit 344 identifies, from the supporter information 313, the supporter ID 3131 associated with the terminal ID 3133 included in the acquired presentation request, and then identifies the target person ID 3121 associated with the identified supporter ID 3131 (supporter ID 3122) (S122). The acquisition unit 342 then acquires, from the target person management information 312, the target person information 3124 and the object images 3125 associated with the identified target person ID 3121 (S123). The identifying unit 344 then analyzes the object images 3125 and identifies the interest of the target person (grandchild) U1 based on the analysis result and the target person information 3124 (S124). The determination unit 345 then determines one or more support information candidates 3141 and so on from the support information DB 314 based on the identified interest (S125). An AI model may be used for part or all of steps S124 and S125.
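One simple reading of steps S124 and S125 is: tally object labels recognized in the object images, take the dominant label as the interest, and select candidates tagged with it. The label names, tag scheme, and candidate table below are invented for illustration, and the document notes that an AI model may replace part or all of this logic:

```python
from collections import Counter

def identify_interest(object_labels):
    """S124: take the most frequent label across the object images as the
    target person's interest (a deliberately simple stand-in)."""
    counts = Counter(object_labels)
    return counts.most_common(1)[0][0] if counts else None

def decide_candidates(interest, support_db):
    """S125: pick support information candidates whose tags match the interest."""
    return [item for item in support_db if interest in item["tags"]]

support_db = [
    {"name": "plant encyclopedia", "tags": {"flower", "plant"}},
    {"name": "flower arrangement course", "tags": {"flower"}},
    {"name": "train picture book", "tags": {"train"}},
]
interest = identify_interest(["flower", "flower", "dog"])
candidates = decide_candidates(interest, support_db)
```

This mirrors the FIG. 12 example, where object images dominated by flowers lead to flower-related candidates being presented.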
 The presentation unit 346 then transmits the determined support information candidates to the tablet terminal 103 via the network N (S126). In response, the tablet terminal 103 displays the received support information candidate(s).
 FIG. 12 is a diagram showing an example of the presentation screen 51 on which support information candidates are presented to the supporter according to the second embodiment. The presentation screen 51 includes selection fields 5111 to 5113, support information candidate display fields 5121 to 5123, an other-candidates display button 5131, a purchase button 5132, and a cancel button 5133. The selection fields 5111 to 5113 accept a selection operation by the supporter (grandparent) U3. The support information candidate display fields 5121 to 5123 display the support information candidates determined in step S125. For example, suppose that the object images 3125 contained many flowers as objects. In the example of FIG. 12, the support information candidate display field 5121 therefore presents (recommends) a plant encyclopedia, the field 5122 a hands-on flower arrangement course, and the field 5123 a sunflower growing kit. The other-candidates display button 5131 is a button for displaying support information candidates other than those in the fields 5121 to 5123. For example, when the other-candidates display button 5131 is pressed, the tablet terminal 103 transmits a request for presentation of other candidates to the support device 300 and, upon receiving other candidates, updates the support information candidate display fields 5121 to 5123 with the received candidates. The purchase button 5132 accepts the purchase, as support information, of the candidate selected in the selection fields 5111 to 5113. The cancel button 5133 is a button for canceling or ending the presentation of the support information candidates (closing the presentation screen 51).
 The example of FIG. 12 shows that the supporter (grandparent) U3 has selected the selection field 5111, that is, the "plant encyclopedia", as the support information (S127) and pressed the purchase button 5132. In this case, the tablet terminal 103 transmits a support instruction including the ID (support information) of the support information candidate corresponding to the support information candidate display field 5121 (plant encyclopedia) to the support device 300 via the network N (S128).
 The processing unit 347 of the support device 300 receives the support instruction from the tablet terminal 103 via the network N and performs processing for providing the support information included in the support instruction (S129). As necessary, the processing unit 347 also notifies the tablet terminal 101 of information regarding the support information via the network N (S130). Specifically, the processing unit 347 identifies the terminal ID 3123 associated with the target person ID 3121 identified in step S122 and sends a notification about the provision of the support information addressed to the terminal ID 3123 (the tablet terminal 101). For example, if the support information is a plant encyclopedia in electronic book form, the processing unit 347 notifies the tablet terminal 101 of the download location of the electronic book. Note that step S130 may be omitted, for example, when a present (product) is to be delivered to the target person (grandchild) U1 as a surprise.
 After that, the target person (grandchild) U1 (or the tablet terminal 101) receives the support information from the supporter (grandparent) U3 (S131). The tablet terminal 101 then transmits the thank-you information entered by the target person (grandchild) U1 or the target person's parent U2 to the support device 300 via the network N (S132). Here, the thank-you information includes the text of the thank-you message and the second captured image of the target person (grandchild) U1 taken when the support information was received. The thank-you information also includes the face image for authentication of the target person (grandchild) U1, the target person ID, the terminal ID of the tablet terminal 101, and the like.
 The acquisition unit 342 of the support device 300 acquires the thank-you information from the tablet terminal 101 via the network N. The identifying unit 344 identifies the target person (grandchild) U1 from the face image, the target person ID, the terminal ID of the tablet terminal 101, and the like included in the thank-you information. For example, the authentication control unit 343 may perform face authentication on the face image included in the thank-you information. The registration unit 341 then associates the target person ID 3121 of the target person (grandchild) U1 with the thank-you information and registers them in the target person management information 312 (S133). The processing unit 347 also notifies the tablet terminal 103 of the acquired thank-you information via the network N (S134). In response, the tablet terminal 103 displays the notified thank-you information (S135).
 FIG. 13 is a diagram for explaining the concept of presentation of support information candidates, selection and purchase, delivery, and notification of a thank-you comment according to the second embodiment. Assume that the tablet terminal 103 displays the presentation screen 51a and that the supporter (grandparent) U3 has selected and purchased the plant encyclopedia as the support information. In response, the support device 300 performs the purchase and delivery processing for the plant encyclopedia, and the plant encyclopedia is delivered to the target person (grandchild) U1. At this point, the target person's parent U2 uses the smartphone 102 to take a second captured image that includes the target person (grandchild) U1 delighting in the plant encyclopedia. The second captured image may instead be taken by the tablet terminal 101. Then, in response to an operation by the target person (grandchild) U1 or the parent U2, the tablet terminal 101 or the smartphone 102 transmits thank-you information including the second captured image and a thank-you comment from the target person (grandchild) U1 to the support device 300. The support device 300 notifies the tablet terminal 103 of the thank-you information, and the tablet terminal 103 displays the thank-you information display screen 51b in response. Through the thank-you information display screen 51b, the supporter (grandparent) U3 can thus view the thank-you comment of the target person (grandchild) U1 and the photograph of the target person (grandchild) U1 enjoying the plant encyclopedia.
 As described above, the second embodiment provides the following effects in addition to those of the first embodiment. First, because support information is provided to the target person in accordance with a support instruction from the supporter, support information matching the target person's interests can be provided. The target person also learns of the supporter's gift through the notification of the support information. Furthermore, because the registration request for the first captured image includes the target person's identification information (face image, identification information, terminal ID), the target person or a relative need not enter identification information at every registration, and the photographer of the first captured image can be identified accurately. In particular, by performing biometric authentication using biometric information as the identification information, the photographer can be identified with high accuracy. In addition, because the shooting location is registered together with the first captured image, the accuracy of identifying the target person's interests is improved.
 Note that the tablet terminal 101 may transmit the first captured image to a storage area allocated to the target person (grandchild) U1 on the network N. In this case, the identification unit 344 of the support device 300 can detect that the first captured image has been saved in the storage area and identify the user ID assigned to that storage area as the photographer.
 The support device 300 may also analyze captured images in which the target person (grandchild) U1 appears, from among the photographs taken by the parent U2, to identify the interests of the target person (grandchild) U1 and present them to the supporter (grandparent) U3. Specifically, the smartphone 102 transmits an image, captured by the parent U2 and including the target person (grandchild) U1 and an object, to the support device 300 via the network N, and the support device 300 analyzes the received image. For example, if the parent's photographs include many pictures of the child together with animals (or a particular character), the support device 300 determines that the child is interested in animals (or that character) and presents an animal encyclopedia (or a picture book featuring that character) to the grandparents. In this way, the target person's interests can be appropriately identified and support information candidates matching those interests can be presented.
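As a minimal sketch of the interest-identification heuristic just described, the support device could count how often the target person co-occurs with each object tag in the parent's photos. This is an illustration only; it assumes the images have already been labeled with tags by an image recognition engine, and the function and tag names are hypothetical, not part of the disclosed system.

```python
from collections import Counter

def infer_interests(photo_tags, subject_tag, top_n=1):
    # Count each object tag that co-occurs with the subject in a photo.
    counts = Counter()
    for tags in photo_tags:
        if subject_tag in tags:
            for tag in tags:
                if tag != subject_tag:
                    counts[tag] += 1
    # The most frequent co-occurring tags are the presumed interests.
    return [tag for tag, _ in counts.most_common(top_n)]

photos = [
    {"child", "dog"},      # parent's photo of the child with a dog
    {"child", "dog"},
    {"child", "flower"},
    {"dog"},               # subject absent: does not count
]
print(infer_interests(photos, "child"))  # → ['dog']
```

A real deployment would of course derive the tag sets from the recognition engine's output rather than hand-written literals.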
<Embodiment 3>
 The third embodiment is a modification of the second embodiment described above. The schematic configuration of the support system according to the third embodiment is the same as that of FIG. 3, so the description below focuses on the differences; illustration and description of overlapping configurations are omitted.
 FIG. 14 is a block diagram showing the configuration of a support device 300a according to the third embodiment. Compared with FIG. 6, the support device 300a has the program 311a in place of the program 311 and additionally includes an inquiry unit 348. The program 311a is a computer program in which the inquiry processing according to the third embodiment is implemented in addition to the program 311.
 The inquiry unit 348 inquires of the target person about attribute information of the first captured image (object image 3125). The acquisition unit 342 acquires the attribute information from the target person in response to the inquiry. The identification unit 344 identifies the target person's interests based on the first captured image and the attribute information.
 FIG. 15 is a sequence diagram showing the flow of inquiry processing for a captured image according to the third embodiment. First, the support device 300a starts the inquiry processing at a predetermined timing. For example, the support device 300a may start the inquiry processing after the first captured image is registered in the collection processing of FIG. 9.
 The inquiry unit 348 reads the object images 3125 (a group of images) of a specific target person (for example, the target person (grandchild) U1) (S301). The inquiry unit 348 then generates the inquiry content (S302). Here, the inquiry unit 348 generates a question prompting the target person (grandchild) U1 to select favorite images from the object image group. The inquiry unit 348 then transmits the generated inquiry content and the object image group to the tablet terminal 101 via the network N (S303). In response, the tablet terminal 101 displays the received inquiry content and object image group (S304).
 FIG. 16 is a diagram showing an example of a selection screen 52 for choosing favorites among captured images according to the third embodiment. The selection screen 52 includes an inquiry message 521 and a captured image group 522. The inquiry message 521 is a display field for the inquiry content generated in step S302; here it shows the message "I'll show you your registered photos. Do you have any favorites?", indicating that the target person (grandchild) U1 is being asked a question. The captured image group 522 is a display field for images previously captured by the target person (grandchild) U1 together with their tags (labels). A tag is information identified as attribute information of an image when the identification unit 344 analyzes the first captured image. A tag may also be information that, by being associated with an image, can be used as teacher data for machine learning of an image recognition engine described later. The identification unit 344 may identify tags by analyzing images with a predetermined image recognition engine.
 The tablet terminal 101 accepts a selection operation by the target person (grandchild) U1 on one or more images in the captured image group 522 (S305). The tablet terminal 101 then transmits the selected images or tags (selection information) to the support device 300a via the network N (S306).
 The acquisition unit 342 of the support device 300a acquires the selection information from the tablet terminal 101 via the network N. The identification unit 344 then identifies the interests of the target person (grandchild) U1 based on the selected images indicated by the selection information (S307). Specifically, the identification unit 344 identifies the tags (attribute information) of the selected images as interests. The determination unit 345 then registers the identified interests in the target person information 3124 (S308). Thereafter, the determination unit 345 can determine support information candidates for the target person (grandchild) U1 while taking into account that, among the image group captured by the target person (grandchild) U1, the selected images reflect stronger interest than the others. Support information candidates matching the interests of the target person (grandchild) U1 can therefore be presented. In other words, having the target person (grandchild) U1 select favorite images increases the accuracy of interest identification. Conversely, the image group captured by the target person (grandchild) U1 may include photographs that are not necessarily favorites, so having the target person (grandchild) U1 select favorites makes it possible to narrow the group down to the images of genuine interest.
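One way to realize the weighting of favorite-selected images described in steps S307 to S308 is to score each tag, counting tags from selected favorites more heavily than tags from the rest of the image group. The sketch below uses hypothetical names and an arbitrary weight; it is not the disclosed implementation.

```python
def rank_interests(image_tags, favorite_ids, fav_weight=2.0):
    # Score tags across the subject's photos, weighting tags attached
    # to favorite-selected images higher than the rest.
    scores = {}
    for image_id, tags in image_tags.items():
        weight = fav_weight if image_id in favorite_ids else 1.0
        for tag in tags:
            scores[tag] = scores.get(tag, 0.0) + weight
    # Highest-scoring tags first: these become the interest candidates.
    return sorted(scores, key=scores.get, reverse=True)

images = {
    "img1": ["flower"], "img2": ["flower"],          # selected as favorites
    "img3": ["car"], "img4": ["car"], "img5": ["car"],
}
print(rank_interests(images, favorite_ids={"img1", "img2"}))
# → ['flower', 'car']  (two favorites outweigh three ordinary photos)
```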
 FIG. 17 is a sequence diagram showing the flow of another example of inquiry processing for a captured image according to the third embodiment. Here, processing is performed on a single image captured by the target person (grandchild) U1.
 First, the inquiry unit 348 reads an arbitrary object image 3125 of a specific target person (for example, the target person (grandchild) U1) (S301a). For example, the inquiry unit 348 reads an object image 3125 that has no tag associated with it. The identification unit 344 analyzes the read object image 3125 and estimates tag candidates (S301b). For example, when the object image 3125 is analyzed by the image recognition engine, the identification unit 344 takes the several tags with the highest calculated scores as the tag candidates (answer options). The inquiry unit 348 then generates inquiry content asking the target person to choose among the tag candidates (S302a). Here, the inquiry unit 348 generates a question prompting the target person (grandchild) U1 to select the correct tag from the plurality of tag candidates; the question takes the form of a quiz. The inquiry unit 348 then transmits the generated inquiry content, the tag candidates, and the object image to the tablet terminal 101 via the network N (S303a). In response, the tablet terminal 101 displays the received inquiry content, tag candidates, and object image (S304a).
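The estimation of tag candidates in step S301b, taking the highest-scoring labels from the recognition engine as quiz options, could look like the following sketch. The score dictionary here stands in for a real image recognition engine's output and its values are invented for illustration.

```python
def tag_candidates(label_scores, k=3):
    # Sort the recognition engine's labels by score and keep the top k
    # as the answer options presented in the quiz.
    ranked = sorted(label_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:k]]

# Hypothetical scores for an object image such as the one in FIG. 18.
scores = {"gerbera": 0.62, "chrysanthemum": 0.21, "sunflower": 0.11, "rose": 0.04}
print(tag_candidates(scores))  # → ['gerbera', 'chrysanthemum', 'sunflower']
```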
 FIG. 18 is a diagram showing an example of a quiz-style tag candidate selection screen 53 according to the third embodiment. The selection screen 53 includes an inquiry message 531 and options 532. The inquiry message 531 is a display field for the inquiry content generated in step S302a; here it shows the message "I'll show you a registered photo. Tell me the name of what's in it! Do you know?", indicating that the target person (grandchild) U1 is being asked a question. The options 532 present candidate names for the object appearing in the object image as the tag candidates; in this example the options are "gerbera", "chrysanthemum", and "sunflower".
 The tablet terminal 101 accepts a selection operation by the target person (grandchild) U1 on one of the options 532 (S305a). The tablet terminal 101 then transmits the selected tag to the support device 300a via the network N (S306a).
 The acquisition unit 342 of the support device 300a acquires the selected tag from the tablet terminal 101 via the network N. The processing unit 347 then notifies the tablet terminal 103 of the object image and the selected tag (S309); the notification destination may also include the smartphone 102. The tablet terminal 103 displays the notified object image and tag (S310). This allows the supporter (grandparent) U3 (and the parent U2) to see the tag answer given by the target person (grandchild) U1, so the supporter (grandparent) U3 and others can follow the growth of the target person (grandchild) U1. Furthermore, the supporter (grandparent) U3 or another user may check whether the tag for the object image is correct and enter the correct tag on the tablet terminal 103. In that case, the tablet terminal 103 transmits the entered correct tag to the support device 300a via the network N (S311), the support device 300a notifies the tablet terminal 101 of the received correct tag (S312), and the tablet terminal 101 displays the notified correct tag together with the object image (S313). The target person (grandchild) U1 can thus learn the correct answer to the quiz, which promotes learning. In other words, the system can support the education of the target person (grandchild) U1 as well as interaction between the target person (grandchild) U1 and the supporter (grandparent) U3.
 Furthermore, the registration unit 341 of the support device 300a registers the correct tag in association with the object image 3125 (S314). This makes it possible to accumulate accurate tags (attribute information) for the images (object images) captured by the target person (grandchild) U1. The support device 300a may then provide pairs of an object image and its correct tag as teacher data for machine learning of the image recognition engine. This improves the accuracy, creation efficiency, and collection efficiency of the teacher data for machine learning, so the recognition accuracy of the image recognition engine can be improved efficiently.
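The accumulation of confirmed (image, tag) pairs in step S314 can be sketched as a simple store that later exports its contents as teacher data. The class and field names below are assumptions for illustration only.

```python
class TeacherDataStore:
    """Accumulates object images paired with their confirmed tags so
    the pairs can later be exported for retraining the recognition engine."""

    def __init__(self):
        self._examples = []

    def register(self, image_id, correct_tag):
        # Called when the confirmed correct tag arrives (cf. S314).
        self._examples.append({"image": image_id, "label": correct_tag})

    def export(self):
        # Hand the accumulated (image, label) pairs to a training pipeline.
        return list(self._examples)

store = TeacherDataStore()
store.register("object_image_001", "gerbera")
store.register("object_image_002", "sunflower")
print(len(store.export()))  # → 2
```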
 Note that in FIG. 17, instead of listing tag candidates as answer options, the target person (grandchild) U1 may be asked to enter a tag directly. For example, the inquiry unit 348 may skip steps S301b and S302a and instead generate inquiry content prompting the input of a tag (attribute information). In that case, the inquiry unit 348 transmits the generated inquiry content and the object image to the tablet terminal 101 via the network N, and the tablet terminal 101 displays the received inquiry content, a tag input field, and the object image.
 Also, in FIG. 17, the supporter does not necessarily have to enter the correct tag. For example, the support device 300a may determine whether the selected tag acquired in step S306a is correct and notify the tablet terminal 103 and the smartphone 102 of the determination result. This too allows the supporter (grandparent) U3 and the parent U2 to learn whether the target person (grandchild) U1 answered the question correctly, and supports interaction between the target person (grandchild) U1 and the supporter (grandparent) U3.
 FIG. 19 is a diagram showing an example of an input screen 54 for attribute information of a captured image according to the third embodiment. The input screen 54 includes an inquiry message 541, a voice recognition result display field 542, and a character input field 543. The inquiry message 541 is a display field for the inquiry content generated in step S302a; here it shows the message "Enter the name you want to register!", indicating that the target person (grandchild) U1 is being asked a question. The voice recognition result display field 542 displays the text obtained by voice recognition when the target person (grandchild) U1 speaks the name of the object and the speech is picked up by the microphone 120; here, the speech has been recognized as "gerbera". The character input field 543 displays the characters typed by the target person (grandchild) U1. The tablet terminal 101 transmits the character information entered in the voice recognition result display field 542 or the character input field 543 to the support device 300a as the tag. The subsequent processing is the same as steps S309 to S314. In this case as well, the support device 300a can provide pairs of an object image and its correct tag as teacher data for machine learning of the image recognition engine, improving the accuracy, creation efficiency, and collection efficiency of the teacher data and thus efficiently improving the recognition accuracy of the image recognition engine.
 Note that the inquiry unit 348 may also target an object image 3125 that already has a correct tag associated with it. In that case, the identification unit 344 estimates arbitrary tag candidates, and the inquiry unit 348 generates inquiry content that includes both the correct tag and the arbitrary candidates. This also supports the education of the target person (grandchild) U1 and promotes interaction with the supporter (grandparent) U3.
<Embodiment 4>
 The fourth embodiment is a modification of the second or third embodiment described above. In the fourth embodiment, when the support information is text information such as a picture book, a recording of the supporter (grandparent) U3 reading it aloud is additionally provided to the target person (grandchild) U1. The schematic configuration of the support system according to the fourth embodiment is the same as that of FIG. 3, so the description below focuses on the differences; illustration and description of overlapping configurations are omitted. The description below assumes that the support information is text information in the form of an electronic book (display content), but it may instead be a physical book.
 That is, when the display content includes text information, the processing unit requests the supporter to read the text information aloud. The acquisition unit then acquires the read-aloud audio for the text information from the supporter. The processing unit then notifies the target person's first terminal of the support information, including both the text information and the read-aloud audio, so that the audio is played back when the text information is displayed on the first terminal. The target person thus receives the read-aloud audio of the provided display content as part of the support information, which is well suited to young children. Moreover, when the supporter is an elderly person, reading a picture book aloud for a grandchild provides motivation that can double as dementia-prevention training.
 Furthermore, the acquisition unit may acquire, as a second captured image, an image of the target person captured while the text information is displayed and the read-aloud audio is played back on the first terminal. This makes it possible to record the child's expressions during the reading as images or video and to provide them to the supporter, further increasing the supporter's satisfaction.
 FIG. 20 is a sequence diagram showing the flow of read-aloud processing according to the fourth embodiment. First, assume that in step S127 of FIG. 11 described above, the supporter (grandparent) U3 selects an electronic picture book as the support information and presses the purchase button 5132. The tablet terminal 103 then transmits a support instruction including the ID (support information) of the selected text information (electronic book) to the support device 300 via the network N (S401). In response, the processing unit 347 transmits a read-aloud request for the text information included in the support instruction to the tablet terminal 103 via the network N (S402).
 The tablet terminal 103 then displays the text information (S403) and prompts the supporter (grandparent) U3 to read it aloud. When the supporter (grandparent) U3 reads the displayed text aloud, the tablet terminal 103 captures the speech picked up by the microphone 120 as the read-aloud audio (S404) and transmits it to the support device 300 via the network N (S405). In response, the acquisition unit 342 acquires the read-aloud audio.
 The registration unit 341 then associates the text information obtained in step S401 with the acquired read-aloud audio and registers them as support information in association with the target person ID 3121 (S406). The processing unit 347 also transmits the support information, including the text information and the read-aloud audio, to the tablet terminal 101 via the network N (S407). At this time, the processing unit 347 instructs the tablet terminal 101 to play back the read-aloud audio when displaying the text information.
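The support information registered in S406 and delivered in S407 pairs the e-book with the narration and an instruction to play it during display. A minimal sketch of such a bundle might look as follows; all field names and the per-page clip mapping are hypothetical assumptions, not part of the disclosure.

```python
def bundle_support_info(subject_id, book_id, narration):
    """Bundle an e-book with the supporter's recorded narration so the
    child's terminal plays the matching clip as each page is shown.
    `narration` maps page numbers to audio clip identifiers."""
    return {
        "subject_id": subject_id,
        "book_id": book_id,
        "narration": dict(narration),
        "play_on_display": True,  # instruct the terminal to auto-play
    }

info = bundle_support_info("U1", "picture-book-42",
                           {1: "clip-001.wav", 2: "clip-002.wav"})
print(info["narration"][2])  # → clip-002.wav
```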
 The tablet terminal 101 displays the received text information and plays back the read-aloud audio (S408). At this time, the tablet terminal 101 captures the face of the target person (grandchild) U1 with the camera 110 and identifies the display position of the text information (the content position) at the time of capture (S409). The content position is, for example, the page of the picture book being displayed.
 The tablet terminal 101 transmits the second captured image taken in step S409 and the identified display position to the support device 300 via the network N (S410). The acquisition unit 342 acquires the second captured image and the display position, and the registration unit 341 registers them in association with the support information (S411). As in the second embodiment described above, the identification unit 344 can then identify the target person's interests while taking into account the content of the displayed page of the picture book, which improves the accuracy of interest identification. The processing unit 347 also transmits the second captured image to the tablet terminal 103 via the network N (S412), and in response the tablet terminal 103 displays the received second captured image (S413). The supporter (grandparent) U3 can thus see how the target person (grandchild) U1 reacted to the picture book and its read-aloud audio, which improves satisfaction. This read-aloud scheme has further benefits: even when parents or grandparents cannot read to the grandchild in person, the content the grandchild is interested in can be identified and the audio recorded in advance, so the grandchild can hear, for example, the grandmother's voice reading the book whenever the grandchild wants. In addition, because the grandparents can also view the images and videos taken during the reading, they are motivated to read again next time.
 Note that in step S411, the processing unit 347 may measure the degree of satisfaction of the target person (grandchild) U1 from the second captured image and, when the degree of satisfaction is equal to or higher than a predetermined value, notify the tablet terminal 103 of the supporter (grandparent) U3 of the second captured image and the display position. The supporter (grandparent) U3 can then see which part of the picture book delighted the target person (grandchild) U1, further improving the supporter's satisfaction. The processing unit 347 may also register the second captured image and the display position in association with each other when the degree of satisfaction is equal to or higher than the predetermined value. The identification unit 344 can then identify the interests of the target person (grandchild) U1 while taking into account the display positions with high satisfaction, which improves the accuracy of interest identification.
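The threshold test on the measured satisfaction described above can be sketched as follows, assuming a per-page satisfaction score in [0, 1] estimated from the second captured image. The threshold value and names are arbitrary illustrations.

```python
SATISFACTION_THRESHOLD = 0.7  # hypothetical "predetermined value"

def pages_to_notify(observations, threshold=SATISFACTION_THRESHOLD):
    # observations: list of (page, satisfaction score) pairs estimated
    # from the child's face while each page was displayed.
    # Return the pages whose score meets the threshold, i.e. the
    # moments worth sharing with the supporter.
    return [page for page, score in observations if score >= threshold]

obs = [(1, 0.3), (2, 0.85), (3, 0.9), (4, 0.5)]
print(pages_to_notify(obs))  # → [2, 3]
```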
 Note that in this embodiment, the tablet terminals 101 and 103 may also carry out video and audio communication so that the supporter (grandparent) U3 reads aloud to the target person (grandchild) U1 in real time.
<Other embodiments>
 In the second embodiment described above, the support device 300 and the authentication device 200 are described as separate information processing devices, but they may be the same device. For example, the support device 300 may register facial feature information in further association with the target person ID 3121 of the target person management information 312. In that case, the control unit 340 need only include the face detection unit 220, the feature point extraction unit 230, the registration unit 240, and the authentication unit 250 shown in FIG. 5.
 The support device 300 may also generate an album image aggregating a plurality of target person images 3126 and provide it to the supporter (grandparent) U3. That is, the acquisition unit acquires a plurality of third captured images of the target person taken in response to the provision of the support information to the target person. The registration unit registers the target person in association with the plurality of third captured images. The processing unit may then aggregate at least some of the plurality of third captured images associated with the target person to generate a composite image and transmit the composite image to the supporter. The third captured images may include, in addition to the target person images 3126 showing the target person (grandchild) U1, the object images 3125 captured by the target person (grandchild) U1.
 Note that the support system 1000 described above may present support information candidates to a third party with no blood relationship to the target person, acting as a supporter. For example, it is also applicable when an elderly person wants to support a child regardless of any blood relationship. If, say, the target person is interested in flowers, an elderly person knowledgeable about flowers may be registered as the associated supporter.
 Although the above embodiments were described as hardware configurations, the present disclosure is not limited to this. The present disclosure can also realize arbitrary processing by causing a CPU to execute a computer program.
 In the above examples, the program can be stored and supplied to a computer using various types of non-transitory computer-readable media. Non-transitory computer-readable media include various types of tangible storage media. Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, DVD (Digital Versatile Disc), and semiconductor memory (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)). The program may also be supplied to a computer via various types of transitory computer-readable media. Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves. A transitory computer-readable medium can supply the program to a computer via a wired communication path such as electric wires and optical fibers, or via a wireless communication path.
 Note that the present disclosure is not limited to the above embodiments and can be modified as appropriate without departing from its spirit. The present disclosure may also be implemented by combining the respective embodiments as appropriate.
Some or all of the above embodiments may also be described in the following additional remarks, but are not limited to the following.
(Appendix A1)
a registration unit that associates and registers a support target person and a supporter;
an acquisition unit that acquires a first captured image captured by the subject;
a specifying unit that specifies the subject's interest based on the first captured image;
a determination unit that determines one or more support information candidates based on the identified interest;
a presentation unit that presents the determined support information candidate to the supporter associated with the target person;
A support device comprising:
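Read as a processing pipeline, the five units of Appendix A1 operate in sequence: register, acquire an image, identify the interest, determine candidates, and present them to the associated supporter. The following is a minimal sketch of that flow; all class and function names are illustrative assumptions, and the most-frequent-label heuristic is a trivial stand-in for the image analysis in the disclosure.

```python
# Rough sketch of the support-device pipeline from Appendix A1:
# register -> acquire image -> identify interest -> decide candidates -> present.

class SupportDevice:
    def __init__(self, candidate_db):
        self.pairs = {}                   # target person -> supporter (registration unit)
        self.candidate_db = candidate_db  # interest -> support information candidates

    def register(self, target, supporter):
        self.pairs[target] = supporter

    def identify_interest(self, captured_image_labels):
        # Stand-in for image analysis: pick the most frequent object label.
        return max(set(captured_image_labels), key=captured_image_labels.count)

    def decide_candidates(self, interest):
        return self.candidate_db.get(interest, [])

    def present(self, target, captured_image_labels):
        interest = self.identify_interest(captured_image_labels)
        supporter = self.pairs[target]
        return supporter, self.decide_candidates(interest)

db = {"flowers": ["picture book on flowers", "visit to a botanical garden"]}
device = SupportDevice(db)
device.register("grandchild U1", "grandparents U3")
supporter, candidates = device.present("grandchild U1", ["flowers", "dog", "flowers"])
print(supporter, candidates)
```

The point of the sketch is the separation of roles: registration ties the target person to a supporter, and the candidates are always routed to that supporter rather than to the target person directly.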
(Appendix A2)
The support device according to Appendix A1, further comprising a processing unit that receives a support instruction including support information selected by the supporter from the presented support information candidates, and performs processing according to the support information.
(Appendix A3)
The support device according to Appendix A2, wherein the processing unit performs processing for providing the support information to the target person.
(Appendix A4)
The support device according to appendix A2 or A3, wherein the processing unit notifies the target person of the support information.
(Appendix A5)
The processing unit notifies the target person's first terminal of the support information including the display content,
The acquisition unit acquires a second captured image of the target person when the display content is displayed on the first terminal,
The support device according to Appendix A4, wherein the registration unit associates and registers the target person, the display content, and the second captured image.
(Appendix A6)
The processing unit measures the satisfaction level of the target person from the second captured image, and notifies the supporter's terminal of the second captured image when the satisfaction level is equal to or higher than a predetermined value. The support device according to Appendix A5.
(Appendix A7)
The acquisition unit further acquires a content position of the display content displayed on the first terminal at the time of capturing together with the second captured image,
The registration unit associates and registers the target person, the content position, and the second captured image,
The support device according to appendix A5 or A6, wherein the specifying unit specifies the interest of the target person based on the content position and the second captured image.
(Appendix A8)
When the display content includes text information, the processing unit requests the supporter to read out the text information,
The acquisition unit acquires a reading voice for the text information from the supporter,
The processing unit notifies the first terminal of the support information including the text information and the read-aloud voice so that the read-aloud voice is played back when the text information is displayed on the first terminal. The support device according to any one of Appendices A5 to A7.
(Appendix A9)
The support device according to Appendix A8, wherein the acquisition unit acquires, as the second captured image, an image captured by the target person during display of the text information and playback of the readout sound on the first terminal.
(Appendix A10)
The acquisition unit acquires a plurality of third captured images in which the target person is captured in response to the provision of the support information to the target person,
The registration unit associates and registers the target person and the plurality of third captured images,
The processing unit aggregates at least part of the plurality of third captured images associated with the target person to generate a composite image, and transmits the composite image to the supporter. The support device according to any one of Appendices A2 to A9.
(Appendix A11)
The specifying unit specifies a photographer of the first captured image,
The support device according to any one of Appendixes A1 to A10, wherein the registration unit registers the specified photographer as the target person in association with the first captured image.
(Appendix A12)
The registration unit pre-registers the first biometric information of the subject,
The acquisition unit acquires second biometric information of the photographer together with the first captured image,
The support device according to Appendix A11, wherein the specifying unit specifies the photographer as the target person when biometric authentication of the second biometric information using the first biometric information succeeds.
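The check in Appendix A12 amounts to comparing a pre-registered feature vector (the first biometric information) with one extracted at capture time (the second biometric information). The sketch below uses cosine similarity with a fixed threshold purely as an assumed stand-in; the disclosure does not specify a particular matching method, and the feature values here are invented for illustration.

```python
# Sketch of the biometric check in Appendix A12: the photographer is identified
# as the registered target person only if the captured feature vector matches
# the pre-registered one closely enough.
import math

def cosine(a, b):
    """Cosine similarity between two 2-D feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def authenticate(registered, captured, threshold=0.9):
    """Return True when biometric authentication succeeds."""
    return cosine(registered, captured) >= threshold

registered_features = (0.6, 0.8)    # first biometric information (pre-registered)
captured_features   = (0.58, 0.81)  # second biometric information (with the image)
print(authenticate(registered_features, captured_features))  # True
```

On success, the registration unit would then associate the first captured image with the identified target person, as stated in Appendix A11.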
(Appendix A13)
further comprising an inquiry unit that inquires of the subject about attribute information of the first captured image;
The acquisition unit acquires the attribute information in response to the inquiry from the subject,
The support device according to any one of Appendixes A1 to A12, wherein the specifying unit specifies the subject's interest based on the first captured image and the attribute information.
(Appendix A14)
The acquisition unit acquires position information of a shooting location together with the first captured image,
The registration unit associates and registers the photographing location with the first photographed image,
The support device according to any one of Appendixes A1 to A13, wherein the specifying unit specifies the subject's interest based on the first captured image and the shooting location.
(Appendix A15)
The specifying unit excludes face-recognized regions from the first captured image, and specifies the target person's interest based on the image after exclusion. The support device according to any one of Appendices A1 to A14.
(Appendix A16)
The support device according to Appendix A15, wherein the specifying unit excludes, from among the face-recognized regions of the first captured image, regions other than the target person.
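Appendices A15 and A16 amount to masking face-recognized regions before the interest analysis runs, so that faces of bystanders do not skew the result. A minimal sketch, assuming the bounding boxes of the recognized faces are already available from a face recognizer and using a small grid of object labels in place of real pixels:

```python
# Sketch of Appendices A15/A16: blank out face-recognized regions before
# analysing the first captured image for the target person's interest.
# Bounding boxes are assumed to come from a separate face recognizer.

def exclude_regions(image, boxes):
    """Return a copy of the 2-D image with (top, left, height, width) boxes blanked."""
    out = [row[:] for row in image]
    for top, left, h, w in boxes:
        for r in range(top, top + h):
            for c in range(left, left + w):
                out[r][c] = None
    return out

image = [["face", "flower"],
         ["face", "flower"]]
masked = exclude_regions(image, [(0, 0, 2, 1)])  # exclude the face column
remaining = [v for row in masked for v in row if v is not None]
print(remaining)  # ['flower', 'flower']
```

Under Appendix A16, boxes matching the target person's own face would be kept out of `boxes`, so only other people's faces are excluded.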
(Appendix B1)
a first terminal of a support target person;
a second terminal of a supporter; and
a support device, wherein
the support device comprises:
a registration unit that registers the target person and the supporter in association with each other;
an acquisition unit that acquires, from the first terminal, a first captured image captured by the target person;
a specifying unit that specifies the target person's interest based on the first captured image;
a determination unit that determines one or more support information candidates based on the specified interest; and
a presentation unit that presents the determined support information candidates to the second terminal of the supporter associated with the target person;
A support system comprising the above.
(Appendix B2)
The support system according to Appendix B1, wherein the support device further comprises a processing unit that receives a support instruction including support information selected by the supporter from the presented support information candidates, and performs processing according to the support information.
(Appendix C1)
A computer:
registers a support target person and a supporter in association with each other;
acquires a first captured image captured by the target person;
specifies the target person's interest based on the first captured image;
determines one or more support information candidates based on the specified interest; and
presents the determined support information candidates to the supporter associated with the target person.
A support method.
(Appendix D1)
a registration process of registering a support target person and a supporter in association with each other;
an acquisition process of acquiring a first captured image captured by the target person;
a specifying process of specifying the target person's interest based on the first captured image;
a determination process of determining one or more support information candidates based on the specified interest; and
a presentation process of presenting the determined support information candidates to the supporter associated with the target person;
A support program that causes a computer to execute the above processes.
 Although the present invention has been described above with reference to the embodiments (and examples), it is not limited to them. Various changes that those skilled in the art can understand may be made to the configuration and details of the present invention within its scope.
 This application claims priority based on Japanese Patent Application No. 2021-045723 filed on March 19, 2021, the entire disclosure of which is incorporated herein.
1 support device
11 registration unit
12 acquisition unit
13 specifying unit
14 determination unit
15 presentation unit
1000 support system
U1 target person (grandchild)
U2 target person's parent
U3 supporter (grandparents)
N network
101 tablet terminal
102 smartphone
103 tablet terminal
110 camera
120 microphone
130 speaker
140 touch panel
150 storage unit
151 program
160 communication unit
170 memory
180 control unit
181 registration unit
182 content processing unit
410 camera
420 microphone
430 speaker
440 touch panel
450 storage unit
451 program
460 communication unit
470 memory
480 control unit
481 SNS application
200 authentication device
210 face information DB
211 user ID
212 facial feature information
220 face detection unit
230 feature point extraction unit
240 registration unit
250 authentication unit
300 support device
310 storage unit
311 program
312 target person management information
3121 target person ID
3122 supporter ID
3123 terminal ID
3124 target person information
3125 object image
3126 target person image
313 supporter information
3131 supporter ID
3132 personal information
3133 terminal ID
314 support information DB
3141 support information candidate
314n support information candidate
320 memory
330 communication unit
340 control unit
341 registration unit
342 acquisition unit
343 authentication control unit
344 specifying unit
345 determination unit
346 presentation unit
347 processing unit
51 presentation screen
5111 selection field
5112 selection field
5113 selection field
5121 support information candidate display field
5122 support information candidate display field
5123 support information candidate display field
5131 other-candidates display button
5132 purchase button
5133 cancel button
51a presentation screen
51b thank-you information display screen
300a support device
311a program
348 inquiry unit
52 selection screen
521 inquiry message
522 captured image group
53 selection screen
531 inquiry message
532 options
54 input screen
541 inquiry message
542 voice recognition result display field
543 character input field

Claims (20)

  1.  A support device comprising:
     registration means for registering a support target person and a supporter in association with each other;
     acquisition means for acquiring a first captured image captured by the target person;
     specifying means for specifying the target person's interest based on the first captured image;
     determination means for determining one or more support information candidates based on the specified interest; and
     presentation means for presenting the determined support information candidates to the supporter associated with the target person.
  2.  The support device according to claim 1, further comprising processing means for receiving a support instruction including support information selected by the supporter from the presented support information candidates, and performing processing according to the support information.
  3.  The support device according to claim 2, wherein the processing means performs processing for providing the support information to the target person.
  4.  The support device according to claim 2 or 3, wherein the processing means notifies the target person of the support information.
  5.  The support device according to claim 4, wherein
     the processing means notifies a first terminal of the target person of the support information including display content,
     the acquisition means acquires a second captured image in which the target person was captured while the display content was displayed on the first terminal, and
     the registration means registers the target person in association with the display content and the second captured image.
  6.  The support device according to claim 5, wherein the processing means measures the target person's satisfaction level from the second captured image and, when the satisfaction level is equal to or higher than a predetermined value, notifies the supporter's terminal of the second captured image.
  7.  The support device according to claim 5 or 6, wherein
     the acquisition means further acquires, together with the second captured image, a content position of the display content displayed on the first terminal at the time of capture,
     the registration means registers the target person in association with the content position and the second captured image, and
     the specifying means specifies the target person's interest based on the content position and the second captured image.
  8.  The support device according to any one of claims 5 to 7, wherein,
     when the display content includes text information, the processing means requests the supporter to read the text information aloud,
     the acquisition means acquires read-aloud voice for the text information from the supporter, and
     the processing means notifies the first terminal of the support information including the text information and the read-aloud voice so that the read-aloud voice is played back when the text information is displayed on the first terminal.
  9.  The support device according to claim 8, wherein the acquisition means acquires, as the second captured image, an image in which the target person was captured during display of the text information and playback of the read-aloud voice on the first terminal.
  10.  The support device according to any one of claims 2 to 9, wherein
     the acquisition means acquires a plurality of third captured images in which the target person was captured in response to the provision of the support information to the target person,
     the registration means registers the target person in association with the plurality of third captured images, and
     the processing means aggregates at least part of the plurality of third captured images associated with the target person to generate a composite image, and transmits the composite image to the supporter.
  11.  The support device according to any one of claims 1 to 10, wherein
     the specifying means specifies a photographer of the first captured image, and
     the registration means registers the specified photographer as the target person in association with the first captured image.
  12.  The support device according to claim 11, wherein
     the registration means pre-registers first biometric information of the target person,
     the acquisition means acquires second biometric information of the photographer together with the first captured image, and
     the specifying means specifies the photographer as the target person when biometric authentication of the second biometric information using the first biometric information succeeds.
  13.  The support device according to any one of claims 1 to 12, further comprising inquiry means for inquiring of the target person about attribute information of the first captured image, wherein
     the acquisition means acquires, from the target person, the attribute information in response to the inquiry, and
     the specifying means specifies the target person's interest based on the first captured image and the attribute information.
  14.  The support device according to any one of claims 1 to 13, wherein
     the acquisition means acquires position information of a shooting location together with the first captured image,
     the registration means registers the shooting location in association with the first captured image, and
     the specifying means specifies the target person's interest based on the first captured image and the shooting location.
  15.  The support device according to any one of claims 1 to 14, wherein the specifying means excludes face-recognized regions from the first captured image and specifies the target person's interest based on the image after exclusion.
  16.  The support device according to claim 15, wherein the specifying means excludes, from among the face-recognized regions of the first captured image, regions other than the target person.
  17.  A support system comprising: a first terminal of a support target person; a second terminal of a supporter; and a support device, wherein the support device comprises:
     registration means for registering the target person and the supporter in association with each other;
     acquisition means for acquiring, from the first terminal, a first captured image captured by the target person;
     specifying means for specifying the target person's interest based on the first captured image;
     determination means for determining one or more support information candidates based on the specified interest; and
     presentation means for presenting the determined support information candidates to the second terminal of the supporter associated with the target person.
  18.  The support system according to claim 17, wherein the support device further comprises processing means for receiving a support instruction including support information selected by the supporter from the presented support information candidates, and performing processing according to the support information.
  19.  A support method in which a computer:
     registers a support target person and a supporter in association with each other;
     acquires a first captured image captured by the target person;
     specifies the target person's interest based on the first captured image;
     determines one or more support information candidates based on the specified interest; and
     presents the determined support information candidates to the supporter associated with the target person.
  20.  A non-transitory computer-readable medium storing a support program that causes a computer to execute:
     a registration process of registering a support target person and a supporter in association with each other;
     an acquisition process of acquiring a first captured image captured by the target person;
     a specifying process of specifying the target person's interest based on the first captured image;
     a determination process of determining one or more support information candidates based on the specified interest; and
     a presentation process of presenting the determined support information candidates to the supporter associated with the target person.
PCT/JP2022/000279 2021-03-19 2022-01-06 Support device, system, and method, and computer-readable medium WO2022196042A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023506775A JPWO2022196042A5 (en) 2022-01-06 Support devices, support methods, and support programs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021045723 2021-03-19
JP2021-045723 2021-03-19

Publications (1)

Publication Number Publication Date
WO2022196042A1 true WO2022196042A1 (en) 2022-09-22

Family

ID=83320151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/000279 WO2022196042A1 (en) 2021-03-19 2022-01-06 Support device, system, and method, and computer-readable medium

Country Status (1)

Country Link
WO (1) WO2022196042A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014146242A (en) * 2013-01-30 2014-08-14 Toshiba Tec Corp Information distribution device, information display system and program
WO2016021157A1 (en) * 2014-08-04 2016-02-11 パナソニックIpマネジメント株式会社 Information provision device, information provision method, and information provision system


Also Published As

Publication number Publication date
JPWO2022196042A1 (en) 2022-09-22

Similar Documents

Publication Publication Date Title
US20210005224A1 (en) System and Method for Determining a State of a User
JP7440020B2 (en) Information processing method, terminal device, information processing device, and information processing system
CN107257338B (en) media data processing method, device and storage medium
US20190052724A1 (en) Systems and methods for establishing a safe online communication network and for alerting users of the status of their mental health
US20210022603A1 (en) Techniques for providing computer assisted eye examinations
TW201117114A (en) System, apparatus and method for message simulation
JP6649005B2 (en) Robot imaging system and image management method
US20210313041A1 (en) Reminiscence therapy and media sharing platform
JP7034808B2 (en) Information processing method, information processing device and information processing system
US20220386559A1 (en) Reminiscence therapy and media sharing platform
CN110507996A (en) Keep the user experience in gaming network personalized
WO2022196042A1 (en) Support device, system, and method, and computer-readable medium
US10880602B2 (en) Method of objectively utilizing user facial expressions when viewing media presentations for evaluating a marketing campaign
JP7437684B2 (en) Lifelog provision system and lifelog provision method
KR20160073903A (en) A system and the method of improved personal remembrane and regeneration using personal memory and attribute information
US20200349948A1 (en) Information processing device, information processing method, and program
KR102439704B1 (en) Platform system for a self-improvementing and thereof operating method
JP6758351B2 (en) Image management system and image management method
KR20130082902A (en) Method and system for providing services of user observation recording in education robot device and robot device for use therein
JP7307576B2 (en) Program and information processing device
JP7521886B2 (en) PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
JP2022179841A (en) Donation apparatus, donation method, and donation program
JP6637917B2 (en) Education support system and education support method
JP2020201669A (en) Information processor
JP7392306B2 (en) Information processing system, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770804

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023506775

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22770804

Country of ref document: EP

Kind code of ref document: A1