WO2022124419A1 - Information processing apparatus, information processing method, and information processing system - Google Patents

Information processing apparatus, information processing method, and information processing system Download PDF

Info

Publication number
WO2022124419A1
Authority
WO
WIPO (PCT)
Prior art keywords
chunk
image
information
scene
model
Prior art date
Application number
PCT/JP2021/045713
Other languages
French (fr)
Japanese (ja)
Inventor
Satoshi Kuroda (聡 黒田)
Original Assignee
Information System Engineering Inc. (株式会社 情報システムエンジニアリング)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information System Engineering Inc. (株式会社 情報システムエンジニアリング)
Priority to JP2022568365A (JPWO2022124419A1/ja)
Publication of WO2022124419A1 (WO2022124419A1/en)

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Definitions

  • the present invention relates to an information processing apparatus, an information processing method and an information processing system.
  • in the related art, a rule describing the judgment conditions for the work target or the work situation is generated based on a manual describing the work procedure, contents, points to be noted, or other matters; the work target and work status are then recognized based on sensor information from a device worn by the worker, and work support information is output based on the generated rule and the recognition result of the recognition means.
  • in Patent Document 1, information stored as a document such as a manual can be searched only on a per-document basis. For example, to search a document paragraph by paragraph, the document must first be reconstructed into structured information. Reconstructing every document to be searched is often not realistic in terms of cost-effectiveness, and document-level results often force the viewer to browse unnecessary information, so there is a problem that the viewer of the document cannot always find what is needed quickly.
  • one object of an aspect of the embodiments of the present invention is to provide an information processing apparatus, an information processing method, and an information processing system that present the required amount of information to the responder and collaborators when the responder needs it, without reconstructing the information on a large scale.
  • one aspect provides an information processing apparatus that outputs work information, which is information about work performed by a responder. The apparatus includes an image acquisition unit that acquires an original image, which is an image including a target person (including at least one of the responder and the respondents whom the responder deals with) and a plurality of corresponding objects related to the responder, and an image division unit that divides the original image into a target person image capturing the target person and a plurality of corresponding object images each capturing one corresponding object.
  • the apparatus further includes a scene estimation unit that estimates a scene using a first trained model in which the association between the target person image and a scene ID uniquely indicating the scene in which the responder acts is stored.
  • the apparatus further includes a chunk estimation unit that estimates chunks, which are pieces of information that divide or suggest the work information, using one of a plurality of second trained models in which the association between the plurality of corresponding object images, a chunk ID uniquely indicating a chunk, and one or more chunk meta IDs associated one-to-one with the chunk ID is stored, and a chunk output unit that outputs the chunks.
  • the apparatus further includes a recommendation image output unit that searches, using as a search key a combination of a model ID associated one-to-one with the scene ID and one or more chunk meta IDs, for a recommended image, which is an image of a corresponding object that is not captured in the original image but is presumed to be necessary, and outputs the recommended image.
  • the apparatus further includes a display unit that allocates each of the chunks output by the chunk output unit and each of the recommended images output by the recommendation image output unit to each surface of an object model having a plurality of display areas and displays them. The chunk estimation unit selects one of the plurality of second trained models using the model ID associated one-to-one with the scene ID, and the chunk meta ID uniquely indicates a chunk meta value, which is information on a property of the corresponding object.
  • another aspect provides an information processing method including corresponding steps, among them a step of searching, using as a search key a combination of a model ID associated one-to-one with the scene ID and one or more chunk meta IDs, one of the plurality of second trained models for a recommended image, and a seventh step of assigning each of the output chunks and each of the output recommended images to each side of an object model having a plurality of display areas and displaying them.
  • in this method, one of the plurality of second trained models is selected using the model ID associated one-to-one with the scene ID, and the chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the corresponding object.
  • a further aspect provides an information processing system that outputs work information, which is information about work performed by a responder. The system includes an image acquisition means for acquiring an original image that includes a target person (including at least one of the responder and the respondents whom the responder deals with), target person identification information for identifying the target person, and a plurality of corresponding objects related to the responder; an image dividing means for dividing the original image into a target person image and a plurality of corresponding object images each capturing one corresponding object; a scene estimation means for estimating the scene using the target person image and a scene ID uniquely indicating the scene in which the responder acts; a chunk estimation means for estimating chunks, which are pieces of information that divide or suggest the work information, using one of a plurality of second trained models in which the association between the plurality of corresponding object images, a chunk ID uniquely indicating a chunk, and one or more chunk meta IDs associated one-to-one is stored; a chunk output means for outputting the chunks; a recommendation image output means that searches, using as a search key a combination of a model ID associated one-to-one with the scene ID and one or more chunk meta IDs, for a recommended object image and for shared object information to be shared, and outputs a recommendation image of a corresponding object that is not captured in the original image but is presumed to be necessary; and a display means that assigns the chunk output by the chunk output means and the recommendation image output by the recommendation image output means to the display areas of an object model having a plurality of display areas.
  • in this system, one of the plurality of second trained models used by the chunk estimation means is selected using the model ID, and the chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the corresponding object. A minimal sketch of this use-stage flow follows.
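  • to make the claimed flow easier to follow, here is a minimal Python sketch of the use-stage pipeline under simplifying assumptions: the trained models are replaced by lookup tables, and all identifiers and function names (run_use_stage, person_img_030, and so on) are illustrative only, not part of the disclosure.

      # Stub "trained models" are plain dictionaries here; in the disclosure they are
      # convolutional neural networks (the first and second trained model DBs).
      first_trained_model = {"person_img_030": "0FD"}                 # target person image -> scene ID
      model_table_tb2 = {"0FD": "MD1"}                                # scene ID <-> model ID (one-to-one)
      second_trained_models = {                                       # model ID -> (object image -> chunk meta IDs)
          "MD1": {"object_img_040": ["24FD"], "object_img_041": ["83D9"]},
      }
      chunk_meta_table_tb6 = {"24FD": "82700-01", "83D9": "82700-01"} # chunk meta ID -> chunk ID
      chunk_table_tb7 = {"82700-01": "1B827-01.txt_0"}                # chunk ID -> chunk (pointer to part of the content)
      recommendation_table_tb9 = {("MD1", "24FD", "83D9"): "IMG001"}  # (model ID, chunk meta IDs) -> recommended image

      def run_use_stage(target_person_image, object_images):
          scene_id = first_trained_model[target_person_image]         # scene estimation
          model_id = model_table_tb2[scene_id]                        # select one second trained model
          second_model = second_trained_models[model_id]
          chunk_meta_ids = sorted({m for img in object_images for m in second_model.get(img, [])})  # chunk estimation
          chunk_ids = sorted({chunk_meta_table_tb6[m] for m in chunk_meta_ids})
          chunks = [chunk_table_tb7[c] for c in chunk_ids]            # chunk output
          recommended = recommendation_table_tb9.get(tuple([model_id] + chunk_meta_ids))  # recommendation image
          return {"scene_id": scene_id, "chunks": chunks, "recommended_image": recommended}

      print(run_use_stage("person_img_030", ["object_img_040", "object_img_041"]))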
  • according to these aspects, an information processing apparatus, an information processing method, and an information processing system that present the required amount of information to the responder when the responder needs it, without reconstructing the information on a large scale, can be realized.
  • FIG. 1 is a block diagram showing a configuration of an information processing apparatus at a utilization stage according to the present embodiment.
  • FIG. 2 is a block diagram showing a configuration of an information processing apparatus in the learning stage according to the present embodiment.
  • FIG. 3 is a diagram showing an original image, a subject image, and a plurality of objects to be imaged according to the present embodiment.
  • FIG. 4 is a diagram showing a tree structure which is a relationship between a subject image and a plurality of objects to be imaged according to the present embodiment.
  • FIG. 5 is a diagram showing a first trained model and a second trained model according to the present embodiment.
  • FIG. 6 is a diagram showing information stored in the auxiliary storage device according to the present embodiment.
  • FIG. 7 is a sequence diagram for explaining the scene estimation function, chunk estimation function, and chunk output function according to the present embodiment.
  • FIG. 8 is a sequence diagram provided for explaining the first trained model generation function and the second trained model generation function according to the present embodiment.
  • FIG. 9 is a flowchart showing a processing procedure of information processing in the usage stage according to the present embodiment.
  • FIG. 10 is a flowchart showing a processing procedure of information processing in the display stage according to the present embodiment.
  • FIG. 11 is a flowchart showing a processing procedure of information processing in the learning stage according to the present embodiment.
  • FIG. 12A is a schematic diagram showing an example of the operation of the information processing system according to the second embodiment.
  • FIG. 12B is a diagram showing an original image, a subject image, and a plurality of corresponding object images according to the second embodiment.
  • FIG. 13 is a diagram showing information stored in the auxiliary storage device according to the present embodiment.
  • FIG. 14 is a diagram showing information stored in the auxiliary storage device according to the present embodiment.
  • FIG. 15 is a diagram showing information stored in the auxiliary storage device according to the present embodiment.
  • FIG. 16 is a diagram showing information stored in the auxiliary storage device according to the present embodiment.
  • FIGS. 17(a) to 17(h) are diagrams showing display patterns for a subject according to the second embodiment.
  • FIGS. 18(a) and 18(b) are schematic views showing an example of a display of a user terminal according to the second embodiment.
  • in the present embodiment, a setting such as a university counter or a pharmacy counter is assumed, involving people with different positions or roles: a target person such as a student, a student's guardian, or a patient, and a responder, that is, a worker who deals with the target person.
  • the information on the corresponding objects referred to by the responder will be explained.
  • the corresponding object is, for example, a document in the case of a university counter and a drug in the case of a pharmacy counter.
  • the workers are mainly those who perform the tasks.
  • the target people may include other workers involved in the work, customers who receive the work, and so on, that is, people who have different positions and roles, as well as multiple people who share the work at the site.
  • the object is, for example, a device, a product, or a part installed at a work site or a place.
  • the information processing system 100 in this embodiment has an information processing device 1 as shown in FIG. 1, for example.
  • the information processing device 1 includes a central processing unit 2, a main storage device 3, and an auxiliary storage device 11.
  • the central processing unit 2 is, for example, a CPU (Central Processing Unit), and executes processing by calling a program stored in the main storage device 3.
  • the main storage device 3 is, for example, a RAM (Random Access Memory), and stores programs such as an image acquisition unit 4, an image division unit 5, a scene estimation unit 6, a chunk estimation unit 7, a chunk output unit 8, a first trained model generation unit 9, a second trained model generation unit 10, a recommendation image output unit 13, and an object model identification unit 15, which will be described later.
  • the program group including the image acquisition unit 4, the image division unit 5, the scene estimation unit 6, the chunk estimation unit 7, the chunk output unit 8, and the recommendation image output unit 13 may be called a control unit 15, and the program group including the first trained model generation unit 9 and the second trained model generation unit 10 may be referred to as a trained model generation unit 16.
  • the auxiliary storage device 11 is, for example, an SSD (Solid State Drive) or an HDD (Hard Disk Drive), and stores databases such as the first trained model DB1, the first learning model DB1', the second trained model DB2, and the second learning model DB2', which will be described later, as well as tables such as the scene table TB1, the model table TB2, the content table TB3, the scene content table TB4, the content chunk table TB5, the chunk meta table TB6, the chunk table TB7, the chunk meta table TB8, the recommendation table TB9, the object model table TB10, the object allocation table TB11, the annotation table TB12, the attention table TB13, the camera table TB14, and the role table TB15.
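  • as a reading aid only, the following sketch shows one plausible in-memory layout of the tables TB1 to TB8; the column groupings are paraphrased from this description and the example values (0FD, MD1, 1B827-01, 24FD, and so on) reuse those given later, so the concrete schema of an actual implementation may differ.

      scene_table_tb1 = {"0FD": "grade inquiry"}            # scene ID <-> scene name (one-to-one)
      model_table_tb2 = {"0FD": "MD1"}                       # scene ID <-> model ID (one-to-one)
      content_table_tb3 = {"1B827-01": "1B827-01.txt"}       # content ID -> pointer to the content body
      scene_content_table_tb4 = {"0FD": ["1B827-01"]}        # scene ID -> content IDs (one-to-many)
      content_chunk_table_tb5 = {"1B827-01": ["82700-01"]}   # content ID -> chunk IDs (one-to-one or one-to-many)
      chunk_meta_table_tb6 = {"82700-01": ["24FD"]}          # chunk ID -> chunk meta IDs (one-to-one or one-to-many)
      chunk_table_tb7 = {                                    # chunk ID -> (chunk, chunk summary, hash value)
          "82700-01": ("1B827-01.txt_0", "Hello Work, ...", "564544d8f0b746e"),
      }
      chunk_meta_table_tb8 = {                               # chunk meta ID -> (category ID, category name, meta value)
          "24FD": ("394", "size of the paper", "A4"),
      }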
  • the information processing apparatus 1, which outputs work information, that is, information about the work performed by the responder, includes at the usage stage an image acquisition unit 4, an image division unit 5, a scene estimation unit 6, a chunk estimation unit 7, and a chunk output unit 8 that outputs chunks, which are pieces of information that divide or suggest the work information.
  • the work information may be referred to as content, and the content ID uniquely indicates the work information.
  • the image acquisition unit 4 acquires, from a user terminal 12 such as a personal computer equipped with a camera, an original image 20 (FIG. 3), which is an image including the corresponding person 21 (FIG. 3) whom the responder deals with and a plurality of corresponding objects 22 to 25 (FIG. 3) related to the responder.
  • the image segmentation unit 5 divides the original image 20 into a target person image capturing the corresponding person 21, target person identification information 61 identifying the target person (the responder, the corresponding person, or the like), and a plurality of corresponding object images 40 to 43 capturing the corresponding objects 22 to 25, respectively.
  • the scene estimation unit 6 estimates the scene, that is, the situation in which the responder is acting. Specifically, the scene estimation unit 6 acquires, as the target person image, for example the corresponding person image 30 (35), the responder image, the target person identification information 61, and the like. The scene estimation unit 6 then estimates the scene using, for example, the first trained model DB1, in which the association between the corresponding person image 30 (35) and the scene ID uniquely indicating the scene is stored. In addition to the corresponding person image 30 (35), the scene estimation unit 6 may estimate the scene using a first trained model DB1 in which the association between, for example, an image of the responder (target person image), identification information identifying the responder and the corresponding person (target person identification information), and the scene ID uniquely indicating the scene is stored.
  • the scene estimation unit 6 may further estimate the scene using a first trained model in which the association between the corresponding person identification information and the scene ID uniquely indicating the scene, that is, the situation performed by the responder, is stored.
  • similarly, the scene estimation unit 6 may further estimate the scene using a first trained model in which the association between the target person identification information and the scene ID uniquely indicating the situation performed by the target person is stored.
  • the scene estimation unit 6 acquires a scene name, using the scene ID as a search key, from the scene table TB1, which is a table in which the scene ID and the scene name (the name of the scene) are linked one-to-one, and transmits the scene name to the user terminal 12.
  • the user terminal 12 presents the scene name received from the scene estimation unit 6 to the target person.
  • the scene name is displayed, for example, on one surface of the object model assigned in advance for the target person by the display unit 14 described later.
  • the chunk estimation unit 7 acquires the corresponding object images 40 to 43, which are images of the corresponding objects 22 (23 to 25) related to the work, and estimates chunks using the corresponding object images 40 (41 to 43) and one of the plurality of second trained model DB2s, in which the association between the chunk ID uniquely indicating a chunk and one or more chunk meta IDs associated one-to-one is stored.
  • the chunk estimation unit 7 selects one of the plurality of second trained models in which the association among the plurality of corresponding object images, the chunk ID, and the plurality of chunk meta IDs is stored; for example, when the target person information is the corresponding person identification information, a chunk associated with the corresponding person identification information may be further estimated.
  • the chunk estimation unit 7 selects one of the plurality of second trained model DB2s by using the model ID associated with the scene ID on a one-to-one basis. Further, the chunk meta ID uniquely indicates a chunk meta value which is information regarding the properties of the corresponding objects 22 to 25.
  • the chunk estimation unit 7 acquires the model ID from the model table TB2, which is a table in which the model ID and the scene ID are linked one-to-one, using the scene ID as a search key. Further, the chunk estimation unit 7 acquires the chunk ID from the chunk meta table TB6, which is a table in which the chunk ID and the chunk meta ID are linked one-to-one or one-to-many, using the chunk meta ID as a search key.
  • the chunk estimation unit 7 acquires a chunk summary showing the outline of the chunk from the chunk table TB7 using the chunk ID as a search key, and transmits the chunk summary to the user terminal 12.
  • the user terminal 12 presents the chunk summary received from the chunk estimation unit 7 to the target person.
  • the presentation of the chunk summary is displayed, for example, on one side of the object model previously assigned by the target person by the display unit 14 described later.
  • the chunk estimation unit 7 acquires chunks from the chunk table TB7 using the chunk ID as a search key, and transmits the chunks to the user terminal 12.
  • the user terminal 12 presents the chunk received from the chunk estimation unit 7 to the target person.
  • the chunk presentation is displayed, for example, on one side of the object model previously assigned by the target person by the display unit 14 described later.
  • the chunk table TB7 is a table in which chunks, chunk summaries, and hash values are associated with a chunk ID on a one-to-one basis.
  • the hash value is used, for example, to confirm whether or not the chunk has been changed.
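  • a minimal sketch of how such a hash value could be used to detect whether a chunk was changed follows. The hashing algorithm is not specified in this description; the SHA-256 digest truncated to 15 hexadecimal digits below is an assumption chosen only to match the format of the example value given later (564544d8f0b746e).

      import hashlib

      def chunk_hash(chunk_text):
          # Assumption: SHA-256 truncated to 15 hex digits; the disclosure does not name an algorithm.
          return hashlib.sha256(chunk_text.encode("utf-8")).hexdigest()[:15]

      chunk_text = "Please bring the A4 application form ..."   # placeholder chunk body
      stored = {"82700-01": (chunk_text, chunk_hash(chunk_text))}

      # Later, to confirm the chunk has not been changed:
      text, expected = stored["82700-01"]
      assert chunk_hash(text) == expected, "chunk 82700-01 was modified"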
  • the image acquisition unit 4 acquires the target person identification information for identifying the target person.
  • the target person identification information is, for example, a face image that identifies the responder or the corresponding person, or a barcode or two-dimensional code on an ID card such as a photo ID, and may be captured by a camera or the like.
  • the information processing apparatus 1 identifies the captured target person identification information and confirms that the responder is a legitimate target person (correspondent) for the work.
  • the target person identification information may, for example, identify a plurality of target persons, and may further identify a remote co-owner who shares information.
  • FIG. 13 shows the object model table TB10.
  • the object model table TB10 stores, in association with each other, for example: an object model ID identifying the object model; operation information indicating the type of operation that can be performed on the object model; the basic size at which the object model is displayed; the number of additional object models 6 that can be displayed; display coordinates indicating the position where the object model is displayed; the estimated scene ID and chunk ID; an area ID identifying the affiliation, department, location, or the like where the work is performed; and a role ID identifying the skills, attributes, roles, qualifications, or the like of the target person (the responder, the corresponding person, or the like).
  • FIG. 14 shows the object allocation table TB11.
  • the object allocation table TB11 stores, in association with each other, for example: the object model ID; display area information regarding the number of display areas of the object model; the display area ID to which the recommended image to be displayed and the reference information are linked; the scene ID; the chunk ID; and the like.
  • FIG. 15 shows the annotation table TB12 and the attention table TB13.
  • the annotation table TB12 stores, in association with each other, for example: a video ID identifying a video related to the work; a camera ID identifying the camera that shot the video; a scene ID identifying the work scene; the shooting time indicating when the video was shot; the shooting duration, image quality information, and viewpoint coordinates of the shot video; the meta ID; the attention ID; and the like.
  • the attention table TB13 stores, in association with each other, for example: an attention ID identifying the priority of the chunks to be displayed; attention type information indicating how the attention information is displayed; the scene ID; the attention information relating to the content; the attention information data (content); and a higher-ranking attention ID indicating the presence or absence of higher-level reference information.
  • FIG. 16 shows the camera table TB14 and the roll table TB15.
  • the camera table TB14 stores, in association with each other, for example: a camera ID identifying the camera used for shooting; the area ID where the camera is installed; model information indicating the camera specifications, operations, roles, and the like; line-of-sight information; switching information; external connection information; the operator ID of a person who can operate the camera; the role ID; and the like.
  • the role table TB15 stores, in association with each other, a role ID identifying the target person (the responder, the corresponding person, another responder, a co-owner, or the like), an employee ID, a name, a qualification ID, a department ID, an area ID, a related role ID, and the like.
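  • the following is a hedged sketch of how the second-embodiment tables TB10 to TB15 might be laid out; field names are paraphrased from the prose above and the values are invented placeholders, not part of the disclosure.

      object_model_table_tb10 = {
          "OM-001": {"operation_info": ["rotate", "enlarge"],    # type of operation for the object model
                     "basic_size": (10, 10, 10), "max_additional_models": 3,
                     "display_coordinates": (120, 40, 0),
                     "scene_id": "0FD", "chunk_id": "82700-01",
                     "area_id": "AREA-01", "role_id": "ROLE-07"},
      }
      object_allocation_table_tb11 = {
          "OM-001": {"display_area_count": 6,
                     "display_area_assignments": {"FACE-1": "IMG001", "FACE-2": "82700-01"},
                     "scene_id": "0FD", "chunk_id": "82700-01"},
      }
      annotation_table_tb12 = {
          "VID-001": {"camera_id": "CAM-01", "scene_id": "0FD", "shooting_time": "2021-12-01T10:00",
                      "image_quality": "1080p", "viewpoint": (0.0, 1.6, 0.0),
                      "meta_id": "24FD", "attention_id": "ATT-01"},
      }
      attention_table_tb13 = {
          "ATT-01": {"attention_type": "warning", "scene_id": "0FD",
                     "attention_data": "Check the expiry date first.", "higher_attention_id": None},
      }
      camera_table_tb14 = {
          "CAM-01": {"area_id": "AREA-01", "model_info": "fixed, wide-angle",
                     "line_of_sight": "counter", "operator_id": "EMP-100", "role_id": "ROLE-07"},
      }
      role_table_tb15 = {
          "ROLE-07": {"employee_id": "EMP-100", "name": "responder A", "qualification_id": "Q-3",
                      "department_id": "D-2", "area_id": "AREA-01", "related_role_ids": ["ROLE-08"]},
      }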
  • the information processing device 1 may further include a recommendation image output unit 13, and the auxiliary storage device 11 may further include a recommendation table TB9.
  • the recommendation image output unit 13 searches for a recommended object image using the recommendation table TB9 using a combination of the model ID and one or a plurality of chunk meta IDs as a search key.
  • the recommendation image output unit 13 outputs the searched recommended object image to the user terminal 12.
  • the recommended object image refers to an image of a corresponding object that is not captured in the original image 20 but is presumed to be necessary.
  • the recommendation table TB9 is a table in which the combination of the model ID and the chunk meta ID is linked one-to-one with the recommended object image.
  • the recommendation image output unit 13 may further search one of the plurality of second trained models, using as a search key the combination of the model ID associated one-to-one with the scene ID and one or more chunk meta IDs, for the shared person who shares information in the work and for the shared information shared with that person, and may output the recommendation information associated with the shared person and the shared information.
  • the recommendation image output unit may, for example, search for, as a shared person, a person identified in at least one position such as a collaborator who works with the target person (responder), a trainer who instructs the target person (responder), or an inspector who monitors the target person (responder), and may output the recommendation information associated with the shared person and the shared information.
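  • as an illustration of the recommendation search just described, here is a minimal sketch in which the combination of a model ID and one or more chunk meta IDs is the search key and the hit carries the recommended object image, shared object information, and co-owner conditions; the table contents reuse the example values (MD1, 24FD, 83D9, IMG001, IMG111) and the function name recommend is hypothetical.

      recommendation_table_tb9 = {
          ("MD1", "24FD", "83D9"): {
              "recommended_image": "IMG001",          # object not captured in the original image but presumed necessary
              "shared_object_info": "IMG111",         # e.g. image, video, text, or link shared through the object model
              "co_owner": {"position": "inspector"},  # attributes/conditions of the party the information is shared with
          },
      }

      def recommend(model_id, chunk_meta_ids):
          key = tuple([model_id] + sorted(chunk_meta_ids))
          return recommendation_table_tb9.get(key)    # None when no recommendation is registered

      print(recommend("MD1", ["83D9", "24FD"]))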
  • the display unit 14 allocates each of the chunks output by the chunk output unit 8 and each of the recommended images output by the recommendation image output unit 13 to each surface of the object model having a plurality of display areas, and displays the assigned object model via the user terminal 12.
  • the display unit 14 further includes an object model identification unit 15.
  • the object model specifying unit 15 specifies the object model on which the recommended image and the recommendation information output by the recommendation image output unit 13 are displayed, by associating the scene and the chunk with the object model ID that uniquely indicates the object model.
  • the display unit 14 assigns the recommendation image and the recommendation information output by the recommendation image output unit 13 to one of the plurality of display areas of the object model specified by the object model identification unit 15, in a state in which they can be shared with the shared person.
  • the object model specifying unit 15 specifies the object model by associating, with the object model ID uniquely indicating the object model, an object model that displays reference information including at least the scene or the chunk, based on at least one of the scene estimated by the scene estimation unit 6 and the chunk estimated by the chunk estimation unit 7.
  • the object model specifying unit 15 refers to the object model table TB10, which has been acquired by, for example, the image acquisition unit 4 and stored in the auxiliary storage device 11, and identifies an object model suitable for displaying reference information narrowed down to the responder at the moment the responder needs it.
  • the object model specifying unit 15 refers to the object model table TB10 shown in FIG. 13 and, based on the estimated scene ID, chunk ID, and the like, as well as various information such as the work area and the position of the responder, identifies an object model that can be displayed under those conditions.
  • using, for example, the basic size, display coordinates, area ID, and the like of the object model identified by the object model ID, the object model specifying unit 15 identifies an object model that can present to the responder information narrowed down to the target person and the responder at the moment it is needed, without displaying unnecessary information.
  • the object model specifying unit 15 may specify the object model based on at least one of, for example, a qualification ID specifying the skill information of the responder, display coordinates specifying spatial information of the place where the work is performed, a basic size specifying feature information of the object, or work level information of the work.
  • the object model specifying unit 15 may also refer to the object model table based on the estimated scene ID, chunk ID, and the like and, from various data such as the shape, the basic size, and the number of additional models associated with the object model ID, specify an object model having a shape with a display area 8 of at least two surfaces, on which a plurality of the same or different object models 6 can be displayed.
  • the object model specifying unit 15 may refer to the object model table TB10 based on the estimated scene ID, chunk ID, and the like and, according to the various operation information for the object model 6 and the working state, specify an object model that can be displayed with at least one of rotation display, enlarged display, reduced display, protruding display, vibration display, state display, discoloration display, and shading display.
  • the object model specifying unit 15 may also specify the object model and determine its display position based on various information such as the positional relationship between the object and the responder, the dominant hand, the language used, and the like.
  • the object model specifying unit 15 may identify an object model having a display area that can accommodate the display based on, for example, the type of chunk estimated by the chunk estimation unit 7, the number of characters in the chunk, and the like, and may assign the chunks to the respective display areas of the object model.
  • the object model specifying unit 15 may preferentially display the customized object model.
  • the object model may be rotated or moved by grasping any of its left, right, upper, or lower sides, for example via the user terminal 12 worn by the responder.
  • the object model specifying unit 15 may project the specified object model toward the responder and display it.
  • reference information including at least the scene or the chunk, based on at least one of the scene estimated by the scene estimation unit 6 and the chunk estimated by the chunk estimation unit 7, is assigned to the display areas of the object model specified by the object model specifying unit 15.
  • the object model specifying unit 15 refers to the object allocation table TB11 shown in FIG. 14 and assigns the display areas of the specified object model based on, for example, the estimated scene ID, the displayable information indicated by the chunk ID, the work area, the position of the responder, the amount of chunk data, and other such information.
  • the object model specifying unit 15 may refer to the attention table TB13 shown in FIG. 15 in addition to the object allocation table and allocate the chunks associated with the scene ID.
  • the attention ID of the attention table TB13 is referred to, and the presence or absence of attention information set in association with the image of the corresponding object is determined.
  • when such attention information is present, the object model identification unit 15 assigns reference information (for example, a display given priority over the estimated chunk, or a display attached to the chunk, such as attention information or attention information data).
  • the object model specifying unit 15 may refer to the annotation table TB12 based on the estimated scene ID, chunk ID, and the like, and may assign all or part of the various videos and data associated with each attention ID based on, for example, the time, length, and viewpoint at which the video was shot.
  • the display in the display areas of the object model may also be assigned based on various related information, for example the position information of the room or place where the responder works, environmental information such as the ambient temperature and humidity, the work and movement of the target person, the biological information of the target person, and the like.
  • the object model specifying unit 15 sets the object model, each display area of the object model, and the recommended image to be assigned to each display area so that the virtual reality displayed on the transmissive display of the user terminal 12 is superimposed on the responder's view.
  • the object model specifying unit 15 may acquire evaluation target information including at least position information, which is information on the position where the responder is present, and work-related information related to the work.
  • the work-related information is, for example, information on the surroundings of the corresponding object, the dominant arm with which the responder (target person) works, and the like; based on these, information such as "the work position of the responder is to the right of the corresponding object", "the distance between the responder and the corresponding object is 3 meters", "there is nothing placed around the corresponding object", and "the dominant arm of the responder is the right arm" may be displayed.
  • when the object model specifying unit 15 determines, for example, that "the space on the left side of the corresponding object is empty", "the responder is right-handed", and "the estimated chunk amount is two screens", it sets, for example, "chunk (1B827-01.txt_0)" in the display area on that side of the corresponding object.
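  • the concrete conditions above can be read as a small rule evaluation. The sketch below illustrates that reading only; the rule names and the way the chunk is split over faces are assumptions and not part of the disclosure.

      def assign_display_areas(left_space_empty, right_handed, chunk_screens):
          # Place reference information on the side away from the working (dominant) hand.
          side = "left" if right_handed and left_space_empty else "right"
          assignment = {}
          # Spread the chunk over as many faces of the object model as it needs (e.g. 2 screens).
          for i in range(chunk_screens):
              assignment[f"{side}_face_{i + 1}"] = f"chunk (1B827-01.txt_0) part {i + 1}"
          return assignment

      print(assign_display_areas(left_space_empty=True, right_handed=True, chunk_screens=2))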
  • the object model specifying unit 15 may position and display a plurality of object models, for example a plurality of cubes arranged vertically or horizontally, based on the object model specified from, for example, the scene as the work place, the positional relationship between the responder and the corresponding object, the chunk data or amount of information as reference information, the size of the recommended image, and the like.
  • the object model specifying unit 15 may assign and display the working state or the like on the upper surface of the object model, for example as a "face mark" expressing emotion.
  • the face mark may be indicated by, for example, "normal (smile mark)", "abnormal (sadness mark)", or "notification (speech mark)".
  • the information processing device 1 in the learning stage will be described with reference to FIG.
  • in the learning stage, the corresponding person image 30 (35) input from an input device (not shown) and one or a plurality of corresponding object images 40 to 43 are learned as a set.
  • learning refers to, for example, supervised learning.
  • the corresponding person image 30 (35) will be described as an example, but in addition to the corresponding person image 30 (35), for example, the responder image, the corresponding person identification information, the corresponding person information, and the like may be used, as may the target person including the responder and the respondent, and the target person identification information 61 including the responder identification information and the respondent information.
  • FIG. 2 is a block diagram showing the configuration of the information processing apparatus 1 in the learning stage according to the present embodiment.
  • the information processing apparatus 1 includes a first trained model generation unit 9 and a second trained model generation unit 10.
  • the first trained model generation unit 9 is a program that generates the first trained model DB1 by training the first learning model DB1' with the scene ID and the corresponding person image 30 (35) as a pair.
  • the first trained model generation unit 9 acquires the scene ID from the scene table TB1 with respect to the corresponding person image 30 (35), and acquires the model ID corresponding to the scene ID from the model table TB2.
  • the second trained model generation unit 10 is a program that generates the second trained model DB2 by designating a model ID and training the second learning model DB2' with one or a plurality of chunk meta IDs and the corresponding object images 40 (41 to 43) as a pair.
  • the second learned model generation unit 10 acquires the content ID from the scene / content table TB4, which is a table in which the scene ID and the content ID are linked one-to-many, using the scene ID as a search key.
  • the scene ID serving as the search key is the one associated with the corresponding person image 30 (35) that is paired with the corresponding object images 40 (41 to 43) to be processed.
  • the second learned model generation unit 10 acquires content from the content table TB3, which is a table in which the content ID and the content are linked one-to-one, using the content ID as a search key.
  • the second learned model generation unit 10 acquires the chunk ID from the content chunk table TB5, which is a table in which the content ID and the chunk ID are linked one-to-one or one-to-many, using the content ID as a search key.
  • the second learned model generation unit 10 acquires chunks from the chunk table TB7 using the chunk ID as a search key, and acquires a chunk meta ID from the chunk meta table TB6 using the chunk ID as a search key.
  • the second trained model generation unit 10 acquires the chunk meta value from the chunk meta table TB8 using the chunk meta ID as a search key.
  • the chunk meta table TB8 is a table in which the chunk category ID, the chunk category name, and the chunk meta value are linked to the chunk meta ID on a one-to-one basis.
  • the chunk category ID uniquely indicates the chunk category name, which is the name of the category to which the chunk meta value belongs.
  • the second trained model generation unit 10 refers to the corresponding object images 40 (41 to 43) and confirms that there is no problem in the acquired chunks, contents, and chunk meta values.
  • by this confirmation, the second trained model generation unit 10 can generate a highly accurate second trained model DB2, and the information processing apparatus 1 can perform highly accurate processing at the usage stage.
  • FIG. 3 is a diagram showing an original image 20, a corresponding person image 30, a target person identification image 44, and a plurality of corresponding object images 40 to 43 according to the present embodiment.
  • the original image 20, the corresponding person image 30, the target person identification image 44, and the plurality of corresponding object images 40 to 43 are displayed on, for example, the user terminal 12.
  • although FIG. 3 shows an example in which they are displayed at the same time, the original image 20, the corresponding person image 30, the target person identification image 44, and the plurality of corresponding object images 40 to 43 may be displayed separately on the user terminal 12.
  • the original image 20 captures the corresponding person 21, the target person identification image 44, and the corresponding objects 22 to 25.
  • the size of the corresponding objects 22 to 25 is estimated based on information that does not change for each scene in the booth such as a desk.
  • like the corresponding object 24, the corresponding objects 22 to 25 may include information on their contents, such as an attached photo 26, internal text 27, and a signature 28, as well as code information such as a barcode or a two-dimensional code, various coupons, and the like.
  • the target person identification image 44 may include, for example, the target person identification information 61, a face photograph 61a of the target person (for example, the responder or the corresponding person), the name 61b of the target person, and a barcode 61c for identifying the target person.
  • code information such as barcodes and two-dimensional codes and various coupons may be printed on paper media in advance, or may be displayed, for example, on the screen of the user terminal 12 of the responder or the corresponding person 21.
  • FIG. 4 is a diagram showing a tree structure in which the respondent 21 and the plurality of counterparts 22 to 25 are related according to the present embodiment.
  • instead of the corresponding person 21, for example, the responder or the target person identification information (for example, the responder identification information, the corresponding person identification information, etc.) may be used.
  • the subject will be described in detail by taking the respondent 21 as an example.
  • the image segmentation unit 5 associates the respondent 21 with the plurality of corresponding objects 22 to 25 as a tree structure in which the respondent 21 is the root node and the plurality of corresponding objects 22 to 25 are leaf nodes or internal nodes.
  • the image dividing unit 5 may further acquire information contained in at least one of the corresponding objects 22 to 25, such as the attached photograph 26, the text 27, and the signature 28, as well as code information such as a barcode or a two-dimensional code, various coupons, and the like, and associate it with the tree structure as leaf nodes.
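  • a minimal sketch of the tree structure just described, with the respondent 21 as the root node, the corresponding objects 22 to 25 as leaf or internal nodes, and the information contained in the corresponding object 24 as further leaf nodes; the dictionary representation and the leaves helper are illustrative only.

      tree = {
          "respondent_21": {                       # root node (target person image / identification info)
              "object_22": {},
              "object_23": {},
              "object_24": {                       # internal node: this object contains further information
                  "attached_photo_26": {},
                  "internal_text_27": {},
                  "signature_28": {},
                  "barcode": {},
              },
              "object_25": {},
          },
      }

      def leaves(node, name="root"):
          # Collect leaf-node names recursively.
          return [name] if not node else [l for k, v in node.items() for l in leaves(v, k)]

      print(leaves(tree["respondent_21"], "respondent_21"))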
  • FIG. 5 shows a first trained model DB1 and a second trained model DB2 according to the present embodiment.
  • the first trained model DB1 is generated by machine learning using a plurality of pairs of a corresponding person image 30 (35) and a scene ID as learning data, and stores the association between the plurality of corresponding person images 30 (35) and the plurality of scene IDs.
  • machine learning is, for example, a convolutional neural network (CNN).
  • the association between the corresponding person image 30 (35) and the scene ID can specifically be represented by a convolutional neural network composed of the nodes indicated by circles in FIG. 5, the edges indicated by arrows, and the weighting factors set on the edges. As shown in FIG. 5, the corresponding person image 30 (35) is input to the first trained model DB1 pixel by pixel, such as pixels p1 and p2.
  • the second trained model DB2 is associated with the model ID on a one-to-one basis, and there are a plurality of them.
  • each second trained model DB2 is generated by machine learning using a plurality of pairs of corresponding object images 40 (41 to 43) and one or a plurality of chunk meta IDs as training data, and stores the association between the plurality of corresponding object images 40 (41 to 43) and the plurality of sets of one or more chunk meta IDs.
  • machine learning is, for example, a convolutional neural network (CNN).
  • the association between the plurality of corresponding object images 40 (41 to 43) and the plurality of sets of one or more chunk meta IDs can specifically be represented by a convolutional neural network composed of the nodes indicated by circles in FIG. 5, the edges indicated by arrows, and the weighting factors set on the edges. The corresponding object images 40 (41 to 43) are input to the second trained model DB2 pixel by pixel, such as pixels p1 and p2.
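  • to illustrate the pixel-by-pixel input and the weighted edges mentioned above, here is a toy one-layer convolution; it is not the architecture of the first or second trained model DBs, which the description only characterizes as convolutional neural networks.

      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.random((8, 8))          # each pixel (p1, p2, ...) is one input value
      kernel = rng.random((3, 3))         # edge weights of a single convolutional filter

      def conv2d_valid(img, k):
          # Plain "valid" convolution: slide the kernel over the image and sum the weighted pixels.
          kh, kw = k.shape
          h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
          out = np.empty((h, w))
          for i in range(h):
              for j in range(w):
                  out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
          return out

      features = conv2d_valid(image, kernel)
      # A real model would map such features to scene IDs or chunk meta IDs with further layers.
      print(features.shape, float(features.sum()))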
  • FIG. 6 is a diagram showing information stored in the auxiliary storage device 11 according to the present embodiment.
  • the scene ID stored in the scene table TB1 or the like is a 3-digit hexadecimal number such as 0FD.
  • the scene name stored in the scene table TB1 or the like is, for example, a grade inquiry or a career counseling.
  • the model ID stored in the model table TB2 or the like is represented by a two-character alphabetic character and a one-digit decimal number, for example, MD1.
  • the content ID stored in the content table TB3 or the like is represented by a 5-digit hexadecimal number and a 2-digit decimal number, for example, 1B827-01.
  • the content stored in the content table TB3 or the like is indicated by a file name consisting of the content ID and an extension, such as 1B827-01.txt, and a pointer to the substance of the content is stored.
  • the chunk ID stored in the content chunk table TB5 or the like is represented by a 5-digit and 2-digit decimal number such as 82700-01.
  • the chunk meta ID stored in the chunk meta table TB6 or the like is a 4-digit hexadecimal number such as 24FD.
  • a chunk stored in the chunk table TB7 is indicated by the file name of the content corresponding to the target chunk followed by a one-digit decimal number, such as 1B827-01.txt_0, and a pointer to the part of the substance of the content corresponding to the target chunk is stored.
  • the chunk summary stored in the chunk table TB7 is a document summarizing the contents of the chunk, for example, "Hello Work, ".
  • the hash value stored in the chunk table TB7 is a 15-digit hexadecimal number such as 564544d8f0b746e.
  • the chunk category ID stored in the chunk meta table TB8 is a 3-digit decimal number such as 394.
  • the chunk category name stored in the chunk meta table TB8 is, for example, the size of the paper, the color of the paper, or the presence or absence of holes in the paper.
  • the chunk meta values stored in the chunk meta table TB8 are, for example, A4, B4, white, blue, with holes on the sides, and without holes.
  • the value of the chunk category ID and the chunk category name may be NULL.
  • the combination of chunk meta IDs stored in the recommendation table TB9 is (24FD, 83D9), (25FD), etc., and one or more chunk meta IDs are combined.
  • the recommended object image stored in the recommendation table TB9 is, for example, IMG001.
  • the shared object information stored in the recommendation table TB9 is, for example, IMG111.
  • the shared object information may include, for example, images, various videos, texts such as documents and e-mails, online applications (online conferences, teleconferencing, videophones, etc.), chat applications, link information, and the like.
  • the co-owner information stored in the recommendation table TB9 stores, for example, the attributes and conditions of the other party with whom the information is shared, such as the target person and the inspector, and the information for identifying the co-owner, in association with each other.
  • the information processing device 1 refers to, for example, the co-owner information stored in the recommendation table TB9 and shares the shared object information through the display area of the object model described later.
  • the data structure of the work information has a hierarchical structure in which the chunk is the first layer, the chunk ID is the second layer, the content ID is the third layer, and the scene ID is the fourth, uppermost layer.
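  • read together with the table chain above (scene, content, chunk), this hierarchy can be pictured as the following nested structure; the example identifiers reuse the values given earlier and the field names are illustrative.

      work_information = {
          "scene_id": "0FD",                                   # 4th (uppermost) layer
          "contents": [
              {
                  "content_id": "1B827-01",                    # 3rd layer
                  "chunks": [
                      {"chunk_id": "82700-01",                 # 2nd layer
                       "chunk": "1B827-01.txt_0"},             # 1st layer (the chunk itself)
                  ],
              },
          ],
      }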
  • FIG. 7 is a sequence diagram for explaining the scene estimation function, chunk estimation function, and chunk output function according to the present embodiment.
  • the information processing functions at the usage stage consist of a scene estimation function realized by the scene estimation process S60 described later, a chunk estimation function realized by the chunk estimation process S80 described later, and a chunk output function realized by the chunk output process S100 described later.
  • the image acquisition unit 4 included in the control unit 15 receives the original image 20 from the user terminal 12 (S1).
  • the image segmentation unit 5 included in the control unit 15 divides the original image 20 into the corresponding person image 30 and the corresponding object images 40 to 43.
  • the image segmentation unit 5 transmits the corresponding person image 30 to the scene estimation unit 6 and transmits the corresponding object images 40 to 43 to the chunk estimation unit 7.
  • the scene estimation unit 6 included in the control unit 15 inputs the corresponding person image 30 into the first trained model DB 1 (S2).
  • the first trained model DB1 selects one or a plurality of scene IDs strongly associated with the received corresponding person image 30 and outputs them to the scene estimation unit 6 (hereinafter this may be called the first scene ID list) (S3).
  • when the scene estimation unit 6 acquires the first scene ID list, it transmits the list to the user terminal 12 as it is (S4).
  • the user terminal 12 transmits to the scene estimation unit 6 whether or not there is a cache for each scene ID included in the first scene ID list (S5).
  • the user terminal 12 holds a table equivalent to the scene table TB1 with respect to the information processed in the past.
  • the user terminal 12 searches the table held by the user terminal 12 using the scene ID of the received first scene ID list as a search key.
  • scene IDs for which a search result is found are cached, and scene IDs for which no search result is found are not cached.
  • the scene estimation unit 6 searches the scene table TB1 using, as a search key, the one or more scene IDs that have no cache in the user terminal 12 among the scene IDs included in the first scene ID list received from the user terminal 12 (hereinafter this may be called the second scene ID list) (S6).
  • the scene estimation unit 6 acquires, as a search result, the scene name corresponding to each scene ID included in the second scene ID list (hereinafter this may be referred to as the scene name list) from the scene table TB1 (S7).
  • the scene estimation unit 6 transmits the acquired scene name list to the user terminal 12 as it is (S8).
  • by steps S1 to S8, the information processing apparatus 1 realizes the scene estimation function of estimating the scene of the corresponding person image 30 by estimating the scene name. A hedged sketch of this exchange, including the cache check, follows.
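  • the sketch below illustrates steps S5 to S8 under simplifying assumptions: the scene table TB1 and the terminal cache are plain dictionaries and the function name scene_estimation is hypothetical.

      scene_table_tb1 = {"0FD": "grade inquiry", "0FE": "career counseling"}   # apparatus side
      terminal_cache = {"0FE": "career counseling"}                            # user terminal 12 (past results)

      def scene_estimation(first_scene_id_list):
          # S5: the terminal reports which scene IDs it already holds; S6/S7: the apparatus
          # resolves only the uncached ones (the second scene ID list) against TB1.
          second_scene_id_list = [sid for sid in first_scene_id_list if sid not in terminal_cache]
          scene_name_list = {sid: scene_table_tb1[sid] for sid in second_scene_id_list}
          # S8: the resolved names are sent back and merged with the cached ones on the terminal.
          cached = {sid: terminal_cache[sid] for sid in first_scene_id_list if sid in terminal_cache}
          return {**cached, **scene_name_list}

      print(scene_estimation(["0FD", "0FE"]))   # only "0FD" triggers a table lookup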
  • the user terminal 12 presents the received scene name list to the target person.
  • the presentation of the scene name list is displayed, for example, on one side of the object model previously assigned by the target person.
  • the subject selects, for example, one scene name from the presented scene name list.
  • the user terminal 12 transmits the scene name selected by the target person to the chunk estimation unit 7 included in the control unit 15 (S9).
  • the chunk estimation unit 7 uses the scene ID corresponding to the scene name received from the user terminal 12 as a search key (S10), searches the model table TB2, and acquires the model ID (S11).
  • the chunk estimation unit 7 receives the corresponding object image 40 (41 to 43) from the image segmentation unit 5 (S12).
  • the chunk estimation unit 7 designates one of the plurality of second trained model DB2s by the model ID acquired from the model table TB2, and inputs the corresponding object images 40 (41 to 43) into that second trained model DB2 (S13).
  • the second trained model DB2 selects one or a plurality of chunk meta IDs strongly associated with the corresponding object images 40 (41 to 43) and outputs them to the chunk estimation unit 7 (hereinafter this may be referred to as the chunk meta ID list) (S14).
  • the chunk estimation unit 7 searches the chunk meta table TB6 using each one or a plurality of chunk meta IDs included in the chunk meta ID list as a search key (S15).
  • the chunk estimation unit 7 acquires one or a plurality of chunk IDs (hereinafter, this may be referred to as a first chunk ID list) from the chunk metatable TB6 as a search result (S16).
  • the chunk estimation unit 7 transmits the acquired first chunk ID list to the user terminal 12 as it is (S17).
  • the user terminal 12 transmits to the chunk estimation unit 7 whether or not there is a cache for each chunk ID included in the first chunk ID list (S18).
  • the user terminal 12 holds a table including a chunk ID column and a chunk summary column in the chunk table TB7 with respect to the information processed in the past.
  • the user terminal 12 searches the table it holds using the chunk IDs of the received first chunk ID list as search keys. Chunk IDs for which a search result is found are cached, and chunk IDs for which no search result is found are not cached.
  • the chunk estimation unit 7 searches the chunk table TB7 using, as a search key, the one or more chunk IDs that have no cache in the user terminal 12 among the chunk IDs included in the first chunk ID list received from the user terminal 12 (hereinafter this may be called the second chunk ID list) (S19).
  • the chunk estimation unit 7 acquires, as a search result, the chunk summary corresponding to each chunk ID included in the second chunk ID list (hereinafter this may be referred to as the chunk summary list) from the chunk table TB7 (S20).
  • the chunk estimation unit 7 transmits the acquired chunk summary list to the user terminal 12 as it is (S21).
  • the information processing apparatus 1 realizes a chunk estimation function for estimating chunks of the corresponding object 22 (23 to 25) by estimating chunk summaries in steps S9 to S21.
  • the user terminal 12 presents the received chunk summary list to the target person.
  • the chunk summary list presentation is displayed, for example, on one side of the object model pre-assigned by the subject.
  • the subject selects, for example, one chunk summary from the presented chunk summary list.
  • the user terminal 12 transmits the chunk summary selected by the target person to the chunk output unit 8 included in the control unit 15 (S22).
  • the chunk output unit 8 uses the chunk ID corresponding to the chunk summary received from the user terminal 12 as a search key (S23), searches the chunk table TB7, and acquires chunks (S24).
  • the chunk output unit 8 transmits the acquired chunk to the user terminal 12 as it is (S25).
  • the user terminal 12 presents the received chunk to the user.
  • the chunk presentation is displayed, for example, on one side of the object model pre-assigned by the subject.
  • the information processing apparatus 1 realizes a chunk output function for outputting chunks of the corresponding object 22 (23 to 25) by steps S22 to S25.
  • FIG. 8 is a sequence diagram provided for explaining the first trained model generation function and the second trained model generation function according to the present embodiment.
  • the information processing functions at the learning stage consist of a first trained model generation function realized by the first trained model generation process and a second trained model generation function realized by the second trained model generation process.
  • the first trained model generation unit 9 included in the trained model generation unit 16 determines the set of the scene name to be processed, the corresponding person image 30, and one or a plurality of corresponding object images 40 to 43, and searches the scene table TB1 generated in advance using the scene name as a search key (S31).
  • the first trained model generation unit 9 acquires the scene ID from the scene table TB1 as a search result (S32), and sets the corresponding person image 30 and the scene ID into the first learning model DB1' as a pair (S33).
  • the first trained model generation unit 9 transmits the acquired scene ID to the model table TB2 and makes a model ID acquisition request (S34).
  • the model table TB2 generates a model ID corresponding to the received scene ID and stores the combination of the scene ID and the model ID.
  • the first trained model generation unit 9 acquires the model ID from the model table TB2 (S35).
  • the information processing apparatus 1 realizes the first trained model generation function of generating the first trained model DB1 by steps S31 to S35.
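  • The first trained model generation of steps S31 to S35 can be pictured as below. This is only a sketch under the assumption that the scene table TB1 and the model table TB2 are dictionaries and that "learning" is abbreviated to collecting (image, scene ID) pairs; the actual training procedure is not specified here.

```python
# Minimal sketch of the S31-S35 flow (data layouts and names are assumptions).

def generate_first_model_entry(scene_table_tb1, model_table_tb2,
                               first_model_db1, scene_name, person_image):
    # S31/S32: look up the scene ID by scene name in the scene table TB1.
    scene_id = next(sid for sid, name in scene_table_tb1.items() if name == scene_name)

    # S33: have the first learning model DB1' learn the (person image, scene ID) pair,
    # abbreviated here to collecting the training pair.
    first_model_db1.append((person_image, scene_id))

    # S34/S35: the model table TB2 issues a model ID for the scene ID and stores the pair.
    model_id = model_table_tb2.setdefault(scene_id, f"model-{len(model_table_tb2) + 1}")
    return model_id
```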
  • the second trained model generation unit 10 included in the trained model generation unit 16 searches the scene content table TB4 generated in advance (S36) using, as a search key, the scene ID acquired by the first trained model generation unit 9 in step S32.
  • the second learned model generation unit 10 acquires the content ID from the scene content table TB4 as a search result (S37), and searches the content table TB3 generated in advance using the acquired content ID as a search key (S38).
  • the second learned model generation unit 10 acquires the content from the content table TB3 as a search result (S39), and searches the content chunk table TB5 generated in advance using the content ID acquired in step S37 as a search key (S40).
  • the second trained model generation unit 10 acquires the chunk ID from the content chunk table TB5 as a search result (S41), and searches the chunk table TB7 generated in advance using the acquired chunk ID as a search key (S42).
  • the second trained model generation unit 10 acquires the chunk from the chunk table TB7 as a search result (S43), and searches the chunk meta table TB6 generated in advance using the chunk ID acquired in step S41 as a search key (S44).
  • the second trained model generation unit 10 acquires one or a plurality of chunk meta IDs from the chunk meta table TB6 as search results (S45), and searches the chunk meta table TB8 generated in advance using each acquired chunk meta ID as a search key (S46).
  • the second trained model generation unit 10 acquires the chunk meta value corresponding to each chunk meta ID as a search result from the chunk meta table TB8 (S47).
  • the second trained model generation unit 10 checks whether there is a problem with the content acquired in step S39, the chunk acquired in step S43, and the respective chunk meta values acquired in step S47 by referring to the corresponding person image 30 and the corresponding object images 40 to 43.
  • in this confirmation, the second trained model generation unit 10 refers, for example, to the facial expression of the corresponding person 21 and the document names written on the corresponding objects 22 to 25.
  • for example, the second trained model generation unit 10 judges the facial expression of the corresponding person 21 from the corresponding person image 30, and judges the document names written on the corresponding objects 22 to 25 from the corresponding object images 40 to 43.
  • the second trained model generation unit 10 has the second learning model DB2' learn the model ID, the corresponding object images 40 (41 to 43), and the one or a plurality of chunk meta IDs as a pair (S48).
  • the information processing apparatus 1 realizes the second trained model generation function of generating the second trained model DB 2 by steps S36 to S48.
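  • The chained lookups of steps S36 to S48 can be summarized as follows. Every table is modeled as a plain dictionary and the manual confirmation against the images is reduced to a comment; this is an assumption-laden sketch, not the embodiment's data model.

```python
# Minimal sketch of the S36-S48 chain (all table layouts are assumptions).

def generate_second_model_entry(tables, second_model_db2, scene_id, model_id, object_images):
    # S36/S37: the scene content table TB4 maps the scene ID to content IDs.
    for content_id in tables["TB4"][scene_id]:
        content = tables["TB3"][content_id]            # S38/S39: content table TB3
        for chunk_id in tables["TB5"][content_id]:     # S40/S41: content chunk table TB5
            chunk = tables["TB7"][chunk_id]["chunk"]   # S42/S43: chunk table TB7
            chunk_meta_ids = tables["TB6"][chunk_id]   # S44/S45: chunk meta table TB6
            meta_values = [tables["TB8"][mid] for mid in chunk_meta_ids]  # S46/S47
            # The content, chunk, and meta values above would be checked against the
            # person/object images here before learning.
            # S48: have the second learning model DB2' learn the pair
            # (model ID + chunk meta IDs, corresponding object images).
            second_model_db2.append(((model_id, tuple(chunk_meta_ids)), object_images))
```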
  • FIG. 9 is a flowchart showing a processing procedure of information processing in the usage stage according to the present embodiment.
  • Information processing in the usage stage is composed of a scene estimation process S60, a chunk estimation process S80, and a chunk output process S100.
  • the scene estimation process S60 is composed of steps S61 to S67.
  • the scene estimation unit 6 receives the corresponding person image 30 (35) from the image division unit 5 (S61).
  • the scene estimation unit 6 inputs the corresponding person image 30 (35) into the first trained model DB 1 (S62).
  • the scene estimation unit 6 acquires the first scene ID list as output from the first trained model DB1 (S63), transmits the first scene ID list to the user terminal 12 as it is, and inquires of the user terminal 12 whether or not there is a cache (S64).
  • When the user terminal 12 holds a cache for every scene ID in the first scene ID list, the scene estimation process S60 ends and the chunk estimation process S80 starts.
  • When there is a scene ID with no cache, the scene estimation unit 6 acquires the scene name list from the scene table TB1 (S66), transmits it to the user terminal 12 as it is (S67), and the scene estimation process S60 ends.
  • the chunk estimation process S80 is composed of steps S81 to S88.
  • the chunk estimation unit 7 receives the scene name selected by the target person from the user terminal 12 (S81).
  • Upon receiving the scene name from the user terminal 12, the chunk estimation unit 7 acquires the model ID from the model table TB2 (S82). Next, the chunk estimation unit 7 designates one of the plurality of second trained models DB2 by the model ID, and inputs the corresponding object images 40 (41 to 43) received from the image division unit 5 into the designated second trained model DB2 (S83).
  • the chunk estimation unit 7 acquires a chunk meta ID list as an output from the second trained model DB 2 (S84), and acquires a first chunk ID list from the chunk meta table TB6 (S85). Next, the chunk estimation unit 7 transmits the first chunk ID list to the user terminal 12 as it is, and inquires the user terminal 12 whether or not there is a cache (S86).
  • When the user terminal 12 holds a cache for every chunk ID in the first chunk ID list, the chunk estimation process S80 ends and the chunk output process S100 starts.
  • When there is a chunk ID with no cache, the chunk estimation unit 7 acquires the chunk summary list from the chunk table TB7 (S87), transmits it to the user terminal 12 as it is (S88), and the chunk estimation process S80 ends.
  • the chunk output process S100 is composed of steps S101 to S103.
  • the chunk output unit 8 receives the chunk summary selected by the target person from the user terminal 12 (S101).
  • the chunk output unit 8 acquires the chunk from the chunk table TB7 (S102) and transmits it to the user terminal 12 as it is (S103), and the chunk output process S100 ends.
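  • Taken together, the usage-stage processes S60, S80, and S100 form a pipeline that the sketch below compresses into one function. The predict interface of the trained models, the callbacks standing in for the selections made on the user terminal, and the dictionary-shaped tables are all assumptions made for illustration.

```python
# Minimal end-to-end sketch of the usage stage (S60 -> S80 -> S100); interfaces are assumed.

def usage_stage(first_model_db1, second_models_db2, model_table_tb2,
                chunk_meta_table_tb6, chunk_table_tb7,
                person_image, object_images, choose_scene, choose_summary):
    # Scene estimation S60: the person image goes into the first trained model DB1.
    scene_id_list = first_model_db1.predict(person_image)
    scene_id = choose_scene(scene_id_list)            # selection on the user terminal

    # Chunk estimation S80: the model ID selects one of the second trained models DB2.
    model_id = model_table_tb2[scene_id]
    chunk_meta_ids = second_models_db2[model_id].predict(object_images)
    chunk_ids = [chunk_meta_table_tb6[mid] for mid in chunk_meta_ids]
    summaries = {cid: chunk_table_tb7[cid]["summary"] for cid in chunk_ids}

    # Chunk output S100: the chunk for the summary selected on the terminal is returned.
    selected_id = choose_summary(summaries)
    return chunk_table_tb7[selected_id]["chunk"]
```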
  • FIG. 10 is a flowchart showing information processing in the display stage according to the present embodiment.
  • the display on the display unit 14 is composed of the display process S110.
  • the display process S110 is composed of steps S111 to S114.
  • the display unit 14 includes the object model specifying unit 15, which, for example, acquires object model information stored in advance (S111) and acquires information on each display area constituting the object model (S112). Specifically, regarding the information of each display area, the object model specifying unit 15 specifies, for example, the object model for displaying the recommendation image and the recommendation information output by the recommendation image output unit.
  • the object model specifying unit 15 identifies the object model by associating the scene and chunk with the object model ID that uniquely indicates the object model.
  • the object model may be specified, for example, with reference to the object model table TB10, based on various information such as a scene ID, a chunk ID, an area ID of the target person or an information sharer, or a role ID.
  • the display unit 14 determines whether to assign display areas of the object model based on various information such as, for example, the attribute, type, and amount of the information presented to the user terminal, the type of the target person, and the operation of the target person (S113).
  • When no display area is assigned (S113: NO), information other than the display target and a default display are shown.
  • When display areas are assigned, the various information to be displayed is assigned to each display area and displayed (S114).
  • Allocation of the various information (objects) to be displayed to each display area of the object model is performed with reference to the object allocation table TB11, and the information is assigned to the display areas based on, for example, a scene ID, a chunk ID, a role ID, and the like.
  • the display area is allocated based on, for example, the display area information identified by the object model ID and the display area ID.
  • the display unit 14 assigns the recommendation image and the recommendation information output by the recommendation image output unit to one of the plurality of display areas included in the object model specified by the object model identification unit 15, displays them in a state that can be shared between the target person and the co-owner, and the display process S110 ends; a rough sketch of this allocation follows.
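  • One possible shape of the S111 to S114 allocation, assuming the object model table TB10 is keyed by (scene ID, chunk ID, role ID) and the object allocation table TB11 lists display area IDs per object model ID; both key structures are assumptions for illustration only.

```python
# Minimal sketch of the display process S110 (key structures are assumptions).

def allocate_display(object_model_table_tb10, object_allocation_table_tb11,
                     scene_id, chunk_id, role_id, objects_to_show):
    # S111/S112: identify the object model and enumerate its display areas.
    object_model_id = object_model_table_tb10[(scene_id, chunk_id, role_id)]
    display_area_ids = object_allocation_table_tb11[object_model_id]

    # S113/S114: assign each object (chunk, recommendation image, and so on) to an area.
    assignment = {}
    for area_id, obj in zip(display_area_ids, objects_to_show):
        assignment[(object_model_id, area_id)] = obj
    return assignment
```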
  • FIG. 11 is a flowchart showing a processing procedure of information processing in the learning stage according to the present embodiment.
  • Information processing in the learning stage is composed of a first trained model generation process S120 and a second trained model generation process S140.
  • the first trained model generation process S120 is composed of steps S121 to S124.
  • the first trained model generation unit 9 determines a set of a scene name, a corresponding person image 30 (35), and one or more corresponding object images 40 (41 to 43) to be processed, and searches the scene table TB1 using the scene name as a search key (S121).
  • the first learned model generation unit 9 acquires the scene ID from the scene table TB1 as a search result (S122), and has the first learning model DB1' learn the scene ID and the corresponding person image 30 (35) as a pair (S123).
  • the first trained model generation unit 9 transmits the scene ID acquired in step S122 to the model table TB2, makes a model ID acquisition request, and acquires the model ID (S124).
  • the second trained model generation process S140 is composed of steps S141 to S150.
  • the second learned model generation unit 10 searches the scene content table TB4 using the scene ID acquired in step S122 as a search key, and acquires the content ID (S141).
  • the second learned model generation unit 10 searches the content table TB3 using the acquired content ID as a search key and acquires the content (S142). Further, the second learned model generation unit 10 searches the content chunk table TB5 using the acquired content ID as a search key, and acquires the chunk ID (S143).
  • the second trained model generation unit 10 searches the chunk table TB7 using the acquired chunk ID as a search key and acquires chunks (S144). Further, the second learned model generation unit 10 searches the chunk meta table TB6 using the acquired chunk ID as a search key, and acquires one or a plurality of chunk meta IDs (S145).
  • the second trained model generation unit 10 searches the chunk meta table TB8 using each of the acquired one or a plurality of chunk meta IDs as a search key, and acquires the chunk meta value corresponding to each chunk meta ID (S146).
  • the second trained model generation unit 10 checks whether there is a problem with the content acquired in step S142, the chunk acquired in step S144, and the respective chunk meta values acquired in step S146 by referring to the corresponding person image 30 (35) and the corresponding object images 40 (41 to 43) (S147).
  • the second trained model generation unit 10 has the second learning model DB2' learn the model ID, the one or more chunk meta IDs, and the corresponding object images 40 (41 to 43) as a pair (S149), and the information processing in the learning stage for the set being processed is completed.
  • With the information processing apparatus 1 according to the present embodiment, chunks that divide or suggest the work information are presented via the user terminal 12. Therefore, the required amount of information can be presented by setting chunks appropriately. Also, if a chunk is information that suggests the entire document, there is no need to reconstruct the information on a large scale.
  • FIG. 12A is a schematic diagram showing an example of the operation of the information processing system 100, and FIG. 12B is a diagram showing an original image, a target person image, and a plurality of corresponding object images in the information processing system 100.
  • FIGS. 17(a) to 17(h) are diagrams showing display patterns for the target person in the information processing system 100, and FIGS. 18(a) and 18(b) are schematic diagrams showing an example of the display of the user terminal 12 in the information processing system 100.
  • Shown is a case where work information about the work is output when the target person (a correspondent, a corresponded person, a co-owner, a trainee, or the like) 50a performs work on the corresponding object 60 in the manufacturing area.
  • the information processing system 100 acquires, for example via a user terminal 12 (for example, a head-mounted display) worn by the target person 50a, an image of the employee ID card 61, which is the target person identification information, and an image of the corresponding object 60 on which the work is performed.
  • the target person identification information may be, for example, an image of the target person's face, fingerprint, palm print, vein, or the like, and may be any unique information that can identify the target person.
  • When, for example, a target person 50b who collaborates with the target person 50a is in the same work area in addition to the target person 50a in the manufacturing area, the information processing system 100 may acquire the target person identification information of the employee ID card 61 captured by the target person 50b.
  • the information processing system 100 acquires, for example, the target person identification information of the employee ID card 61 of the target person 50b and the image of the corresponding object 60 captured from the target person 50b side from the target person 50b.
  • When, for example, a plurality of target persons 50a, 50b, and the like collaborate in the manufacturing area, the information processing system 100 may specify an image taken by the camera of each user terminal 12 as an image of the corresponding object in the collaborative work, or may divide such an image.
  • the information processing system 100 may, for example, determine the divided images and search for the recommended object image and the shared object information.
  • the information processing system 100 can output a recommendation image including an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary, a corresponding object for which information sharing is presumed to be necessary in the work of the target person, and information on the work of the target person, so that forgotten work can be prevented.
  • the manufacturing area is connected, via a communication network such as the Internet, to the customer/support area, which is the area of the customer/trainer 51, and to the monitoring area, which is, for example, the area of the inspector 52 who monitors the work of the target person 50a.
  • information necessary for one piece of work performed by the target person 50a is output to the customer/trainer 51 and the inspector 52 from viewpoints according to their respective positions, which enables information sharing among a plurality of workers with different positions, work places, and work hours.
  • the target person 50a shares information with the customer/trainer 51 and the inspector 52 via the above-mentioned object model.
  • the information processing system 100 searches, for example by the recommendation image output unit 13, for the recommended object image and the shared object information, and outputs a recommendation image including an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary, a corresponding object for which information sharing is presumed to be necessary in the work of the target person, and information on the work of the target person.
  • the recommendation image output unit 13 allocates its output to a plurality of display areas of the object model.
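  • A hedged sketch of the recommendation search described above: the recommend table TB9 is modeled here as a dictionary keyed by the (model ID, chunk meta ID combination) pair, which is an assumption; the filtering step only keeps objects absent from the original image.

```python
# Minimal sketch of the recommendation image search (TB9 layout is an assumption).

def search_recommend_images(recommend_table_tb9, model_id, chunk_meta_ids, original_object_ids):
    key = (model_id, tuple(sorted(chunk_meta_ids)))
    candidates = recommend_table_tb9.get(key, [])
    # Keep only corresponding objects that are not captured in the original image but are
    # presumed to be necessary or to require information sharing.
    return [c for c in candidates if c["object_id"] not in original_object_ids]
```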
  • FIG. 12B is a diagram showing an original image, a target person image, and a plurality of objects to be imaged in the information processing system 100.
  • the information processing system 100 acquires, by the image acquisition unit 4, for example, the target person identification information 70 regarding the target person and the image of the corresponding object 60 taken by the user terminal 12 worn by the target person 50a, and stores them in the auxiliary storage device 11 in association with each other.
  • the image stored in the auxiliary storage device 11 may be, for example, an image of a target person or target person identification information, for example, an image of an employee ID card 61.
  • the employee ID card 61 may include, for example, a face image 61a, a name 61b, and code information 61c of a person who performs work.
  • the image stored in the auxiliary storage device 11 includes, for example, an image of the corresponding object 60.
  • the image of the object 60 may include, for example, images of the parts 60a, 60b, and 60c constituting the object 60.
  • the original image captured by the image acquisition unit 4 is divided by the image segmentation unit 5.
  • the image segmentation unit 5 may divide the image into the parts 60a to 60c, for example, after the image acquisition unit 4 acquires the image of the corresponding object 60.
  • the target person identification information may be, for example, information for identifying the target person, and may be, for example, an image of the employee ID card 61.
  • When the target person identification information is, for example, the employee ID card 61, it may include the face image 61a, the name 61b, and the code information 61c of the target person who performs the work.
  • the original image acquired by the image acquisition unit 4 and divided by the image segmentation unit 5 is stored as images 70, 71, and 71a to 71c in association with each other in the auxiliary storage device 11; a rough sketch of this division and storage follows.
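  • The division and storage just described can be sketched as follows; the region detection and cropping are passed in as callables because the embodiment does not specify them, and the auxiliary storage device 11 is modeled as a simple list.

```python
# Minimal sketch of dividing the original image and storing the associated pieces.

def divide_and_store(storage, original_image, detect_regions, crop):
    regions = detect_regions(original_image)  # e.g. employee ID card 61, object 60, parts 60a-60c
    record = {
        "identification": crop(original_image, regions["id_card"]),    # image 70
        "object": crop(original_image, regions["object"]),             # image 71
        "parts": [crop(original_image, r) for r in regions["parts"]],  # images 71a to 71c
    }
    storage.append(record)  # auxiliary storage device 11, modeled as a list
    return record
```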
  • FIG. 17 shows various display contents assigned to and displayed in the plurality of display areas of the object model specified by the object model specifying unit 15 in the user terminal 12.
  • FIG. 17A is an example in which a plurality of scene candidates estimated by the scene estimation unit 6 are displayed in one display area of the object model.
  • FIG. 17B is an example in which content / difference information is displayed in the display area as, for example, an image or information associated with a chunk ID.
  • FIG. 17C is an example in which, for example, the user terminal 12 is a device such as a smartphone, the image information of the user's viewpoint is switched to the rear camera, the object to be photographed is photographed as a second image, and the photographed object is displayed in the display area.
  • FIG. 17D is an example in which, for example, a skilled person (trainer) photographs a target person (trainee) working on a corresponding object, and a work checklist displayed based on the second image is shown in the display area.
  • FIG. 17E is an example in which an image of a corresponding object recorded by, for example, the user terminal 12 of a skilled person (trainer) is displayed in the display area.
  • FIG. 17 (f) is an example in which, for example, an image for confirming the behavior of a target person performing a work from a bird's-eye view and related information are displayed together in a display area.
  • FIG. 17 (g) is an example in which the work information for generating the expert / AI learning data when acquiring the learning information recorded by the expert (trainer) is displayed in the display area.
  • FIG. 17H is an example in which related moving image information and origin information are displayed in the display area as related narrative information associated with the chunk ID, for example.
  • each reference information displayed in FIGS. 17A to 17H is displayed as a recommended image.
  • For example, when a scene is selected via the scene estimation unit 6 and chunks are estimated, the display of FIG. 17A may be shown in front of the target person.
  • the display area of the object model may be rotated based on the work content, work status, etc. of the target person, and more important information, attention information, and the like may be preferentially displayed.
  • FIGS. 18A to 18B show the display contents on the user terminal 12 by the information processing system 100.
  • In one example, the user terminal 12 is a smartphone and the content is shown as a flat display.
  • This is an example in which various information such as chunks and recommendation images is displayed in the display area 80a of the display screen 80 of the target person (correspondent), such as a smartphone or tablet, and in the respective display areas of the specified object models 80b and 80c.
  • the image of the object model may be shared between different user terminals 12.
  • In another example, the user terminal 12 is a personal computer or the like, and the target person is an inspector.
  • The display includes a display area 81a for displaying a bird's-eye view image of the target person working on the corresponding object, a display area 81b for displaying a viewpoint image of the skilled person in charge, a display area 81c for selecting an object model for transmitting information to a target person (for example, a trainer, a trainee, or the like), and a display area 81d for transmitting an alert to a target person (for example, a trainer, a trainee, or the like).
  • When the target person needs it, necessary information, shared information, related information, and the like can be presented to the target person (correspondent) and others from viewpoints and roles different from the information narrowed down to the target person.
  • As a result, the effectiveness of information provision and the usefulness of the information can be further improved.
  • the object model specifying unit 15 may allocate, for example, schematic information to the display area.
  • Schematized information includes, for example, figures and illustrations that simplify human facial expressions such as "smile", "anxiety", and "tension", words and messages such as "caution" and "warning" for the work situation of the target person, and light emission states such as red, blue, and yellow displayed as indicator lights and the like.
  • By using the model table TB2, even if the relationship between the first trained model DB1 and the second trained model DB2 changes, the change can be handled simply by updating the model table TB2, so that an apparatus excellent in maintainability can be provided; a rough sketch of this indirection follows.
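  • The maintainability point can be illustrated with a toy example; the IDs and the dictionary representation below are hypothetical.

```python
# Minimal sketch of the indirection provided by the model table TB2 (IDs are hypothetical).

model_table_tb2 = {"scene-001": "model-A", "scene-002": "model-B"}
second_models_db2 = {"model-A": "trained model A", "model-B": "trained model B"}

# If scene-001 should now use a retrained model, only the table entry changes;
# the first trained model DB1 and the callers that look models up via TB2 are untouched.
second_models_db2["model-C"] = "trained model C"
model_table_tb2["scene-001"] = "model-C"
```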
  • In the present embodiment, the image acquisition unit 4, the image division unit 5, the scene estimation unit 6, the chunk estimation unit 7, the chunk output unit 8, the first trained model generation unit 9, the second trained model generation unit 10, and the recommendation image output unit 13 are implemented as programs; however, the present invention is not limited to this, and logic circuits may be used.
  • the tables TB1 to TB15 need not be mounted on one device, and may be distributed and mounted on a plurality of devices connected by a network.
  • The present invention is not limited to this, and the first trained model DB1 and the second trained model DB2 may be generated separately.
  • When the first trained model DB1 and the second trained model DB2 are generated separately, for example, when the scene is an existing one and only content is added, it is not necessary to perform learning for the scene.
  • the present invention is not limited to this, and only one second trained model DB2 may be used.
  • The case of displaying an image of a corresponding object that is presumed to be originally necessary has been described, but the present invention is not limited to this, and a part of a corresponding object that is presumed to be originally necessary may be displayed. Further, in the present embodiment, a corresponding object, or a part of a corresponding object, that is presumed to be originally unnecessary may also be suggested.
  • The information processing apparatus 1 of the present embodiment may determine points of excess or deficiency by comparing the tree structure associated by the image division unit with the hierarchical structure composed of the values output from the first trained model DB1 and the second trained model DB2 in the usage stage.
  • The recommendation image and the recommendation information assigned by the display unit 14 to the plurality of display areas of the object model may include at least one of: scene information indicating the contents of the scene estimated by the scene estimation unit 6; work information, linked to the scene information, related to the work performed by the target person; work check information, linked to the work information, indicating the work process of the work performed by the target person; chunk information linked to the work information; difference information of the work contents related to the work; model information showing the contents of model work by a work trainer; work information showing a work video of the work scene of the target person; or instruction information shown according to the difference in work between the model information and the work information.
  • the display unit 14 may output the object model in the vicinity of the corresponding object in the virtual display space of the target person. Further, the display unit 14 may output the object model by fixing the display position in the vicinity of the corresponding object in the virtual display space of the target person.
  • the object model specifying unit may specify the object model based on at least one of the skill information of the target person, the spatial information of the place where the work is performed, the characteristic information of the corresponding object, or the work level information for the work.
  • the object model specifying unit may specify one or more object models having two or more display areas.
  • the object model identification unit may display the object model in at least one of rotation display, enlarged display, reduced display, protrusion display, vibration display, state display, discoloration display, and shading display, based on the state of the work performed by the target person.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

[Problem] To provide an information processing apparatus, an information processing method, and an information processing system which enable, when needed by an operator, provision of necessary amount of information to the operator and a co-operator, without requiring large-scale information reconstruction. [Solution] The present invention provides an information processing apparatus which is for outputting work information relating to work to be performed by an operator, and which is provided with: an image acquisition unit that acquires original images which include subject persons including an operator and an operation-receiving person and include a plurality of operation items to be handled by the operator; an image division unit that divides the original images into a subject person image in which the subject persons are captured of and a plurality of operation item images in which the operation items are captured of; a scene deduction unit that deduces a scene by using a first trained model; a chunk deduction unit that deduces a chunk by using one of a plurality of second trained models; an output unit that outputs the chunk; a recommendation image output unit that searches for a recommending operation item image and outputs a recommendation image; and a display unit that allocates the outputted chunk and the outputted recommendation image to an object model display area and displays the same. The chunk deduction unit selects one of the plurality of the second trained models by using a model ID associated with a scene ID on a one-by-one basis. A meta ID for a chunk uniquely indicates a meta value for the chunk, which is information relating to the property of an operation item.

Description

情報処理装置、情報処理方法及び情報処理システムInformation processing equipment, information processing methods and information processing systems
 本発明は、情報処理装置、情報処理方法及び情報処理システムに関する。 The present invention relates to an information processing apparatus, an information processing method and an information processing system.
 例えば特許文献1の作業支援システムでは、作業の手順、内容、留意点又はその他の事項を記述したマニュアルに基づいて、作業の対象又は作業の状況の判定条件を記述したルールを生成し、作業者が装着した機器からのセンサ情報に基づいて作業の対象及び作業の状況を認識し、生成したルール及び認識手段の認識結果に基づいて作業支援情報を出力する。 For example, in the work support system of Patent Document 1, a rule describing the judgment conditions of the work target or the work situation is generated based on the manual describing the work procedure, contents, points to be noted or other matters, and the worker. Recognizes the work target and work status based on the sensor information from the device worn by the user, and outputs work support information based on the recognition result of the generated rule and recognition means.
特開2019-109844号公報Japanese Unexamined Patent Publication No. 2019-109844
 しかしながら特許文献1に記載されたような従来の手法においては、マニュアルなどの文書として蓄積されている情報は、文書単位の検索しかできない。例えば段落単位での検索を文書に対して行う場合には、文書を構造化された情報に再構築する必要がある。検索対象となる全ての文書の再構築は費用対効果を考慮すると現実的ではないことが多く、また文書単位での情報では不要な情報を多分に閲覧してしまい、文書の閲覧者が迅速な対応ができないことがあるという課題がある。 However, in the conventional method as described in Patent Document 1, the information stored as a document such as a manual can only be searched for each document. For example, if you want to search a document paragraph by paragraph, you need to reconstruct the document into structured information. Reconstruction of all documents to be searched is often not realistic in terms of cost-effectiveness, and information on a document-by-document basis often browses unnecessary information, so that the viewer of the document can quickly browse. There is a problem that it may not be possible to deal with it.
 本発明の実施の形態の一態様は、大規模な情報の再構築をせずに、対応者が必要な際に、必要な量の情報を、対応者及び共同作業者に提示する、情報処理装置、情報処理方法及び情報処理システムを提供することを目的とする。 One aspect of the embodiment of the present invention is information processing that presents the required amount of information to the responder and the collaborator when the responder needs it, without reconstructing the information on a large scale. It is an object of the present invention to provide an apparatus, an information processing method and an information processing system.
 対応者の行う作業に関する情報である作業情報を出力する情報処理装置であって、対応者、及び前記対応者が対応する被対応者の少なくとも何れかを含む対象者と前記対応者が対応する複数の被対応物とを含む画像である元画像を取得する画像取得部と、元画像を分割し対象者が撮像された対象者画像とそれぞれの被対応物が撮像された複数の被対応物画像とに分割する画像分割部と、対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、シーンを推定するシーン推定部と、複数の被対応物画像と、作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、チャンクを推定するチャンク推定部と、チャンクを出力するチャンク出力部と、複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨被対応物画像を検索し、元画像には撮像されていないが本来は必要であると推測される被対応物の画像であるレコメンド画像を出力するレコメンド画像出力部と、チャンク出力部により出力されるチャンク及びレコメンド画像出力部により出力されるレコメンド画像の各々を、複数の表示エリアを備えるオブジェクトモデルの各面に割り当てて表示する表示部と、を備え、チャンク推定部は、複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDを用いて選定し、チャンク用メタIDは被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理装置を提供する。 An information processing device that outputs work information that is information about work performed by a responder, and is a plurality of objects corresponding to the responder and a target person including at least one of the respondents to which the responder corresponds. An image acquisition unit that acquires an original image that is an image including the corresponding object, a target image obtained by dividing the original image and captured by the target person, and a plurality of corresponding object images obtained by capturing each corresponding object. Using the first trained model in which the relationship between the image division unit that divides into and the image of the target person, the scene ID that uniquely indicates the scene that the correspondent performs, and the scene ID are stored. , A scene estimation unit that estimates a scene, a plurality of object images, a chunk ID that uniquely indicates a chunk that is information that divides or suggests work information, and one or a plurality of chunks that are associated with one-to-one. A chunk estimator that estimates chunks and a chunk output unit that outputs chunks using one of a plurality of second trained models in which the association between the meta IDs is stored. , One of a plurality of second trained models is recommended as a search key using a combination of a model ID and one or a plurality of chunk meta IDs associated with a scene ID in a one-to-one manner. A recommendation image output unit that searches for an image and outputs a recommended image that is an image of a corresponding object that is not captured in the original image but is presumed to be necessary, and chunks and chunks output by the chunk output unit. Each of the recommended images output by the recommended image output unit is provided with a display unit for allocating and displaying each of the recommended images to each surface of the object model having a plurality of display areas, and the chunk estimation unit is a plurality of second trained models. One of them is selected using the scene ID and the model ID associated with one-to-one, and the chunk meta ID uniquely indicates the chunk meta value which is information on the property of the corresponding object. Provides a processing device.
 対応者の行う作業に関する情報である作業情報を出力する情報処理装置が行う情報処理方法であって、対応者、及び前記対応者が被対応者の少なくとも何れかを含む対象者と複数の被対応物とを含む画像である元画像を取得する第1のステップと、元画像を分割し対象者が撮像された対象者画像とそれぞれの被対応物が撮像された複数の被対応物画像とに分割する第2のステップと、対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、シーンを推定する第3のステップと、複数の被対応物画像と、作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、チャンクを推定する第4のステップと、チャンクを出力する第5のステップと、複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨被対応物画像を検索し、元画像には撮像されていないが本来は必要であると推測される前記被対応物の画像であるレコメンド画像を出力する第6のステップと、チャンク及び前記レコメンド画像の各々を、複数の表示エリアを備えるオブジェクトモデルの各面に割り当てて表示する第7のステップと、を備え、複数の第2の学習済みモデルのうちの1つはシーンIDと1対1に対応付けられたモデルIDを用いて選定され、チャンク用メタIDは被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理方法を提供する。 It is an information processing method performed by an information processing device that outputs work information that is information about the work performed by the responder, and the responder and the respondent include at least one of the respondents and a plurality of respondents. The first step of acquiring the original image which is an image including an object, the subject image obtained by dividing the original image and the subject imaged, and the plurality of object images captured by each corresponding object. Using the first trained model, which stores the association between the second step of division, the subject image, and the scene ID that uniquely indicates the scene that the correspondent is doing, A third step of estimating the scene, a plurality of object images, a chunk ID uniquely indicating a chunk that is information that divides or suggests work information, and one or more chunks associated with one-to-one. A fourth step of estimating chunks and a fifth of outputting chunks using one of a plurality of second trained models in which the association between the meta IDs is stored. A step and one of a plurality of second trained models are recommended as a search key using a combination of a model ID and one or a plurality of chunk meta IDs associated with a scene ID in a one-to-one manner. The sixth step of searching the corresponding object image and outputting the recommended image which is the image of the corresponding object which is not captured in the original image but is presumed to be necessary, and the chunk and the recommended image. Each is provided with a seventh step of assigning and displaying each side of an object model having a plurality of display areas, and one of the plurality of second trained models has a one-to-one correspondence with the scene ID. The chunk meta ID, which is selected using the attached model ID, provides an information processing method that uniquely indicates the chunk meta value which is information about the property of the corresponding object.
 対応者の行う作業に関する情報である作業情報を出力する情報処理システムであって、前記対応者、及び前記対応者が対応する被対応者の少なくとも何れかを含む対象者、及び前記対象者を識別する対象者識別情報の少なくとも何れかを含む対象者画像と、前記対応者が対応する複数の被対応物とを含む画像である元画像を取得する画像取得手段と、前記元画像を前記対象者画像とそれぞれの前記被対応物が撮像された複数の被対応物画像とに分割する画像分割手段と、前記対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、前記シーンを推定するシーン推定手段と、前記複数の前記被対応物画像と、前記作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、前記チャンクを推定するチャンク推定手段と、前記チャンクを出力するチャンク出力手段と、前記複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨される推奨被対応物画像及び共有される共有被対応物情報を検索し、前記元画像には撮像されていないが本来は必要であると推測される前記被対応物の画像、前記対象者の前記作業において情報共有が必要であると推測される前記被対応物及び前記対象者の作業に関する情報を含むレコメンド画像を出力するレコメンド画像出力手段と、前記チャンク出力手段により出力される前記チャンク及び前記レコメンド画像出力手段により出力される前記レコメンド画像を、複数の表示エリアを備えるオブジェクトモデルの前記表示エリアに割り当てて表示する表示手段と、を備え、前記チャンク推定手段は、前記モデルIDを用いて選定し、前記チャンク用メタIDは前記被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理システムを提供する。 An information processing system that outputs work information that is information about work performed by a responder, and identifies the responder, a target person including at least one of the respondents to which the responder corresponds, and the target person. An image acquisition means for acquiring an original image which is an image including at least one of the target person identification information and a plurality of corresponding objects to which the corresponding person corresponds, and the target person using the original image. An image dividing means for dividing an image into a plurality of images of the corresponding objects captured by each of the corresponding objects, an image of the target person, and a scene ID uniquely indicating a scene in which the corresponding object is performed. Using the first trained model in which the relationships between the two are stored, the scene estimation means for estimating the scene, the plurality of objects to be imaged, and the information obtained by dividing or suggesting the work information. One of a plurality of second trained models in which the association between a chunk ID uniquely indicating a certain chunk and one or more chunk meta IDs associated with one-to-one is stored. One of the chunk estimation means for estimating the chunk, the chunk output means for outputting the chunk, and the plurality of second trained models has a one-to-one correspondence with the scene ID. Using the attached model ID and a combination of one or more chunk meta IDs as a search key, the recommended recommended object image and the shared shared object information are searched, and the original image is captured. Includes images of the subject that are not, but are presumed to be necessary, the subject and information about the subject's work that is presumed to require information sharing in the subject's work. The recommendation image output means for outputting the recommendation image, the chunk output by the chunk output means, and the recommendation image output by the recommendation image output means are assigned to the display area of the object model having a plurality of display areas. The chunk estimation means is selected by using the model ID, and the chunk meta ID uniquely indicates a chunk meta value which is information about the property of the corresponding object. Provides an information processing system.
 本発明の実施の形態の一態様によれば大規模な情報の再構築をせずに、対応者が必要な際に、対応者が必要な量の情報を、対応者に提示する情報処理装置、情報処理方法及び情報処理システムを実現できる。 According to one aspect of the embodiment of the present invention, an information processing device that presents the required amount of information to the responder when the responder needs it, without reconstructing the information on a large scale. , Information processing method and information processing system can be realized.
図1は、本実施の形態による利用段階における情報処理装置の構成を示すブロック図である。FIG. 1 is a block diagram showing a configuration of an information processing apparatus at a utilization stage according to the present embodiment. 図2は、本実施の形態による学習段階における情報処理装置の構成を示すブロック図である。FIG. 2 is a block diagram showing a configuration of an information processing apparatus in the learning stage according to the present embodiment. 図3は、本実施の形態による元画像、対象者画像及び複数の被対応物画像を示す図である。FIG. 3 is a diagram showing an original image, a subject image, and a plurality of objects to be imaged according to the present embodiment. 図4は、本実施の形態による対象者画像及び複数の被対応物画像の関係である木構造を示す図である。FIG. 4 is a diagram showing a tree structure which is a relationship between a subject image and a plurality of objects to be imaged according to the present embodiment. 図5は、本実施の形態による第1の学習済みモデル及び第2の学習済みモデルを示す図である。FIG. 5 is a diagram showing a first trained model and a second trained model according to the present embodiment. 図6は、本実施の形態による補助記憶装置に記憶されている情報を示す図である。FIG. 6 is a diagram showing information stored in the auxiliary storage device according to the present embodiment. 図7は、本実施の形態によるシーン推定機能、チャンク推定機能及びチャンク出力機能の説明に供するシーケンス図である。FIG. 7 is a sequence diagram for explaining the scene estimation function, chunk estimation function, and chunk output function according to the present embodiment. 図8は、本実施の形態による第1の学習済みモデル生成機能及び第2の学習済みモデル生成機能の説明に供するシーケンス図である。FIG. 8 is a sequence diagram provided for explaining the first trained model generation function and the second trained model generation function according to the present embodiment. 図9は、本実施の形態による利用段階における情報処理の処理手順を示すフローチャートである。FIG. 9 is a flowchart showing a processing procedure of information processing in the usage stage according to the present embodiment. 図10は、本実施の形態による表示段階における情報処理の処理手順を示すフローチャートである。FIG. 10 is a flowchart showing a processing procedure of information processing in the display stage according to the present embodiment. 図11は、本実施の形態による学習段階における情報処理の処理手順を示すフローチャートである。FIG. 11 is a flowchart showing a processing procedure of information processing in the learning stage according to the present embodiment. 図12(a)は、第2実施形態における情報処理システムの動作の一例を示す模式図であり、図12(b)は、第2実施形態による元画像、対象者画像、複数の被対応物画像を示す図である。FIG. 12A is a schematic diagram showing an example of the operation of the information processing system according to the second embodiment, and FIG. 12B is an original image, a subject image, and a plurality of objects to be supported according to the second embodiment. It is a figure which shows the image. 図13は、本実施の形態による補助記憶装置に記憶されている情報を示す図である。FIG. 13 is a diagram showing information stored in the auxiliary storage device according to the present embodiment. 図14は、本実施の形態による補助記憶装置に記憶されている情報を示す図である。FIG. 14 is a diagram showing information stored in the auxiliary storage device according to the present embodiment. 図15は、本実施の形態による補助記憶装置に記憶されている情報を示す図である。FIG. 15 is a diagram showing information stored in the auxiliary storage device according to the present embodiment. 図16は、本実施の形態による補助記憶装置に記憶されている情報を示す図である。FIG. 16 is a diagram showing information stored in the auxiliary storage device according to the present embodiment. 図17(a)~(h)は、第2実施形態による対象者における表示パターンを示す図である。17 (a) to 17 (h) are diagrams showing a display pattern in a subject according to the second embodiment. 図18(a)、(b)は、第2実施形態によるユーザ端末の表示の一例を示す模式図である。18 (a) and 18 (b) are schematic views showing an example of a display of a user terminal according to the second embodiment.
 以下図面を用いて、本発明の実施の形態の一態様を詳述する。例えば、大学の窓口や薬局の窓口において、学生及び学生の保護者などや患者などである対象者と、対象者の対応を行う作業者である対応者など立場や役割が異なる複数の者を含めて対象者とし、対応者が参照する被対応物の情報について説明する。被対応物とは、例えば大学の窓口の場合は書類であって、薬局の窓口の場合は薬とする。 Hereinafter, one aspect of the embodiment of the present invention will be described in detail with reference to the drawings. For example, at a university window or a pharmacy window, including a target person who is a student, a guardian of a student, a patient, etc., and a person who has a different position or role, such as a responder who is a worker who handles the target person. The information on the object to be referred to by the responder will be explained. The corresponding object is, for example, a document in the case of a university window and a drug in the case of a pharmacy window.
 また、例えば大学の窓口や薬局の窓口の他に、工場など複数の作業員などが協働して1又は複数の作業を遂行する作業場所においては、遂行する作業を主体として作業員を対応者、作業に対する他の作業者や作業を受ける顧客などを対象者としてもよく、それらの立場や役割が異なる者、現場で作業を共有する複数の者を含めて対象者とする。この場合、被対象物とは、例えば作業現場や場所などに設置される機器や製品、部品となる。 In addition to, for example, a university window or a pharmacy window, in a work place where multiple workers such as factories collaborate to perform one or more tasks, the workers are mainly the workers to perform the tasks. , Other workers for the work, customers who receive the work, etc. may be the target people, including those who have different positions and roles, and multiple people who share the work at the site. In this case, the object is, for example, a device, a product, or a part installed at a work site or a place.
(本実施の形態:第1実施形態)
 本実施形態における情報処理システム100は、例えば図1に示すように、情報処理装置1を有する。まず図1を用いて利用段階における情報処理装置1について説明する。図1は、本実施の形態による利用段階における情報処理装置1の構成を示すブロック図である。情報処理装置1は、中央演算装置2、主記憶装置3及び補助記憶装置11を備える。
(The present embodiment: the first embodiment)
The information processing system 100 in this embodiment has an information processing device 1 as shown in FIG. 1, for example. First, the information processing apparatus 1 in the usage stage will be described with reference to FIG. FIG. 1 is a block diagram showing a configuration of an information processing apparatus 1 in a utilization stage according to the present embodiment. The information processing device 1 includes a central processing unit 2, a main storage device 3, and an auxiliary storage device 11.
 中央演算装置2は、例えばCPU(Central Processing Unit)であって、主記憶装置3に記憶されたプログラムを呼び出すことで処理を実行する。主記憶装置3は、例えばRAM(Random Access Memory)であって、後述の画像取得部4、画像分割部5、シーン推定部6、チャンク推定部7、チャンク出力部8、第1の学習済みモデル生成部9、第2の学習済みモデル生成部10、レコメンド画像出力部13及びオブジェクトモデル特定部15といったプログラムを記憶する。 The central processing unit 2 is, for example, a CPU (Central Processing Unit), and executes processing by calling a program stored in the main storage device 3. The main storage device 3 is, for example, a RAM (RandomAccessMemory), which is an image acquisition unit 4, an image division unit 5, a scene estimation unit 6, a chunk estimation unit 7, a chunk output unit 8, and a first trained model, which will be described later. A program such as a generation unit 9, a second learned model generation unit 10, a recommendation image output unit 13, and an object model identification unit 15 is stored.
 なお画像取得部4、画像分割部5、シーン推定部6、チャンク推定部7、チャンク出力部8及びレコメンド画像出力部13を含むプログラムを制御部15と呼んでもよく、第1の学習済みモデル生成部9及び第2の学習済みモデル生成部10を含むプログラムを学習済みモデル生成部16と呼んでもよい。 A program including an image acquisition unit 4, an image division unit 5, a scene estimation unit 6, a chunk estimation unit 7, a chunk output unit 8, and a recommendation image output unit 13 may be called a control unit 15, and the first trained model generation may be performed. A program including the unit 9 and the second trained model generation unit 10 may be referred to as a trained model generation unit 16.
 補助記憶装置11は、例えばSSD(Solid State Drive)やHDD(Hard Disk Drive)であって、後述の第1の学習済みモデルDB1や第1の学習モデルDB1’や第2の学習済みモデルDB2や第2の学習モデルDB2’といったデータベースやシーンテーブルTB1やモデルテーブルTB2やコンテンツテーブルTB3やシーン・コンテンツテーブルTB4やコンテンツ・チャンクテーブルTB5やチャンク・メタテーブルTB6やチャンクテーブルTB7やチャンク用メタテーブルTB8・レコメンドテーブルTB9・オブジェクトテーブルTB10・オブジェクト割当テーブルTB11・アノテーションテーブルTB12・注目テーブルTB13・カメラテーブルTB14・ロールテーブルTB15といったテーブルを記憶する。 The auxiliary storage device 11 is, for example, an SSD (Solid State Drive) or an HDD (Hard Disk Drive), such as a first trained model DB 1, a first learning model DB 1', or a second trained model DB 2, which will be described later. Databases such as the second learning model DB2', scene table TB1, model table TB2, content table TB3, scene content table TB4, content chunk table TB5, chunk metatable TB6, chunk table TB7, chunk metatable TB8, etc. Stores tables such as the recommendation table TB9, the object table TB10, the object allocation table TB11, the annotation table TB12, the attention table TB13, the camera table TB14, and the roll table TB15.
 図1に示すように対応者の行う作業に関する情報である作業情報を出力する情報処理装置1は、利用段階において、画像取得部4と、画像分割部5と、シーン推定部6と、チャンク推定部7と、作業情報を分割又は示唆した情報であるチャンクを出力するチャンク出力部8と、を備える。ここで作業情報をコンテンツと呼んでもよく、コンテンツIDは作業情報を一意に示すものとする。 As shown in FIG. 1, the information processing apparatus 1 that outputs work information that is information about the work performed by the corresponding person has an image acquisition unit 4, an image division unit 5, a scene estimation unit 6, and chunk estimation at the usage stage. A unit 7 and a chunk output unit 8 that outputs chunks that are information that divides or suggests work information are provided. Here, the work information may be referred to as content, and the content ID uniquely indicates the work information.
 画像取得部4は、対応者が対応する被対応者21(図3)と対応者が対応する複数の被対応物22~25(図3)とを含む画像である元画像20(図3)を、カメラを備えたパーソナルコンピュータなどのユーザ端末12から取得する。画像分割部5は、元画像20を分割し前記被対応者21が撮像された被対応者画像30と対象者(対応者または被対応者など)を識別する対象者識別情報61とそれぞれの被対応物22~25が撮像された複数の被対応物画像40~43とに分割する。 The image acquisition unit 4 is an original image 20 (FIG. 3) which is an image including a corresponding person 21 (FIG. 3) corresponding to the corresponding person and a plurality of corresponding objects 22 to 25 (FIG. 3) corresponding to the corresponding person. Is acquired from a user terminal 12 such as a personal computer equipped with a camera. The image segmentation unit 5 divides the original image 20 into an image of the person to be imaged by the person to be dealt with 21, an object identification information 61 for identifying the object (such as a person to be correspondent or a person to be correspondent), and each subject. The corresponding objects 22 to 25 are divided into a plurality of imaged objects 40 to 43.
 シーン推定部6は、対応者が行う状況であるであるシーンを推定する。具体的にはシーン推定部6は、対象者画像として、例えば被対応者画像30(35)の他、例えば対応者画像、対象者識別情報61などを取得する。シーン推定部6は、この中から、例えば、被対応者画像30(35)とシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルDB1を使用して、シーンを推定する。シーン推定部6は、被対応者画像30(35)の他に、例えば対応者の画像(対象者画像)や、対応者や被対応者を識別する識別情報(対象者識別情報)とシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルDB1を使用して、シーンを推定するようにしてもよい。 The scene estimation unit 6 estimates the scene that is the situation performed by the responder. Specifically, the scene estimation unit 6 acquires, for example, the corresponding person image 30 (35), for example, the corresponding person image, the target person identification information 61, and the like as the target person image. The scene estimation unit 6 uses, for example, the first trained model DB1 in which the association between the corresponding person image 30 (35) and the scene ID uniquely indicating the scene is stored. And estimate the scene. In addition to the corresponding person image 30 (35), the scene estimation unit 6 uses, for example, an image of the corresponding person (target person image), identification information for identifying the corresponding person and the corresponding person (target person identification information), and a scene. The scene may be estimated using the first trained model DB1 in which the association between the uniquely shown scene ID and the scene ID is stored.
 シーン推定部6は、例えば対象者識別情報が被対応者識別情報である場合に、被対応者識別情報と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、シーンをさらに推定するようにしてもよい。 For example, when the target person identification information is the corresponding person identification information, the scene estimation unit 6 has a relationship between the corresponding person identification information and the scene ID uniquely indicating the scene in which the corresponding person performs the situation. The first trained model in which is stored may be used to further estimate the scene.
 シーン推定部6は、例えば被対応者が存在しない場合(対象者識別情報が対応者識別情報である場合)は、対応者識別情報と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、シーンをさらに推定するようにしてもよい。 For example, when the person to be corresponded does not exist (when the target person identification information is the person identification information), the scene estimation unit 6 uniquely indicates the person identification information and the scene which is the situation performed by the person. A first trained model in which the association between and is stored may be used to further estimate the scene.
 シーン推定部6は、シーンIDとシーンの名称であるシーン名とが1対1で紐づけられたテーブルであるシーンテーブルTB1から、シーンIDを検索キーとしてシーン名を取得してユーザ端末12に送信する。ユーザ端末12はシーン推定部6から受信したシーン名を対象者に提示する。シーン名の提示は、後述する表示部17により、例えば予め対象者によって割り当てられたオブジェクトモデルの1面に表示される。 The scene estimation unit 6 acquires a scene name using the scene ID as a search key from the scene table TB1 which is a table in which the scene ID and the scene name which is the name of the scene are linked one-to-one, and the user terminal 12 is used. Send. The user terminal 12 presents the scene name received from the scene estimation unit 6 to the target person. The presentation of the scene name is displayed, for example, on one side of the object model previously assigned by the target person by the display unit 17 described later.
 チャンク推定部7は、作業に関係する被対応物22(23~25)の画像である被対応物画像40~43を取得し、被対応物画像40(41~43)と、チャンクを一意に示すチャンクIDと1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルDB2のうちの1つを使用して、チャンクを推定する。 The chunk estimation unit 7 acquires the corresponding object images 40 to 43, which are images of the corresponding object 22 (23 to 25) related to the work, and uniquely identifies the chunk with the corresponding object images 40 (41 to 43). Using one of a plurality of second trained model DB2s in which the association between the indicated chunk ID and one or more chunk meta-IDs associated one-to-one is stored. , Estimate chunks.
 チャンク推定部7は、複数の被対応物画像と、チャンクIDと、複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、例えば対象者情報が被対応者識別情報である場合は、被対応者識別情報に紐づく前記チャンクをさらに推定するようにしてもよい。 The chunk estimation unit 7 selects one of a plurality of second trained models in which the association between the plurality of object images, the chunk ID, and the plurality of chunk meta IDs is stored. For example, when the target person information is the corresponding person identification information, the chunk associated with the corresponding person identification information may be further estimated.
 またチャンク推定部7は、複数の被対応物画像と、チャンクIDと、複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、例えば対象者情報が対応者識別情報である場合は、対応者識別情報に紐づく前記チャンクをさらに推定するようにしてもよい。 Further, the chunk estimation unit 7 is one of a plurality of second trained models in which the association between the plurality of object images, the chunk ID, and the plurality of chunk meta IDs is stored. For example, when the target person information is the corresponding person identification information, the chunk associated with the corresponding person identification information may be further estimated.
The chunk estimation unit 7 selects one of the plurality of second trained models DB2 using a model ID associated one-to-one with the scene ID. A chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the handled objects 22 to 25.
The chunk estimation unit 7 acquires the model ID from the model table TB2, a table in which model IDs and scene IDs are linked one-to-one, using the scene ID as a search key. The chunk estimation unit 7 also acquires the chunk ID from the chunk-meta table TB6, a table in which chunk IDs and chunk meta IDs are linked one-to-one or one-to-many, using the chunk meta ID as a search key.
The chunk estimation unit 7 also acquires, from the chunk table TB7, a chunk summary outlining the chunk, using the chunk ID as a search key, and transmits the chunk summary to the user terminal 12. The user terminal 12 presents the chunk summary received from the chunk estimation unit 7 to the target person. The chunk summary is displayed by the display unit 14 described later, for example on one face of an object model assigned in advance by the target person.
The chunk estimation unit 7 also acquires the chunk itself from the chunk table TB7 using the chunk ID as a search key, and transmits the chunk to the user terminal 12. The user terminal 12 presents the chunk received from the chunk estimation unit 7 to the target person. The chunk is displayed by the display unit 14 described later, for example on one face of an object model assigned in advance by the target person.
The chunk table TB7 is a table in which a chunk, a chunk summary, and a hash value are each linked one-to-one to a chunk ID. The hash value is used, for example, to check whether the chunk has been changed.
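The lookups described above can be pictured as a chain of simple key-value tables. The following is a minimal sketch, assuming in-memory Python dictionaries; the table contents and IDs are placeholders taken from the examples later in the text, not actual stored data.

```python
# Minimal sketch of the lookup chain used by the chunk estimation unit.
# All table contents below are illustrative placeholders.

model_table_tb2 = {"0FD": "MD1"}                # scene ID -> model ID (one-to-one)
chunk_meta_table_tb6 = {"24FD": ["82700-01"]}   # chunk meta ID -> chunk ID(s)
chunk_table_tb7 = {                              # chunk ID -> chunk, summary, hash
    "82700-01": {"chunk": "1B827-01.txt_0",
                 "summary": "Summary of the chunk...",
                 "hash": "564544d8f0b746e"},
}

def select_model(scene_id: str) -> str:
    """Select the second trained model via the scene ID (model table TB2)."""
    return model_table_tb2[scene_id]

def chunk_ids_for_meta(chunk_meta_id: str) -> list[str]:
    """Resolve a chunk meta ID to chunk IDs (chunk-meta table TB6)."""
    return chunk_meta_table_tb6[chunk_meta_id]

def chunk_and_summary(chunk_id: str) -> tuple[str, str]:
    """Fetch the chunk and its summary from the chunk table TB7."""
    row = chunk_table_tb7[chunk_id]
    return row["chunk"], row["summary"]

model_id = select_model("0FD")
for cid in chunk_ids_for_meta("24FD"):
    print(model_id, chunk_and_summary(cid))
```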
The image acquisition unit 4 acquires target person identification information that identifies the target person. The target person identification information identifies, for example, the responder and the respondent; besides a face image identifying them, it may be, for example, a barcode or two-dimensional code on an ID card such as a photo ID, and it is captured by a camera or the like. The information processing apparatus 1 identifies the captured target person identification information and confirms that the responder is an authorized target person (responder) for the work. The target person identification information may identify, for example, a plurality of target persons, and may further identify a remote sharer with whom information is shared.
The information stored in the auxiliary storage device 11 according to the present embodiment will now be described with reference to FIGS. 13 to 16.
FIG. 13 shows the object model table TB10. As information identifying an object model, the object model table TB10 stores, in association with one another, for example: an object model ID identifying the object model; operation information indicating the types of operation that can be performed on the object model; the basic size at which the object model is displayed; the additional number of object models 6 that can be displayed; display coordinates indicating the position where the object model is displayed; the estimated scene ID and chunk ID; an area ID identifying the affiliation, department, or place where the work is performed; and a role ID identifying the target persons (responder, respondent, and the like) and the responder's skills, attributes, roles, qualifications, and so on.
Next, FIG. 14 shows the object allocation table TB11. Linked to the object model ID, the object allocation table TB11 stores, in association with one another, for example: display region information on the number of display areas the object model has; the display area ID to which a recommended image or reference information to be displayed is linked; the scene ID; and the chunk ID.
Next, FIG. 15 shows the annotation table TB12 and the attention table TB13. The annotation table TB12 stores, in association with one another, for example: a video ID identifying a video related to the work; a camera ID identifying the camera that shot it; a scene ID identifying the work scene; the shooting time indicating when the video was shot; the shooting duration; image quality information; the image quality and viewpoint coordinates of the shot video; a meta ID; and an attention ID.
The attention table TB13 stores, in association with one another, for example: an attention ID identifying, among other things, the priority of chunks to be displayed; attention type information indicating how the attention information is displayed; a scene ID; attention information indicating the content; attention information data (the content itself); higher-level reference information; and a higher-level attention ID indicating whether such information exists.
Next, FIG. 16 shows the camera table TB14 and the role table TB15. The camera table TB14 stores, in association with one another, for example: a camera ID identifying the camera used for shooting; the area ID where the camera is installed; model information indicating the camera's specifications, operation, and role; line-of-sight information; switching information; external connection information; an operable-person ID; and a role ID. The role table TB15 stores, in association with one another, a role ID identifying a target person (responder, respondent, other responder, sharer, and the like), an employee ID, a name, a qualification ID, a department ID, an area ID, related role IDs, and so on.
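As one way to picture these storage tables, each record can be modeled as a simple typed structure. The following is a minimal sketch using Python dataclasses; the field names, types, and sample values are assumptions for illustration and are not the schema actually held in the auxiliary storage device 11.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModelRecord:
    """Illustrative record of the object model table TB10 (fields are assumptions)."""
    object_model_id: str
    operation_info: list[str]                  # e.g. rotate, enlarge, protrude
    basic_size: tuple[float, float, float]     # (W, D, H)
    additional_count: int                      # how many models can be shown at once
    display_coords: tuple[float, float, float]
    scene_id: str | None = None
    chunk_id: str | None = None
    area_id: str | None = None
    role_id: str | None = None

@dataclass
class RoleRecord:
    """Illustrative record of the role table TB15 (fields are assumptions)."""
    role_id: str
    employee_id: str
    name: str
    qualification_id: str | None = None
    department_id: str | None = None
    area_id: str | None = None
    related_role_ids: list[str] = field(default_factory=list)

cube = ObjectModelRecord("cube_02", ["rotate", "enlarge"], (0.3, 0.3, 0.3), 2,
                         (1.0, 0.5, 1.2), scene_id="0FD", chunk_id="82700-01")
print(cube)
```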
The information processing apparatus 1 may further include a recommendation image output unit 13, and the auxiliary storage device 11 may further include a recommendation table TB9. The recommendation image output unit 13 searches for a recommended handled-object image using the recommendation table TB9, with a combination of a model ID and one or more chunk meta IDs as a search key.
The recommendation image output unit 13 outputs the retrieved recommended handled-object image to the user terminal 12. A recommended handled-object image refers to an image of a handled object that is not captured in the original image 20 but is presumed to be needed. The recommendation table TB9 is a table in which a model ID, a combination of chunk meta IDs, and a recommended handled-object image are linked one-to-one-to-one.
The recommendation image output unit 13 may also, using as a search key the combination of the model ID associated one-to-one with the scene ID and the one or more chunk meta IDs for one of the plurality of second trained models, further search for a sharer with whom information is shared in the work and for the shared information to be shared with that sharer, and output recommendation information linked to the sharer and the shared information.
Furthermore, the recommendation image output unit may search for, as the sharer, a person identified in at least one of the following roles: a collaborator who works together with the target person (responder), a trainer who instructs the target person (responder), or an inspector who monitors the target person (responder), and may output recommendation information linked to the sharer and the shared information.
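A minimal sketch of this recommendation lookup follows, assuming the recommendation table TB9 can be keyed by the pair (model ID, chunk meta ID combination) and that the stored values are placeholder file names and sharer attributes; the key layout is an assumption made for illustration.

```python
# Illustrative recommendation table TB9: (model ID, chunk meta ID combination) -> entry.
# Keys and values are placeholders, not actual stored data.
recommend_table_tb9 = {
    ("MD1", ("24FD", "83D9")): {
        "recommended_image": "IMG001.jpg",    # image presumed necessary but not captured
        "shared_info": "IMG111.jpg",          # information to share (image, video, link, ...)
        "sharer": {"role": "inspector"},      # attributes of the party to share with
    },
}

def lookup_recommendation(model_id: str, chunk_meta_ids: list[str]):
    """Return the recommendation entry for a model ID and chunk meta ID combination."""
    key = (model_id, tuple(sorted(chunk_meta_ids)))
    return recommend_table_tb9.get(key)

entry = lookup_recommendation("MD1", ["83D9", "24FD"])
if entry is not None:
    print(entry["recommended_image"], entry["sharer"]["role"])
```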
The display unit 14 allocates each of the chunks output by the chunk output unit and the chunks and recommended images output by the recommendation image output unit 13 to the faces of an object model having a plurality of display areas, and displays them on the user terminal 12 via the allocated object model.
The display unit 14 further includes an object model identification unit 15. To identify the object model on which the recommended image and the recommendation information output by the recommendation image output unit 13 are displayed, the object model identification unit 15 links the scene and the chunk with an object model ID uniquely indicating an object model, and thereby identifies the object model.
The display unit 14 allocates the recommended image and the recommendation information output by the recommendation image output unit 13 to one of the plurality of display areas of the object model identified by the object model identification unit 15, in a state in which the responder and the sharer can share them.
Based on at least one of the scene estimated by the scene estimation unit 6 and the chunk estimated by the chunk estimation unit 7, the object model identification unit 15 links reference information including at least the scene or the chunk with the object model ID uniquely indicating the object model on which it is to be displayed, and thereby identifies the object model.
The object model identification unit 15 refers to the object model table TB10 acquired by, for example, the image acquisition unit 4 and stored in the auxiliary storage device 11, and identifies an object model 6 suitable for displaying reference information narrowed down to the responder when the responder needs it.
Specifically, the object model identification unit 15 refers to the object model table TB10 shown in FIG. 13 and, based on the estimated scene ID and chunk ID together with various information such as the work area and the responder's position, identifies an object model that can be displayed under those conditions.
Using, for example, the basic size, display coordinates, and area ID of the object model identified by the object model ID, the object model identification unit 15 identifies an object model capable of presenting to the responder, when the responder needs it, information narrowed down to the target person as well as useful information the responder has not noticed.
The object model identification unit 15 may identify the object model based on at least one of, for example: a qualification ID specifying the responder's skill information; display coordinates specifying spatial information about where the work is performed; the basic size specifying feature information of the object; or work level information of the work.
The object model identification unit 15 may refer to the object model table based on the estimated scene ID, chunk ID, and so on, and identify, from data such as the shape, basic size, and additional count associated with the object model ID, an object model having a shape with at least two display regions 8, or one on which a plurality of identical or different object models 6 can be displayed.
The object model identification unit 15 may refer to the object model table TB10 based on the estimated scene ID, chunk ID, and so on, and, using the operation information for the object model 6, identify an object model that can be displayed, depending on the state of the work, in at least one of the following forms: rotated display, enlarged display, reduced display, protruding display, vibrating display, state display, color-change display, or shading display.
The display position of the object model may be determined based on various information such as the positional relationship between the responder and the handled object, the responder's dominant hand, and the language used. The object model identification unit 15 may identify an object model having a display region capable of displaying the chunk based on, for example, the type of chunk estimated by the chunk estimation unit 7 and the number of characters in the chunk, and allocate the chunk to each display area of the object model.
If an object model to be displayed has been customized and registered in advance by the responder, the object model identification unit 15 may give priority to displaying the customized object model.
The object model may be rotated or moved, for example via the user terminal 12 worn by the responder, by grabbing any of its left, right, top, or bottom edges.
When the allocated information must be presented reliably, such as emergency information, the object model identification unit 15 may display the identified object model so that it protrudes toward the responder.
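The selection logic described above can be pictured as filtering candidate object models against the estimated scene, the required display areas, and the display conditions. The following is a minimal sketch under assumed conditions and field names; none of the candidate records or thresholds are mandated by the embodiment.

```python
# Minimal sketch: pick an object model whose conditions match the estimated scene/chunk.
# Candidate records and condition fields are assumptions for illustration only.

candidates = [
    {"object_model_id": "cube_02", "scene_id": "0FD", "faces": 6,
     "basic_size": (0.3, 0.3, 0.3), "operations": {"rotate", "enlarge", "protrude"}},
    {"object_model_id": "panel_01", "scene_id": "0FE", "faces": 1,
     "basic_size": (0.5, 0.01, 0.3), "operations": {"enlarge"}},
]

def pick_object_model(scene_id: str, required_faces: int, must_protrude: bool):
    """Return the first candidate matching the scene and display conditions."""
    for rec in candidates:
        if rec["scene_id"] != scene_id:
            continue
        if rec["faces"] < required_faces:
            continue
        if must_protrude and "protrude" not in rec["operations"]:
            continue  # e.g. emergency information must be pushed toward the responder
        return rec
    return None

print(pick_object_model("0FD", required_faces=2, must_protrude=True))
```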
Based on at least one of the scene estimated by the scene estimation unit 6 and the chunk estimated by the chunk estimation unit 7, reference information including at least the scene or the chunk is allocated to the display areas of the object model identified by the object model identification unit 15.
Specifically, the object model identification unit 15 refers to the object allocation table TB11 shown in FIG. 14 and performs the allocation to the display areas of the identified object model based on various information such as the estimated scene ID, the displayable information indicated by the chunk ID, the work area, the responder's position, and the data volume of the chunk.
When allocating estimated chunks, the object model identification unit 15 may, in addition to the object allocation table, refer to the attention table TB13 shown in FIG. 15 and allocate chunks linked to the scene ID. In this case, it refers, for example, to the attention ID in the attention table TB13 and determines whether attention information set in association with the handled-object image exists.
Based on the attention IDs in the annotation table TB12 and the attention table TB13 shown in FIG. 15, the object model identification unit 15 allocates reference information (attention information, attention information data, and the like) so that it is displayed with priority over the estimated chunk, or displayed alongside the chunk. In this way, when the responder needs it, information that deserves more attention than, or ranks above, the information narrowed down to the responder and the useful information the responder has not noticed can be presented to the target person as an accompanying or prioritized object model display, which improves the effectiveness of the information provision and the usefulness of the information.
The object model identification unit 15 may refer to the annotation table TB12 based on the estimated scene ID, chunk ID, and so on, and allocate all or part of the videos and data linked to each attention ID based on the time, length, and viewpoint at which those videos were shot.
In addition to the original image, the handled-object images, and the target person identification information acquired by the image acquisition unit 4, the display in the display areas of the object model may be allocated based on various other information, such as the position of the room or place where the responder works, environmental information such as ambient temperature and humidity, the target person's work and movements, and the target person's biometric information.
The object model identification unit 15 sets the object model, each of its display areas, and the recommended images assigned to those display areas so that they are displayed to the responder superimposed on the virtual reality shown on the transmissive display of the user terminal 12.
The object model identification unit 15 may acquire evaluation target information that includes at least position information, which is information on where the responder is, and work-related information related to the work. The work-related information includes, for example, information about the surroundings of the handled object and the dominant arm with which the responder (target person) works; based on this, indications such as "the responder's working position is to the right of the handled object", "the distance between the responder and the handled object is three meters", "there are no objects placed around the handled object", and "the responder's dominant arm is the right" may be displayed.
If the object model identification unit 15 determines, for example, that "the space to the left of the handled object is free", "the responder is right-handed", and "the estimated chunk volume is two screens", it sets display information in the display areas such as "cube_02 (cube)" displayable in the space to the left of the handled object, "allocate to six regions", "object model size (W:D:H)", "display coordinates to the left of the handled object (XX:YY:ZZ)", and "chunk (1B827-01.txt_0)".
When the object model identified based on, for example, the scene (the work place), the positional relationship between the responder and the handled object, the data or information volume of the chunk serving as reference information, and the size of the recommended image is a cube and there are several of them, the plurality of cubes may be displayed positioned one above the other or side by side.
In addition to the estimated chunk, the object model identification unit 15 may assign and display on the top face of the object model, for example, the state of the work as a "face mark" expressing it through facial emotions. The face mark may indicate, for example, "normal (smiling face)", "abnormal (sad face)", or "notification present (speech mark)".
Next, the information processing apparatus 1 in the learning stage will be described with reference to FIG. 2. In the learning stage, for example, a respondent image 30 (35) input from an input device (not shown) and one or more handled-object images 40 to 43 are learned as one set. Here, learning refers, for example, to supervised learning. In the following description, the respondent image 30 (35) is used as an example, but besides the respondent image 30 (35), a responder image, responder identification information, respondent information, or the like may also be used; "target person" refers to both the responder and the respondent, and "target person identification information 61" includes both responder identification information and respondent information.
FIG. 2 is a block diagram showing the configuration of the information processing apparatus 1 in the learning stage according to the present embodiment. In the learning stage, the information processing apparatus 1 includes a first trained model generation unit 9 and a second trained model generation unit 10.
The first trained model generation unit 9 is a program that generates the first trained model DB1 by having the first learning model DB1' learn pairs of a scene ID and a respondent image 30 (35).
The first trained model generation unit 9 acquires the scene ID for the respondent image 30 (35) from the scene table TB1, and acquires the model ID corresponding to that scene ID from the model table TB2.
The second trained model generation unit 10 is a program that generates the second trained model DB2 by designating a model ID and having the second learning model DB2' learn pairs of one or more chunk meta IDs and a handled-object image 40 (41 to 43).
The second trained model generation unit 10 acquires the content ID from the scene-content table TB4, a table in which scene IDs and content IDs are linked one-to-many, using the scene ID as a search key. The scene ID used here as a search key is the one linked to the respondent image 30 (35) paired with the handled-object images 40 (41 to 43) being processed.
The second trained model generation unit 10 acquires the content from the content table TB3, a table in which content IDs and content are linked one-to-one, using the content ID as a search key.
The second trained model generation unit 10 acquires the chunk ID from the content-chunk table TB5, a table in which content IDs and chunk IDs are linked one-to-one or one-to-many, using the content ID as a search key.
The second trained model generation unit 10 acquires the chunk from the chunk table TB7 using the chunk ID as a search key, and acquires the chunk meta ID from the chunk-meta table TB6 using the chunk ID as a search key.
The second trained model generation unit 10 acquires the chunk meta value from the chunk meta table TB8 using the chunk meta ID as a search key. The chunk meta table TB8 is a table in which a chunk category ID, a chunk category name, and a chunk meta value are each linked one-to-one to a chunk meta ID.
The chunk category ID uniquely indicates the chunk category name, which is the name of the category to which the chunk meta value belongs. The second trained model generation unit 10 refers to the handled-object images 40 (41 to 43) and confirms that there is no problem with the acquired chunk, content, and chunk meta values.
By judging problematic values to be abnormal and excluding them from supervised learning, the second trained model generation unit 10 can generate a highly accurate trained model DB2, and in the usage stage the information processing apparatus 1 can therefore perform highly accurate processing.
Next, with reference to FIG. 3, the original image 20 acquired by the user terminal 12 and processed as information by the information processing apparatus 1, and the respondent image 30, target person identification image 44, and handled-object images 40 to 43 generated by dividing the original image 20, will be described. FIG. 3 is a diagram showing the original image 20, the respondent image 30, the target person identification image 44, and the plurality of handled-object images 40 to 43 according to the present embodiment.
The original image 20, the respondent image 30, the target person identification image 44, and the plurality of handled-object images 40 to 43 are displayed, for example, on the user terminal 12. Although FIG. 3 shows an example in which they are displayed at the same time, they may be displayed separately on the user terminal 12.
The original image 20 captures the respondent 21, the target person identification image 44, and the handled objects 22 to 25. The size and the like of the handled objects 22 to 25 are estimated based on information that does not change from scene to scene within the booth, such as a desk. From the handled objects 22 to 25, as with the handled object 24, content information such as an attached photograph 26, internal text 27, and a signature 28 may be obtained, as well as, for example, code information such as barcodes or two-dimensional codes and various coupons. The target person identification image 44 may include, for example, target person identification information 61, a face photograph 61a of the target person (for example, the responder or the respondent), the target person's name 61b, and a barcode 61c identifying the target person.
Code information such as barcodes and two-dimensional codes and various coupons may be printed on paper media in advance, or may be displayed, for example, on the screen of the user terminal 12 of the responder or the respondent 21.
Next, the relationship between the respondent 21 and the plurality of handled objects 22 to 25 will be described with reference to FIG. 4. FIG. 4 is a diagram showing a tree structure representing the relationship between the respondent 21 and the plurality of handled objects 22 to 25 according to the present embodiment. Instead of the respondent 21, the root may also be, for example, the responder or target person identification information (for example, responder identification information or respondent identification information). In the following, the respondent 21 is used as the example of the target person.
As shown in FIG. 4, the image division unit 5 associates the respondent 21 with the plurality of handled objects 22 to 25 as a tree structure in which the respondent 21 is the root node and the plurality of handled objects 22 to 25 are leaf nodes or internal nodes.
The image division unit 5 may further acquire information contained in at least one of the handled objects 22 to 25, such as the attached photograph 26, the text 27, and the signature 28, as well as, for example, code information such as barcodes or two-dimensional codes and various coupons, and associate them with the tree structure as leaf nodes.
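A minimal sketch of this tree structure follows, assuming a simple node class; the node labels mirror the reference numerals in FIG. 4 but are otherwise placeholders.

```python
class Node:
    """Node of the association tree built by the image division unit."""
    def __init__(self, label: str):
        self.label = label
        self.children: list["Node"] = []

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

# Root node: respondent 21. Internal/leaf nodes: handled objects 22 to 25.
root = Node("respondent 21")
for label in ("handled object 22", "handled object 23", "handled object 25"):
    root.add(Node(label))
obj24 = root.add(Node("handled object 24"))
# Information contained in a handled object becomes further leaf nodes.
for label in ("attached photo 26", "text 27", "signature 28"):
    obj24.add(Node(label))

def dump(node: Node, depth: int = 0) -> None:
    print("  " * depth + node.label)
    for child in node.children:
        dump(child, depth + 1)

dump(root)
```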
Next, the first trained model DB1 and the second trained model DB2 will be described with reference to FIG. 5. FIG. 5 shows the first trained model DB1 and the second trained model DB2 according to the present embodiment.
The first trained model DB1 stores the associations between a plurality of respondent images 30 (35) and a plurality of scene IDs, generated by machine learning that uses many pairs of a respondent image 30 (35) and a scene ID as training data. Here, the machine learning is, for example, a convolutional neural network (CNN).
Specifically, the associations between the respondent image 30 (35) and the scene IDs can be represented by a convolutional neural network expressed by the nodes shown as circles in FIG. 5, the edges shown as arrows, and the weighting coefficients set on the edges. As shown in FIG. 5, the respondent image 30 (35) is input to the first trained model DB1 pixel by pixel, for example as pixels p1, p2, and so on.
There are a plurality of second trained models DB2, each linked one-to-one with a model ID. Each second trained model DB2 stores the associations between a plurality of handled-object images 40 (41 to 43) and a plurality of sets of one or more chunk meta IDs, generated by machine learning that uses many pairs of a handled-object image 40 (41 to 43) and one or more chunk meta IDs as training data. Here again the machine learning is, for example, a convolutional neural network (CNN).
Specifically, these associations can likewise be represented by a convolutional neural network expressed by the nodes shown as circles in FIG. 5, the edges shown as arrows, and the weighting coefficients set on the edges. As shown in FIG. 5, the handled-object image 40 (41 to 43) is input to the second trained model DB2 pixel by pixel, for example as pixels p1, p2, and so on.
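As one concrete way to realize such a trained model, a small image classifier can map an input image to scores over scene IDs (or chunk meta IDs). The following is a minimal sketch assuming PyTorch, a 64x64 single-channel input, and an arbitrary number of classes; the embodiment only says the model is a CNN, so every detail of this network is an assumption.

```python
import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    """Tiny CNN sketch: image pixels in, one score per scene ID (or chunk meta ID) out."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, 32, 16, 16) for a 64x64 input
        x = torch.flatten(x, 1)
        return self.classifier(x)     # scores; the strongest classes become the ID list

model = SceneCNN(num_classes=10)
dummy_image = torch.rand(1, 1, 64, 64)   # stands in for a respondent image 30
scores = model(dummy_image)
print(scores.topk(3).indices)            # e.g. candidate IDs for the output list
```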
Next, with reference to FIG. 6, the information stored in the auxiliary storage device 11, namely the scene table TB1, model table TB2, content table TB3, scene-content table TB4, content-chunk table TB5, chunk-meta table TB6, chunk table TB7, chunk meta table TB8, and recommendation table TB9, will be described. FIG. 6 is a diagram showing the information stored in the auxiliary storage device 11 according to the present embodiment.
A scene ID stored in the scene table TB1 and elsewhere is, for example, a three-digit hexadecimal number such as 0FD. A scene name stored in the scene table TB1 and elsewhere is, for example, "grade inquiry" or "career counseling".
A model ID stored in the model table TB2 and elsewhere is represented by two letters and a one-digit decimal number, for example MD1. A content ID stored in the content table TB3 and elsewhere is represented by a five-digit hexadecimal number and a two-digit decimal number, for example 1B827-01. The content stored in the content table TB3 and elsewhere is indicated by a file name with an extension, the file name being the content ID, for example 1B827-01.txt, and a pointer to the content entity or the like is stored.
A chunk ID stored in the content-chunk table TB5 and elsewhere is represented by a five-digit and a two-digit decimal number, for example 82700-01. A chunk meta ID stored in the chunk-meta table TB6 and elsewhere is a four-digit hexadecimal number such as 24FD.
A chunk stored in the chunk table TB7 is indicated by the file name of the content corresponding to the target chunk plus a one-digit decimal number, for example 1B827-01.txt_0, and a pointer to the part of the content entity corresponding to the target chunk or the like is stored.
A chunk summary stored in the chunk table TB7 is a document summarizing the content of the chunk, for example "At Hello Work, …". A hash value stored in the chunk table TB7 is a 15-digit hexadecimal number such as 564544d8f0b746e.
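A minimal sketch of using such a hash value to detect whether a chunk has been changed follows, assuming a SHA-256 digest truncated to 15 hexadecimal digits purely for illustration; the embodiment does not specify which hash function is used.

```python
import hashlib

def chunk_hash(chunk_text: str) -> str:
    """Return a 15-hex-digit fingerprint of the chunk body (hash function is an assumption)."""
    return hashlib.sha256(chunk_text.encode("utf-8")).hexdigest()[:15]

stored_hash = chunk_hash("original chunk body")   # value kept in chunk table TB7
current = "original chunk body"                   # chunk as retrieved later

if chunk_hash(current) != stored_hash:
    print("chunk has been changed since it was registered")
else:
    print("chunk is unchanged")
```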
A chunk category ID stored in the chunk meta table TB8 is, for example, a three-digit decimal number such as 394. A chunk category name stored in the chunk meta table TB8 is, for example, the paper size, the paper color, or whether the paper has punched holes. A chunk meta value stored in the chunk meta table TB8 is, for example, A4, B4, white, blue, "holes on the side", or "no holes". The chunk category ID and chunk category name may be NULL.
A combination of chunk meta IDs stored in the recommendation table TB9 is, for example, (24FD, 83D9) or (25FD), that is, a combination of one or more chunk meta IDs. A recommended handled-object image stored in the recommendation table TB9 is stored as, for example, a pointer to an entity whose file name is indicated with an extension, such as IMG001.jpg.
Shared handled-object information stored in the recommendation table TB9 is likewise stored as, for example, a pointer to an entity whose file name is indicated with an extension, such as IMG111.jpg. Besides images, the shared handled-object information may be, for example, various videos, text such as documents and e-mails, or applications and link information for online tools (online meetings, remote conferences, video calls, and the like) or chat. The sharer information stored in the recommendation table TB9 stores, in association with one another, the attributes and conditions of the parties with whom information is shared, such as the target person or an inspector, and information identifying the sharer.
The information processing apparatus 1 refers, for example, to the sharer information stored in the recommendation table TB9 and shares the shared handled-object information via the display areas of the object model described later.
As shown in the scene-content table TB4, the content-chunk table TB5, and the chunk-meta table TB6, the data structure of the work information is hierarchical: the chunk meta ID forms the first and lowest layer, the chunk ID the second layer, the content ID the third layer, and the scene ID the fourth and highest layer.
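A minimal sketch of this four-layer hierarchy follows, assuming nested Python dictionaries and reusing the illustrative IDs quoted above.

```python
# Illustrative four-layer hierarchy: scene ID -> content ID -> chunk ID -> chunk meta IDs.
# All IDs are placeholders taken from the examples in the text.
work_information = {
    "0FD": {                                   # scene ID (4th, highest layer)
        "1B827-01": {                          # content ID (3rd layer)
            "82700-01": ["24FD", "83D9"],      # chunk ID (2nd) -> chunk meta IDs (1st)
        },
    },
}

def chunk_meta_ids(scene_id: str, content_id: str, chunk_id: str) -> list[str]:
    """Walk the hierarchy from the top layer down to the chunk meta IDs."""
    return work_information[scene_id][content_id][chunk_id]

print(chunk_meta_ids("0FD", "1B827-01", "82700-01"))
```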
Next, the scene estimation function, the chunk estimation function, and the chunk output function will be described with reference to FIG. 7. FIG. 7 is a sequence diagram used to explain the scene estimation function, chunk estimation function, and chunk output function according to the present embodiment.
The information processing functions in the usage stage consist of a scene estimation function realized by the scene estimation processing S60 described later, a chunk estimation function realized by the chunk estimation processing S80 described later, and a chunk output function realized by the chunk output processing S100 described later.
First, the scene estimation function will be described. The image acquisition unit 4 included in the control unit 15 receives the original image 20 from the user terminal 12 (S1). Next, the image division unit 5 included in the control unit 15 divides the original image 20 into the respondent image 30 and the handled-object images 40 to 43.
The image division unit 5 transmits the respondent image 30 to the scene estimation unit 6 and transmits the handled-object images 40 to 43 to the chunk estimation unit 7. Next, the scene estimation unit 6 included in the control unit 15 inputs the respondent image 30 into the first trained model DB1 (S2).
The first trained model DB1 selects one or more scene IDs strongly associated with the received image 30 and outputs the selected scene IDs (hereinafter also called the first scene ID list) to the scene estimation unit 6 (S3).
Upon acquiring the first scene ID list, the scene estimation unit 6 transmits it as-is to the user terminal 12 (S4). The user terminal 12 transmits to the scene estimation unit 6 whether a cache exists for each scene ID included in the first scene ID list (S5).
For information processed in the past, the user terminal 12 holds a table equivalent to the scene table TB1. The user terminal 12 searches its own table using the scene IDs of the received first scene ID list as search keys. A scene ID for which a result is found is treated as cached, and a scene ID for which no result is found is treated as not cached.
Among the scene IDs included in the first scene ID list received from the user terminal 12, the scene estimation unit 6 searches the scene table TB1 using as search keys the one or more scene IDs for which the user terminal 12 has no cache (hereinafter also called the second scene ID list) (S6).
As the search result, the scene estimation unit 6 acquires from the scene table TB1 the scene names corresponding to the scene IDs in the second scene ID list (hereinafter also called the scene name list) (S7).
The scene estimation unit 6 transmits the acquired scene name list as-is to the user terminal 12 (S8). In the usage stage, the information processing apparatus 1 realizes, through steps S1 to S8, the scene estimation function, which estimates the scene of the respondent image 30 by estimating the scene name.
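A minimal sketch of steps S1 to S8 as plain functions follows, assuming a stand-in for the first trained model DB1, the placeholder scene table used earlier, and a simple set as the terminal-side cache; all names and values are illustrative, not the embodiment's actual interfaces.

```python
# Minimal sketch of the scene estimation sequence (S1-S8); all helpers are assumptions.

scene_table_tb1 = {"0FD": "grade inquiry", "0FE": "career counseling"}

def infer_scene_ids(respondent_image) -> list[str]:
    """Stand-in for the first trained model DB1: return candidate scene IDs (S2-S3)."""
    return ["0FD", "0FE"]

def scene_estimation(respondent_image, terminal_cache: set[str]) -> dict[str, str]:
    first_list = infer_scene_ids(respondent_image)                  # S2-S3
    # S4-S5: the terminal reports which scene IDs it already holds.
    second_list = [sid for sid in first_list if sid not in terminal_cache]
    # S6-S7: look up only the uncached scene IDs in the scene table TB1.
    names = {sid: scene_table_tb1[sid] for sid in second_list}
    return names                                                    # S8: sent to the terminal

print(scene_estimation(respondent_image=None, terminal_cache={"0FE"}))
# -> {'0FD': 'grade inquiry'}
```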
Next, the chunk estimation function will be described. The user terminal 12 presents the received scene name list to the target person. The scene name list is displayed, for example, on one face of an object model assigned in advance by the target person. The target person selects, for example, one scene name from the presented scene name list. The user terminal 12 transmits the selected scene name to the chunk estimation unit 7 included in the control unit 15 (S9).
Using the scene ID corresponding to the scene name received from the user terminal 12 as a search key (S10), the chunk estimation unit 7 searches the model table TB2 and acquires the model ID (S11).
The chunk estimation unit 7 receives the handled-object images 40 (41 to 43) from the image division unit 5 (S12). The chunk estimation unit 7 designates one of the plurality of second trained models DB2 by the model ID acquired from the model table TB2 and inputs the handled-object images 40 (41 to 43) into the designated second trained model DB2 (S13).
The second trained model DB2 selects one or more sets of one or more chunk meta IDs strongly associated with the handled-object images 40 (41 to 43) and outputs the selected sets (hereinafter also called the chunk meta ID list) to the chunk estimation unit 7 (S14).
The chunk estimation unit 7 searches the chunk-meta table TB6 using each of the chunk meta IDs included in the chunk meta ID list as a search key (S15).
As the search result, the chunk estimation unit 7 acquires one or more chunk IDs (hereinafter also called the first chunk ID list) from the chunk-meta table TB6 (S16). The chunk estimation unit 7 transmits the acquired first chunk ID list as-is to the user terminal 12 (S17).
The user terminal 12 transmits to the chunk estimation unit 7 whether a cache exists for each chunk ID included in the first chunk ID list (S18). For information processed in the past, the user terminal 12 holds a table having the chunk ID column and the chunk summary column of the chunk table TB7.
The user terminal 12 searches its own table using the chunk IDs of the received first chunk ID list as search keys. A chunk ID for which a result is found is treated as cached, and a chunk ID for which no result is found is treated as not cached.
Among the chunk IDs included in the first chunk ID list received from the user terminal 12, the chunk estimation unit 7 searches the chunk table TB7 using as search keys the one or more chunk IDs for which the user terminal 12 has no cache (hereinafter also called the second chunk ID list) (S19).
As the search result, the chunk estimation unit 7 acquires from the chunk table TB7 the chunk summaries corresponding to the chunk IDs in the second chunk ID list (hereinafter also called the chunk summary list) (S20). The chunk estimation unit 7 transmits the acquired chunk summary list as-is to the user terminal 12 (S21).
In the usage stage, the information processing apparatus 1 realizes, through steps S9 to S21, the chunk estimation function, which estimates the chunks of the handled objects 22 (23 to 25) by estimating the chunk summaries.
Next, the chunk output function will be described. The user terminal 12 presents the received chunk summary list to the target person. The chunk summary list is displayed, for example, on one face of an object model assigned in advance by the target person. The target person selects, for example, one chunk summary from the presented chunk summary list. The user terminal 12 transmits the selected chunk summary to the chunk output unit 8 included in the control unit 15 (S22).
Using the chunk ID corresponding to the chunk summary received from the user terminal 12 as a search key (S23), the chunk output unit 8 searches the chunk table TB7 and acquires the chunk (S24).
The chunk output unit 8 transmits the acquired chunk as-is to the user terminal 12 (S25). The user terminal 12 presents the received chunk to the user. The chunk is displayed, for example, on one face of an object model assigned in advance by the target person. In the usage stage, the information processing apparatus 1 realizes, through steps S22 to S25, the chunk output function, which outputs the chunks of the handled objects 22 (23 to 25).
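A minimal sketch of the chunk estimation and chunk output sequence (S9 to S25) follows, again using placeholder tables and a stand-in for the second trained model DB2; none of these functions are the embodiment's actual interfaces.

```python
# Minimal sketch of chunk estimation (S9-S21) and chunk output (S22-S25).
# Placeholder tables; infer_chunk_meta_ids stands in for the second trained model DB2.

model_table_tb2 = {"0FD": "MD1"}
chunk_meta_table_tb6 = {"24FD": ["82700-01"]}
chunk_table_tb7 = {"82700-01": {"chunk": "1B827-01.txt_0",
                                "summary": "Summary of the chunk..."}}

def infer_chunk_meta_ids(model_id: str, handled_object_images) -> list[str]:
    """Stand-in for the selected second trained model DB2 (S13-S14)."""
    return ["24FD"]

def chunk_estimation(scene_id: str, handled_object_images, cache: set[str]) -> dict[str, str]:
    model_id = model_table_tb2[scene_id]                                     # S10-S11
    meta_ids = infer_chunk_meta_ids(model_id, handled_object_images)         # S13-S14
    chunk_ids = [cid for mid in meta_ids for cid in chunk_meta_table_tb6[mid]]  # S15-S16
    uncached = [cid for cid in chunk_ids if cid not in cache]                # S17-S19
    return {cid: chunk_table_tb7[cid]["summary"] for cid in uncached}        # S20-S21

def chunk_output(selected_chunk_id: str) -> str:
    """Return the chunk body for the summary the target person selected (S22-S25)."""
    return chunk_table_tb7[selected_chunk_id]["chunk"]

summaries = chunk_estimation("0FD", handled_object_images=None, cache=set())
print(summaries)                                # chunk summaries presented to the target person
print(chunk_output(next(iter(summaries))))      # chunk for the selected summary
```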
Next, the first trained model generation function and the second trained model generation function will be described with reference to FIG. 8. FIG. 8 is a sequence diagram used to explain the first trained model generation function and the second trained model generation function according to the present embodiment.
The information processing functions in the learning stage consist of a first trained model generation function realized by the first trained model generation processing and a second trained model generation function realized by the second trained model generation processing.
First, the first trained model generation function will be described. The first trained model generation unit 9 included in the trained model generation unit 16 determines a set consisting of a scene name to be processed, a respondent image 30, and one or more handled-object images 40 to 43, and searches the previously generated scene table TB1 using the scene name as a search key (S31).
The first trained model generation unit 9 acquires the scene ID from the scene table TB1 as the search result (S32), and has the first learning model DB1' learn the respondent image 30 and the scene ID as a pair (S33).
The first trained model generation unit 9 also transmits the acquired scene ID to the model table TB2 and requests a model ID (S34). The model table TB2 generates a model ID corresponding to the received scene ID and stores the combination of the scene ID and the model ID.
Next, the first trained model generation unit 9 acquires the model ID from the model table TB2 (S35). In the learning stage, the information processing apparatus 1 realizes, through steps S31 to S35, the first trained model generation function of generating the first trained model DB1.
 Next, the second trained model generation function will be described. The second trained model generation unit 10 included in the trained model generation unit 16 searches the scene-content table TB4, generated in advance, using the scene ID received by the first trained model generation unit 9 in step S32 as a search key (S36).
 The second trained model generation unit 10 acquires the content ID from the scene-content table TB4 as the search result (S37) and searches the content table TB3, generated in advance, using the acquired content ID as a search key (S38).
 The second trained model generation unit 10 acquires the content from the content table TB3 as the search result (S39) and searches the content-chunk table TB5, generated in advance, using the content ID acquired in step S37 as a search key (S40).
 The second trained model generation unit 10 acquires the chunk ID from the content-chunk table TB5 as the search result (S41) and searches the chunk table TB7, generated in advance, using the acquired chunk ID as a search key (S42).
 The second trained model generation unit 10 acquires the chunk from the chunk table TB7 as the search result (S43) and searches the chunk-meta table TB6, generated in advance, using the chunk ID acquired in step S41 as a search key (S44).
 The second trained model generation unit 10 acquires one or more chunk meta IDs from the chunk-meta table TB6 as the search result (S45) and searches the chunk meta table TB8, generated in advance, using each acquired chunk meta ID as a search key (S46).
 The second trained model generation unit 10 acquires from the chunk meta table TB8 the chunk meta value corresponding to each chunk meta ID as the search result (S47).
 The second trained model generation unit 10 checks, by referring to the corresponding person image 30 and the corresponding object images 40 to 43, whether there is any problem with the content acquired in step S39, the chunk acquired in step S43, and the respective chunk meta values acquired in step S47.
 For example, the second trained model generation unit 10 performs the check by referring to the facial expression of the corresponding person 21 and the document names written on the corresponding objects 22 to 25. The second trained model generation unit 10 determines, for example, the facial expression of the corresponding person 21 from the corresponding person image 30 and the document names written on the corresponding objects 22 to 25 from the corresponding object images 40 to 43.
 If this reference reveals a problem, for example when it is clear that the content, the chunk, or the chunk meta values are information about a document plainly different from the documents captured in the corresponding object images 40 to 43, the processing for that set is terminated.
 Next, the second trained model generation unit 10 trains the second learning model DB2' with the model ID, the corresponding object image 40 (41 to 43), and the one or more chunk meta IDs as a pair (S48). In the learning stage, the information processing apparatus 1 realizes, through steps S36 to S48, the second trained model generation function of generating the second trained model DB2.
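 The chain of lookups in steps S36 to S48 is essentially a sequence of keyed table reads followed by a consistency check and a training call. A minimal sketch, assuming dictionary-backed tables and a caller-supplied consistency check (all names and values hypothetical), might look like this:

```python
# Sketch of the table-lookup chain in S36 to S48; every table is modeled as a dict
# and the example keys and values are illustrative only.
scene_content = {"S001": ["C01"]}                             # TB4: scene ID -> content IDs
content_table = {"C01": "application form guide"}            # TB3: content ID -> content
content_chunk = {"C01": ["K01"]}                              # TB5: content ID -> chunk IDs
chunk_meta    = {"K01": ["MT01", "MT02"]}                     # TB6: chunk ID -> chunk meta IDs
chunk_table   = {"K01": "fill in the applicant name field"}   # TB7: chunk ID -> chunk
chunk_meta_values = {"MT01": "A4 form", "MT02": "black ink"}  # TB8: meta ID -> meta value
training_pairs_db2 = []                                       # stand-in for the learning model DB2'

def generate_second_model(model_id, scene_id, object_images, looks_consistent):
    for content_id in scene_content[scene_id]:                        # S36-S37
        content = content_table[content_id]                           # S38-S39
        for chunk_id in content_chunk[content_id]:                    # S40-S41
            chunk = chunk_table[chunk_id]                             # S42-S43
            meta_ids = chunk_meta[chunk_id]                           # S44-S45
            meta_values = [chunk_meta_values[m] for m in meta_ids]    # S46-S47
            if not looks_consistent(content, chunk, meta_values, object_images):
                continue                                              # a problem was found: skip this set
            training_pairs_db2.append((model_id, object_images, meta_ids))  # S48: learn the pair
```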
 次に図9を用いて利用段階における情報処理について説明する。図9は、本実施の形態による利用段階における情報処理の処理手順を示すフローチャートである。利用段階における情報処理は、シーン推定処理S60と、チャンク推定処理S80と、チャンク出力処理S100と、から構成される。 Next, information processing at the usage stage will be described with reference to FIG. FIG. 9 is a flowchart showing a processing procedure of information processing in the usage stage according to the present embodiment. Information processing in the usage stage is composed of a scene estimation process S60, a chunk estimation process S80, and a chunk output process S100.
 まずシーン推定処理S60について説明する。シーン推定処理S60は、ステップS61~ステップS67から構成される。シーン推定部6は、画像分割部5から被対応者画像30(35)を受信すると(S61)、被対応者画像30(35)を第1の学習済みモデルDB1に入力する(S62)。 First, the scene estimation process S60 will be described. The scene estimation process S60 is composed of steps S61 to S67. When the scene estimation unit 6 receives the corresponding person image 30 (35) from the image segmentation unit 5 (S61), the scene estimation unit 6 inputs the corresponding person image 30 (35) into the first trained model DB 1 (S62).
 The scene estimation unit 6 acquires a first scene ID list as the output of the first trained model DB1 (S63), transmits the first scene ID list as it is to the user terminal 12, and inquires of the user terminal 12 whether it holds a cache (S64).
 When all the responses from the user terminal 12 indicate that a cache exists (S65: NO), the scene estimation process S60 ends and the chunk estimation process S80 starts. When even one response from the user terminal 12 indicates that no cache exists (S65: YES), the scene estimation unit 6 acquires the scene name list from the scene table TB1 (S66), transmits it as it is to the user terminal 12 (S67), and the scene estimation process S60 ends.
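 A rough sketch of the scene estimation process S60, assuming placeholder interfaces for the trained model and the user terminal (predict, ask_cache, and send are illustrative names, not part of the embodiment):

```python
# Sketch of the scene estimation process S60; model and terminal are placeholders.
def estimate_scene(first_model, scene_table_tb1, terminal, person_image):
    scene_id_list = first_model.predict(person_image)          # S61-S63: model outputs scene ID list
    cache_flags = terminal.ask_cache(scene_id_list)             # S64: ask the terminal about its cache
    if all(cache_flags):                                         # S65: everything already cached
        return None                                              # proceed to chunk estimation
    scene_names = [scene_table_tb1[sid] for sid in scene_id_list]  # S66: look up scene names
    terminal.send(scene_names)                                   # S67: let the user pick a scene
    return scene_names
```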
 次にチャンク推定処理S80について説明する。チャンク推定処理S80は、ステップS81~ステップS88から構成される。チャンク推定部7は、対象者に選択されたシーン名をユーザ端末12から受信する(S81)。 Next, the chunk estimation process S80 will be described. The chunk estimation process S80 is composed of steps S81 to S88. The chunk estimation unit 7 receives the scene name selected by the target person from the user terminal 12 (S81).
 Upon receiving the scene name from the user terminal 12, the chunk estimation unit 7 acquires the model ID from the model table TB2 (S82). Next, the chunk estimation unit 7 designates one of the plurality of second trained models DB2 by the model ID and inputs the corresponding object image 40 (41 to 43) received from the image division unit 5 into the designated second trained model DB2 (S83).
 チャンク推定部7は、第2の学習済みモデルDB2から出力としてチャンク用メタIDリストを取得し(S84)、チャンク・メタテーブルTB6から第1のチャンクIDリストを取得する(S85)。次にチャンク推定部7は、ユーザ端末12に第1のチャンクIDリストをそのまま送信してキャッシュの有無をユーザ端末12に問い合わせる(S86)。 The chunk estimation unit 7 acquires a chunk meta ID list as an output from the second trained model DB 2 (S84), and acquires a first chunk ID list from the chunk meta table TB6 (S85). Next, the chunk estimation unit 7 transmits the first chunk ID list to the user terminal 12 as it is, and inquires the user terminal 12 whether or not there is a cache (S86).
 When all the responses from the user terminal 12 indicate that a cache exists (S86: NO), the chunk estimation process S80 ends and the chunk output process S100 starts. When even one response from the user terminal 12 indicates that no cache exists (S86: YES), the chunk estimation unit 7 acquires a chunk summary list from the chunk table TB7 (S87), transmits it as it is to the user terminal 12 (S88), and the chunk estimation process S80 ends.
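 Under the same assumptions, the chunk estimation process S80 could be sketched as follows; the mapping from chunk meta ID to chunk ID via TB6 is simplified to a plain dictionary, and the chunk table rows are assumed to carry a summary field.

```python
# Sketch of the chunk estimation process S80 under the same illustrative assumptions.
def estimate_chunks(model_table_tb2, second_models, chunk_meta_tb6,
                    chunk_table_tb7, terminal, scene_id, object_image):
    model_id = model_table_tb2[scene_id]                        # S82: scene ID -> model ID
    meta_ids = second_models[model_id].predict(object_image)    # S83-S84: chunk meta ID list
    chunk_ids = [chunk_meta_tb6[m] for m in meta_ids]           # S85: meta ID -> chunk ID
    if all(terminal.ask_cache(chunk_ids)):                      # S86: all chunks already cached
        return None                                             # go straight to chunk output
    summaries = [chunk_table_tb7[c]["summary"] for c in chunk_ids]  # S87: chunk summary list
    terminal.send(summaries)                                    # S88: let the user pick a chunk
    return summaries
```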
 次にチャンク出力処理S100について説明する。チャンク出力処理S100は、ステップS101~ステップS103から構成される。チャンク出力部8は、対象者に選択されたチャンクサマリをユーザ端末12から受信する(S101)。 Next, the chunk output process S100 will be described. The chunk output process S100 is composed of steps S101 to S103. The chunk output unit 8 receives the chunk summary selected by the target person from the user terminal 12 (S101).
 ユーザ端末12からチャンクサマリを受信すると、チャンク出力部8は、チャンクテーブルTB7からチャンクを取得し(S102)、そのままユーザ端末12に送信し(S103)、チャンク出力処理S100は終了する。 When the chunk summary is received from the user terminal 12, the chunk output unit 8 acquires the chunk from the chunk table TB7 (S102) and transmits it to the user terminal 12 as it is (S103), and the chunk output process S100 ends.
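 The chunk output process S100 then reduces to mapping the selected chunk summary back to its chunk and sending it unchanged; a sketch under the same assumed table layout:

```python
# Sketch of the chunk output process S100; chunk_table_tb7 rows are assumed to
# hold both a "summary" and a "chunk" field, which is an illustrative layout.
def output_chunk(chunk_table_tb7, terminal, selected_summary):
    chunk = next(row["chunk"] for row in chunk_table_tb7.values()
                 if row["summary"] == selected_summary)         # S101-S102: summary -> chunk
    terminal.send(chunk)                                        # S103: send the chunk as it is
    return chunk
```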
 次に図10を用いて表示段階における情報処理の処理手順について説明する。図10は、本実施の形態による表示段階における情報処理を示すフローチャートである。表示部14における表示は、表示処理S110で構成される。 Next, the processing procedure of information processing in the display stage will be described with reference to FIG. FIG. 10 is a flowchart showing information processing in the display stage according to the present embodiment. The display on the display unit 14 is composed of the display process S110.
 The display process S110 consists of steps S111 to S114. The display unit 14 includes the object model specifying unit 15 and, for example, acquires object model information stored in advance (S111) and acquires information on each display area constituting the object model (S112). Specifically, as the information on each display area, the object model specifying unit 15, for example, specifies the object model on which the recommended image and the recommendation information output by the recommendation image output unit are to be displayed. The object model specifying unit 15 specifies the object model by associating the scene and the chunk with an object model ID that uniquely indicates the object model.
 The object model may be specified, for example, by referring to the object model table TB10, based on various kinds of information such as a scene ID, a chunk ID, an area ID of the target person or an information sharer, or a role ID.
 Based on various kinds of information such as the attribute, type, and amount of the information to be presented to the user terminal, as well as the type of target person and the target person's operations, the display unit 14 presents information other than the display target, that is, a default display, when no display area of the object model is assigned (S113: NO). On the other hand, when display areas of the object model are assigned (S113: YES), the various kinds of information to be displayed are assigned to the respective display areas and displayed (S114).
 The assignment of the various kinds of information (objects) to be displayed to the display areas of the object model refers to the object allocation table TB11 and is made based on, for example, the scene ID, the chunk ID, the role ID, and the like. A display area is assigned based on, for example, the display area information and the display area ID identified by the object model ID.
 The display unit 14 assigns the recommended image and the recommendation information output by the recommendation image output unit to one of the plurality of display areas of the object model specified by the object model specifying unit 15, displays them in a state that can be shared between the target person and the sharer, and the display process S110 ends.
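 As a rough sketch of the display process S110, the object model and display-area lookups could be modeled as keyed tables; the table layouts and key choices below are assumptions for illustration, not the actual structure of TB10 and TB11.

```python
# Sketch of the display process S110: the object model is looked up from scene,
# chunk and role IDs, and each piece of information is mapped to a display area.
object_model_table_tb10 = {("S001", "K01", "trainer"): "OM-1"}   # assumed TB10 layout
object_allocation_tb11 = {("OM-1", "K01"): "area-front"}         # assumed TB11 layout

def render(scene_id, chunk_id, role_id, items):
    object_model_id = object_model_table_tb10.get((scene_id, chunk_id, role_id))
    if object_model_id is None:                                  # S113: NO -> default display
        return {"default": items}
    layout = {}
    for item in items:                                           # S114: assign items to display areas
        area = object_allocation_tb11.get((object_model_id, chunk_id), "area-default")
        layout.setdefault(area, []).append(item)
    return layout
```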
 次に図11を用いて学習段階における情報処理について説明する。図11は、本実施の形態による学習段階における情報処理の処理手順を示すフローチャートである。学習段階における情報処理は、第1の学習済みモデル生成処理S120と、第2の学習済みモデル生成処理S140と、から構成される。 Next, information processing in the learning stage will be described with reference to FIG. FIG. 11 is a flowchart showing a processing procedure of information processing in the learning stage according to the present embodiment. Information processing in the learning stage is composed of a first trained model generation process S120 and a second trained model generation process S140.
 First, the first trained model generation process S120 will be described. The first trained model generation process S120 consists of steps S121 to S124. When the first trained model generation unit 9 determines a set consisting of a scene name, a corresponding person image 30 (35), and one or more corresponding object images 40 (41 to 43), it searches the scene table TB1 using the scene name as a search key (S121).
 The first trained model generation unit 9 acquires the scene ID from the scene table TB1 as the search result (S122) and trains the first learning model DB1' with the scene ID and the corresponding person image 30 (35) as a pair (S123).
 次に第1の学習済みモデル生成部9は、モデルテーブルTB2にステップS122で取得したシーンIDを送信しモデルID取得要求を行い、モデルIDを取得する(S124)。 Next, the first trained model generation unit 9 transmits the scene ID acquired in step S122 to the model table TB2, makes a model ID acquisition request, and acquires the model ID (S124).
 次に第2の学習済みモデル生成処理S140について説明する。第2の学習済みモデル生成処理S140は、ステップS141~ステップS150から構成される。第2の学習済みモデル生成部10は、ステップS122で取得されたシーンIDを検索キーとしてシーン・コンテンツテーブルTB4を検索しコンテンツIDを取得する(S141)。 Next, the second trained model generation process S140 will be described. The second trained model generation process S140 is composed of steps S141 to S150. The second learned model generation unit 10 searches the scene content table TB4 using the scene ID acquired in step S122 as a search key, and acquires the content ID (S141).
 第2の学習済みモデル生成部10は、取得したコンテンツIDを検索キーとしてコンテンツテーブルTB3を検索しコンテンツを取得する(S142)。また第2の学習済みモデル生成部10は、取得したコンテンツIDを検索キーとしてコンテンツ・チャンクテーブルTB5を検索しチャンクIDを取得する(S143)。 The second learned model generation unit 10 searches the content table TB3 using the acquired content ID as a search key and acquires the content (S142). Further, the second learned model generation unit 10 searches the content chunk table TB5 using the acquired content ID as a search key, and acquires the chunk ID (S143).
 また第2の学習済みモデル生成部10は、取得したチャンクIDを検索キーとしてチャンクテーブルTB7を検索しチャンクを取得する(S144)。また第2の学習済みモデル生成部10は、取得したチャンクIDを検索キーとしてチャンク・メタテーブルTB6を検索し1又は複数のチャンク用メタIDを取得する(S145)。 Further, the second trained model generation unit 10 searches the chunk table TB7 using the acquired chunk ID as a search key and acquires chunks (S144). Further, the second learned model generation unit 10 searches the chunk meta table TB6 using the acquired chunk ID as a search key, and acquires one or a plurality of chunk meta IDs (S145).
 Further, the second trained model generation unit 10 searches the chunk meta table TB8 using each of the acquired one or more chunk meta IDs as a search key and acquires the chunk meta value corresponding to each chunk meta ID (S146).
 The second trained model generation unit 10 checks, by referring to the corresponding person image 30 (35) and the corresponding object images 40 (41 to 43), whether there is any problem with the content acquired in step S142, the chunk acquired in step S144, and the respective chunk meta values acquired in step S146 (S147).
 If the check reveals a problem (S148: NO), the learning-stage information processing for the set being processed ends. If the check reveals no problem (S148: YES), the second trained model generation unit 10 trains the second learning model DB2' with the model ID, the one or more chunk meta IDs, and the corresponding object image 40 (41 to 43) as a pair (S149), and the learning-stage information processing for the set being processed ends.
 以上のように本実施の形態による情報処理装置1によって、作業情報を分割又は示唆したチャンクは、ユーザ端末12を介して提示される。このため、チャンクを適切に設定することで必要な分量の情報を提示することが可能となる。またチャンクを、文書全体を示唆するような情報とすれば、大規模な情報の再構築は不要となる。 As described above, the chunk that divides or suggests the work information by the information processing apparatus 1 according to the present embodiment is presented via the user terminal 12. Therefore, it is possible to present the required amount of information by appropriately setting chunks. Also, if the chunk is information that suggests the entire document, there is no need to reconstruct the information on a large scale.
(本実施の形態:第2実施形態)
 次に図12、図17及び図18を用いて利用段階における情報処理システム100の動作について説明する。図12(a)は、情報処理システム100の動作の一例を示す模式図であり、図12(b)は、情報処理システム100における元画像、対象者画像、複数の被対応物画像を示す図である。また、図17(a)~(h)は、情報処理システム100における対象者における表示パターンを示す図である、また、図18(a)~(b)は、情報処理システム100におけるユーザ端末12の表示の一例を示す模式図である。
(The present embodiment: the second embodiment)
Next, the operation of the information processing system 100 in the usage stage will be described with reference to FIGS. 12, 17, and 18. FIG. 12(a) is a schematic diagram showing an example of the operation of the information processing system 100, and FIG. 12(b) is a diagram showing an original image, a target person image, and a plurality of corresponding object images in the information processing system 100. FIGS. 17(a) to 17(h) are diagrams showing display patterns for the target person in the information processing system 100, and FIGS. 18(a) and 18(b) are schematic diagrams showing examples of the display of the user terminal 12 in the information processing system 100.
 The schematic diagram of FIG. 12(a), showing an example of the operation of the information processing system 100, illustrates a case in which work information about the work is output when, for example, a target person (a responder, a person being attended to, a sharer, a trainee, or the like) 50a performs work on a corresponding object 60 in a manufacturing area.
 Here, the information processing system 100 acquires, for example via a user terminal 12 (for example, a head-mounted display) worn by the target person 50a, an image of an employee ID card 61, which is target person identification information, and an image of the corresponding object 60 on which the work is performed. Besides the employee ID card 61, the target person identification information may be, for example, an image of the target person's face, fingerprint, palm lines, or veins, as long as it is unique information by which the target person can be identified.
 If, in addition to the target person 50a, a target person 50b who works jointly with the target person 50a is present in the same work area of the manufacturing area, the information processing system 100 may also acquire the target person identification information of the employee ID card 61 photographed by the target person 50b. From the target person 50b, the information processing system 100 acquires, for example, the target person identification information of the employee ID card 61 of the target person 50b and an image of the corresponding object 60 captured from the target person 50b's side.
 When a plurality of target persons 50a, 50b, and so on work jointly in the manufacturing area, the information processing system 100 may, for example, specify the images taken by the cameras of the respective user devices 12 as images of the corresponding objects in the joint work, or divide them into such images. The information processing system 100 may, for example, discriminate the divided images and search for a recommended corresponding-object image and shared corresponding-object information.
 With the recommendation image output unit 13, the information processing system 100 can output a recommended image including, for example, an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary, a corresponding object for which information sharing is presumed to be necessary in the target person's work, and information about the target person's work, which makes it possible to prevent work from being forgotten.
 The manufacturing area is connected, via a communication network such as the Internet, to the customer/support area, which is the area of the customer/trainer 51, and also to the monitoring area, which is, for example, the area of the inspector 52 who monitors the work of the target person 50a. The target person 50a, the customer/trainer 51, and the inspector 52 are each presented, on multiple faces, with the information required for a single piece of work from the viewpoint of their respective positions, which enables information sharing among a plurality of workers whose positions, work places, and working hours differ.
 In the information processing system 100, the target person 50a shares information with the customer/trainer 51 and the inspector 52 via the object model described above. The information processing system 100 uses the recommendation image output unit 13 to search, for example, for a recommended corresponding-object image and shared corresponding-object information, and outputs a recommended image including an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary, a corresponding object for which information sharing is presumed to be necessary in the target person's work, and information about the target person's work. The recommendation image output unit 13 assigns these to the plurality of display areas of the object model and outputs them.
 FIG. 12(b) is a diagram showing the original image, the target person image, and the plurality of corresponding object images in the information processing system 100. The information processing system 100 acquires, with the image acquisition unit 4, for example, the target person identification information 70 on the target person and an image of the corresponding object 60 taken by the user terminal 12 worn by the target person 50a, and stores them in the auxiliary storage device 11 in association with each other.
 補助記憶装置11に格納される画像は、例えば対象者の画像や対象者識別情報であって、例えば社員証61の画像であってもよい。社員証61には、例えば、作業を行う対象者の顔画像61a、氏名61b、コード情報61cを含んでもよい。また、補助記憶装置11に格納される画像は、例えば被対応物60の画像を含む。被対応物60の画像は、例えば、被対応物60を構成する部品60a、部品60b、部品60cの各画像を含んでもよい。 The image stored in the auxiliary storage device 11 may be, for example, an image of a target person or target person identification information, for example, an image of an employee ID card 61. The employee ID card 61 may include, for example, a face image 61a, a name 61b, and code information 61c of a person who performs work. Further, the image stored in the auxiliary storage device 11 includes, for example, an image of the corresponding object 60. The image of the object 60 may include, for example, images of the parts 60a, 60b, and 60c constituting the object 60.
 The original image captured by the image acquisition unit 4 is divided by the image division unit 5. For example, after the image acquisition unit 4 acquires the image of the corresponding object 60, the image division unit 5 may divide it into, for example, the parts 60a to 60c. The target person identification information only needs to be information for identifying the target person and may be, for example, the image of the employee ID card 61. When the target person identification information is, for example, the employee ID card 61, it may include the face image 61a, the name 61b, and the code information 61c of the target person who performs the work.
 The images obtained by dividing, in the image division unit 5, the original image acquired by the image acquisition unit 4 are stored in the auxiliary storage device 11 as images 70, 71, and 71a to 71c, each in association with the others.
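 The splitting of the original image and the tree-structured association of the target person with the corresponding objects could be sketched as follows, with the region detection step left as a caller-supplied function (detect_regions and the region labels are hypothetical):

```python
# Sketch of splitting an original image and linking the results as a tree:
# the target person is the root node and each detected part of the corresponding
# object becomes a leaf node. Region detection itself is only stubbed out here.
def split_original_image(original_image, detect_regions):
    regions = detect_regions(original_image)   # e.g. ID card, corresponding object, parts 60a-60c
    tree = {"root": regions["person"], "children": []}
    for name, part_image in regions["object_parts"].items():
        tree["children"].append({"node": name, "image": part_image})  # leaf nodes per part
    return tree
```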
 Here, an example of the display of the user terminal 12 in the present embodiment will be described with reference to FIG. 17. FIG. 17 shows various display contents assigned to and displayed in the plurality of display areas of the object model specified by the object model specifying unit 15 on the user terminal 12.
 FIG. 17(a) is an example in which a plurality of scene candidates estimated by the scene estimation unit 6 are displayed in one display area of the object model. FIG. 17(b) is an example in which content and difference information are displayed in a display area as, for example, images and information linked to a chunk ID. FIG. 17(c) is an example in which, when the user terminal 12 is a device such as a smartphone, the camera is switched to the rear camera for image information from the user's own viewpoint, the corresponding object is photographed as a second image, and the photographed corresponding object is displayed in a display area. FIG. 17(d) is an example in which, for example, a skilled person (trainer) photographs the corresponding object on which the target person (trainee) works and a work checklist displayed based on the second image is shown in a display area. FIG. 17(e) is an example in which video of the corresponding object recorded by, for example, the user terminal 12 of the skilled person (trainer) is displayed in a display area. FIG. 17(f) is an example in which a video giving a bird's-eye view of the behavior of the target person performing the work and the related information are displayed together in a display area. FIG. 17(g) is an example in which, when learning information recorded by the skilled person (trainer) is acquired, the work information used for generating the expert/AI learning data is displayed in a display area. FIG. 17(h) is an example in which related video information and origin information are displayed in a display area as, for example, related narrative information linked to the chunk ID.
 In the display areas of the specified object model, the respective pieces of reference information shown, for example, in FIGS. 17(a) to 17(h) are displayed as recommended images; for example, in scenes where a scene is being selected via the scene estimation unit 6 or a chunk is being selected via the chunk estimation unit 7, FIG. 17(a) may be displayed in front of the target person. Further, the display areas of the object model may be rotated based on the target person's work content, work status, and the like so that more important information, information requiring attention, and the like are displayed preferentially.
 Next, FIGS. 18(a) and 18(b) show display contents on the user terminal 12 produced by the information processing system 100. In the display example shown in FIG. 18(a), the user terminal 12 is, for example, a smartphone, and the contents are shown as a flat display. In this example, various kinds of information such as chunks and recommended images are displayed in the respective display areas of the specified object models 80b and 80c within the display area 80a of the display screen 80 of the target person's (responder's) smartphone, tablet, or the like.
 In the display example shown in FIG. 18(a), the image of the object model may, for example, be made common across different user terminals 12. As a result, even when a plurality of target persons work together, each target person can, whenever needed, share information narrowed down to that target person, useful information the target person has not noticed, and various other information among the target persons and with customers/trainers, inspectors, and others at other sites. This makes it possible, for example, to present a common operation style and UI to the target persons, improving the effectiveness of information provision and the usefulness of the information.
 Next, the display example shown in FIG. 18(b) is the display when the user terminal 12 is, for example, a personal computer and the target person is a monitor (inspector). In this case, the display screen 81 shows, for example, a display area 81a that displays a bird's-eye image of the target person working on the corresponding object, a display area 81b that displays the viewpoint image of a skilled person in charge, a display area 81c for selecting the object model to which information is to be sent to the target person, and a display area 81d for sending an alert to a target person (for example, a trainer or a trainee). This makes it possible, when the target person needs it, to present the necessary information, shared information, related information, and the like to the target person (responder) and others from a viewpoint and role different from the information narrowed down to the target person, and the effectiveness of information provision and the usefulness of the information can be further improved.
 Further, according to the present embodiment, the object model specifying unit 15 may assign, for example, schematized information to a display area. Schematized information is, for example, figures or illustrations of simplified human facial expressions such as "smile", "anxiety", or "tension", words or messages such as "caution" or "warning" about the target person's work situation, or light-emission states such as red, blue, and yellow displayed as indicator lights or the like.
 By using the model table TB2, even if the relationship between the first trained model DB1 and the second trained model DB2 changes, it can be handled simply by changing the model table TB2, so an apparatus with excellent maintainability can be provided.
 なおモデルテーブルTB2を使用しない場合、第1の学習済みモデルDB1と第2の学習済みモデルDB2との関係が変わった場合には学習済みモデルDB2を再度生成する必要がある。 When the model table TB2 is not used, it is necessary to regenerate the trained model DB2 when the relationship between the first trained model DB1 and the second trained model DB2 changes.
 In the present embodiment, the image acquisition unit 4, the image division unit 5, the scene estimation unit 6, the chunk estimation unit 7, the chunk output unit 8, the first trained model generation unit 9, the second trained model generation unit 10, and the recommendation image output unit 13 are implemented as programs, but they are not limited to this and may be logic circuits.
 Further, the image acquisition unit 4, the image division unit 5, the scene estimation unit 6, the chunk estimation unit 7, the chunk output unit 8, the first trained model generation unit 9, the second trained model generation unit 10, the recommendation image output unit 13, the first trained model DB1, the first learning model DB1', the second trained model DB2, the second learning model DB2', the scene table TB1, the model table TB2, the content table TB3, the scene-content table TB4, the content-chunk table TB5, the chunk-meta table TB6, the chunk table TB7, the chunk meta table TB8, the recommendation table TB9, the object table TB10, the object allocation table TB11, the annotation table TB12, the attention table TB13, the camera table TB14, and the role table TB15 need not be implemented in a single apparatus and may be distributed across a plurality of apparatuses connected by a network.
 In the learning stage shown in FIGS. 8 and 11 described above, the case where the first trained model and the second trained model are generated in association with each other has been described, but the present invention is not limited to this, and the first trained model DB1 and the second trained model DB2 may be generated separately.
 第1の学習済みモデルDB1と第2の学習済みモデルDB2とを別々に生成する場合、例えばシーンは既存のものであってコンテンツのみを追加する場合などに、シーンに関する学習を行わずに済む。 When the first trained model DB1 and the second trained model DB2 are generated separately, for example, when the scene is an existing one and only the content is added, it is not necessary to learn about the scene.
 In the present embodiment, the case where a plurality of second trained models DB2 are used has been described, but the present invention is not limited to this, and a single second trained model DB2 may be used. Also, in the present embodiment, the case of displaying an image of a corresponding object presumed to be originally necessary has been described, but the present invention is not limited to this, and a part of a corresponding object presumed to be originally necessary may be displayed. Further, in the present embodiment, a corresponding object, or a part of one, presumed to be originally unnecessary may be indicated.
 In the usage stage, the information processing apparatus 1 of the present embodiment may determine points of excess or deficiency by comparing the tree structure associated by the image division unit with the hierarchical structure composed of the values output from the first trained model DB1 and the second trained model DB2.
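 One way to read this comparison is as a set difference between the labels expected from the model outputs and the labels actually obtained as leaves of the tree; the following is a minimal sketch under that assumption only, with all names hypothetical:

```python
# Sketch of an excess/deficiency check: compare the leaf labels of the tree built
# by the image division unit with the labels expected from the model outputs.
def find_excess_and_deficiency(tree_leaf_labels, model_output_labels):
    expected = set(model_output_labels)
    observed = set(tree_leaf_labels)
    return {"missing": sorted(expected - observed),   # presumed necessary but not captured
            "extra": sorted(observed - expected)}     # captured but not expected
```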
 In the present embodiment, the recommended image and the recommendation information assigned by the display unit 14 to the plurality of display areas of the object model may include at least one of the following: scene information indicating the content of the scene estimated by the scene estimation unit 6; work information, linked to the scene information, about the work performed by the target person; work check information, linked to the work information, indicating the steps of the work performed by the target person; chunk information linked to the work information; difference information about the content of the work; model information indicating the content of model work by a trainer of the work; work information showing a work video of the target person's work scene; or instruction information presented according to the difference between the model information and the work information.
 本実施の形態においては、例えば表示部14は、オブジェクトモデルを対象者の仮想表示空間内の被対応物の近傍に出力するようにしてもよい。さらに表示部14は、オブジェクトモデルを対象者の仮想表示空間内の被対応物の近傍に表示位置を固定して出力するようにしてもよい。 In the present embodiment, for example, the display unit 14 may output the object model in the vicinity of the corresponding object in the virtual display space of the target person. Further, the display unit 14 may output the object model by fixing the display position in the vicinity of the corresponding object in the virtual display space of the target person.
 In the present embodiment, for example, the object model specifying unit may specify the object model based on at least one of the target person's skill information, spatial information about where the work is performed, feature information of the corresponding object, or work level information of the work.
 In the present embodiment, for example, the object model specifying unit may specify one or more object models each having two or more display areas. Further, the object model specifying unit may display the object model in at least one of the following forms, based on the state of the work performed by the target person: rotated display, enlarged display, reduced display, protruding display, vibrating display, status display, color-changing display, or shaded display.
 1……情報処理装置、2……中央演算装置、3……主記憶装置、4……画像取得部、5……画像分割部、6……シーン推定部、7……チャンク推定部、8……チャンク出力部、9……第1の学習済みモデル生成部、10……第2の学習済みモデル生成部、11……補助記憶装置、12……ユーザ端末、13……レコメンド画像出力部、14……表示部、15……オブジェクトモデル特定部、100……情報処理システム……
 
1 … Information processing apparatus, 2 … Central processing unit, 3 … Main storage device, 4 … Image acquisition unit, 5 … Image division unit, 6 … Scene estimation unit, 7 … Chunk estimation unit, 8 … Chunk output unit, 9 … First trained model generation unit, 10 … Second trained model generation unit, 11 … Auxiliary storage device, 12 … User terminal, 13 … Recommendation image output unit, 14 … Display unit, 15 … Object model specifying unit, 100 … Information processing system

Claims (10)

  1.  対応者の行う作業に関する情報である作業情報を出力する情報処理装置であって、
     前記対応者、及び前記対応者が対応する対象者の少なくとも何れかを含む対象者と前記対応者が対応する複数の被対応物とを含む画像である元画像を取得する画像取得部と、
     前記元画像を分割し前記対象者が撮像された対象者画像とそれぞれの前記被対応物が撮像された複数の被対応物画像とに分割する画像分割部と、
     前記対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、前記シーンを推定するシーン推定部と、
     前記複数の前記被対応物画像と、前記作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、前記チャンクを推定するチャンク推定部と、
     前記チャンクを出力するチャンク出力部と、
     前記複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨被対応物画像を検索し、前記元画像には撮像されていないが本来は必要であると推測される前記被対応物の画像であるレコメンド画像を出力するレコメンド画像出力部と、
     前記チャンク出力部により出力される前記チャンク及び前記レコメンド画像出力部により出力される前記レコメンド画像を、複数の表示エリアを備えるオブジェクトモデルの前記表示エリアに割り当てて表示する表示部と、を備え、
     前記チャンク推定部は、前記モデルIDを用いて選定し、前記チャンク用メタIDは前記被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理装置。
    An information processing apparatus that outputs work information, which is information about work performed by a responder, the apparatus comprising:
    an image acquisition unit that acquires an original image, which is an image including a target person, who is at least one of the responder and a person the responder attends to, and a plurality of corresponding objects handled by the responder;
    an image division unit that divides the original image into a target person image in which the target person is captured and a plurality of corresponding-object images in which the respective corresponding objects are captured;
    a scene estimation unit that estimates a scene, which is a situation handled by the responder, using a first trained model storing the association between the target person image and a scene ID uniquely indicating the scene;
    a chunk estimation unit that estimates a chunk, which is information obtained by dividing or suggesting the work information, using one of a plurality of second trained models storing the association among the plurality of corresponding-object images, a chunk ID uniquely indicating the chunk, and one or more chunk meta IDs associated with it on a one-to-one basis;
    a chunk output unit that outputs the chunk;
    a recommendation image output unit that searches for a recommended corresponding-object image using, as a search key for one of the plurality of second trained models, a combination of a model ID associated one-to-one with the scene ID and the one or more chunk meta IDs, and outputs a recommended image, which is an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary; and
    a display unit that assigns the chunk output by the chunk output unit and the recommended image output by the recommendation image output unit to display areas of an object model having a plurality of display areas and displays them,
    wherein the chunk estimation unit makes its selection using the model ID, and each chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the corresponding object.
  2.  前記画像分割部は、前記対象者を根ノードとし、前記複数の前記被対応物を葉ノード又は内部ノードとした木構造として、前記対象者と、前記複数の前記被対応物とを関連付ける、請求項1に記載の情報処理装置。 The information processing apparatus according to claim 1, wherein the image division unit associates the target person with the plurality of corresponding objects in a tree structure in which the target person is a root node and the plurality of corresponding objects are leaf nodes or internal nodes.
  3.  前記画像分割部は、さらに前記被対応物の少なくとも1つに含まれる情報を取得して葉ノードとして前記木構造に関連付ける、請求項2に記載の情報処理装置。 The information processing apparatus according to claim 2, wherein the image segmentation unit further acquires information contained in at least one of the corresponding objects and associates it with the tree structure as a leaf node.
  4.  前記画像取得部が取得する前記対象者の画像は、前記対象者を識別する対象者識別情報であること、
     を特徴とする請求項1に記載の情報処理装置。
    The image of the target person acquired by the image acquisition unit is the target person identification information for identifying the target person.
    The information processing apparatus according to claim 1.
  5.  前記シーン推定部は、前記対象者識別情報と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、前記シーンをさらに推定し、
     チャンク推定部は、前記複数の前記被対応物画像と、前記チャンクIDと、複数の前記チャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、前記対象者識別情報に紐づく前記チャンクをさらに推定し、
     レコメンド画像出力部は、前記複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、共有被対応物情報を検索し、前記作業において情報共有する必要があると推測される前記被対応物の画像及び前記対象者の作業に関する情報を含むレコメンド画像をさらに出力すること、
     を特徴とする請求項1に記載の情報処理装置。
    The scene estimation unit further estimates the scene using a first trained model storing the association between the target person identification information and a scene ID uniquely indicating a scene, which is a situation handled by the responder,
    the chunk estimation unit further estimates the chunk associated with the target person identification information using one of a plurality of second trained models storing the association among the plurality of corresponding-object images, the chunk ID, and the plurality of chunk meta IDs, and
    the recommendation image output unit searches for shared corresponding-object information using, as a search key for one of the plurality of second trained models, a combination of the model ID associated one-to-one with the scene ID and the one or more chunk meta IDs, and further outputs a recommended image including an image of the corresponding object for which information sharing is presumed to be necessary in the work and information about the work of the target person,
    The information processing apparatus according to claim 1.
  6.  前記表示部は、前記レコメンド画像出力部により出力される前記レコメンド画像及び前記レコメンド情報を表示するオブジェクトモデルの特定を、前記シーン及び前記チャンクと、前記オブジェクトモデルを一意に示す前記オブジェクトモデルIDとの紐づけを行い、前記オブジェクトモデルを特定するオブジェクトモデル特定部をさらに備えること、
     を特徴とする請求項1に記載の情報処理装置。
    The display unit further includes an object model specifying unit that specifies the object model for displaying the recommended image and the recommendation information output by the recommendation image output unit, by associating the scene and the chunk with an object model ID uniquely indicating the object model,
    The information processing apparatus according to claim 1.
  7.  前記表示部は、前記レコメンド画像出力部により出力された前記レコメンド画像及び前記レコメンド情報を、前記オブジェクトモデル特定部により特定された前記オブジェクトモデルが備える複数の表示エリアの何れかの表示エリアに、前記対象者と、前記対象者と情報を共有する共有者とが共有可能な状態として割り当てること、
     を特徴とする請求項1又は6に記載の情報処理装置。
    The display unit assigns the recommended image and the recommendation information output by the recommendation image output unit to any one of the plurality of display areas of the object model specified by the object model specifying unit, in a state that can be shared between the target person and a sharer who shares information with the target person,
    The information processing apparatus according to claim 1 or 6.
  8.  前記レコメンド画像出力部は、前記対応者と共同作業を行う共同作業者、前記対応者の指導者であるトレーナー、及び前記対応者の監視を行うインスペクターの少なくとも何れかの立場で識別される者を共有者として検索すること、
     を特徴とする請求項4記載の情報処理装置。
    The recommendation image output unit searches, as a sharer, for a person identified in at least one of the positions of a collaborator who works jointly with the responder, a trainer who instructs the responder, and an inspector who monitors the responder,
    The information processing apparatus according to claim 4.
  9.  対応者の行う作業に関する情報である作業情報を出力する情報処理装置が行う情報処理方法であって、
     前記対応者、及び前記対応者が対応する対象者の少なくとも何れかを含む対象者と前記対応者が対応する複数の被対応物とを含む画像である元画像を取得する第1のステップと、
     前記元画像を分割し前記対象者が撮像された対象者画像とそれぞれの前記被対応物が撮像された複数の被対応物画像とに分割する第2のステップと、
     前記対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、前記シーンを推定する第3のステップと、
     前記複数の前記被対応物画像と、前記作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、前記チャンクを推定する第4のステップと、
     前記チャンクを出力する第5のステップと、
     前記複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨被対応物画像を検索し、前記元画像には撮像されていないが本来は必要であると推測される前記被対応物の画像であるレコメンド画像を出力する第6のステップと、
     前記チャンク及び前記レコメンド画像の各々を、複数の表示エリアを備えるオブジェクトモデルの前記表示エリアに割り当てて表示する第7のステップと、を備え、
     前記複数の第2の学習済みモデルのうちの1つはシーンIDと1対1に対応付けられたモデルIDを用いて選定され、前記チャンク用メタIDは前記被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理方法。
    An information processing method performed by an information processing apparatus that outputs work information, which is information about work performed by a responder, the method comprising:
    a first step of acquiring an original image, which is an image including a target person, who is at least one of the responder and a person the responder attends to, and a plurality of corresponding objects handled by the responder;
    a second step of dividing the original image into a target person image in which the target person is captured and a plurality of corresponding-object images in which the respective corresponding objects are captured;
    a third step of estimating a scene, which is a situation handled by the responder, using a first trained model storing the association between the target person image and a scene ID uniquely indicating the scene;
    a fourth step of estimating a chunk, which is information obtained by dividing or suggesting the work information, using one of a plurality of second trained models storing the association among the plurality of corresponding-object images, a chunk ID uniquely indicating the chunk, and one or more chunk meta IDs associated on a one-to-one basis;
    a fifth step of outputting the chunk;
    a sixth step of searching for a recommended corresponding-object image using, as a search key for one of the plurality of second trained models, a combination of a model ID associated one-to-one with the scene ID and the one or more chunk meta IDs, and outputting a recommended image, which is an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary; and
    a seventh step of assigning each of the chunk and the recommended image to display areas of an object model having a plurality of display areas and displaying them,
    wherein the one of the plurality of second trained models is selected using the model ID associated one-to-one with the scene ID, and each chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the corresponding object.
  10.  対応者の行う作業に関する情報である作業情報を出力する情報処理システムであって、
     前記対応者、及び前記対応者が対応する対象者の少なくとも何れかを含む対象者、及び前記対象者を識別する対象者識別情報の少なくとも何れかを含む対象者画像と、前記対応者が対応する複数の被対応物とを含む画像である元画像を取得する画像取得手段と、
     前記元画像を前記対象者画像とそれぞれの前記被対応物が撮像された複数の被対応物画像とに分割する画像分割手段と、
     前記対象者画像と、対応者が行う状況であるシーンを一意に示すシーンIDと、の間における連関性が記憶されている第1の学習済みモデルを使用して、前記シーンを推定するシーン推定手段と、
     前記複数の前記被対応物画像と、前記作業情報を分割又は示唆した情報であるチャンクを一意に示すチャンクIDと、1対1に対応付けられた1又は複数のチャンク用メタIDと、の間における連関性が記憶されている複数の第2の学習済みモデルのうちの1つを使用して、前記チャンクを推定するチャンク推定手段と、
     前記チャンクを出力するチャンク出力手段と、
     前記複数の第2の学習済みモデルのうちの1つを、シーンIDと1対1に対応付けられたモデルIDと1又は複数のチャンク用メタIDの組み合わせとを検索キーとして、推奨される推奨被対応物画像及び共有される共有対応物情報を検索し、前記元画像には撮像されていないが本来は必要であると推測される前記被対応物の画像、前記対象者の前記作業において情報共有が必要であると推測される前記被対応物及び前記対象者の作業に関する情報を含むレコメンド画像を出力するレコメンド画像出力手段と、
     前記チャンク出力手段により出力される前記チャンク及び前記レコメンド画像出力手段により出力される前記レコメンド画像を、複数の表示エリアを備えるオブジェクトモデルの前記表示エリアに割り当てて表示する表示手段と、を備え、
     前記チャンク推定手段は、前記モデルIDを用いて選定し、前記チャンク用メタIDは前記被対応物の性質に関する情報であるチャンク用メタ値を一意に示す、情報処理システム。
    An information processing system that outputs work information, which is information about work performed by a responder, the system comprising:
    image acquisition means for acquiring an original image, which is an image including a target person image, including at least one of a target person, who is at least one of the responder and a person the responder attends to, and target person identification information for identifying the target person, and a plurality of corresponding objects handled by the responder;
    image division means for dividing the original image into the target person image and a plurality of corresponding-object images in which the respective corresponding objects are captured;
    scene estimation means for estimating a scene, which is a situation handled by the responder, using a first trained model storing the association between the target person image and a scene ID uniquely indicating the scene;
    chunk estimation means for estimating a chunk, which is information obtained by dividing or suggesting the work information, using one of a plurality of second trained models storing the association among the plurality of corresponding-object images, a chunk ID uniquely indicating the chunk, and one or more chunk meta IDs associated on a one-to-one basis;
    chunk output means for outputting the chunk;
    recommendation image output means for searching for a recommended corresponding-object image and shared corresponding-object information using, as a search key for one of the plurality of second trained models, a combination of a model ID associated one-to-one with the scene ID and the one or more chunk meta IDs, and outputting a recommended image including an image of a corresponding object that is not captured in the original image but is presumed to be originally necessary, a corresponding object for which information sharing is presumed to be necessary in the work of the target person, and information about the work of the target person; and
    display means for assigning the chunk output by the chunk output means and the recommended image output by the recommendation image output means to display areas of an object model having a plurality of display areas and displaying them,
    wherein the chunk estimation means makes its selection using the model ID, and each chunk meta ID uniquely indicates a chunk meta value, which is information about a property of the corresponding object.
PCT/JP2021/045713 2020-12-11 2021-12-10 Information processing apparatus, information processing method, and information processing system WO2022124419A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022568365A JPWO2022124419A1 (en) 2020-12-11 2021-12-10

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020205948 2020-12-11
JP2020-205948 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022124419A1 true WO2022124419A1 (en) 2022-06-16

Family

ID=81974603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/045713 WO2022124419A1 (en) 2020-12-11 2021-12-10 Information processing apparatus, information processing method, and information processing system

Country Status (2)

Country Link
JP (1) JPWO2022124419A1 (en)
WO (1) WO2022124419A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005216137A (en) * 2004-01-30 2005-08-11 Chugoku Electric Power Co Inc:The Maintenance support system and method
JP2009529736A (en) * 2006-03-10 2009-08-20 ネロ アーゲー Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure, and computer program
JP2012150613A (en) * 2011-01-18 2012-08-09 Ricoh Co Ltd Work content measuring device and work management device
JP6607590B1 (en) * 2019-03-29 2019-11-20 株式会社 情報システムエンジニアリング Information providing system and information providing method
WO2020145085A1 (en) * 2019-01-08 2020-07-16 株式会社日立国際電気 Image recognition device, image recognition program, and image recognition method
JP2020528626A (en) * 2017-07-27 2020-09-24 ベステル エレクトロニク サナイー ベ ティカレト エー.エス. How to overlay web pages on 3D objects, devices and computer programs
JP2020166353A (en) * 2019-03-28 2020-10-08 Kddi株式会社 Robot control device, robot control method, and robot

Also Published As

Publication number Publication date
JPWO2022124419A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
US9898647B2 (en) Systems and methods for detecting, identifying and tracking objects and events over time
US10691876B2 (en) Networking in a social network
Stevens et al. Seeing infrastructure: Race, facial recognition and the politics of data
KR20150092100A (en) Customized predictors for user actions in an online system
US11170214B2 (en) Method and system for leveraging OCR and machine learning to uncover reuse opportunities from collaboration boards
US20170032298A1 (en) Methods and systems for visualizing individual and group skill profiles
JP6800453B1 (en) Information processing device and information processing method
JP2018181257A (en) Interview management program and interview management device
WO2022124419A1 (en) Information processing apparatus, information processing method, and information processing system
JP6124354B2 (en) Experience learning support system, experience learning support method, information processing apparatus, control method thereof, and control program
Khairunisa et al. Virtual Job Fair Information System Design Based on Augmented Reality/Virtual Reality
US9811893B2 (en) Composable situational awareness visualization system
JP2021174502A (en) Information processing system, information processor, information processing method, information processing program, communication terminal, and control method and control program thereof
US20220394098A1 (en) Information processing system, system, and information processing method
JP2022136068A (en) Information display device, information display system, information display program, learning method, and data structure
JP6324284B2 (en) Group learning system
KR102349974B1 (en) A Method of providing the school promotional material provision service
JP2008020939A (en) Image processor for supporting manuscript processing, manuscript processing support method and computer program
KR100445688B1 (en) System and Method for Providing Advertisement Using Picture Chatting Service
KR20200130552A (en) Sharing system of job video and method thereof
CN109344249A (en) Information processing method, device, electronic equipment and computer readable storage medium
WO2021193136A1 (en) Information processing device and information processing method
JP6818308B1 (en) Information processing device and information processing method
US20230076217A1 (en) Form creating system and non-transitory computer readable medium
JP2023147195A (en) Program, information processing apparatus, information processing system, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21903515

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022568365

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21903515

Country of ref document: EP

Kind code of ref document: A1