WO2016084453A1 - Information processing device, control method and program - Google Patents

Information processing device, control method and program

Info

Publication number
WO2016084453A1
WO2016084453A1 (PCT/JP2015/075268, JP2015075268W)
Authority
WO
WIPO (PCT)
Prior art keywords
recognition target
information
difference
information processing
feature amount
Prior art date
Application number
PCT/JP2015/075268
Other languages
French (fr)
Japanese (ja)
Inventor
勝吉 金本
栗屋 志伸
拓也 藤田
淳史 野田
Original Assignee
ソニー株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2016084453A1 publication Critical patent/WO2016084453A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services

Definitions

  • the present disclosure relates to an information processing device, a control method, and a program.
  • life logs that digitize and record user life, behavior, experience, etc. into video, audio, location information, movement information, etc. have become popular.
  • the life log is automatically recorded by a wearable device (smart band, smart eye glass, smart watch, etc.) worn by the user or a mobile terminal (smart phone, etc.) possessed by the user, and stored in a predetermined server on the cloud, for example.
  • as a technique for automatically extracting recommendation information for the user, the cited document 1 below discloses a method of providing information for suggesting other content to be accessed next, based on the user's profile information and access history information.
  • the present disclosure proposes an information processing apparatus, a control method, and a program that can provide a temporal difference of a recognition target as a topic.
  • according to the present disclosure, an information processing apparatus is proposed that includes a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series, and a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.
  • according to the present disclosure, a control method is proposed that includes comparing the current feature amount of a recognition target with a feature amount at another point in the time series, and generating visualization information visualizing the difference of the recognition target based on the result of the comparison.
  • according to the present disclosure, a program is proposed for causing a computer to function as a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series, and as a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.
  • in the information processing system according to the present embodiment, a server 1 that collects and stores a user's life log and a user terminal 2 that presents differences as topic offers are connected via a network 3.
  • the life log is obtained by digitizing the user's life, behavior, experience, etc. into video, audio, position information, movement information, and the like.
  • the life log is continuously acquired by, for example, various wearable devices worn by the user (smart watch, smart band, smart eyeglasses, etc.).
  • by accumulating and analyzing such a life log, the server 1 can, when the user later re-encounters a person met in the past, a place visited in the past, a dish eaten in the past, or the like, provide a topic by presenting the difference from the past.
  • the difference presentation can be performed by the user terminal 2, for example.
  • the user terminal 2 is not limited to the smartphone shown in FIG. 1, and may be, for example, a tablet terminal, a mobile phone terminal, a PDA (Personal Digital Assistant), a PC (Personal Computer), a portable music player, a portable game machine, or a wearable terminal (HMD, smart eyeglasses, smart watch, smart band, etc.).
  • the server 1 can predict future changes from past trends and present a difference between the present and the future and provide a topic.
  • FIG. 2 is a block diagram illustrating an example of the configuration of the server 1 according to the present embodiment.
  • the server 1 includes a communication unit 11, a raw data DB (database) 12, a feature amount extraction unit 13, a feature amount DB 14, a model generation unit 15, a model DB 16, a comparison unit 17, and a visualization information generation unit 18. And a prediction unit 19.
  • the communication unit 11 transmits / receives data by connecting to an external device by wireless / wired.
  • the communication unit 11 receives sensor information related to a recognition target and environmental sensor information from the information processing terminal 4 owned by the user.
  • the communication unit 11 acquires predetermined information by connecting to an external server such as the multi-user model DB server 34, the general preference DB server 35, the product meta DB server 36, or the celebrity DB server 37. It is also possible.
  • the communication unit 11 transmits the visualization information generated by the visualization information generation unit 18 to the user terminal 2.
  • the raw data DB 12 is a storage unit that stores information about the recognition target and the environment when recognized, received from the information processing terminal 4 via the communication unit 11.
  • various log sensors included in the information processing terminal 4 will be specifically described.
  • the information processing terminal 4 acquires sensor information related to the recognition target and sensor information related to the surrounding environment when the target is recognized.
  • the information processing terminal 4 includes, for example, a taste sensor 401, an odor sensor 402, a camera 403, a wireless communication unit 404, a position measurement unit 405, a barometer 406, a temperature/humidity meter 407, a clock unit 408, and an external reference information acquisition unit 409. Note that these sensors do not all have to be provided in the same body: for example, the taste sensor 401 may be provided in a tool used for eating, such as chopsticks, a spoon, or a fork, while the odor sensor 402, camera 403, wireless communication unit 404, position measurement unit 405, barometer 406, temperature/humidity meter 407, clock unit 408, and external reference information acquisition unit 409 may be provided in a wearable device, smartphone, mobile phone terminal, or the like worn or carried by the user.
  • the information processing terminal 4 transmits the acquired sensor information to the server 1.
  • the information processing terminal 4 can acquire sensor information suited to the recognition target. More specifically, for example, when the recognition target is a dish, the taste sensor 401 detects taste information (sweetness, sourness, saltiness, bitterness, umami), the odor sensor 402 detects the smell of the dish, and the camera 403 acquires a captured image of the dish.
  • when the recognition target is a person, the other person's odor is detected by the odor sensor 402, a captured image of the other person is acquired by the camera 403, and the other person's ID is received by the wireless communication unit 404. The wireless communication unit 404 is realized by, for example, Bluetooth (registered trademark), Wi-Fi (registered trademark), infrared communication, or short-range wireless communication, and can connect to the other person's information processing terminal (wearable device, smartphone, etc.) to acquire an ID indicating who the other person is.
  • when the recognition target is a landscape, a captured image of the landscape is acquired by the camera 403.
  • as information on the environment at the time the target is recognized, position information is acquired by the position measurement unit 405, the atmospheric pressure is detected by the barometer 406, the temperature and humidity are detected by the temperature/humidity meter 407, and the date and time are acquired by the clock unit 408.
  • the position measurement unit 405 is realized by, for example, a GPS (Global Positioning System) positioning unit, which receives radio waves from GPS satellites and detects the current position. Besides GPS, the position measurement unit 405 may detect the position by, for example, transmission/reception with Wi-Fi (registered trademark), with a mobile phone/PHS/smartphone, or by short-range communication.
  • information regarding the environment can be acquired by accessing various external servers by the external reference information acquisition unit 409.
  • the external reference information acquisition unit 409 can acquire information such as a dish name and calories by accessing the server 31 storing the dish meta DB and collating the dish metadata.
  • the external reference information acquisition unit 409 can access the server 32 storing the weather information DB, and acquire the weather, temperature, humidity, and the like at that time from the date and time when the object is recognized.
  • the external reference information acquisition unit 409 can access the server 33 storing the place meta DB, and acquire detailed information of the place and facility from the position information at the time of object recognition.
  • the various log sensors provided in the information processing terminal 4 have been specifically described above.
  • the information processing terminal 4 transmits the acquired sensor information to the server 1 at a predetermined timing.
  • the specific example of the sensor shown in FIG. 2 is an example, and the present embodiment is not limited to this.
  • the information processing terminal 4 may further include a sound collection unit, an acceleration sensor, a geomagnetic sensor, a vibration sensor, and the like.
  • the information processing terminal 4 may be implemented as, for example, a wearable device or a smartphone, and the user terminal 2 may also function as the information processing terminal 4.
  • the feature quantity extraction unit 13 extracts the feature quantities of the recognition target based on the sensor information related to the recognition target stored in the raw data DB 12. For example, when the recognition target is a dish, feature quantities that can be quantified from the taste sensor information and the odor sensor information, and calories that can be estimated from the dish image, are extracted. When the recognition target is a person, parts that affect the impression of the person are extracted as feature amounts.
  • when the recognition target is a landscape, the feature amount extraction unit 13 extracts, as feature amounts, the arrangement of buildings, signboards, mountains, seas, roads, and the like included in the landscape from the captured image of the landscape.
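  • As an illustration only (not part of the disclosure), the following minimal sketch shows how a feature extraction step like that of the feature amount extraction unit 13 might pick out quantifiable values per recognition-target type; all record field names are assumptions introduced for this example.
```python
# Hypothetical sketch of per-target feature extraction; field names are illustrative.
def extract_features(record: dict) -> dict:
    target = record.get("target_type")
    if target == "dish":
        # Quantifiable values from the taste/odor sensors plus an estimated calorie count.
        return {"taste": record.get("taste"), "odor": record.get("odor"),
                "calories": record.get("calories")}
    if target == "person":
        # Parts that affect the impression of the person.
        return {key: record[key] for key in ("hair_color", "glasses", "clothes_color")
                if key in record}
    if target == "landscape":
        # Arrangement of buildings, signboards, mountains, roads, and the like.
        return {"layout": record.get("layout")}
    return {}

sample = {"target_type": "dish",
          "taste": {"sweet": 0.4, "salty": 0.7}, "odor": [0.1, 0.9], "calories": 650}
print(extract_features(sample))  # {'taste': {...}, 'odor': [...], 'calories': 650}
```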
  • the feature amount DB 14 stores the feature amount of the recognition target extracted by the feature amount extraction unit 13.
  • the model generation unit 15 generates a trend model of the feature quantities of the recognition target based on the past feature quantities of the recognition target. This makes it possible to output not only a difference between two points in time, such as the present and a single past point, but also a difference between a past tendency (for example, the colors of clothes often worn) and the present.
  • a histogram is used as a model indicating past trends.
  • for example, the model generation unit 15 records the colors and brands of the clothes worn together with their frequencies and generates a histogram. From the distribution of this histogram, the comparison unit 17 described later can output, as a difference, whether the color or brand of the clothes the target person is currently wearing is one that is usually worn or one that is rarely worn.
  • when the recognition target is a person, the trend model can also include histograms of, for example, the presence or absence of glasses, the type of clothing (skirts, pants, etc.), ornaments, and perfume brands.
  • the trend model when the recognition target is a dish includes a taste histogram based on accumulation of taste information acquired when the same dish was eaten in the past.
  • the model generation unit 15 may also calculate the model indicating the past tendency as a deviation value.
  • in that case, the comparison unit 17 described later can determine whether the current feature amount is the same as the past tendency (usual) or different from it (unusual), depending on whether or not it is equal to or less than a threshold value.
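  • As a purely illustrative sketch (the disclosure does not specify an implementation), a frequency histogram of past observations can serve as the trend model, and the "usual vs. unusual" decision can be a threshold on relative frequency; the data, names, and the 0.15 threshold below are assumptions for the example.
```python
from collections import Counter

# Hypothetical trend model: a histogram of clothes colors seen in the past.
past_clothes_colors = ["navy", "navy", "gray", "navy", "black", "gray", "navy"]
trend_model = Counter(past_clothes_colors)  # Counter({'navy': 4, 'gray': 2, 'black': 1})

def is_unusual(current_value: str, model: Counter, threshold: float = 0.15) -> bool:
    """True if the current value was seen at or below the threshold share of past observations."""
    total = sum(model.values())
    if total == 0:
        return False
    return model[current_value] / total <= threshold

print(is_unusual("red", trend_model))   # True: red was (almost) never worn before
print(is_unusual("navy", trend_model))  # False: navy matches the past tendency
```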
  • the model generation unit 15 is not limited to generating a trend model based only on the past feature amounts obtained when one user recognized the recognition target; it can also generate a trend model by referring to the server 34 storing the multi-user model DB and using past feature amounts obtained when one or more other users recognized the same target. That is, for example, when generating a trend model of the feature amounts of a person A, the model generation unit 15 can use not only the feature amounts extracted by the feature amount extraction unit 13 or accumulated in the feature amount DB 14, which are based on sensor information acquired when the user met person A, but also feature amounts based on sensor information acquired from the server 34 when other users met person A.
  • the model DB 16 stores the trend models of the recognition target generated by the model generation unit 15.
  • the comparison unit 17 compares the current feature amount of the recognition target with a past feature amount (in particular, the most recent past feature amount), the past feature amount tendency, or a predicted future feature amount, and outputs the difference. Before performing the comparison, it is necessary to identify whether the current recognition target is the same as a target the user has recognized in the past. When the recognition target is a person, the same person can be identified from the facial feature amount based on the face image; when the other person's ID can be acquired, the same person can be identified from the matching ID. When the recognition target is a dish, whether the dish is the same menu item can be identified from the store location, the place meta information (store name), the dish image, and the like.
  • when the recognition target is a landscape, the same landscape can be identified from position information, orientation information, image feature amounts, and the like.
  • when the comparison unit 17 can identify the same recognition target, it compares the current feature amount of the recognition target with the past feature amount (in particular, the most recent past feature amount), with the past feature amount tendency (past model), or with the predicted future feature amount, and if the difference between the feature amounts is equal to or greater than a predetermined value, outputs the difference as a comparison result.
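  • For illustration, a two-point comparison of the kind performed by the comparison unit 17 could be sketched as follows, assuming feature amounts are flat dictionaries of comparable values; the field names and the numeric threshold are assumptions for the example.
```python
# Hypothetical sketch of the two-point comparison; only changes at or above a
# predetermined value are reported for numeric features.
def compare_features(current: dict, past: dict, numeric_threshold: float = 0.2) -> dict:
    differences = {}
    for key in current.keys() & past.keys():
        cur, old = current[key], past[key]
        if isinstance(cur, (int, float)) and isinstance(old, (int, float)):
            if abs(cur - old) >= numeric_threshold:
                differences[key] = (old, cur)
        elif cur != old:
            differences[key] = (old, cur)
    return differences

past_features = {"hair_color": "brown", "glasses_color": "black", "weight_kg": 60.0}
current_features = {"hair_color": "black", "glasses_color": "black", "weight_kg": 80.0}
print(compare_features(current_features, past_features))
# {'hair_color': ('brown', 'black'), 'weight_kg': (60.0, 80.0)}
```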
  • the visualization information generation unit 18 generates difference visualization information to be presented to the user based on the difference output from the comparison unit 17. For example, the visualization information generation unit 18 may acquire the past image and the current image of the difference portion from the raw data DB 12 and generate, as the visualization information, a display image in which these are arranged side by side or an image that emphasizes the difference portion (see FIG. 9). In addition, the visualization information generation unit 18 may highlight the portion indicating the current feature in a histogram indicating the past tendency of the difference portion (see FIG. 11). The visualization information generation unit 18 may also display the date and time together with the past image of the difference portion.
  • the visualization information generation unit 18 may also generate a screen on which the recognition target can be viewed while gradually switching between the past captured images and the current captured image stored in the raw data DB 12 using a slider on the time axis (see FIG. 7).
  • the visualization information generation unit 18 may generate not only a screen pointing out the change in the difference portion but also a screen including detailed information on the difference portion. For example, when the partner (recognition target) is wearing a product (glasses, hat, shoes, bag, accessory, etc.) different from the one at the previous meeting, the product is output as a difference by the comparison unit 17. At this time, in order to identify what the product is, the visualization information generation unit 18 accesses the server 36 storing the product meta DB, refers to the product metadata, and presents detailed information on the product (product name, brand, price range, release date, etc.). In addition, the visualization information generation unit 18 can refer to the product metadata to link to a purchase site for the product or to present related products.
  • the visualization information generation unit 18 can also generate a screen that points out the difference between the difference portion and “general preference” as advice.
  • the general preference data can be acquired from the server 35 that stores the general preference information DB.
  • by pointing out how the change in the recognition target differs from the "general preference", the visualization information generation unit 18 can provide a topic concerning the change in the recognition target.
  • General preferences include information such as recent trends and popular items obtained from fashion magazines, color schemes that match the color palette, and the ratio of base and accent colors.
  • information obtained by generalizing the multi-user model and general common sense (ceremonial manners) are also included.
  • the hair style includes information such as how to bundle the hair according to the length of the hair, how to use the hair clip, the hair style that matches the shape of the face, and the hair style that matches the age.
  • the visualization information generation unit 18 may display, as additional information on the difference portion, a calorie value estimated from the image, weight, age, height, and the like. Moreover, when the changed feature resembles a celebrity, the visualization information generation unit 18 may display that celebrity; the celebrity information can be obtained by accessing the server 37 storing the celebrity information DB.
  • by generating not only an indication of the difference portion but also detailed information on the difference portion, advice based on the difference from the "general preference", and additional information on the difference portion, the visualization information generation unit 18 can enrich the topics provided to the user.
  • the visualization information generation unit 18 transmits the generated visualization information (an example of information related to the difference of recognition targets based on the comparison result of the comparison unit 17) to the user terminal 2 via the communication unit 11 and controls to notify the user. It also functions as a notification control unit.
  • the prediction unit 19 predicts a future feature amount of the recognition target and outputs the prediction result to the comparison unit 17 or the visualization information generation unit 18. Thereby, the comparison unit 17 can compute not only the difference from the past but also the difference from the future, and a screen including the future feature amount can be generated.
  • as a first prediction method, the prediction unit 19 can predict a future feature amount by extrapolation based on the feature amounts of the recognition target at two or more points in the past and present. For example, based on the amount of hair two years ago, one year ago, and now, the amount of hair one year from now is predicted by a linear model.
  • as another prediction method, the prediction unit 19 can predict a future feature amount by using a trend model of the past feature amounts of the recognition target. For example, the amount of wrinkles that may appear with further aging is predicted based on a trend model indicating how the number of wrinkles on the face of the recognition target has been increasing.
  • the visualization information generation unit 18 can generate a face image based on the prediction result of the amount of wrinkles that can be generated when aging and include it in the presentation screen.
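  • As a sketch of the first prediction method (extrapolation from two or more past and present feature amounts), a simple least-squares line can be fitted and evaluated at a future date; the values below are made-up examples, not data from the disclosure.
```python
# Hypothetical linear extrapolation of a feature amount (e.g. amount of hair).
def predict_linear(observations: list[tuple[float, float]], future_year: float) -> float:
    """Fit a least-squares line through (year, value) pairs and evaluate it at future_year."""
    n = len(observations)
    mean_x = sum(x for x, _ in observations) / n
    mean_y = sum(y for _, y in observations) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in observations) / \
            sum((x - mean_x) ** 2 for x, _ in observations)
    intercept = mean_y - slope * mean_x
    return slope * future_year + intercept

# Amount of hair (arbitrary units) two years ago, one year ago, and now:
hair_amount = [(2013, 100.0), (2014, 92.0), (2015, 85.0)]
print(predict_linear(hair_amount, 2016))  # predicted amount one year from now (about 77.3)
```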
  • the configuration of the server 1 according to the present embodiment has been specifically described above.
  • the server 1 is equipped with a microcomputer having a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and a nonvolatile memory, which controls each component of the server 1.
  • FIG. 3 is a flowchart showing log collection processing according to the present embodiment.
  • the processing shown in FIG. 3 is executed in the information processing terminal 4 (a wearable device or the like) provided with the various log sensors. Specifically, the information processing terminal starts log collection periodically or when triggered by an external event (S103), and identifies the recognition target (S106).
  • the information processing terminal collects sensor information suitable for the recognition target (S109). For example, if the recognition target is cooking, information is acquired by the taste sensor 401, the odor sensor 402, and the camera 403. If the recognition target is a person, the odor sensor 402, the camera 403, and the wireless communication unit 404 acquire information.
  • the information processing terminal collects environmental sensor information at the time of recognition (S112).
  • the environmental sensor information is information that does not depend on the recognition target; it corresponds to, for example, the current position information acquired by the position measurement unit 405, the current temperature and humidity acquired by the temperature/humidity meter 407, and the current date and time acquired by the clock unit 408.
  • the information processing terminal then attempts to expand the log data (S115). That is, the external reference information acquisition unit 409 accesses various external servers (such as the dish meta DB server 31, the weather information DB server 32, or the place meta DB server 33) to acquire further information on the recognition target and on the environment at the time of recognition.
  • the information processing terminal transmits all the acquired information to the server 1 and stores it in the raw data DB 12.
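  • The terminal-side flow of FIG. 3 might look roughly like the following sketch; every sensor-reading function here is a stand-in (an assumption for this example) for a real device API, and the external-DB expansion step is only indicated by a comment.
```python
import datetime

def read_target_sensors(target_type: str) -> dict:
    """Collect sensor information suited to the recognition target (S109); stub values only."""
    if target_type == "dish":
        return {"taste": {"sweet": 0.4, "salty": 0.7}, "odor": [0.2, 0.8], "image": "dish.jpg"}
    if target_type == "person":
        return {"odor": [0.1, 0.3], "image": "face.jpg", "partner_id": "user-42"}
    return {"image": "scene.jpg"}

def read_environment_sensors() -> dict:
    """Collect environment information not tied to the target itself (S112); stub values only."""
    return {"position": (35.63, 139.74), "pressure_hpa": 1012.0,
            "temperature_c": 21.5, "humidity": 0.55,
            "timestamp": datetime.datetime.now().isoformat()}

def collect_log(target_type: str) -> dict:
    record = {"target_type": target_type}
    record.update(read_target_sensors(target_type))
    record["environment"] = read_environment_sensors()
    # Expansion via external DBs (dish meta, weather, place meta) would happen here (S115),
    # after which the record is transmitted to the server 1 and stored in the raw data DB 12.
    return record

print(collect_log("dish"))
```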
  • FIG. 4 is a flowchart showing the feature amount extraction processing in the present embodiment.
  • the feature amount extraction unit 13 of the server 1 reads sensor information to be recognized from the raw data DB 12 (S123).
  • the feature quantity extraction unit 13 extracts the feature quantity of the recognition target based on the sensor information of the recognition target (S126).
  • the feature quantity extraction unit 13 stores the extracted feature quantity in the feature quantity DB 14 (S129).
  • the feature amount extraction described above is performed in real time when the user recognizes the target.
  • FIG. 5 is a flowchart showing the trend model generation process of the feature quantity of the recognition target according to the present embodiment.
  • the model generation unit 15 of the server 1 checks whether or not a trend model having a predetermined feature amount to be recognized has already been generated (S133).
  • if a trend model has already been generated, the model generation unit 15 reads the generated trend model (past model) from the model DB 16 (S136); otherwise, the model generation unit 15 initializes a trend model of the predetermined feature quantity.
  • the model generation unit 15 reads a predetermined feature amount to be recognized from the feature amount DB 14 (S142).
  • here, feature quantities that have not yet been reflected in the trend model, for example feature quantities newly acquired from the recognition target at present, are read.
  • the model generation unit 15 updates the trend model of the feature amount based on the read feature amount (S145).
  • the model generation unit 15 writes the updated tendency model in the model DB 16 (S148).
  • the model generation unit 15 synchronizes the user's model DB in the server 34 storing the multi-user model DB with the updated tendency model.
  • the server 34 stores a trend model to be recognized and a feature amount DB for each user.
  • the trend model described above is generated for each of a plurality of feature amounts extracted from the recognition target.
  • for example, trend models can be generated for features of the hairstyle, clothes, belongings, and body shape of the recognition target.
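  • For illustration only, the read-or-initialize / update / write-back cycle of FIG. 5 can be sketched with an in-memory dictionary standing in for the model DB 16; the data and names are assumptions for the example.
```python
from collections import Counter

model_db: dict[str, Counter] = {}  # hypothetical stand-in for the model DB 16 (feature name -> histogram)

def update_trend_model(feature_name: str, new_values: list[str]) -> Counter:
    # Read the existing trend model if one has already been generated, otherwise initialize it (S133/S136).
    model = model_db.get(feature_name, Counter())
    # Reflect the feature amounts that have not yet been incorporated into the model (S142/S145).
    model.update(new_values)
    # Write the updated trend model back (S148).
    model_db[feature_name] = model
    return model

update_trend_model("clothes_color", ["navy", "gray"])
print(update_trend_model("clothes_color", ["navy"]))  # Counter({'navy': 2, 'gray': 1})
```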
  • FIG. 6 is a flowchart showing a difference presentation process between two points according to the present embodiment. As shown in FIG. 6, first, the comparison unit 17 reads the current feature amount to be recognized and the most recent feature amount from the feature amount DB 14 (S203).
  • the comparison unit 17 compares feature amounts and calculates a difference (S206).
  • the comparison unit 17 outputs the comparison result (difference part) to the visualization information generation unit 18 (S209).
  • the visualization information generation unit 18 acquires information on the difference part from the raw data DB 12 (S212).
  • the visualization information generation unit 18 generates a comment that points out the difference (S215).
  • the visualization information generation unit 18 may acquire detailed information and related information of the difference part from the external server, and generate a comment including these.
  • the visualization information generation unit 18 generates visualization information based on the acquired information on the difference portion and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and notifies the user of the difference portion (S218).
  • FIG. 7 is a diagram showing a screen display example of visualization information according to the present embodiment.
  • the visualization information may be a screen on which the recognition target can be viewed while gradually switching between the past captured images and the current captured image of the recognition target using a slider on the time axis.
  • Screens 22-1 to 22-4 in FIG. 7 are screens that can be switched by moving a slider on the time axis.
  • for example, if the oldest information about the recognition target (such as a captured image of the partner taken when the user first met them) dates from 1972, the earliest point on the time axis is set to 1972, as shown on the screen 22-1. On the screen 22-1, information such as the date, place, and weather at the time of recognition is displayed together with the captured image (face image) of the recognition target acquired in 1972. Thereby, the user can talk about the first meeting with the partner.
  • the comparison unit 17 compares the feature amount of the partner in the most recent past with the feature amount of the current partner, and a screen 22-3 visualizing the difference based on the comparison result is displayed. For example, if the most recent past meeting with the same partner was a week ago, the screen 22-3 contains information such as the partner's face image from one week ago, the date and place of that meeting, and the difference in feature amounts from one week ago (such as a change in hairstyle). Thereby, the user can talk about the fact that the partner's hairstyle has changed since they last met. When there is no difference, it may be indicated that there is no change.
  • when the slider on the time axis is moved to a future point such as 2020, information based on the future feature amount predicted by the prediction unit 19 is displayed. For example, a face image generated by the visualization information generation unit 18 based on the number of wrinkles, the amount and color of hair, facial sagging, and the like indicated by the future prediction result is displayed.
  • the visualization information generation unit 18 may acquire and display a captured image of a partner when another user meets the partner from the server 34 that stores the multi-user model DB.
  • FIG. 8 is a diagram explaining a modified example in which a captured image from when another user met the partner is displayed on the screen 22-2 shown in FIG. 7.
  • for example, suppose the slider on the time axis is moved to a point between 1972 and 2014 (for example, around 2000). Since the user did not meet the partner at the time indicated by the slider, no past image (captured image) of the partner is stored in the raw data DB 12; however, another user (for example, a common friend XX) did meet the partner around that time. In this case, the visualization information generation unit 18 acquires from the server 34 the image of the partner captured when the other user met them, and displays it on the screen 22-2'.
  • at this time, the visualization information generation unit 18 displays the past image of the partner together with a comment (annotation) such as "You have not met around 2000. This is an image from when your common friend XX met them.", thereby notifying the user that the information is from when another user met the partner.
  • FIG. 9 is a diagram illustrating an example of visualization information in which a different portion is emphasized.
  • the glasses frame that is the difference is emphasized and notified to the user.
  • for example, the current captured image of the partner and the most recent past captured image are displayed side by side on the screen 22-6, and in addition an enlarged image of the difference portion and a comment regarding the difference portion are displayed.
  • the user can talk about the fact that the color of the frame of the other person's glasses has changed since the last meeting.
  • FIG. 10 is a flowchart showing a difference presentation process with the trend according to the present embodiment.
  • the comparison unit 17 reads the current feature quantity to be recognized from the feature quantity DB 14 (S233).
  • the comparison unit 17 reads the past tendency to be recognized from the model DB 16 (S236).
  • the comparison unit 17 compares the current feature amount with the past trend and calculates a difference (S239).
  • the comparison unit 17 outputs the comparison result (difference part) to the visualization information generation unit 18 (S242).
  • the visualization information generation unit 18 acquires information on the difference part from the raw data DB 12 (S245).
  • the visualization information generation unit 18 generates a comment that points out the difference (S248).
  • the visualization information generation unit 18 may acquire detailed information and related information of the difference part from the external server, and generate a comment including these.
  • the visualization information generation unit 18 generates visualization information based on the acquired information on the difference portion and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and notifies the user of the difference portion (S251).
  • FIG. 11 is a diagram illustrating a screen display example of visualization information according to the present embodiment.
  • the visualization information may be realized by a screen that highlights a portion indicating the current feature in a histogram indicating the past tendency of the difference portion.
  • for example, a histogram indicating the past tendency of the colors of the clothes of the recognition target (wearing frequency classified for each color) is displayed, and it is pointed out that the color of the clothes currently being worn is one that has rarely been worn compared with the past tendency.
  • the visualization information generation unit 18 may display the image when the corresponding color was previously worn and the image of the currently worn clothing side by side.
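  • As an illustrative sketch of preparing the data behind a FIG. 11-style screen (a histogram of past clothes colors with the bin for the currently worn color highlighted), one might do the following; the records and the "rarely worn" threshold are assumptions for the example.
```python
from collections import Counter

past_colors = ["navy", "navy", "gray", "navy", "black", "gray", "navy", "navy"]
current_color = "red"

histogram = Counter(past_colors)
total = sum(histogram.values())
bins = [{"color": c, "count": n, "highlight": c == current_color}
        for c, n in sorted(histogram.items(), key=lambda kv: -kv[1])]
if current_color not in histogram:
    bins.append({"color": current_color, "count": 0, "highlight": True})

rarely_worn = histogram[current_color] / total < 0.1  # illustrative threshold
print(bins)
print("Point out as unusual:", rarely_worn)
```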
  • next, difference presentation according to the score in the present embodiment will be specifically described with reference to FIGS. 12 to 14.
  • in the examples described above, the difference portions calculated by the comparison unit 17 are presented to the user as they are. However, the presentation method according to the present embodiment is not limited to this; for example, the difference can be presented more appropriately by changing the presentation mode (display mode) according to the magnitude of the difference and the content attribute of the difference (negative/positive).
  • FIG. 12 is a flowchart showing the difference presenting process according to the score according to the present embodiment.
  • first, the comparison unit 17 reads the current feature quantity of the recognition target and the most recent past feature quantity from the feature quantity DB 14 (S303).
  • the comparison unit 17 compares feature amounts and calculates a difference (S306).
  • the comparison unit 17 outputs the comparison result (difference portion) and the difference score to the visualization information generation unit 18 (S309).
  • the difference score is a score indicating the magnitude of the difference between the current feature amount and the compared feature amount. When there are a plurality of differences, a difference score is calculated for each difference and output to the visualization information generation unit 18.
  • the visualization information generation unit 18 acquires information on the difference part from the raw data DB 12 (S312).
  • the visualization information generating unit 18 calculates a positive-negative score of the difference (S315).
  • the positive-negative score is a continuous value indicating whether the difference (change) output by the comparison unit 17 is positive or negative; for example, "score: -100" indicates a very negative difference, and "score: 50" indicates a relatively positive difference.
  • the visualization information generation unit 18 determines the presentation mode of each difference (degree of emphasis, arrangement on the display screen, etc.) based on the relationship between the user and the partner, the difference score, and the positive-negative score (S318). For example, the visualization information generation unit 18 highlights differences with a large difference score or a high degree of positiveness; if the difference score is large but the difference is negative, it decides to present the difference with a degree of emphasis and an arrangement according to the closeness of the relationship with the partner (familiarity, gender, age proximity). When there are a plurality of differences and the relationship with the partner is intimate, the visualization information generation unit 18 may present the differences in descending order of difference score even if the positive-negative score is negative (for example, score: -50 or less). On the other hand, if the relationship with the partner is not intimate, the visualization information generation unit 18 may preferentially display (arrange) differences whose positive-negative score is 20 or more, in order of difference score.
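  • A minimal sketch of such a presentation-mode decision is shown below; the emphasis rules and thresholds are assumptions chosen only to reproduce the behavior of FIGS. 13 and 14, not values taken from the disclosure.
```python
# Hypothetical ordering/emphasis of difference comments based on difference score,
# positive-negative score, and how close the relationship with the partner is.
def decide_presentation(differences: list[dict], intimate: bool) -> list[dict]:
    for d in differences:
        if intimate:
            # Close relationship: large changes stand out even if they are negative.
            d["emphasize"] = d["diff_score"] >= 25
        else:
            # Not close: negative changes are played down; neutral/positive ones stand out.
            d["emphasize"] = d["pn_score"] >= 0
    # Display order: emphasized items first, then by difference score.
    return sorted(differences, key=lambda d: (d["emphasize"], d["diff_score"]), reverse=True)

diffs = [{"label": "hair color changed (brown -> black)", "diff_score": 20, "pn_score": 0},
         {"label": "weight increased (about +20 kg)",     "diff_score": 30, "pn_score": -40}]
print(decide_presentation(diffs, intimate=False))  # hair-color comment emphasized, listed first
print(decide_presentation(diffs, intimate=True))   # weight comment emphasized, listed first
```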
  • the visualization information generation unit 18 generates a comment that points out the difference (S321).
  • the visualization information generation unit 18 may acquire detailed information and related information of the difference part from the external server, and generate a comment including these.
  • the visualization information generation unit 18 generates visualization information based on the acquired information on the difference portion and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and notifies the user of the difference portion (S324).
  • next, specific examples of the visualization information presented to the user by the visualization information generation unit 18 will be described with reference to FIGS. 13 and 14. As an example, the difference in presentation mode according to the relationship with the partner will be described for a case where the comparison unit 17 outputs a change in hair color and a change in weight as differences.
  • here, the change in hair color is, for example, a change from brown to black, with a difference score of 20 and a positive-negative score indicating a neutral change. The change in body weight is an increase of, for example, an estimated 20 kg, with a difference score of 30 and a positive-negative score indicating a negative change.
  • FIG. 13 is a diagram for explaining a presentation mode when the relationship between the user and the other party is not a close relationship.
  • in this case, as shown in the screen 22-8 of FIG. 13, the visualization information generation unit 18 highlights the comment 50 pointing out the change in hair color, which is a neutral change, with large characters so that it stands out even though its difference score is low, and places it at the top of the display order. On the other hand, the comment 51 pointing out the change in weight, which is a negative change even though its difference score is high, is displayed in small characters so as to be inconspicuous and is placed lower in the display order.
  • the user can talk about a neutral change such as a change in hair color.
  • FIG. 14 is a diagram for explaining a presentation mode when the relationship between the user and the partner is a close relationship.
  • in this case, as shown in the screen 22-9 of FIG. 14, the visualization information generation unit 18 highlights the comment 52 pointing out the change in body weight, which has a high difference score even though it is a negative change, with large characters so that it stands out, and places it at the top of the display order. On the other hand, the comment 53 pointing out the change in hair color, which has a low difference score, is displayed in small characters and placed lower in the display order. Thereby, even for a negative change such as a change in weight, the user can focus on the difference with the larger change.
  • a computer program can be created for causing hardware such as the CPU, ROM, and RAM incorporated in the server 1 and the user terminal 2 described above to exhibit the functions of the server 1 and the user terminal 2.
  • a computer-readable storage medium storing the computer program is also provided.
  • the notification of the information related to the difference of the recognition target based on the comparison result of the comparison unit 17 is not limited to the display on the user terminal 2 of the visualization information generated by the visualization information generation unit 18 as described above.
  • the server 1 may have a function of a notification control unit that controls the user terminal 2 to notify the user of information related to the recognition target difference based on the comparison result of the comparison unit 17 by voice or the like.
  • the user terminal 2 may have at least a part of the configuration of the server 1 according to the present embodiment.
  • the present technology may also be configured as follows.
  • (1) An information processing apparatus including: a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series; and a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.
  • (2) The information processing apparatus according to (1), wherein the feature amount of the recognition target is extracted from sensor information detected when a user faces the recognition target.
  • (3) The information processing apparatus according to (1) or (2), wherein the comparison unit compares a current feature amount of the recognition target with a past feature amount.
  • (4) The information processing apparatus according to (3), wherein the comparison unit compares a current feature amount of the recognition target with a feature amount in the past.
  • (5) The information processing apparatus in which the comparison unit compares a current feature amount of the recognition target with a model indicating a tendency of the past feature amounts of the recognition target.
  • (6) The information processing apparatus in which the model of the past feature amounts is generated based on the past feature amounts of the recognition target.
  • (7) The information processing apparatus in which the model of the past feature amounts is generated based on past feature amounts of the recognition target extracted from sensor information detected when one or more other users faced the recognition target.
  • (8) The information processing apparatus in which the comparison unit compares a current feature amount of the recognition target with a future feature amount of the recognition target.
  • (9) The information processing apparatus according to (8), wherein the future feature amount is predicted based on a model of the past feature amounts of the recognition target and the current feature amount.
  • (10) The information processing apparatus according to any one of (1) to (9), wherein the feature amount is extracted from sensor information suited to the recognition target.
  • (11) The information processing apparatus according to (10), wherein, when the recognition target is a person, the feature amount is extracted from a captured image or odor information of the person detected by a camera sensor or an odor sensor.
  • (12) The information processing apparatus in which, when the recognition target is a dish, the feature amount is extracted from taste information, odor information, or a captured image of the dish detected by a taste sensor, an odor sensor, or a camera sensor.
  • (13) The information processing apparatus according to (10), wherein, when the recognition target is a landscape, the feature amount is extracted from a captured image of the landscape detected by a camera sensor.
  • (14) The information processing apparatus according to any one of (1) to (13), wherein the generation unit generates, as the visualization information, a display screen that emphasizes the difference portion of the recognition target.
  • (15) The information processing apparatus according to any one of (1) to (14), wherein the generation unit determines a presentation mode of the difference according to the magnitude of the difference of the recognition target, the positive-negative attribute of the difference, and the relationship between the recognition target and the user.
  • (16) The information processing apparatus according to any one of (1) to (15), further including a notification control unit that performs control to notify a user of information related to the difference of the recognition target based on a comparison result of the comparison unit.
  • (17) A program for causing a computer to function as: a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series; and a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.

Abstract

[Problem] To provide an information processing device, control method and program that can provide, as a topic, a temporal difference of a subject to be recognized. [Solution] An information processing device of the present invention is provided with: a comparison unit that compares a current feature quantity of a subject to be recognized and a feature quantity thereof at another point in time on a time series; and a notification control unit that performs control to provide notice of the difference of the subject to be recognized based on the comparison results of the comparison unit.

Description

Information processing apparatus, control method, and program
 The present disclosure relates to an information processing device, a control method, and a program.
 In recent years, life logs, in which a user's life, behavior, experiences, and the like are digitized and recorded as video, audio, location information, movement information, and the like, have become widespread. The life log is automatically recorded by a wearable device worn by the user (smart band, smart eyeglasses, smart watch, etc.) or a mobile terminal carried by the user (smartphone, etc.), and is stored, for example, in a predetermined server on the cloud.
 When browsing the life log accumulated in this way, in general the past history is simply displayed in chronological order, or all of the recorded data is displayed. When there is data the user wants to view, the user has to search for it by entering search conditions.
 As a technique for automatically extracting recommendation information for the user, for example, the cited document 1 below discloses a method of providing information for suggesting other content to be accessed next, based on the user's profile information and access history information.
JP 2002-108923 A
 However, when the user meets a person they have met before or goes to a place they have visited before and wants to know from the life log what things were like at that earlier time, it is necessary to search for images starting from the date and time of that occasion, and if the date and time are not clear it is difficult to find them.
 Also, if it is known that the target person or scenery has changed since the user last met or visited, the difference can be made a topic of conversation, but it is difficult to notice such changes easily.
 Therefore, the present disclosure proposes an information processing apparatus, a control method, and a program that can provide a temporal difference of a recognition target as a topic.
 According to the present disclosure, an information processing apparatus is proposed that includes: a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series; and a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.
 According to the present disclosure, a control method is proposed that includes: comparing the current feature amount of a recognition target with a feature amount at another point in the time series; and generating visualization information visualizing the difference of the recognition target based on the result of the comparison.
 According to the present disclosure, a program is proposed for causing a computer to function as: a comparison unit that compares the current feature amount of a recognition target with a feature amount at another point in the time series; and a generation unit that generates visualization information visualizing the difference of the recognition target based on the comparison result of the comparison unit.
 As described above, according to the present disclosure, it is possible to provide a temporal difference of a recognition target as a topic.
 Note that the above effects are not necessarily limiting; together with or in place of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be exhibited.
 FIG. 1 is a diagram explaining an overview of an information processing system according to an embodiment of the present disclosure. FIG. 2 is a block diagram showing an example of the configuration of a server according to the embodiment. FIG. 3 is a flowchart showing log collection processing according to the embodiment. FIG. 4 is a flowchart showing feature amount extraction processing in the embodiment. FIG. 5 is a flowchart showing trend model generation processing for the feature amounts of a recognition target according to the embodiment. FIG. 6 is a flowchart showing difference presentation processing between two points according to the embodiment. FIG. 7 is a diagram showing a screen display example of visualization information according to the embodiment. FIG. 8 is a diagram explaining a modified example of displaying, on the screen shown in FIG. 7, a captured image from when another user met the partner. FIG. 9 is a diagram showing an example of visualization information in which a difference portion is emphasized. FIG. 10 is a flowchart showing difference presentation processing with respect to a tendency according to the embodiment. FIG. 11 is a diagram showing a screen display example of visualization information according to the embodiment. FIG. 12 is a flowchart showing difference presentation processing according to a score according to the embodiment. FIG. 13 is a diagram explaining a presentation mode when the relationship between the user and the partner is not a close one. FIG. 14 is a diagram explaining a presentation mode when the relationship between the user and the partner is a close one.
 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and duplicate descriptions are omitted.
 The description will be made in the following order.
 1. Overview of an information processing system according to an embodiment of the present disclosure
 2. Server configuration
 3. Operation processing
  3-1. Data collection
  3-2. Presentation of differences between two point elements
  3-3. Presentation of differences from a tendency
  3-4. Presentation of differences according to a score
 4. Summary
  <<1. Overview of an Information Processing System According to an Embodiment of the Present Disclosure>>
 First, an overview of an information processing system according to an embodiment of the present disclosure will be described with reference to FIG. 1. As shown in FIG. 1, in the information processing system according to the present embodiment, a server 1 that collects and stores a user's life log and a user terminal 2 that presents differences as topic offers are connected via a network 3.
 The life log is obtained by digitizing the user's life, behavior, experiences, and the like into video, audio, position information, movement information, and the like, and is continuously acquired by, for example, various wearable devices worn by the user (smart watch, smart band, smart eyeglasses, etc.). The server 1 accumulates and analyzes such a life log, so that when the user later re-recognizes a person they have met in the past, a place they have visited in the past, a dish they have eaten in the past, or the like, a topic can be provided by presenting the difference from the past. The difference presentation can be performed by the user terminal 2, for example. The user terminal 2 is not limited to the smartphone shown in FIG. 1, and may be, for example, a tablet terminal, a mobile phone terminal, a PDA (Personal Digital Assistant), a PC (Personal Computer), a portable music player, a portable game machine, or a wearable terminal (HMD, smart eyeglasses, smart watch, smart band, etc.).
 The server 1 according to the present embodiment can also predict future changes from past tendencies and present the difference between the present and the future to provide a topic.
 The overview of the information processing system according to an embodiment of the present disclosure has been described above. Next, the configuration of the server 1 included in the information processing system of the present embodiment will be described.
  <<2. Server Configuration>>
 FIG. 2 is a block diagram showing an example of the configuration of the server 1 according to the present embodiment. As shown in FIG. 2, the server 1 has a communication unit 11, a raw data DB (database) 12, a feature amount extraction unit 13, a feature amount DB 14, a model generation unit 15, a model DB 16, a comparison unit 17, a visualization information generation unit 18, and a prediction unit 19.
 (Communication unit)
 The communication unit 11 connects to external devices wirelessly or by wire and transmits and receives data. For example, the communication unit 11 receives sensor information related to a recognition target and environmental sensor information from the information processing terminal 4 owned by the user. The communication unit 11 can also connect to external servers such as the multi-user model DB server 34, the general preference DB server 35, the product meta DB server 36, or the celebrity DB server 37 to acquire predetermined information. Furthermore, the communication unit 11 transmits the visualization information generated by the visualization information generation unit 18 to the user terminal 2.
(Raw data DB)
The raw data DB 12 is a storage unit that stores information about the recognition target and about the environment at the time of recognition, received from the information processing terminal 4 via the communication unit 11. Here, the various log sensors included in the information processing terminal 4 will be described in detail.
-Information processing terminal-
The information processing terminal 4 acquires sensor information related to the recognition target and sensor information related to the surrounding environment when the target is recognized. The information processing terminal 4 includes, for example, a taste sensor 401, an odor sensor 402, a camera 403, a wireless communication unit 404, a position measurement unit 405, a barometer 406, a temperature/humidity meter 407, a clock unit 408, and an external reference information acquisition unit 409. Note that these sensors need not all be provided in the same body. For example, the taste sensor 401 may be provided in a utensil used for eating, such as chopsticks, a spoon, or a fork, while the odor sensor 402, the camera 403, the wireless communication unit 404, the position measurement unit 405, the barometer 406, the temperature/humidity meter 407, the clock unit 408, and the external reference information acquisition unit 409 may be provided in a wearable device, a smartphone, a mobile phone terminal, or the like worn or carried by the user. The information processing terminal 4 transmits the acquired sensor information to the server 1.
The information processing terminal 4 acquires sensor information suited to the recognition target. More specifically, for example, when the recognition target is a dish, the taste sensor 401 detects taste information about the dish (sweetness, sourness, saltiness, bitterness, umami), the odor sensor 402 detects the smell of the dish, and the camera 403 captures an image of the dish. When the recognition target is a person, the odor sensor 402 detects the person's scent, the camera 403 captures an image of the person, and the wireless communication unit 404 receives the person's ID. The wireless communication unit 404 is realized by, for example, Bluetooth (registered trademark), Wi-Fi (registered trademark), infrared communication, or near field communication, and can connect to the other person's information processing terminal (wearable device, smartphone, etc.) to acquire an ID indicating who the other person is. When the recognition target is a landscape, the camera 403 captures an image of the landscape.
Regardless of what the recognition target is, as information about the environment at the time the target is recognized, the position measurement unit 405 acquires position information, the barometer 406 detects atmospheric pressure, the temperature/humidity meter 407 detects temperature and humidity, and the clock unit 408 acquires the date and time. The position measurement unit 405 is realized by, for example, a GPS (Global Positioning System) positioning unit that receives radio waves from GPS satellites and detects the current position. Besides GPS, the position measurement unit 405 may detect the position by, for example, transmission and reception with Wi-Fi (registered trademark), mobile phones, PHS, or smartphones, or by short-range communication.
Information about the environment can also be acquired by the external reference information acquisition unit 409 by accessing various external servers. For example, when the recognition target is a dish, the external reference information acquisition unit 409 can acquire information such as the dish name and calories by accessing the server 31 storing the dish meta DB and collating against the dish metadata. The external reference information acquisition unit 409 can also access the server 32 storing the weather information DB and acquire the weather, temperature, humidity, and the like at the time of recognition from the date and time of recognition. Furthermore, the external reference information acquisition unit 409 can access the server 33 storing the place meta DB and acquire detailed information about the place or facility from the position information at the time of recognition.
The various log sensors provided in the information processing terminal 4 have been described above in detail. The information processing terminal 4 transmits the acquired sensor information to the server 1 at a predetermined timing. Note that the specific sensors shown in FIG. 2 are only an example, and the present embodiment is not limited to this. For example, the information processing terminal 4 may further include a sound collection unit, an acceleration sensor, a geomagnetic sensor, a vibration sensor, and the like. The information processing terminal 4 is realized by, for example, a smartphone, a smart band, smart eyeglasses, an HMD, or a mobile phone terminal. Furthermore, the user terminal 2 may also serve as the information processing terminal 4.
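As a concrete illustration of the kind of log record the information processing terminal 4 might assemble before transmission to the server 1, a minimal sketch in Python follows. The `LogRecord` structure and its field names are assumptions made for illustration and are not defined in the present disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class LogRecord:
    """Hypothetical raw log record sent from the terminal to the raw data DB 12."""
    target_type: str                       # "person", "dish", or "landscape"
    captured_at: datetime                  # from the clock unit 408
    position: Optional[tuple] = None       # (latitude, longitude) from the position measurement unit 405
    temperature_c: Optional[float] = None  # from the temperature/humidity meter 407
    humidity_pct: Optional[float] = None
    pressure_hpa: Optional[float] = None   # from the barometer 406
    image_bytes: Optional[bytes] = None    # captured image from the camera 403
    odor_vector: Optional[list] = None     # odor sensor 402 reading
    taste_vector: Optional[dict] = None    # taste sensor 401: sweet/sour/salty/bitter/umami
    peer_id: Optional[str] = None          # ID received by the wireless communication unit 404
    external_meta: dict = field(default_factory=dict)  # enrichment from external servers

# Example: a record for a dish, with taste information and external metadata.
record = LogRecord(
    target_type="dish",
    captured_at=datetime(2014, 11, 1, 12, 30),
    position=(35.6, 139.7),
    taste_vector={"sweet": 0.2, "sour": 0.1, "salty": 0.6, "bitter": 0.1, "umami": 0.7},
    external_meta={"dish_name": "ramen", "calories": 550},
)
print(record.target_type, record.external_meta)
```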
(Feature amount extraction unit)
The feature amount extraction unit 13 extracts feature amounts of the recognition target based on the sensor information about the recognition target stored in the raw data DB 12. For example, when the recognition target is a dish, it extracts feature amounts that can be quantified from the taste sensor information and odor sensor information, and calories that can be estimated from the image of the dish. When the recognition target is a person, parts that affect the impression of the person are extracted as feature amounts: for example, hairstyle classification, hair length, roundness of the face, estimated height, estimated waist, estimated weight, presence or absence of glasses, the color/shape/brand of the glasses, presence or absence of accessories, the color/shape/brand of the accessories, the color/shape/brand of the clothes being worn, presence or absence of makeup, characteristics of the makeup colors (lipstick color, presence or absence of gloss, foundation color, etc.), facial wrinkles, presence or absence of bandages or eye patches, presence or absence of perfume and its brand or product number, the brand of shampoo, classification of behavioral habits, and so on. When the recognition target is a landscape, the feature amount extraction unit 13 converts the arrangement of buildings, signboards, mountains, the sea, roads, and the like included in a captured image of the landscape into feature amounts.
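To make the role of the feature amount extraction unit 13 concrete, the following is a minimal sketch that turns a raw record into a flat dictionary of named feature amounts. The record layout and field names are assumptions, not part of the disclosure; in practice, attributes such as hairstyle or clothing brand would come from image-recognition models that are outside the scope of this sketch.

```python
def extract_features(record: dict) -> dict:
    """Hypothetical feature extraction: map a raw log record to named feature amounts."""
    features = {}
    if record["target_type"] == "dish":
        # Quantifiable features taken directly from the taste sensor and external metadata.
        features.update({f"taste_{k}": v for k, v in record.get("taste_vector", {}).items()})
        if "calories" in record.get("external_meta", {}):
            features["calories"] = record["external_meta"]["calories"]
    elif record["target_type"] == "person":
        # These would normally be produced by image analysis; here they are placeholders.
        features["hair_length_cm"] = record.get("hair_length_cm")
        features["clothing_color"] = record.get("clothing_color")
        features["glasses"] = record.get("glasses", False)
    elif record["target_type"] == "landscape":
        features["building_count"] = record.get("building_count")
    # Drop attributes that could not be observed this time.
    return {k: v for k, v in features.items() if v is not None}

print(extract_features({
    "target_type": "person",
    "clothing_color": "red",
    "glasses": True,
}))
```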
(Feature DB)
The feature amount DB 14 stores the feature amount of the recognition target extracted by the feature amount extraction unit 13.
(Model generation unit)
The model generation unit 15 generates a trend model of the feature amounts of the recognition target based on the past feature amounts of the recognition target. This makes it possible to output not only the difference between two points in time, such as the present and the past, but also the difference between a past tendency (for example, the color of clothes that are often worn) and the present.
For example, a histogram is used as the model indicating past trends. When the recognition target is a person, the model generation unit 15 records the color and brand of the clothes the person was wearing by frequency and generates a histogram. From the distribution of this histogram, the comparison unit 17 described later can output, as a difference, whether the color or brand of the clothes the target person is currently wearing is something the person usually wears or something the person rarely wears. When the recognition target is a person, trend models may also be generated as histograms of, for example, the presence or absence of glasses, the type of clothing (skirt, pants, etc.), accessories, and perfume brands, in addition to histograms of clothing colors and brands. When the recognition target is a dish, an example of a trend model is a taste histogram based on the accumulation of taste information acquired when the same dish was eaten in the past. The model generation unit 15 may also express the model of past trends as a deviation value. In this case, the comparison unit 17 described later can determine whether the current feature amount is the same as the past tendency (common) or different from it (unusual), depending on whether or not the current feature amount is within a threshold.
Furthermore, the model generation unit 15 is not limited to generating a trend model based only on the past feature amounts obtained when one user recognized the recognition target; it can also refer to the server 34 storing the multi-user model DB and generate the trend model using past feature amounts obtained when one or more other users recognized the same target. That is, for example, when generating a trend model of the feature amounts of person A, the model generation unit 15 can use not only feature amounts based on sensor information acquired when the user met person A, such as the feature amounts extracted by the feature amount extraction unit 13 and the past feature amounts accumulated in the feature amount DB 14, but also feature amounts based on sensor information acquired when other users met person A, obtained from the server 34.
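A frequency histogram of this kind can be kept as a simple counter per feature. The sketch below, with assumed class and threshold names, records how often each clothing color has been observed and judges whether the current observation is rare relative to the accumulated trend.

```python
from collections import Counter

class TrendModel:
    """Hypothetical trend model: a frequency histogram for one categorical feature."""
    def __init__(self):
        self.histogram = Counter()

    def update(self, value: str) -> None:
        # Record one more observation of this value (e.g., the clothing color worn today).
        self.histogram[value] += 1

    def is_unusual(self, value: str, rarity_threshold: float = 0.1) -> bool:
        # A value is "unusual" if it accounts for less than rarity_threshold of all observations.
        total = sum(self.histogram.values())
        if total == 0:
            return False
        return self.histogram[value] / total < rarity_threshold

# Build a trend model of clothing colors from past observations of person A.
clothing_color_model = TrendModel()
for color in ["blue", "blue", "gray", "blue", "gray", "black", "blue"]:
    clothing_color_model.update(color)

print(clothing_color_model.is_unusual("red"))   # True: never worn before
print(clothing_color_model.is_unusual("blue"))  # False: frequently worn
```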
(Model DB)
The model DB 16 stores the recognition target trend model generated by the model generation unit 15.
(Comparison unit)
The comparison unit 17 compares the current feature amounts of the recognition target with past feature amounts (in particular, the most recent past feature amounts), with the trend of past feature amounts, or with predicted future feature amounts, and outputs the differences. Before performing the comparison, it is necessary to determine whether the current recognition target is the same as a target the user has recognized in the past. When the recognition target is a person, the same person can be identified from facial feature amounts based on a face image; if the person's ID has been acquired, the same person can also be identified from matching IDs. When the recognition target is a dish, whether it is the same menu item can be determined from the store location, the place meta information (store name), the dish image, and the like. When the recognition target is a landscape, the same landscape can be identified from position information, orientation information, image feature amounts, and the like. When the comparison unit 17 determines that the recognition target is the same, it compares the current feature amounts of the recognition target with the past feature amounts (in particular, the most recent past feature amounts), the trend of past feature amounts (past model), or the predicted future feature amounts, and when the difference in a feature amount is equal to or greater than a predetermined value, it outputs that difference as the comparison result.
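The comparison between the current feature amounts and the most recent past feature amounts can be sketched as below. The structure of the feature dictionaries, the per-feature thresholds, and the way the difference score is computed are assumptions made for illustration, not the disclosed implementation.

```python
def compare_features(current: dict, past: dict, thresholds: dict) -> list:
    """Hypothetical comparison: return differences whose magnitude meets a per-feature threshold."""
    differences = []
    for name, current_value in current.items():
        if name not in past:
            continue  # cannot compare a feature that was not observed before
        past_value = past[name]
        if isinstance(current_value, (int, float)) and isinstance(past_value, (int, float)):
            magnitude = abs(current_value - past_value)
        else:
            magnitude = 0.0 if current_value == past_value else 1.0
        if magnitude >= thresholds.get(name, 0.5):
            differences.append({
                "feature": name,
                "past": past_value,
                "current": current_value,
                "difference_score": magnitude,  # larger means a bigger change
            })
    return differences

past = {"hair_color": "brown", "weight_kg": 60, "glasses": True}
current = {"hair_color": "black", "weight_kg": 80, "glasses": True}
print(compare_features(current, past, thresholds={"weight_kg": 5}))
```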
(Visualization information generation unit)
The visualization information generation unit 18 generates visualization information of the differences to be presented to the user, based on the differences output from the comparison unit 17. For example, the visualization information generation unit 18 may acquire a past image and a current image of the differing part from the raw data DB 12 and generate, as visualization information, a display image in which they are arranged side by side or an image that emphasizes the differing part (see FIG. 9). The visualization information generation unit 18 may also highlight the part indicating the current feature in a histogram indicating the past tendency of the differing part (see FIG. 11). The visualization information generation unit 18 may also display the date and time together with the past image of the differing part. Furthermore, using past captured images and the current captured image of the recognition target stored in the raw data DB 12, the visualization information generation unit 18 may generate a screen on which the user can view how the recognition target gradually changes while switching images with a slider on a time axis (see FIG. 7).
The visualization information generation unit 18 may also generate not only a screen pointing out the change in the differing part but also a screen including detailed information about the differing part. For example, when the other person (recognition target) is wearing a product (glasses, hat, shoes, bag, accessory, etc.) different from the one worn at the previous meeting, the comparison unit 17 outputs that product as a difference. In this case, in order to identify what the product is, the visualization information generation unit 18 accesses the server 36 storing the product meta DB, refers to the product metadata, and can display details of the product (product name, brand, price range, release date, etc.). The visualization information generation unit 18 can also refer to the product metadata to provide a link to a purchase site for the product or to present related products.
The visualization information generation unit 18 can also generate a screen that points out, as advice, the difference between the differing part and a "general preference". General preference data can be acquired from the server 35 storing the general preference information DB. By pointing out the difference from the "general preference" with respect to the change in the recognition target, the visualization information generation unit 18 can provide a further topic, such as how the target might better be changed. General preferences include, for example, information such as recent trends and popular items obtained from fashion magazines, color schemes that match a color palette, and the ratio of base colors to accent colors. They also include information obtained by generalizing multi-user models, as well as general common sense (such as etiquette for ceremonial occasions). Regarding room interiors, they include information on color schemes and arrangements, such as unifying colors in white to make a room look larger, arranging smaller furniture to make a room look larger, and using warm-colored lighting in the dining area. Regarding hairstyles, they include information such as how to tie hair according to its length, how to use hair clips, hairstyles that suit the shape of the face, and hairstyles that suit one's age.
The visualization information generation unit 18 may also display, as additional information about the differing part, a calorie estimate derived from the image, or the weight, age, height, and the like. Furthermore, when the feature after the change resembles a celebrity, the visualization information generation unit 18 may display that celebrity. The celebrity information can be acquired by accessing the server 37 storing the celebrity information DB.
In this way, by generating a screen that includes not only an indication of the differing part but also detailed information about it, advice based on the difference from the "general preference", and additional information about the differing part, the visualization information generation unit 18 can enrich the topics provided to the user.
The visualization information generation unit 18 also functions as a notification control unit that transmits the generated visualization information (an example of information about the difference of the recognition target based on the comparison result of the comparison unit 17) to the user terminal 2 via the communication unit 11 and controls notification of the user.
(Prediction unit)
The prediction unit 19 predicts future feature amounts of the recognition target and outputs the prediction result to the comparison unit 17 or the visualization information generation unit 18. This allows the comparison unit 17 to compute not only differences from the past but also differences from the future, and allows the visualization information generation unit 18 to generate a screen including future feature amounts. As a first prediction method, the prediction unit 19 can predict a future feature amount by extrapolating from two or more past and present feature amounts of the recognition target. For example, based on the amount of hair two years ago, one year ago, and at present, the amount of hair one year from now is predicted with a linear model. As a second prediction method, the prediction unit 19 can predict a future feature amount using a trend model of the past feature amounts of the recognition target. For example, based on a trend model indicating how wrinkles on the face of the recognition target have been increasing, the amount of wrinkles expected with further aging is predicted. In this case, the visualization information generation unit 18 can generate a face image based on the predicted amount of wrinkles with aging and include it in the presentation screen.
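The first prediction method, extrapolation from two or more time-stamped observations, can be sketched with a simple least-squares line fit as below. The function name, units, and sample values are assumptions made for illustration.

```python
def predict_linear(observations: list, future_year: float) -> float:
    """Hypothetical linear extrapolation from (year, value) pairs, e.g. amount of hair per year."""
    n = len(observations)
    mean_x = sum(x for x, _ in observations) / n
    mean_y = sum(y for _, y in observations) / n
    # Ordinary least-squares slope and intercept.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in observations)
    var = sum((x - mean_x) ** 2 for x, _ in observations)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope * future_year + intercept

# Hair amount (arbitrary units) measured two years ago, one year ago, and now.
hair_amount = [(2012, 100.0), (2013, 95.0), (2014, 91.0)]
print(round(predict_linear(hair_amount, 2015), 1))  # predicted amount one year from now (about 86.3)
```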
The configuration of the server 1 according to the present embodiment has been described above in detail. Note that the server 1 is equipped with a microcomputer including a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and a nonvolatile memory, which controls each component of the server 1.
<< 3. Operation Processing >>
Next, the operation processing of the information processing system according to the present embodiment will be described in detail with reference to FIGS. 3 to 14.
<3-1. Data collection>
First, various kinds of data collection according to the present embodiment will be described with reference to FIGS. 3 to 5. FIG. 3 is a flowchart showing the log collection processing according to the present embodiment. The processing shown in FIG. 3 is executed in the information processing terminal 4 (a wearable device or the like) provided with the various log sensors. Specifically, the information processing terminal starts log collection periodically or when triggered by an external event (S103), and identifies the recognition target (S106).
Next, the information processing terminal collects sensor information suited to the recognition target (S109). For example, if the recognition target is a dish, information is acquired by the taste sensor 401, the odor sensor 402, and the camera 403. If the recognition target is a person, information is acquired by the odor sensor 402, the camera 403, and the wireless communication unit 404.
Next, the information processing terminal collects environmental sensor information at the time of recognition (S112). Environmental sensor information is information that does not depend on the recognition target, such as the current position information acquired by the position measurement unit 405, the current temperature and humidity acquired by the temperature/humidity meter 407, and the current date and time acquired by the clock unit 408.
Next, the information processing terminal enriches the log data (S115). That is, the external reference information acquisition unit 409 accesses various external servers (such as the dish meta DB server 31, the weather information DB server 32, and the place meta DB server 33) to acquire further information about the recognition target and about the environment at the time of recognition.
The information processing terminal then transmits all the acquired information to the server 1, which stores it in the raw data DB 12.
Next, the feature amount extraction in the server 1 will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the feature amount extraction processing in the present embodiment. As shown in FIG. 4, first, the feature amount extraction unit 13 of the server 1 reads the sensor information of the recognition target from the raw data DB 12 (S123).
Next, the feature amount extraction unit 13 extracts the feature amounts of the recognition target based on the sensor information of the recognition target (S126).
The feature amount extraction unit 13 then stores the extracted feature amounts in the feature amount DB 14 (S129). The feature amount extraction described above is performed in real time when the user recognizes the target.
Next, generation of the trend model in the server 1 will be described with reference to FIG. 5. FIG. 5 is a flowchart showing the trend model generation processing for the feature amounts of the recognition target according to the present embodiment. As shown in FIG. 5, first, the model generation unit 15 of the server 1 checks whether a trend model for a predetermined feature amount of the recognition target has already been generated (S133).
If it has already been generated ("No" in S133), the model generation unit 15 reads the already generated trend model (past model) from the model DB 16 (S136).
On the other hand, if it has not yet been generated ("Yes" in S133), the model generation unit 15 initializes a trend model for the predetermined feature amount.
Next, the model generation unit 15 reads the predetermined feature amount of the recognition target from the feature amount DB 14 (S142). Here, feature amounts that have not yet been reflected in the trend model (for example, feature amounts newly acquired from the recognition target at present) are read.
Next, the model generation unit 15 updates the trend model of the feature amount based on the read feature amounts (S145).
Next, the model generation unit 15 writes the updated trend model to the model DB 16 (S148).
Then, if necessary, the model generation unit 15 synchronizes the user's model DB in the server 34 storing the multi-user model DB with the updated trend model. The server 34 stores, for each user, the trend models and feature amount DB of recognition targets.
The trend model generation described above is performed for each of the plural feature amounts extracted from the recognition target. For example, trend models may be generated for the recognition target's hairstyle features, clothing features, belongings features, body shape features, and so on.
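The flow of FIG. 5 (load or initialize a model, fold in the feature amounts not yet reflected, and store the result) could be expressed roughly as follows. The in-memory dictionaries stand in for the model DB 16 and the feature amount DB 14 and are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical in-memory stand-ins for the model DB 16 and the feature amount DB 14.
model_db = {}          # (target_id, feature_name) -> Counter
feature_db = {
    ("person_A", "clothing_color"): ["blue", "gray", "blue"],  # not yet reflected in the model
}

def update_trend_model(target_id: str, feature_name: str) -> Counter:
    key = (target_id, feature_name)
    # S133/S136: read the existing trend model if there is one, otherwise initialize a new one.
    model = model_db.get(key, Counter())
    # S142: read the feature amounts not yet reflected in the trend model.
    new_values = feature_db.pop(key, [])
    # S145: update the model with those values.
    model.update(new_values)
    # S148: write the updated model back.
    model_db[key] = model
    # The optional synchronization with the multi-user model DB server 34 is omitted here.
    return model

print(update_trend_model("person_A", "clothing_color"))
```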
Next, the operation processing of difference presentation and examples of visualization information according to the present embodiment will be described with reference to FIGS. 6 to 14. Each of the difference presentation operations described below can be performed in real time when the user recognizes the target.
<3-2. Presentation of differences between two points>
First, the presentation of differences between two points in a time series (that is, between the present and the most recent past, or between the present and the future) according to the present embodiment will be described in detail with reference to FIGS. 6 to 9.
FIG. 6 is a flowchart showing the two-point difference presentation processing according to the present embodiment. As shown in FIG. 6, first, the comparison unit 17 reads the current feature amounts and the most recent past feature amounts of the recognition target from the feature amount DB 14 (S203).
Next, the comparison unit 17 compares the feature amounts and calculates the differences (S206).
Next, the comparison unit 17 outputs the comparison result (the differing parts) to the visualization information generation unit 18 (S209).
Next, the visualization information generation unit 18 acquires information about the differing parts from the raw data DB 12 (S212).
Next, the visualization information generation unit 18 generates a comment pointing out the differences (S215). At this time, the visualization information generation unit 18 may acquire detailed information, related information, and the like about the differing parts from external servers and generate a comment including them.
Subsequently, the visualization information generation unit 18 generates visualization information based on the acquired information about the differing parts and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and thereby notifies the user of the differences (S218).
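The flow of FIG. 6 can be read as a small pipeline: read the two feature sets, compare them, attach raw-data context and a comment, and send the result to the user terminal. The sketch below strings those steps together with assumed helper names and data structures; it is not the disclosed implementation.

```python
def present_two_point_difference(target_id, feature_db, raw_db, send_to_terminal):
    """Hypothetical pipeline corresponding roughly to steps S203-S218 of FIG. 6."""
    # S203: read the current and most recent past feature amounts.
    current = feature_db[target_id][-1]
    previous = feature_db[target_id][-2]
    # S206/S209: compare and collect the differing features.
    differences = [
        {"feature": name, "past": previous[name], "current": value}
        for name, value in current.items()
        if name in previous and previous[name] != value
    ]
    # S212: attach raw-data context (e.g., images, dates) for each differing part.
    for diff in differences:
        diff["context"] = raw_db.get((target_id, diff["feature"]), {})
    # S215: generate a simple comment per difference.
    for diff in differences:
        diff["comment"] = f"{diff['feature']} changed from {diff['past']} to {diff['current']}."
    # S218: present the visualization information on the user terminal 2.
    send_to_terminal(differences)
    return differences

feature_db = {"person_A": [{"glasses_color": "black"}, {"glasses_color": "red"}]}
raw_db = {("person_A", "glasses_color"): {"last_seen": "yesterday"}}
present_two_point_difference("person_A", feature_db, raw_db, send_to_terminal=print)
```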
Here, a specific example of the visualization information presented to the user by the visualization information generation unit 18 will be described with reference to FIG. 7. FIG. 7 is a diagram showing an example of the screen display of visualization information according to the present embodiment. As shown in FIG. 7, the visualization information may be realized, for example, as a screen on which, using past captured images and the current captured image of the recognition target, the user can view how the recognition target gradually changes while switching images with a slider on a time axis. Screens 22-1 to 22-4 in FIG. 7 are screens that are switched by moving the slider on the time axis.
Specifically, for example, when the oldest information in which the user recognized the target (such as a captured image of the other person when they first met) dates from 1972, the earliest point of the time axis is set to 1972 as shown on screen 22-1. When the slider is moved to 1972, the captured image (face image) of the recognition target acquired in 1972 is displayed together with information such as the date and time, place, and weather at the time of recognition. This allows the user to talk about the first time the user met the other person.
If the next time the user met the same person was in 2014, no information about the person between 1972 and 2014 has been accumulated, but as shown on screen 22-2, when the slider is moved to a point between 1972 and 2014, a composite image generated based on the past images may be displayed. On screen 22-2, a composite image in which the past image from 1972 and the past image from 2014 are superimposed is displayed.
Next, as shown on screen 22-3, when the slider on the time axis is moved to the present (2014), the captured image (face image) of the recognition target acquired in 2014 is displayed together with information such as the date and time, place, and weather at the time of recognition. Here, when the user meets the same person in 2014, the comparison unit 17 compares the most recent past feature amounts of the person with the current feature amounts, and screen 22-3, which visualizes the differences based on the comparison result, is displayed. For example, if the most recent past meeting with the same person was one week ago, screen 22-3 includes information such as the person's face image from one week ago and the date, time, and place of that meeting, and further indicates the differences in feature amounts from one week ago (such as a change in hairstyle). This allows the user to bring up the fact that the person's hairstyle has changed since they last met. If there is no difference, the screen may indicate that nothing has changed.
Subsequently, as shown on screen 22-4, when the slider on the time axis is moved to the future, such as 2020, information based on the future feature amounts predicted by the prediction unit 19 is displayed. For example, a face image generated by the visualization information generation unit 18 based on the predicted amount of wrinkles, amount of hair, hair color, change in facial sagging, and the like is displayed.
An example of visualization information that displays past images and the like of the recognition target along a time axis has been described above. On screen 22-2 shown in FIG. 7, the face image of the person during the period when the user did not meet the person is synthesized and displayed based on past images, but the present embodiment is not limited to this. The visualization information generation unit 18 may acquire, from the server 34 storing the multi-user model DB, a captured image of the person taken when another user met the person, and display it. This will be described below with reference to FIG. 8.
FIG. 8 is a diagram explaining a modified example in which a captured image taken when another user met the person is displayed on screen 22-2 shown in FIG. 7. As shown in FIG. 8, on screen 22-2', the slider on the time axis has been moved to a point between 1972 and 2014 (for example, around 2000). Because the user did not meet the person at the time indicated by the slider, no past image (captured image) of the person is stored in the raw data DB 12. However, if another user (for example, a mutual friend XX) met the person during that period, the visualization information generation unit 18 acquires from the server 34 the image of the person captured when that other user met them and displays it on screen 22-2'. The visualization information generation unit 18 also displays the person's past image together with a comment (annotation) such as "You did not meet around 2000. This is an image from when your mutual friend XX met this person.", thereby notifying the user that this is information from when another user met the person.
Next, another example of visualization information will be described with reference to FIG. 9. FIG. 9 is a diagram showing an example of visualization information in which the differing part is emphasized. For example, when the frame color of the glasses of the person the user has met has changed since yesterday, the most recent past, the glasses frame, which is the differing part, is emphasized and the user is notified. Specifically, for example, as shown in FIG. 9, on screen 22-6, the current captured image of the person and the most recent past captured image are displayed side by side, and an enlarged image of the differing part and a comment about the differing part are also displayed. This allows the user to bring up the fact that the frame color of the person's glasses has changed since they last met.
<3-3. Presentation of differences from trends>
Next, the presentation of differences from trends according to the present embodiment will be described in detail with reference to FIGS. 10 and 11. The embodiments described above present the difference between two points in a time series, such as the present and the most recent past or the future, but the present embodiment is not limited to this; it is also possible to present the difference between the present and the trend of the past feature amounts of the recognition target generated by the model generation unit 15.
FIG. 10 is a flowchart showing the processing for presenting differences from a trend according to the present embodiment. As shown in FIG. 10, first, the comparison unit 17 reads the current feature amounts of the recognition target from the feature amount DB 14 (S233).
Next, the comparison unit 17 reads the past trend of the recognition target from the model DB 16 (S236).
Next, the comparison unit 17 compares the current feature amounts with the past trend and calculates the differences (S239).
Next, the comparison unit 17 outputs the comparison result (the differing parts) to the visualization information generation unit 18 (S242).
Next, the visualization information generation unit 18 acquires information about the differing parts from the raw data DB 12 (S245).
Next, the visualization information generation unit 18 generates a comment pointing out the differences (S248). At this time, the visualization information generation unit 18 may acquire detailed information, related information, and the like about the differing parts from external servers and generate a comment including them.
Subsequently, the visualization information generation unit 18 generates visualization information based on the acquired information about the differing parts and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and thereby notifies the user of the differences (S251).
Here, a specific example of the visualization information presented to the user by the visualization information generation unit 18 will be described with reference to FIG. 11. FIG. 11 is a diagram showing an example of the screen display of visualization information according to the present embodiment. As shown in FIG. 11, the visualization information may be realized, for example, as a screen that highlights the part indicating the current feature in a histogram indicating the past tendency of the differing part. Specifically, as shown on screen 22-7 in FIG. 11, for example, a histogram indicating the past tendency of the colors of the clothes worn by the recognition target (wearing frequency for each color) is displayed together with representative past clothing images classified by color, and it is pointed out that the color of the clothes currently being worn is one that has been worn less frequently than the past tendency suggests. At this time, the visualization information generation unit 18 may display an image from the last time the corresponding color was worn side by side with an image of the clothes currently being worn.
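As a concrete illustration of the kind of check behind screen 22-7, the sketch below, under assumed data structures and threshold, looks up how often the currently worn color appears in the accumulated histogram and produces a comment only when it is a rarely worn one.

```python
from collections import Counter
from typing import Optional

def trend_difference_comment(color_histogram: Counter, current_color: str,
                             rarity_threshold: float = 0.1) -> Optional[str]:
    """Hypothetical check for screen 22-7: comment when the current color is rarely worn."""
    total = sum(color_histogram.values())
    share = color_histogram[current_color] / total if total else 0.0
    if share < rarity_threshold:
        return (f"The {current_color} clothes worn today are unusual: "
                f"worn in only {share:.0%} of past observations.")
    return None  # nothing noteworthy compared with the past trend

past_colors = Counter({"blue": 14, "gray": 9, "black": 6, "red": 1})
print(trend_difference_comment(past_colors, "red"))
```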
<3-4. Presentation of differences according to the score>
Next, the presentation of differences according to scores according to the present embodiment will be described in detail with reference to FIGS. 12 to 14. In each of the embodiments described above, the differing parts calculated by the comparison unit 17 are presented to the user as they are, but the presentation method according to the present embodiment is not limited to this. For example, by changing the presentation mode (display mode) according to the magnitude of the difference and the content attribute of the difference (negative/positive), the differences can be presented more appropriately.
FIG. 12 is a flowchart showing the difference presentation processing according to scores according to the present embodiment. As shown in FIG. 12, first, the comparison unit 17 reads the current feature amounts and the most recent past feature amounts of the recognition target from the feature amount DB 14 (S303).
Next, the comparison unit 17 compares the feature amounts and calculates the differences (S306).
Next, the comparison unit 17 outputs the comparison result (the differing parts) and the difference scores to the visualization information generation unit 18 (S309). A difference score is a score indicating the magnitude of the difference from the current feature amount. When there are multiple differences, a difference score is calculated for each difference and output to the visualization information generation unit 18.
Next, the visualization information generation unit 18 acquires information about the differing parts from the raw data DB 12 (S312).
Next, the visualization information generation unit 18 calculates a positive-negative score for each difference (S315). The positive-negative score indicates, as a continuous value, the degree to which a difference (change) output by the comparison unit 17 is positive or negative; for example, "score: -100" means a very negative difference, and "score: 50" means a relatively positive difference.
Next, the visualization information generation unit 18 determines the presentation mode of the differences (the degree of emphasis, arrangement, and the like on the display screen) based on the relationship between the user and the other person, the difference scores, and the positive-negative scores (S318). For example, the visualization information generation unit 18 displays differences with large difference scores or highly positive differences with emphasis, and presents differences with large difference scores but negative content with a degree of emphasis and arrangement according to the relationship with the other person (closeness, gender, closeness in age, etc.). When there are multiple differences and the relationship with the other person is close, the visualization information generation unit 18 may present the differences in descending order of difference score even if a positive-negative score is strongly negative (for example, a score of -50 or lower). On the other hand, if the relationship with the other person is not close, the visualization information generation unit 18 may preferentially display (arrange) differences whose positive-negative score is 20 or higher, in order of difference score.
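The decision in S318 can be thought of as filtering and ordering the differences by their scores, with the filter relaxed when the relationship is close. The sketch below follows the examples given in this section, but the exact cutoff (non-negative positive-negative score when the relationship is not close) and the field names are simplifying assumptions for illustration.

```python
def decide_presentation(differences: list, is_close_relationship: bool) -> list:
    """Hypothetical S318: choose which differences to emphasize and in what order."""
    if is_close_relationship:
        # Close relationship: even negative changes may lead, ordered by difference score.
        candidates = list(differences)
    else:
        # Not close: keep only neutral or positive changes (a simplification of the
        # "positive-negative score of 20 or higher" rule described above).
        candidates = [d for d in differences if d["pos_neg_score"] >= 0]
    ordered = sorted(candidates, key=lambda d: d["difference_score"], reverse=True)
    for rank, diff in enumerate(ordered):
        diff["emphasis"] = "large" if rank == 0 else "small"  # e.g., font size on the screen
    return ordered

differences = [
    {"feature": "hair_color", "difference_score": 20, "pos_neg_score": 0},
    {"feature": "weight", "difference_score": 30, "pos_neg_score": -60},
]
print([d["feature"] for d in decide_presentation(differences, is_close_relationship=False)])
# -> ['hair_color']: the neutral hair-color change leads when the relationship is not close
print([d["feature"] for d in decide_presentation(differences, is_close_relationship=True)])
# -> ['weight', 'hair_color']: the larger change leads when the relationship is close
```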
Next, the visualization information generation unit 18 generates a comment pointing out the differences (S321). At this time, the visualization information generation unit 18 may acquire detailed information, related information, and the like about the differing parts from external servers and generate a comment including them.
Subsequently, the visualization information generation unit 18 generates visualization information based on the acquired information about the differing parts and the generated comment, presents the generated visualization information on the user terminal 2 via the communication unit 11, and thereby notifies the user of the differences (S324).
Here, specific examples of the visualization information presented to the user by the visualization information generation unit 18 will be described with reference to FIGS. 13 and 14. As an example, FIGS. 13 and 14 illustrate how the presentation mode differs according to the relationship with the other person when the comparison unit 17 outputs a change in hair color and a change in weight as differences. The change in hair color is, for example, a change from brown to black, with a difference score of 20 and a neutral positive-negative score. The change in weight is, for example, an estimated increase of 20 kg, with a difference score of 30 and a negative positive-negative score.
FIG. 13 is a diagram explaining the presentation mode when the relationship between the user and the other person is not close. In this case, as shown on screen 22-8 in FIG. 13, the visualization information generation unit 18 highlights the comment 50 pointing out the change in hair color, a neutral change despite its lower difference score, in larger characters so that it stands out more, and places it higher in the display order. On the other hand, the comment 51 pointing out the change in weight, a negative change despite its higher difference score, is displayed in small characters so as to be inconspicuous and placed lower in the display order. This allows the user to focus the conversation on a neutral change such as the change in hair color.
FIG. 14 is a diagram explaining the presentation mode when the relationship between the user and the other person is close. In this case, as shown on screen 22-9 in FIG. 14, the visualization information generation unit 18 highlights the comment 52 pointing out the change in weight, which has the higher difference score even though it is a negative change, in larger characters so that it stands out more, and places it higher in the display order. On the other hand, the comment 53 pointing out the change in hair color, which has the lower difference score, is displayed in small characters and placed lower in the display order. This allows the user to focus on the larger difference even when it is a negative change such as a change in weight.
The presentation of differences according to scores has been described above. Although the flow shown in FIG. 12 compares two points, the present embodiment is not limited to this, and the presentation mode for the result of comparing the present with a past trend may likewise be controlled according to the scores (difference score, positive-negative score).
<< 4. Summary >>
As described above, the information processing system according to the embodiment of the present disclosure makes it possible to provide temporal differences of a recognition target as topics of conversation. This allows the user to notice changes in the recognition target. When the recognition target is a person, the changes can be used as topics of conversation, for example, when meeting after a long time. When the recognition target is a landscape, seeing how the landscape changes over time can lead to new discoveries.
The preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, but the present technology is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes and modifications within the scope of the technical idea described in the claims, and it should be understood that these naturally belong to the technical scope of the present disclosure.
For example, a computer program for causing hardware such as the CPU, ROM, and RAM built into the server 1 and the user terminal 2 described above to exhibit the functions of the server 1 and the user terminal 2 can also be created. A computer-readable storage medium storing the computer program is also provided.
Further, the notification of information about the difference of the recognition target based on the comparison result of the comparison unit 17 is not limited to displaying on the user terminal 2 the visualization information generated by the visualization information generation unit 18 as described above. The server 1 may have the function of a notification control unit that controls the user terminal 2 to notify the user of information about the difference of the recognition target based on the comparison result of the comparison unit 17 by voice or the like.
The user terminal 2 may also have at least part of the configuration of the server 1 according to the present embodiment.
The effects described in this specification are merely illustrative or exemplary, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
In addition, this technique can also take the following structures.
(1)
A comparison unit that compares the current feature value of the recognition target with the feature value at another point in time series,
A generation unit that generates visualization information that visualizes the difference between the recognition targets based on the comparison result of the comparison unit;
An information processing apparatus comprising:
(2)
The information processing apparatus according to (1), wherein the feature quantity of the recognition target is extracted from sensor information detected when a user faces the recognition target.
(3)
The information processing apparatus according to (1) or (2), wherein the comparison unit compares a current feature amount of the recognition target with a past feature amount.
(4)
The information processing apparatus according to (3), wherein the comparison unit compares a current feature amount of the recognition target with a feature amount in the past.
(5)
The information processing apparatus according to (3), wherein the comparison unit compares a current feature amount of the recognition target with a model indicating a tendency of the past feature amount of the recognition target.
(6)
The information processing apparatus according to (5), wherein the past feature amount model is generated based on the past feature amount of the recognition target.
(7)
The model of the past feature amount is generated based on the past feature amount of the recognition target extracted from sensor information detected when one or more other users face the recognition target. ).
(8)
The information processing apparatus according to (1) or (2), wherein the comparison unit compares a current feature amount of the recognition target with a future feature amount of the recognition target.
(9)
The information processing apparatus according to (8), wherein the future feature amount is predicted based on a past feature amount model of the recognition target and a current feature amount.
(10)
The information processing apparatus according to any one of (1) to (9), wherein the feature amount is extracted from sensor information suitable for the recognition target.
(11)
The information processing apparatus according to (10), wherein when the recognition target is a person, the feature amount is extracted from a captured image or odor information of the person detected by a camera sensor or an odor sensor.
(12)
The feature amount is extracted from the taste information, odor information, or captured image of the dish detected by a taste sensor, an odor sensor, or a camera sensor when the recognition target is a dish. Information processing device.
(13)
The information processing apparatus according to (10), wherein the feature amount is extracted from a captured image of the scenery detected by a camera sensor when the recognition target is a scenery.
(14)
The information processing apparatus according to any one of (1) to (13), wherein the generation unit generates, as visualization information, a display screen that emphasizes the difference portion to be recognized.
(15)
The generation unit determines a presentation mode of the difference according to the magnitude of the difference of the recognition target, the positive-negative attribute of the difference, and the relationship between the recognition target and the user, (1) to (14 The information processing apparatus according to any one of the above.
(16)
The information processing apparatus according to any one of (1) to (15), further including a notification control unit that controls to notify a user of information related to the recognition target difference based on a comparison result of the comparison unit. The information processing apparatus described.
(17)
Comparing the current feature quantity of the recognition target with the feature quantity at another point in time,
Generating visualization information that visualizes the difference of the recognition target based on the comparison result;
Including a control method.
(18)
Computer
A comparison unit that compares the current feature value of the recognition target with the feature value at another point in time series,
A generation unit that generates visualization information that visualizes the difference between the recognition targets based on the comparison result of the comparison unit;
Program to function as
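Configurations (5) through (9) above refer to a model indicating the tendency of past feature amounts and to a predicted future feature amount. The disclosure leaves the modeling method open; purely as an assumed example, a per-dimension linear trend could be fitted to past observations and extrapolated as follows (all names are hypothetical):

```python
import numpy as np

def fit_trend_model(timestamps: np.ndarray, past_features: np.ndarray) -> np.ndarray:
    """Fit a linear trend per feature dimension.

    past_features has shape (n_samples, n_dims); the result has shape (2, n_dims),
    holding slope and intercept for each dimension.
    """
    return np.polyfit(timestamps, past_features, deg=1)

def predict_future_features(trend: np.ndarray, current: np.ndarray,
                            t_now: float, t_future: float) -> np.ndarray:
    """Extrapolate a future feature amount from the trend, anchored at the current value."""
    slope = trend[0]
    return current + slope * (t_future - t_now)

# Assumed example: three past observations of one feature (e.g. the height of a roadside tree)
t = np.array([0.0, 10.0, 20.0])
past = np.array([[1.0], [1.5], [2.1]])
trend = fit_trend_model(t, past)
print(predict_future_features(trend, current=np.array([2.1]), t_now=20.0, t_future=30.0))
```

Under this sketch, comparing the current feature amount against the model's value at the current time would detect a deviation from the usual tendency (configuration (5)), while the extrapolated value would serve as the future feature amount of configurations (8) and (9).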
1  Server
11  Communication unit
12  Raw data DB
13  Feature amount extraction unit
14  Feature amount DB
15  Model generation unit
16  Model DB
17  Comparison unit
18  Visualization information generation unit
19  Prediction unit
2  User terminal
3  Network
31  Cooking meta DB server
32  Weather information DB server
33  Location meta DB server
34  Multi-user model DB server
35  General preference DB server
36  Product meta DB server
37  Celebrity DB server

Claims (18)

  1.  An information processing apparatus comprising:
      a comparison unit configured to compare a current feature amount of a recognition target with a feature amount at another point in a time series; and
      a generation unit configured to generate visualization information that visualizes a difference of the recognition target based on a comparison result of the comparison unit.
  2.  The information processing apparatus according to claim 1, wherein the feature amount of the recognition target is extracted from sensor information detected when a user faces the recognition target.
  3.  The information processing apparatus according to claim 1, wherein the comparison unit compares the current feature amount of the recognition target with a past feature amount.
  4.  The information processing apparatus according to claim 3, wherein the comparison unit compares the current feature amount of the recognition target with the most recent past feature amount.
  5.  The information processing apparatus according to claim 3, wherein the comparison unit compares the current feature amount of the recognition target with a model indicating a tendency of past feature amounts of the recognition target.
  6.  The information processing apparatus according to claim 5, wherein the model of past feature amounts is generated based on past feature amounts of the recognition target.
  7.  The information processing apparatus according to claim 5, wherein the model of past feature amounts is generated based on past feature amounts of the recognition target extracted from sensor information detected when one or more other users face the recognition target.
  8.  The information processing apparatus according to claim 1, wherein the comparison unit compares the current feature amount of the recognition target with a future feature amount of the recognition target.
  9.  The information processing apparatus according to claim 8, wherein the future feature amount is predicted based on a model of past feature amounts of the recognition target and the current feature amount.
  10.  The information processing apparatus according to claim 1, wherein the feature amount is extracted from sensor information suited to the recognition target.
  11.  The information processing apparatus according to claim 10, wherein, when the recognition target is a person, the feature amount is extracted from a captured image or odor information of the person detected by a camera sensor or an odor sensor.
  12.  The information processing apparatus according to claim 10, wherein, when the recognition target is a dish, the feature amount is extracted from taste information, odor information, or a captured image of the dish detected by a taste sensor, an odor sensor, or a camera sensor.
  13.  The information processing apparatus according to claim 10, wherein, when the recognition target is a landscape, the feature amount is extracted from a captured image of the landscape detected by a camera sensor.
  14.  The information processing apparatus according to claim 1, wherein the generation unit generates, as the visualization information, a display screen that emphasizes a difference portion of the recognition target.
  15.  The information processing apparatus according to claim 1, wherein the generation unit determines a presentation mode of the difference according to a magnitude of the difference of the recognition target, a positive-negative attribute of the difference, and a relationship between the recognition target and the user.
  16.  The information processing apparatus according to claim 1, further comprising a notification control unit configured to control notification to the user of information on the difference of the recognition target based on the comparison result of the comparison unit.
  17.  A control method comprising:
      comparing a current feature amount of a recognition target with a feature amount at another point in a time series; and
      generating visualization information that visualizes a difference of the recognition target based on a result of the comparison.
  18.  A program for causing a computer to function as:
      a comparison unit configured to compare a current feature amount of a recognition target with a feature amount at another point in a time series; and
      a generation unit configured to generate visualization information that visualizes a difference of the recognition target based on a comparison result of the comparison unit.
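Claim 15 ties the presentation mode to the magnitude of the difference, its positive-negative attribute, and the relationship between the recognition target and the user, without fixing a decision rule. A minimal sketch under assumed thresholds and an assumed policy (none of which appear in the claims) might be:

```python
def choose_presentation_mode(magnitude: float, is_positive: bool, relationship: str) -> str:
    """Pick how strongly to surface a detected difference.

    Assumed policy: large positive changes are highlighted, while negative changes
    are suppressed for close relationships to avoid raising an awkward topic.
    """
    if not is_positive and relationship in ("family", "close_friend"):
        return "suppress"    # do not raise a potentially hurtful topic
    if magnitude > 0.5:
        return "highlight"   # emphasize the changed region on the display screen
    if magnitude > 0.2:
        return "mention"     # show a brief textual note
    return "ignore"

print(choose_presentation_mode(0.7, is_positive=True, relationship="acquaintance"))
```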
PCT/JP2015/075268 2014-11-27 2015-09-04 Information processing device, control method and program WO2016084453A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-239932 2014-11-27
JP2014239932A JP2016103079A (en) 2014-11-27 2014-11-27 Information processing device, control method, and program

Publications (1)

Publication Number Publication Date
WO2016084453A1 true WO2016084453A1 (en) 2016-06-02

Family

ID=56074034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/075268 WO2016084453A1 (en) 2014-11-27 2015-09-04 Information processing device, control method and program

Country Status (2)

Country Link
JP (1) JP2016103079A (en)
WO (1) WO2016084453A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7074422B2 (en) * 2016-11-02 2022-05-24 花王株式会社 Aging analysis method
JP6901139B2 (en) * 2017-12-12 2021-07-14 株式会社Lyxis Lifelog image generator, lifelog image generation method, and program
WO2021187012A1 (en) * 2020-03-16 2021-09-23 日本電気株式会社 Information processing system, information processing method, and non-transitory computer-readable medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007212258A (en) * 2006-02-09 2007-08-23 Sharp Corp Portable telephone device
JP2008202985A (en) * 2007-02-16 2008-09-04 Matsushita Electric Works Ltd Electric power monitoring system
JP2011007417A (en) * 2009-06-25 2011-01-13 Sharp Corp Heating cooker
JP2012208835A (en) * 2011-03-30 2012-10-25 Toshiba Corp Disaster detection image processing system
JP2013005726A (en) * 2011-06-22 2013-01-10 Nikon Corp Information providing system, information providing device, information providing method, and program
JP2013054690A (en) * 2011-09-06 2013-03-21 Seiko Epson Corp Vehicle allocation management system, vehicle allocation method, vehicle operation support system, vehicle operation support method, program and recording medium
JP2013207406A (en) * 2012-03-27 2013-10-07 Nikon Corp Electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KENTA TANAKA: "Information presentation system of the person whom the user met previously by face recognition", ITE TECHNICAL REPORT, vol. 36, no. 9, 13 February 2012 (2012-02-13), pages 183 - 187 *
YUTA MURAKI: "Context Awareness ni Motozuku Ryori Shien System", HEISEI 26 NEN NATIONAL CONVENTION RECORD I.E.E. JAPAN [3] ELECTRONICS/ JOHO KOGAKU SYSTEM/SENSOR MICROMACHINE, 5 March 2014 (2014-03-05), pages 68 - 69 *

Also Published As

Publication number Publication date
JP2016103079A (en) 2016-06-02

Similar Documents

Publication Publication Date Title
JP6777201B2 (en) Information processing equipment, information processing methods and programs
US11503197B2 (en) Retrieving and displaying key words from prior conversations
US11616917B1 (en) Dynamic activity-based image generation for online social networks
US10062163B2 (en) Health information service system
KR102519686B1 (en) Method and apparatus for providing content
US20160371372A1 (en) Music Recommendation Based on Biometric and Motion Sensors on Mobile Device
US20180285641A1 (en) Electronic device and operation method thereof
KR20170029398A (en) Method and electronic apparatus for providing application
WO2017098760A1 (en) Information processing device, information processing method, and program
WO2013128715A1 (en) Electronic device
WO2016084453A1 (en) Information processing device, control method and program
CN111698564A (en) Information recommendation method, device, equipment and storage medium
US20210350131A1 (en) Media overlay selection system
US11876634B2 (en) Group contact lists generation
US20190042657A1 (en) Concierge system, concierge method, and concierge program
JP2013182422A (en) Electronic device
JP2013183289A (en) Electronic device
JP2019046428A (en) Accessory classification system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15862817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15862817

Country of ref document: EP

Kind code of ref document: A1