JP2012155616A - Content provision system, content provision method, and content provision program


Publication number: JP2012155616A
Authority: JP (Japan)
Prior art keywords: content, user, output, unit, behavior
Legal status: Withdrawn
Application number: JP2011015535A
Other languages: Japanese (ja)
Inventor: Hiromichi Nishiyama (弘道 西山)
Original Assignee: Panasonic Corp (パナソニック株式会社)
Application filed by Panasonic Corp (パナソニック株式会社)
Priority claimed from application JP2011015535A
Publication of JP2012155616A

Abstract

PROBLEM TO BE SOLVED: To easily acquire the user's response to content and to provide content suited to the user's preference. SOLUTION: A content provision system includes: an output unit 203 that outputs content; a determination unit 208 that determines the type of the user's behavior on the basis of information acquired by a sensor 212 that detects the user's behavior; and a content selection unit 209 that determines the content to be output by the output unit 203 on the basis of the determination result by the determination unit 208 corresponding to the content output by the output unit 203.

Description

  The present invention relates to a content providing system, a content providing method, and a content providing program that provide content that meets user preferences.

  In recent years, pet-type robots have been sold by various manufacturers and are increasingly found in ordinary homes. Among them, robots have appeared that respond to calls from people and provide information such as content while communicating with them.

  As a system that provides content while a robot communicates with a person, one that selects and reproduces content matching the robot's emotion has been proposed (see, for example, Patent Document 1). In the content reproduction system described in Patent Document 1, a robot apparatus that operates autonomously, changing its emotions in response to external actions, transmits emotion information to a content reproduction apparatus. The content reproduction apparatus compares metadata describing the content against the emotion information of the robot apparatus, and expresses the robot's emotion by selecting and reproducing content that matches it.

JP 2005-169567 A

  However, the conventional configuration only displays content in accordance with the emotion of the robot apparatus; it can neither obtain the user's evaluation of the content nor reflect user operations performed during content playback in the content to be played back.

  Therefore, an object of the present invention is to easily acquire the user's reaction to content and to provide content that meets the user's preference.

  The content providing system disclosed in the present application includes: an output unit that outputs content; a determination unit that determines the type of the user's behavior based on information obtained from a sensor that detects the user's behavior; and a content selection unit that determines content to be further output by the output unit based on the determination result by the determination unit corresponding to the content output by the output unit.

  According to the above configuration, the type of user behavior is determined from the user behavior detected by the sensor, and the content to be further output is determined based on the determination result corresponding to the output content. Therefore, the user's reaction to the output content can be determined from the user's behavior detected by the sensor, and the determination result can be used for determining the content to be further output. That is, content evaluation by the user is obtained, and content to be further output is determined based on this evaluation, so that it is possible to provide content that suits the user's preference.
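
  As an illustration only, the feedback loop described above might be sketched as follows (Python; a toy, self-contained sketch in which the simulated sensor readings, the content names, and the selection rule are all hypothetical, not part of the patent):

    import random

    # Toy stand-ins for the units described above: output content, read the
    # user's reaction from a (simulated) sensor, and select the next content.
    CONTENTS = ["content A", "content B", "content C"]

    def sensor_read() -> str:
        # Stand-in for the sensor: randomly simulate a user reaction type.
        return random.choice(["hit", "stroke", "no_operation"])

    def select_next(current: str, behavior_type: str) -> str:
        # Content selection: keep content the user liked, switch otherwise.
        if behavior_type == "stroke":
            return current
        return random.choice([c for c in CONTENTS if c != current])

    content = CONTENTS[0]
    for _ in range(5):
        print("output:", content)          # output unit
        behavior_type = sensor_read()      # determination of the behavior type
        content = select_next(content, behavior_type)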

  In the content providing system, the determination unit may determine which of a plurality of predetermined behavior types the user behavior detected by the sensor corresponds to. This allows the type of the user's behavior to be determined quickly and appropriately.

  In the content providing system, the determination unit may determine the user's emotion based on the type of the user's behavior, and the content selection unit may determine content to be further output by the output unit based on the user's emotion determined for the content output by the output unit.

  Thus, the user's emotion can be read from the user's behavior, and further content can be output according to the user's emotion toward the output content. The user's emotion can therefore be reflected in content selection.

  In the content providing system, the content selection unit may treat the determination result based on the user's behavior detected by the sensor while the output unit is outputting content as the determination result corresponding to that content, and may determine the content to be further output by the output unit based on that determination result.

  Thereby, the result determined from the behavior of the user at the time of content output can be automatically associated with the content.

  The content providing system may further include a storage unit that records, as evaluation data, content history data indicating the history of the content output by the output unit together with data indicating the determination result corresponding to each content in the history, and the content selection unit may determine the content to be further output by the output unit based on the evaluation data recorded in the storage unit.

  With this configuration, evaluation data can be accumulated that associates each output content with the determination result of the user's behavior toward it. The content selection unit can determine what kind of content the user likes based on the accumulated evaluation data. Therefore, for example, content that the user likes can be selected from among a plurality of contents, or output with priority.

  In the content providing system, the sensor may include a sensor that detects the user's contact, and the determination unit may determine the type of the user's behavior based on at least one of the strength and the amount of the user's contact detected by the sensor.

  Thus, by detecting the user's contact with the sensor, the user's reaction to the output content can be determined, and further content can be determined based on that determination result. The user's intuitive operation can therefore be reflected in content selection: by determining the user's intuitive operation and associating it with the content, content can be displayed according to the user's reaction.

  The content providing system may further include a storage unit that stores a plurality of contents, and the content selection unit may determine some of the plurality of contents stored in the storage unit as the content to be further output, based on the determination result by the determination unit.

  The content providing system may include a terminal and a server, the terminal including at least the sensor and the server including at least the content selection unit, with the determination unit and the output unit provided in either the terminal or the server.

  Thus, by providing a terminal that includes at least the sensor and a server that includes at least the content selection unit, the functions needed close to the user and the functions for handling a large amount of content can be efficiently distributed between the terminal and the server.

  The content providing system may further include a personality management unit that determines the personality of the terminal based on the history of determination results by the determination unit, and the operation of the terminal may be controlled according to the personality determined by the personality management unit.

  With this configuration, a personality can be assigned to the terminal based on the history of at least one of the user's operations and states, and the operation of the terminal can be controlled according to that personality.

  With the above configuration, the user's reaction to content can be easily acquired, and content that matches the user's preference can be provided.

FIG. 1: Functional block diagram showing a configuration example of the content providing system according to the first embodiment
FIG. 2: Flowchart showing an example of user emotion determination processing
FIG. 3: Diagram showing an example of the criterion data used for determining the type of contact
FIG. 4: Sequence diagram showing an operation example of the content providing system
FIG. 5: Diagram showing a configuration example in which the terminal provides the functions of the content providing system
FIG. 6: Diagram showing a configuration example in which the terminal that outputs content and the terminal that detects the user's behavior are provided separately
FIG. 7: Diagram showing a configuration example in which content servers that provide content are provided independently
FIG. 8A: Diagram showing a modification of the sensor (pressure sensor)
FIG. 8B: Diagram showing a modification of the sensor (pressure sensor and infrared sensor)
FIG. 8C: Diagram showing a modification of the sensor (pressure sensor, infrared sensor, camera, and microphone)
FIG. 8D: Diagram showing a modification of the sensor (touch panel)
FIG. 9: Diagram showing an example of the criterion data for determining the user's emotion based on voice recognition and image recognition in addition to the user's contact information
FIG. 10: Diagram showing an example of the evaluation data recorded in the storage unit
FIG. 11: Diagram for explaining an example of processing in which the content selection unit selects a channel
FIG. 12: Diagram showing an example of the appearance probabilities set by the content selection unit based on the evaluation data
FIG. 13: Functional block diagram showing a configuration example of the content providing system according to the third embodiment
FIG. 14: Flowchart showing a processing example for determining the personality formation of the terminal
FIG. 15: Diagram showing an example of the appearance of the terminal when the personality is "docile" ("sunao")
FIG. 16: Diagram showing an example of the appearance of the terminal when the personality is "rebellious"
FIG. 17: Diagram showing an example of the appearance of the terminal when the personality is "lazy"

(Embodiment 1)
[System configuration example]
FIG. 1 is a functional block diagram illustrating a configuration example of a content providing system according to the first embodiment. The content providing system illustrated in FIG. 1 includes a terminal 101 having a function of outputting content and a server 102 having a function of transmitting content to the terminal 101. The server 102 and the terminal 101 can communicate with each other. For example, the server 102 and the terminal 101 can communicate via a wireless or wired network, but the communication means is not limited to a specific one.

  As an example, in the present embodiment, the terminal 101 is a device placed in the home that reproduces (displays) video content, used in the same manner as a TV that receives programs or a PC that receives video. In this case, the terminal 101 can also be called a content reproduction device.

  The terminal 101 includes a behavior information acquisition unit 201, a control unit 202, an output unit 203, and a communication processing unit 204. In addition, a sensor 212 and an output device 211 are connected to the terminal 101.

  The behavior information acquisition unit 201 acquires behavior information indicating the user's behavior from the sensor 212 outside the terminal 101. That is, the sensor 212 detects at least one of the user's operation and state. The behavior information detected by the sensor 212 can be, for example, information indicating what operation the user has performed on the terminal 101. The user behavior detected by the sensor 212 preferably indicates the user's response to the content output from the terminal 101.

  In the present embodiment, as an example, a case where the sensor 212 is a piezoelectric sensor or an infrared sensor will be described. When the sensor 212 is a piezoelectric sensor, the behavior information acquisition unit 201 can acquire as behavior information, for example, the strength with which the user touched the surface of the terminal 101 and the amount of contact (for example, the length of the contact time or of the contact distance). In this case, the behavior information acquisition unit 201 acquires the degree of the user's contact with the sensor 212. Note that the user behavior information may be information acquired by a single sensor, or information obtained by combining information from plural types of sensors.

  The control unit 202 controls the operation of the terminal 101. For example, the control unit 202 controls the communication processing unit 204 so that the user behavior information acquired by the behavior information acquisition unit 201 is transmitted to the server 102, and controls the output unit 203 so that the content received by the communication processing unit 204 is displayed. For example, when the behavior information acquisition unit 201 acquires behavior information while content is being displayed on the output device 211, the control unit 202 can control the communication processing unit 204 so that the content and the behavior information are transmitted to the server 102.

  The output unit 203 is a part that controls the output device 211. When the output device 211 is a display, the output unit 203 sends the video signal of the content received by the communication processing unit 204 to the output device 211 for display. The content can thereby be displayed and shown to the user. The output device 211 is not limited to a display and may include, for example, a speaker that outputs sound.

  The communication processing unit 204 is an interface for the terminal 101 to communicate with the server 102, and can send information from the terminal 101 and receive information from the server 102.

  For example, the server 102 is a device that distributes content provided by a broadcaster or an Internet service provider. The server 102 includes a storage unit 205, a control unit 206, and a communication processing unit 207. The control unit 206 includes a determination unit 208 and a content selection unit 209.

  The storage unit 205 is a part that stores, for example, data of a plurality of contents. The control unit 206 can select content from the storage unit 205, read it, and distribute it. Note that the content stored in the storage unit 205 may be, for example, content created originally for the server 102, content the server 102 has acquired from outside, or various other types.

  The communication processing unit 207 is an interface for the server 102 to communicate with the terminal 101, and can acquire information from the terminal 101 and send information to the terminal 101. The communication processing unit 207 can also be connected to a network other than the line to the terminal 101 (for example, the Internet), and the server 102 may acquire content information via that other network.

  The determination unit 208 is a part that determines, based on the behavior information received from the terminal 101, what operation the user has performed on the terminal 101 or what state the user has entered. To do so, the determination unit 208 determines the type of user behavior indicated by the behavior information; for example, it determines to which of the predetermined behavior types the indicated behavior corresponds. As an example, when the behavior information includes a value p indicating the strength (pressure) of the user's contact, the type of contact can be determined according to which of a plurality of predetermined numerical levels p falls in (for example, a strong level [p > P1], a middle level [P1 ≥ p ≥ P2], and a weak level [p < P2]). The behavior type can also be determined by classifying the user's behavior into one of a plurality of predetermined types. For such determination of the type of the user's behavior, information serving as a determination criterion can be recorded in advance, for example in the storage unit 205, and the type can be determined based on that criterion.
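
  As an illustration, the level classification described above might look like the following sketch (Python; the concrete threshold values and level names are hypothetical, since the patent leaves P1 and P2 unspecified):

    # Hypothetical thresholds for the contact strength (pressure) p.
    P1 = 0.7  # above this: "strong"
    P2 = 0.3  # below this: "weak"

    def classify_contact_strength(p: float) -> str:
        """Map a pressure value p to one of the predetermined levels."""
        if p > P1:
            return "strong"
        elif p >= P2:      # P1 >= p >= P2
            return "middle"
        else:              # p < P2
            return "weak"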

  The determination unit 208 may determine the user's emotion based on the behavior information or on the type of user behavior determined from it. The emotion can be determined using, for example, prerecorded data indicating the correspondence between behavior types and emotions: the determined type of user behavior can be converted into the user's emotion using that correspondence. Alternatively, when the user's behavior is represented by a numerical value, the emotion may be determined according to which of a plurality of predetermined intervals the value belongs to. In this way, information serving as a determination criterion for the user's emotion can be recorded in advance, for example in the storage unit 205, and the emotion can be determined based on that criterion. A specific example of emotion determination by the determination unit 208 will be described later. The emotions that can be determined here include not only affective responses such as whether the user likes or dislikes the content, but also intentions and desires such as "I want the content output to stop" or "I want to see more".

  The content selection unit 209 determines the content to be further output by the output unit 203 based on the determination result by the determination unit 208 corresponding to the content output by the output unit 203 of the terminal 101. The content selection unit 209 identifies, for example, which of the output contents a determination result by the determination unit 208 represents the user's reaction to, and then determines the content to be further output based on that content and the determination result representing the user's reaction to it.

  Here, the “content to be output” determined by the content selection unit 209 can be, for example, content output after the content being output in the output unit 203. That is, the content selection unit 209 can determine the content to be further output following the content being output. Note that not only the content to be output next to the content being output, but also content to be output after that may be determined. In addition, the content selection unit 209 may determine a plurality of contents to be subsequently output and their order, and determine content attributes (for example, category, type, genre, or keyword) based on the determination result. The content to be output may be selected at random from the content group having the determined attribute.

  In addition, the content selection unit 209 can determine whether to continue to output the content that is currently being output, or to interrupt the content output that is currently being output and output another content. Note that the terminal 101 may sequentially output the contents determined by the content selection unit 209, or may output other determined contents in parallel while outputting the contents.

  In this manner, the content selection unit 209 can reflect the determination result for the output content in the content to be further output. For example, when the type of the user's behavior toward the output content indicates a poor evaluation (for example, negative feelings toward the content), content different from that content can be determined as the content to be further output. Conversely, when the behavior type indicates a good evaluation (for example, positive feelings toward the content), the same or similar content can be determined as the content to be further output.

  The content selection unit 209 can identify the determination result based on the user behavior detected by the sensor while the output unit 203 was outputting content as the determination result corresponding to that content, and can determine the content to be further output by the output unit 203 based on it. The content selection unit 209 associates the type of user behavior determined by the determination unit 208 with information on the content being output, which makes clear the determination result for that content.

  The content selection unit 209 monitors, for example, the content output timing, the timing of the user behavior detected by the sensor 212, or the determination timing by the determination unit 208, and can associate a determination result obtained during the output of a content with that content. To identify the content being output, the content selection unit 209 may, for example, acquire information indicating the currently output content from the terminal 101, or may regard the content the server 102 is transmitting to the terminal 101 as the content currently being output.
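
  A minimal sketch of such a timing-based association, assuming the server logs each distribution with a timestamp (Python; all names and the data layout are hypothetical):

    from typing import Optional

    distribution_log = []  # list of (start_time, content_id), newest last

    def record_distribution(start_time: float, content_id: str) -> None:
        # Called by the server whenever it starts distributing a content.
        distribution_log.append((start_time, content_id))

    def content_for_behavior(behavior_time: float) -> Optional[str]:
        """Return the content a behavior observation refers to: the most
        recently distributed content started at or before behavior_time."""
        for start, content_id in reversed(distribution_log):
            if start <= behavior_time:
                return content_id
        return None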

  When determining the content to be output, the content selection unit 209 can determine some of the plurality of contents stored in the storage unit 205 as the content to be further output, based on the determination result by the determination unit 208. For example, the storage unit 205 can store a plurality of contents together with information (attribute information) indicating attributes such as category, type, and genre. In this case, the content selection unit 209 can determine the content to be output by extracting from the storage unit 205 content having attributes that correspond to the user's emotion obtained from the determination result.

  Alternatively, the storage unit 205 may store the contents divided into a plurality of groups, and the content selection unit 209 may select a group according to the determination result and extract the content to be output from the contents belonging to the selected group.

  The storage unit 205 may be provided in the server 102, but may be provided outside the server 102, for example, in another content server.

  Alternatively, in a configuration in which content is distributed to the terminal 101 over a plurality of channels by the server 102 or by an external distribution server, the content selection unit 209 can determine the content to be output by selecting a channel according to the determination result. The control unit 206 of the server 102 reads the content determined by the content selection unit 209 from the storage unit 205 and transmits it to the terminal 101 via the communication processing unit 207. The communication processing unit 207 thus functions as a transmission unit that transmits the content determined by the content selection unit 209 to the terminal 101, and the communication processing unit 204 of the terminal 101 functions as a reception unit that receives content from the server 102. In this way, the server 102 can acquire the user's reaction to the output content and transmit content that matches the user's preference to the terminal 101. The content providing system thereby realizes a user interface that learns the user's preference for content from the user's intuitive actions.

[Example of emotion judgment processing]
Here, an example of the user emotion determination processing by the determination unit 208 will be described. FIG. 2 is a flowchart illustrating an example of the user emotion determination processing. In FIG. 2, after acquiring the user's behavior information (T201), the determination unit 208 determines the contact strength indicated by the behavior information (T202) and the operation distance, that is, the distance over which the user touched (T203). Based on the results of the strength determination (T202) and the operation distance determination (T203), the determination unit 208 determines which of "hitting", "stroking", and "no operation" the user's contact (user action) corresponds to (T204; a specific example of the processing of T204 is described later). The determination unit 208 then determines the user's emotion based on that type. For example, when the type of contact is determined to be "hitting", the determination unit 208 determines that the user's emotion is "dislike" (T205); when it is "stroking", that the emotion is "like" (T206); and when it is "no operation", that the emotion is "indifferent" (T207). By determining the user's emotion from the content of the behavior information in this way, information indicating what emotion the user has toward the content output from the terminal 101 can be obtained.

  Note that data associating behavior types such as "hitting", "stroking", and "no operation" with user emotions such as "like", "dislike", and "indifference" can be recorded in advance in the storage unit 205 as reference data for emotion determination, and the determination unit 208 can determine the emotion using this prerecorded reference data. Data directly associating behavior information with emotions can also be used as reference data.
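
  Such reference data could be as simple as a lookup table; a minimal sketch (Python; the table layout and the default value are hypothetical):

    # Reference data following FIG. 2 (T204-T207): behavior type -> emotion.
    EMOTION_REFERENCE = {
        "hit": "dislike",
        "stroke": "like",
        "no_operation": "indifferent",
    }

    def emotion_for_behavior(behavior_type: str) -> str:
        # Unknown behavior types fall back to "indifferent" (an assumption).
        return EMOTION_REFERENCE.get(behavior_type, "indifferent")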

[Specific example of user action type determination]
FIG. 3 is a diagram illustrating an example of the criterion data used to determine the type of contact by the user. Using the criterion data shown in FIG. 3, the determination unit 208 can determine the type of the user's contact from the combination of the strength of the contact and the distance moved while in contact (operation distance). From this, what the user's action (behavior) was can be derived.

  For example, when the user's operation strength t is determined to be "strong" because it is greater than a predetermined threshold T1 (t > T1), and the operation distance s is determined to be "short" because it is smaller than a predetermined threshold S2 (s < S2), the user has most likely struck the sensor 212 of the terminal 101, so the determination unit 208 can determine that the type of contact is "hitting".

  When the operation strength t is determined to be "middle", falling within a predetermined interval (T2 < t < T1), or "weak", being smaller than the threshold T2 (t < T2), and the operation distance s is determined to be larger than the threshold S1 (s > S1), the user is most likely stroking the sensor 212 of the terminal 101, so the determination unit 208 can determine that the type of contact is "stroking".

  Further, when the contact strength t is determined to be "weak", being smaller than the threshold T2 (t < T2), and the operation distance s is determined to be "short", being smaller than the threshold S2 (s < S2), the user has most likely hardly touched the sensor 212 of the terminal 101, so the determination unit 208 determines "no operation". "No operation" may also be determined when no user operation is detected for a certain period after content distribution.
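
  Putting the three rules together, the determination of T204 might be sketched as follows (Python; the threshold values are hypothetical, and the handling of combinations the text does not cover, such as a strong, long contact, is an assumption):

    # Hypothetical thresholds: strength t is "strong" above T1 and "weak"
    # below T2 (T1 > T2); distance s is "long" above S1 and "short" below S2.
    T1, T2 = 0.7, 0.3
    S1, S2 = 5.0, 1.0

    def classify_contact(t: float, s: float) -> str:
        if t > T1 and s < S2:
            return "hit"            # strong, short contact
        if t <= T1 and s > S1:
            return "stroke"         # middle or weak strength, long distance
        if t < T2 and s < S2:
            return "no_operation"   # barely touched
        return "no_operation"       # uncovered combinations (assumption)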

  Note that the criteria for the strength of the user's operation and for the operation distance may be divided into more stages than the three shown in FIG. 3. The criteria for combinations of operation strength and operation distance may also be made customizable by the user so as to better match the user's actual operations.

[Operation example of content provision system]
Next, an operation example of the content providing system will be described. FIG. 4 is a sequence diagram illustrating an operation example of the content providing system. In FIG. 4, the same reference numerals are used for the same components as in FIG. 1.

  In step S01, the server 102 distributes content to the terminal 101. When the user 103 has not performed an operation so far, the content to be distributed first is selected at random.

  In step S02, in the terminal 101, the output unit 203 causes the output device 211 to display the content distributed from the server 102.

  In step S03, the user 103 sees the displayed content and performs some action. The action is detected by the sensor 212. The action can be detected as at least one of the operation and the state of the user 103, for example. If the user 103 is not interested in the content, the user 103 may not perform any operation.

  In step S04, in the terminal 101, the behavior information acquisition unit 201 acquires the behavior information of the user 103 indicating the action (behavior) of the user 103 detected by the sensor 212. The control unit 202 causes the communication processing unit 204 to transmit behavior information to the server 102.

  In step S05, the determination unit 208 of the server 102 determines the user's behavior from the received behavior information of the user 103. As described above, the determination unit 208 may also determine the user's emotion in this step.

  In step S06, the content selection unit 209 selects content based on the user behavior determined in S05. For example, when the user's behavior (emotion) is stroking (like), the content being distributed at that time continues to be selected. When the behavior (emotion) is hitting (dislike), that content is stopped and another content is selected.

  As described above, in step S06, the content selection unit 209 recognizes that the behavior information of the user 103 received from the terminal 101 corresponds to the content distributed in step S01, and determines the content to be distributed next. Which content the received behavior information refers to can be identified based on, for example, the reception timing and the content distribution timing: behavior information received during the distribution of a content, or within a certain period after it, can be judged to indicate behavior toward that content. Alternatively, in step S04, the server 102 may receive from the terminal 101, together with the user behavior information, information specifying the corresponding content.

  In step S07, the server 102 distributes the content selected in step S06 to the terminal 101.

  In step S08, in the terminal 101, the output unit 203 causes the output device 211 to display the content. When a new user behavior toward this content is detected, steps S03 to S08 are repeated.

  As described above, according to the present embodiment, it is possible to realize a content providing system that feeds back user behavior with respect to distributed content and provides content that suits the user's preference.

  In this embodiment, by acquiring information indicating the user's contact with the sensor 212 as behavior information, the user's reaction to the content can be determined from an intuitive operation such as touching the sensor 212. For example, the content providing system can grasp the user's preference by acquiring intuitive actions such as "hitting" and "stroking" through the sensor 212 and combining them with the content information.

  In FIG. 1, one terminal 101 and one server 102 are provided, but this is an example, and the number is not limited to this. For example, the server 102 can be configured to transmit content to a plurality of terminals. This also applies to the modified examples and embodiments described below.

  Hereinafter, Modifications 1 to 3 of the system configuration of the content providing system will be described with reference to FIGS. 5 to 7. In FIGS. 5 to 7, the same functional blocks as those in FIG. 1 are denoted by the same reference numerals. The following Modifications 1 to 3 of the system configuration are applicable not only to the first embodiment but also to all the embodiments below; moreover, modifications of the system configuration are not restricted to these.

[Modification 1 of system configuration]
FIG. 5 shows a configuration example in which the terminal 101 provides the functions of the content providing system. The terminal 101 is connected to a sensor 212 and an output device 211, and includes a behavior information acquisition unit 201, a control unit 202 (including a determination unit 208 and a content selection unit 209), an output unit 203, and a storage unit 205. The storage unit 205 stores a plurality of contents. That is, the terminal 101 selects content from the storage unit 205 and reproduces it; when the user's behavior is detected during reproduction, the terminal 101 determines the type of the user's behavior or the user's emotion based on the behavior information, and determines the content to be reproduced next based on the determination result. The determined content is read from the storage unit 205 and output by the output unit 203.

  The terminal 101 of Modification 1 can be applied to, for example, a video or music playback device. In this case, while the terminal 101 is playing back content, the user can, for example, tap, stroke, scold, or praise the terminal 101, causing it to select and play, from the content stored in the storage unit 205, content that suits the user's mood.

  As a further modification, the part that stores and provides content can be placed outside the terminal 101. For example, instead of storing the content in the storage unit 205 illustrated in FIG. 5, the content can be stored in the storage unit of a computer accessible to the terminal 101, and that computer can provide the content to the terminal 101 in response to requests from the terminal 101.

[Modification 2 of system configuration]
FIG. 6 shows a configuration example in which a terminal that outputs content and a terminal that detects the user's behavior are provided separately. In the example shown in FIG. 6, the content providing system includes a server 102, an output terminal 101b, and a sensor terminal 101a. The output terminal 101b is connected to the output device 211, and includes a communication processing unit 204b for communicating with the server 102, a control unit 202b, and an output unit 203 for outputting content. The sensor terminal 101a is connected to the sensor 212 and includes a behavior information acquisition unit 201, a control unit 202a, and a communication processing unit 204a. The server 102 includes a storage unit 205 that stores content, a determination unit 208, a content selection unit 209, and a communication processing unit 207.

  The server 102 selects content from the content recorded in the storage unit 205 and distributes it to the output terminal 101b. The output terminal 101b outputs the received content. The sensor terminal 101a detects the user's behavior and transmits it to the server 102. The server 102 determines the type of behavior or the user's emotion from the user's behavior, and further determines the content to be distributed based on the determination result.

  Thus, by configuring the output terminal 101b and the sensor terminal 101a separately, it is possible, for example, to provide a terminal specialized for detecting the user's behavior, one that makes it easy for the user to convey that behavior. For example, the sensor terminal 101a can take the shape of a stuffed toy having a head and a torso, with a pressure sensor in the head that detects contact by the user. Communication between the sensor terminal 101a and the server 102 can be wireless. Thereby, for example, while holding the stuffed toy at hand, the user watches the content displayed on the output device 211 by the output terminal 101b, and by hitting, stroking, scolding, or praising the stuffed toy can adapt the content displayed on the output terminal 101b to his or her preference.

[Modification 3 of system configuration]
FIG. 7 is a diagram illustrating a configuration example in which content servers that provide content are provided independently. In the example illustrated in FIG. 7, the server 102 and the terminal 101 can communicate with content servers 216-1 and 216-2. The content servers 216-1 and 216-2 include communication processing units 213-1 and 213-2, control units 214-1 and 214-2, and content storage units 215-1 and 215-2, respectively. The server 102 requests the content determined by the content selection unit 209 from the content server 216-1 or the content server 216-2. The content server receives the content request from the server 102, reads the requested content from its content storage unit, and distributes it to the terminal 101 via its communication processing unit.

  Note that the number of content servers that receive content requests from the server 102 and transmit content to the terminal 101 is not limited to two as shown in FIG. 7; it may be one, or three or more. By providing a plurality of content servers as shown in FIG. 7, the server 102 can select a content server according to the user's behavior. For example, if the user's behavior detected while the terminal 101 outputs content received from the content server 216-1 indicates a positive evaluation (emotion), the server 102 can request the next content from the content server 216-1 as well. Conversely, when the user's behavior indicates a negative evaluation (emotion), the content to be output next can be requested from a content server other than the content server 216-1, such as the content server 216-2. In this way, the content selection unit 209 determines a content request destination (a content providing means, exemplified here by a content server) according to the type of behavior information and issues the request, and the content providing means that receives the request transmits the content to the terminal 101. The content selection unit 209 may also determine, together with the request destination, information specifying the requested content (content type, genre, etc.) and include it in the request to the content providing means.

  As an example, suppose the content selection unit 209 has been selecting, as the content request destination, the content server of company A, which provides a video file sharing service. If it is determined from the detected type of user behavior that the user is not interested in the content distributed from company A's content server, an operation such as switching the request destination to the content server of company B, which provides video on demand (VoD), can be performed.

  Hereinafter, Modifications 1 to 4 of the sensor will be described with reference to FIGS. 8A to 8D, respectively. The following Modifications 1 to 4 of the sensor are applicable not only to the first embodiment but also to all the embodiments below; moreover, modifications of the sensor are not restricted to Modifications 1 to 4.

[Sensor Modification 1]
FIG. 8A is a diagram illustrating a configuration example of the sensor and the behavior information acquisition unit 201 when the sensor 212 includes a pressure sensor 212a. In the example illustrated in FIG. 8A, the pressure sensor 212a detects the pressure of the user's contact with it. The behavior information acquisition unit 201 acquires a value indicating at least one of the strength of the contact and the amount of the contact as behavior information. The strength of the contact can be obtained, for example, from the magnitude of the pressure detected by the pressure sensor 212a. The amount of contact can be obtained, for example, from the contact time or the contact distance detected by the pressure sensor 212a. The contact time can be obtained, for example, by detecting the duration or the change over time of the pressure. The contact distance can be obtained, for example, by providing a plurality of pressure sensors 212a at different positions and detecting the pressure at each position; that is, information on the contact distance can be obtained by detecting pressure changes at a plurality of positions. For example, pressures detected successively at a plurality of positions within a predetermined time can be recognized as one continuous contact operation by the user, and the length of the trajectory of this contact operation can be acquired as the operation distance.
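
  A sketch of deriving the operation distance from pressures detected at a plurality of positions, as described above (Python; the grouping window, the coordinate representation, and all names are hypothetical):

    import math

    WINDOW = 0.5  # seconds: detections closer than this belong to one stroke

    def operation_distance(events):
        """events: time-ordered list of (timestamp, (x, y)) pressure
        detections at the sensor positions. Returns the trajectory length
        of the latest continuous contact operation."""
        if not events:
            return 0.0
        total = 0.0
        prev_t, prev_pos = events[0]
        for t, pos in events[1:]:
            if t - prev_t > WINDOW:   # long gap: a new contact begins
                total = 0.0
            else:
                total += math.dist(prev_pos, pos)
            prev_t, prev_pos = t, pos
        return total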

  Note that the type of the user's behavior can be determined in more detail by detecting two or more values among the contact strength and the contact amount with the pressure sensor 212a. For example, the user's intuitive behavior can be determined by judging which prerecorded pattern the combination of two or more values obtained by the pressure sensor 212a matches. As a specific example, as in the example illustrated in FIG. 3, the determination unit 208 can determine the type of the user's behavior or emotion according to which preset combination pattern of contact strength and operation distance applies.

[Sensor Modification 2]
FIG. 8B is a diagram illustrating a configuration example of the sensor and the behavior information acquisition unit 201 when the sensor 212 includes a pressure sensor 212a and an infrared sensor 212b. In the example illustrated in FIG. 8B, the infrared sensor 212b detects, for example, the user's movement in three-dimensional space. The behavior information acquisition unit 201 includes a pressure information acquisition unit 201a and a position information acquisition unit 201b. The pressure information acquisition unit 201a acquires at least one of the strength and the amount of the user's contact as behavior information. The position information acquisition unit 201b acquires information indicating the position of the user as behavior information; for example, it can acquire as behavior information a value indicating the direction or speed of the user's movement before the contact with the pressure sensor 212a.

  Thereby, in addition to the strength and amount of the user's contact with the pressure sensor 212a, the determination unit 208 can use the direction and speed of the user's movement before the contact, detected by the infrared sensor 212b, to determine the type of operation (user behavior). The strength of the user's operation can thus be acquired by external sensors such as a pressure sensor and an infrared sensor, making it possible to capture the user's intuitive behavior.

[Sensor Modification 3]
FIG. 8C is a diagram illustrating a configuration example of the sensor and the behavior information acquisition unit 201 when the sensor 212 includes a pressure sensor 212a, an infrared sensor 212b, a camera 212c, and a microphone 212d. In the example illustrated in FIG. 8C, the camera 212c captures still images or moving images of the user, and the microphone 212d records the user's voice. The behavior information acquisition unit 201 includes a pressure information acquisition unit 201a, a position information acquisition unit 201b, an image recognition unit 201c, and a voice recognition unit 201d.

  The image recognition unit 201c can detect the user in an image captured by the camera 212c and acquire, as behavior information, at least one of the user's image and the user's state and movement in the image. For example, the user's image itself may be used as behavior information, or the user's facial expression determined from the image (for example, a smile or an angry face) or the user's movement may be used as behavior information.

  The voice recognition unit 201d recognizes the voice recorded by the microphone 212d. The recognition result itself can be used as the user's behavior information, or whether specific phrases are included in the recognized speech can be used as behavior information. Note that determination processing based on the image recognition result or the voice recognition result may be executed by the behavior information acquisition unit 201 of the terminal 101 or by the determination unit 208 of the server 102.

  FIG. 9 is a diagram illustrating an example of the criterion data for determining the user's emotion based on the user's voice recognition and image recognition in addition to the user's contact information (contact strength and contact amount). In the example illustrated in FIG. 9, combination patterns of the user's contact strength, operation distance, and voice and image recognition results are recorded together with a value indicating the user's emotion for each pattern.

  When the determination unit 208 makes determinations based on the data of FIG. 9, for example when the user's contact is "strong" and a negative phrase is detected by voice recognition, the preference score (a value indicating liking) can be determined to be -60.

  A negative phrase can be one expressing discomfort, disgust, or refusal, for example "no", "kirai" (dislike), "noisy", or "stop". An affirmative phrase can be one expressing pleasure, liking, or positive intent, for example "like", "suki", "more", or "saikou" (great).

  Further, when an angry face is detected as a result of the image recognition, an additional preference value of -60 can be determined. The determination unit 208 can determine the user's emotion based on the total preference score. In this way, by referring to the criterion data of FIG. 9, the determination unit 208 can convert the combination of the detected contact strength, operation distance, and voice and image recognition results into a preference score. For example, preference values determined using the criterion data can be added starting from a baseline score of 0: if the detected behavior information indicates a positive emotion, the preference value is positive and is added to the score; if it indicates a negative emotion, the value is negative and is subtracted from it. By using the results of voice recognition and image recognition in addition to the user's contact information in this way, the user's emotion can be determined more accurately.
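
  The score accumulation might be sketched as follows (Python; the patent gives only the -60 examples, so the remaining values, the purely additive scheme, and all names are hypothetical):

    # Each (modality, finding) pair contributes a preference value; the
    # -60 entries mirror the examples in the text, the rest are invented.
    PREFERENCE_RULES = {
        ("voice", "negative_phrase"): -60,
        ("voice", "affirmative_phrase"): +60,
        ("image", "angry_face"): -60,
        ("image", "smile"): +60,
        ("contact", "hit"): -30,
        ("contact", "stroke"): +30,
    }

    def preference_score(observations) -> int:
        """observations: list of (modality, finding) pairs detected while a
        content was output. Accumulates from a baseline score of 0."""
        return sum(PREFERENCE_RULES.get(obs, 0) for obs in observations)

    # Example: a hit, a negative phrase, and an angry face give -150.
    print(preference_score([("contact", "hit"),
                            ("voice", "negative_phrase"),
                            ("image", "angry_face")]))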

[Sensor Modification 4]
FIG. 8D is a diagram illustrating a configuration example of the sensor and the behavior information acquisition unit 201 when the sensor 212 includes a touch panel 212e. The touch panel 212e includes, for example, a display panel, and the display panel can include a sensor that detects the position the user indicates on the screen. The touch panel 212e may also serve as the content output device 211.

  The behavior information acquisition unit 201 can use the user's instruction operations on the screen as behavior information. For example, the indicated position, the trajectory of the indicated position, and the moving speed of the indicated position can be used as behavior information, or a gesture determined from these can be used as behavior information.

  Specific examples of the terminal 101 that uses the touch panel 212e as the sensor 212 and a display as the output device 211 include mobile phones, PDAs (personal digital assistants), electronic book readers, tablet terminals such as multi-function mobile phones, electronic dictionaries, portable game machines, and multi-function remote controllers. In such a terminal 101, user behavior information can be acquired by the touch panel 212e, and content determined based on the type of behavior determined from that behavior information can be displayed on the display.

  For example, the terminal 101 can be configured as a smartphone, which is an example of a multi-function mobile phone. In this case, the user's behavior is detected by the smartphone's touch panel (sensor), and the content to be output by the display and by a sound output device such as a speaker or earphones (for example, video, music, a virtual pet, or a game) can be determined according to the type of behavior.

  For example, as illustrated in FIG. 1, a smartphone serving as the terminal 101 may include a communication processing unit 204 that transmits behavior information to a server 102 capable of data communication, and may receive from the server 102 the content determined based on the determination result of the behavior type. Alternatively, as shown in FIG. 5, the smartphone may itself include the determination unit 208, the content selection unit 209, and the storage unit 205. Further, as shown in FIG. 7, the content may be received from a content server 216 different from the server 102 that determines the type of behavior and determines the content.

  Modifications 1 to 4 of the sensor 212 have been described above, but modifications of the sensor 212 are not limited to these. For example, the sensor 212 may include a sensor that detects biological information such as the user's blood pressure, pulse, and body temperature. Such a sensor for detecting biological information can be built into a dedicated chair, for example.

(Embodiment 2)
In the second embodiment, a mode will be described in which the history of the content distributed by the server 102 and the user's emotions are held. Other configurations and functions can be the same as in the first embodiment. In the second embodiment, for example in the configuration shown in FIG. 1, the storage unit 205 of the server 102 records evaluation data. The evaluation data includes content history data indicating the history of the output content, and data indicating the type of user behavior or the emotion (the determination result of the determination unit 208) corresponding to each content in the history data. The content selection unit 209 determines the content to be distributed next based on the evaluation data recorded in the storage unit 205. With this configuration, the server 102 can determine the content to be further output by the output unit of the terminal 101 based on the content output history and the determination results of the user's behavior toward each content.

  FIG. 10 is a diagram illustrating an example of the evaluation data recorded in the storage unit 205. In the example illustrated in FIG. 10, the history of the content distributed by the server 102 to the terminal 101 and the history of the user emotions determined from the behavior information from the terminal 101 are recorded in association with each other. Specifically, the content name, the content distribution date and time, the content distribution category, and the user's emotion at the time of distribution are recorded as one record. The content distribution category classifies content, for example, by type, such as "sports", "music", and "drama", or by genre, such as "comedy", "action", and "horror".

  In the example illustrated in FIG. 10, “content A” belongs to “category A”, and the user emotion at the time of distribution is “dislike”. “Content B” belongs to “category B”, and the user emotion is “like”. “Content C” belongs to “Category C”, and the user emotion is “indifference”. “Content D” belongs to “category D”, and the user emotion is “dislike”. At this time, the server 102 can provide the content that suits the user's preference by giving priority to “category B” in which the user emotion is “like”. For example, the content selection unit 209 can select and read the category B content from the categories A to D stored in the storage unit 205 and transmit the selected content to the terminal 101. Alternatively, when content is transmitted from the server 102 using a plurality of channels, the content selection unit 209 can select a channel through which content of a specific category B is transmitted, and transmit the content of this channel to the terminal 101.

  FIG. 11 is a diagram for explaining an example of processing in which the content selection unit 209 selects a channel. In the example shown in FIG. 11, content of category A flows on channel ch1, content of category B on channel ch2, and content of category C on channel ch3, all at the same time. Since category B has priority according to the user's emotion, the content selection unit 209 selects the channel on which category B flows (ch2) and distributes it to the terminal 101.

  The content selection unit 209 can also control the appearance probability of each category in the distributed content based on the evaluation data. For example, it can increase the appearance probability of content in categories for which the history of determination results in the evaluation data contains many results indicating a good evaluation. Specifically, the content selection unit 209 can set and record an appearance probability for each content category based on the determination results (user emotions) in the evaluation data. FIG. 12 shows an example of the appearance probabilities set by the content selection unit 209 based on the evaluation data; in the example shown in FIG. 12, categories and appearance probabilities are recorded in association with each other.
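
  A sketch of deriving appearance probabilities from the evaluation data and sampling the next category (Python; the weighting of emotions and all names are hypothetical, since the patent does not specify how the probabilities are computed):

    import random
    from collections import Counter

    # Hypothetical weights: liked categories appear more often.
    EMOTION_WEIGHT = {"like": 3, "indifferent": 1, "dislike": 0}

    def appearance_probabilities(evaluation_data):
        """evaluation_data: list of (category, emotion) records as in FIG. 10."""
        weights = Counter()
        for category, emotion in evaluation_data:
            weights[category] += EMOTION_WEIGHT.get(emotion, 1)
        total = sum(weights.values()) or 1
        return {c: w / total for c, w in weights.items()}

    probs = appearance_probabilities(
        [("A", "dislike"), ("B", "like"), ("C", "indifferent"), ("D", "dislike")])
    print(probs)  # category B receives the highest probability
    categories = list(probs)
    next_category = random.choices(
        categories, weights=[probs[c] for c in categories])[0]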

  According to this embodiment, the content to be output is controlled based on the history of the user's intuitive behavior, so the output content can be controlled to meet the user's preference. Note that the content of the evaluation data is not limited to the example shown in FIG. 10. For example, instead of the content distribution category, attribute information representing other content attributes, such as content type, genre, related keywords, or tags, may be recorded for each content. Thereby, for example, the content selection unit 209 can identify attributes that frequently appear with determination results indicating a positive emotion in the evaluation data, and determine the output content by giving priority to content having those attributes.

  Further, the information indicating the evaluation result is not limited to the user emotion as shown in FIG. 10; it may be any information indicating the user's evaluation of the content, such as the type of the user's behavior information or the preference score.

  In addition, instead of selecting a channel as described above, the content selection unit 209 can also select content of a well-evaluated category from a storage unit storing a plurality of contents and transmit it to the terminal 101.

(Embodiment 3)
FIG. 13 is a functional block diagram illustrating a configuration example of the content providing system according to the third embodiment. In FIG. 13, the same functional blocks as those in FIG. 1 are denoted by the same reference numerals. FIG. 13 shows a configuration in which a personality management unit 220 is added to the content providing system shown in FIG. 1. Here, as an example, the case where the server 102 includes the personality management unit 220 will be described; the personality management unit 220 can also be provided in the terminal 101.

  The personality management unit 220 determines the personality of the terminal 101 based on the history of determination results by the determination unit 208. The server 102 controls the operation of the terminal 101 based on the personality determined by the personality management unit 220. For example, the terminal 101 includes an actuator 219 that changes the appearance of the terminal 101 or drives its movement. The server 102 transmits personality information indicating the determined personality to the terminal 101, and the control unit 202 of the terminal 101 controls the operation of the terminal 101 based on the personality information. Alternatively, the server 102 can control the content output at the terminal 101 by controlling the content transmitted to the terminal 101.

  FIG. 14 is a flowchart illustrating an example of processing for determining the personality of the terminal. In the example illustrated in FIG. 14, after the server 102 acquires the user's behavior information (T601), the determination unit 208 determines what behavior the user has taken, that is, the type of the user's behavior (T602). Next, the personality management unit 220 determines whether or not the frequency of the behavior type determined in T602 is higher than a predetermined reference (T603, T604). For this purpose, it is preferable to accumulate in the storage unit 205 a history of the user behavior types determined by the determination unit 208; the personality management unit 220 can then determine the frequency by referring to this history.
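
  For example, the frequency check in T603 and T604 could be sketched as follows (a minimal sketch; the period, threshold, and names are assumptions):

    import time
    from collections import deque

    # Rolling history of (timestamp, behavior_type) entries, standing in for
    # the behavior-type history accumulated in storage unit 205.
    behavior_history = deque()

    def record_behavior(behavior_type, now=None):
        behavior_history.append((now if now is not None else time.time(),
                                 behavior_type))

    def is_frequent(behavior_type, period_sec=86400, threshold=5, now=None):
        """Return True when behavior_type occurred more than `threshold`
        times within the past `period_sec` seconds (the T603/T604 check)."""
        now = now if now is not None else time.time()
        count = sum(1 for t, b in behavior_history
                    if b == behavior_type and now - t <= period_sec)
        return count > threshold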

  When the frequency is determined to be high, the personality corresponding to that behavior type is selected as the personality of the terminal 101 (T605, T607). For example, if it is determined in T602 that the user's action is “hit”, it is determined whether the frequency of “hitting” in a predetermined past period is higher than a predetermined reference (T603). The personality management unit 220 selects “rebellious” as the personality of the terminal 101 when the frequency of “hitting” is high (T605), and selects “docile” when it is low (T606). If the user's action is “stroking”, “docile” is selected as the personality of the terminal 101 (T606). If the user's action is “no operation” and its frequency is high, “lazy” is selected as the personality of the terminal 101 (T607). When the frequency of “no operation” is low, the personality of the terminal 101 is not changed, and the personality set immediately before is continued.
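
  The branching in T603 to T607 then amounts to a small mapping; a sketch under the same assumptions:

    def select_personality(behavior_type, frequent, current_personality):
        """Map the determined behavior type and its frequency to a terminal
        personality, following the branches T603-T607 described above."""
        if behavior_type == "hit":
            return "rebellious" if frequent else "docile"  # T605 / T606
        if behavior_type == "stroke":
            return "docile"                                # T606
        if behavior_type == "no operation":
            # T607: frequent inactivity -> lazy; otherwise keep the
            # personality set immediately before.
            return "lazy" if frequent else current_personality
        return current_personality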

  The personality information set in this way is transmitted from the server 102 to the terminal 101, and the control unit 202 of the terminal 101 changes the operation of the terminal 101 according to the personality information. For example, when the personality is “rebellious”, the output unit 203 can be instructed to play content that does not match the user's preference. When the personality is “lazy”, the output unit 203 can be controlled not to display content even when content is distributed. Through the above processing, a “personality” of the terminal 101 can be formed based on the history of the user's intuitive behavior, and the operation of the terminal 101 and the content recommendation frequency can be changed according to that “personality”. These personalities are examples; the personalities that the personality management unit 220 can set are not limited to these.
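
  One possible output policy per personality could look like the following sketch (the policy names and the matches_preference flag are illustrative assumptions, not the disclosed control logic):

    def output_policy(personality, matches_preference):
        """Decide what the output unit 203 does with incoming content,
        given the current personality (illustrative policy only)."""
        if personality == "rebellious":
            # Deliberately favor content that does NOT match the preference.
            return "play" if not matches_preference else "skip"
        if personality == "lazy":
            return "ignore"  # do not display even if content is distributed
        return "play"        # docile: play the distributed content as-is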

  Note that the set personality information may be associated with the color and operation of the terminal 101. For example, the actuator 219 can execute an operation for changing the appearance of the terminal 101 in accordance with the personality information.

  FIGS. 15A, 15B, and 15C are diagrams illustrating an example of the appearance of the terminal 101 as changed by the actuator. In FIGS. 15A, 15B, and 15C, the terminal 101 has the shape of a cat's head and includes a display as the output device 211 and a sensor 212. Content is displayed on the display. The sensor 212 can be, for example, a pressure sensor that detects the user's contact with the cat's head. The cat's head is provided with eyes 217 and a mouth 218 whose shapes are changed by the actuator 219.

  In this example, the shapes of the eyes 217 and the mouth 218 provided on the cat's head are changed depending on the personality. For example, the shapes of the eyes 217 and the mouth 218 can be changed as shown in FIG. 15A when the personality is “docile”, FIG. 15B when it is “rebellious”, and FIG. 15C when it is “lazy”. Further, the color of the terminal 101 may be associated with the set personality information. For example, the terminal 101 may turn yellow when “docile”, red when “rebellious”, and blue when “lazy”. As an example, the cat's nose shown in FIGS. 15A to 15C can be provided with an LED whose emission color can be switched among yellow, red, and blue. In this case, the output unit 203 can switch the color and cause the LED to emit light according to the personality set by the personality management unit 220.
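
  A sketch of how the personality could drive the appearance (the actuator and LED interfaces, set_face and set_color, are hypothetical names, not a disclosed API):

    # Illustrative mapping from personality to the appearance controlled by
    # the actuator 219 and the nose LED; the colors follow the example above.
    APPEARANCE = {
        "docile":     {"face": "fig_15a", "led_color": "yellow"},
        "rebellious": {"face": "fig_15b", "led_color": "red"},
        "lazy":       {"face": "fig_15c", "led_color": "blue"},
    }

    def apply_personality(personality, actuator, led):
        look = APPEARANCE.get(personality, APPEARANCE["docile"])
        actuator.set_face(look["face"])   # reshape eyes 217 and mouth 218
        led.set_color(look["led_color"])  # switch the nose LED emission color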

  In this way, the terminal 101 can be given an exterior that imitates a character and is endearing to the user. According to the present embodiment, it is possible to realize a content providing system including a terminal that can display content matching the user's preference and that the user can grow attached to as a toy. Further, by treating statistics of the user's operation history as the “personality” of the terminal 101, the operation of the terminal 101 can be made richer while still matching the user's intuition. In other words, the “personality” can be formed based on the history of the user's intuitive behavior, and the operation and the content recommendation frequency can be changed according to that “personality”. In addition, since the “personality” of the terminal 101 changes depending on the user's intuitive behavior, the user can feel that he or she is raising the terminal 101.

  Embodiments 1 to 3 have been described above as examples of embodiments of the present invention, but the present invention is not limited to Embodiments 1 to 3. For example, in the above embodiments, the case where the content is video has been described; however, the content is not limited to video and includes anything that can be browsed, viewed, or experienced by a person, such as text, still images, moving images, music, games, or combinations thereof. The content is stored in a storage device as digital or analog data and can be transmitted through a transmission path. The content data is reproduced, displayed, or executed by a device, for example, so that a person can browse, view, or experience the content.

  The present invention is useful, for example, as a computer system that can easily reflect user preferences when providing a program distribution service or a moving image distribution service. It can also be applied as a pet-substitute toy robot that the user can enjoy.

DESCRIPTION OF SYMBOLS 101 Terminal 102 Server apparatus 103 User 203 Output unit 205 Storage unit 208 Determination unit 209 Content selection unit 211 Output device 212 Sensor 220 Personality management unit

Claims (15)

  1. A content providing system comprising:
    an output unit that outputs content;
    a sensor that detects a user's behavior;
    a determination unit that determines a type of the user's behavior based on information obtained from the sensor; and
    a content selection unit that determines content to be further output by the output unit, based on a determination result by the determination unit corresponding to the content output by the output unit.
  2.   The content providing system according to claim 1, wherein the determination unit determines which one of a plurality of predetermined behavior types corresponds to the user behavior detected by the sensor.
  3. The content providing system according to claim 1 or 2, wherein
    the determination unit determines the user's emotion according to the type of the user's behavior, and
    the content selection unit determines content to be further output by the output unit based on the user's emotion determined by the determination unit, corresponding to the content output by the output unit.
  4.   The content providing system according to any one of claims 1 to 3, wherein the content selection unit uses, as the determination result corresponding to the content output by the output unit, a determination result based on the user's behavior detected by the sensor while the output unit is outputting that content, and determines the content to be further output by the output unit based on that determination result.
  5. The content providing system according to claim 1, further comprising a storage unit that records, as evaluation data, history data indicating the history of the content output by the output unit and data indicating the determination result corresponding to each content item in the history data,
    wherein the content selection unit determines the content to be further output by the output unit based on the evaluation data recorded in the storage unit.
  6. The content providing system according to any one of claims 1 to 5, wherein
    the sensor includes a sensor that detects the user's contact, and
    the determination unit determines the type of the user's behavior based on at least one of the strength and the amount of the user's contact detected by the sensor.
  7. The content providing system according to claim 1, further comprising a storage unit that stores a plurality of contents,
    wherein the content selection unit determines, based on the determination result by the determination unit, a part of the plurality of contents stored in the storage unit as the content to be further output.
  8. The content providing system according to claim 1, wherein
    the content providing system includes a terminal and a server,
    the terminal includes at least the sensor,
    the server includes at least the content selection unit, and
    the determination unit and the output unit are provided in the terminal or the server.
  9. The content providing system according to claim 8, further comprising a personality management unit that determines a personality of the terminal based on a history of determination results by the determination unit,
    wherein the operation of the terminal is controlled based on the personality determined by the personality management unit.
  10. A terminal capable of communicating with a server that provides content, the terminal comprising:
    a sensor that detects a user's behavior;
    a content receiving unit that receives, from the server, content determined based on the type of the user's behavior determined using the user's behavior detected by the sensor; and
    an output unit that outputs the content received by the content receiving unit.
  11. The terminal according to claim 10, further comprising a determination unit that determines the type of the user's behavior based on information obtained from the sensor, wherein
    the content receiving unit receives content determined based on a determination result by the determination unit corresponding to the content output by the output unit, and
    the output unit further outputs the content received by the content receiving unit.
  12. A server that provides content and is capable of communicating with a terminal that outputs content to a user and with a sensor that detects the user's behavior, the server comprising:
    a transmission unit that transmits content to the terminal; and
    a content selection unit that determines content to be further transmitted by the transmission unit, based on the type of the user's behavior corresponding to the content transmitted by the transmission unit, the type being determined based on information detected by the sensor.
  13. A content providing method executed by at least one computer, the method comprising:
    an output step in which the computer outputs content;
    a determination step of determining a type of a user's behavior based on information obtained from a sensor that detects the user's behavior; and
    a content selection step in which the computer determines content to be further output, based on a determination result by the determination step corresponding to the content output in the output step.
  14. A content providing program that causes a computer capable of communicating with a sensor that detects a user's behavior and with a server that provides content to execute:
    an output process of outputting content;
    a process of transmitting to the server data indicating the user's behavior detected by the sensor during the output of the content, or the type of the user's behavior determined from that detected behavior;
    a content receiving process of receiving, from the server, content determined based on the type of the user's behavior corresponding to the content; and
    an output process of further outputting the received content.
  15. A content providing program that causes a computer capable of communicating with an output device that outputs content and with a sensor that detects a user's behavior to execute:
    a transmission process of transmitting content to the output device; and
    a content selection process of determining content to be further transmitted in the transmission process, based on the type of the user's behavior corresponding to the content transmitted in the transmission process, the type being determined based on information obtained from the sensor.
JP2011015535A 2011-01-27 2011-01-27 Content provision system, content provision method, and content provision program Withdrawn JP2012155616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011015535A JP2012155616A (en) 2011-01-27 2011-01-27 Content provision system, content provision method, and content provision program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011015535A JP2012155616A (en) 2011-01-27 2011-01-27 Content provision system, content provision method, and content provision program

Publications (1)

Publication Number Publication Date
JP2012155616A true JP2012155616A (en) 2012-08-16

Family

ID=46837258

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011015535A Withdrawn JP2012155616A (en) 2011-01-27 2011-01-27 Content provision system, content provision method, and content provision program

Country Status (1)

Country Link
JP (1) JP2012155616A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014071741A (en) * 2012-09-28 2014-04-21 Korea Inst Of Industrial Technology Internal state calculation device and method for expressing artificial emotion and recording medium
WO2014112024A1 (en) * 2013-01-21 2014-07-24 Necソフト株式会社 Emotion visualization device, emotion visualization method, and emotion visualization program
WO2014112025A1 (en) * 2013-01-21 2014-07-24 Necソフト株式会社 Screen changing device, screen changing method, and screen changing program
US20160026237A1 (en) * 2013-01-21 2016-01-28 Nec Solution Innovators, Ltd. Screen changing device, screen changing method, and screen changing program
JPWO2014112025A1 (en) * 2013-01-21 2017-01-19 Necソリューションイノベータ株式会社 Screen change device, screen change method, and screen change program
JPWO2014112024A1 (en) * 2013-01-21 2017-01-19 Necソリューションイノベータ株式会社 Emotion visualization device, emotion visualization method, and emotion visualization program
US9720509B2 (en) 2013-11-05 2017-08-01 Moff, Inc. Gesture detection system, gesture detection apparatus, and mobile communication terminal
JP2017507434A (en) * 2014-03-11 2017-03-16 リアルアイズ・オーウー How to generate web-based ad inventory and target web-based ads
JP2015173857A (en) * 2014-03-17 2015-10-05 株式会社東芝 Electronic apparatus and information processing method
US10307672B2 (en) 2014-05-19 2019-06-04 Moff, Inc. Distribution system, distribution method, and distribution device
JP2017538185A (en) * 2014-10-09 2017-12-21 クアルコム,インコーポレイテッド Method and system for behavior analysis of mobile device behavior based on user persona information
JP2018101341A (en) * 2016-12-21 2018-06-28 本田技研工業株式会社 Content providing device, content providing method and content providing system
US10360259B2 (en) 2016-12-21 2019-07-23 Honda Motor Co., Ltd. Content providing apparatus and method
WO2018216213A1 (en) * 2017-05-26 2018-11-29 株式会社オプティム Computer system, pavilion content changing method and program

Similar Documents

Publication Publication Date Title
KR101704848B1 (en) Visual representation expression based on player expression
US9668024B2 (en) Intelligent automated assistant for TV user interactions
TWI475410B (en) Electronic device and method thereof for offering mood services according to user expressions
CA2529603C (en) Intelligent collaborative media
US8442389B2 (en) Electronic apparatus, reproduction control system, reproduction control method, and program therefor
KR20170100067A (en) Intelligent automated assistant in a media environment
US8640021B2 (en) Audience-based presentation and customization of content
JP5632474B2 (en) Method and system for making visual display live-action through input learned from user
Picard et al. Toward agents that recognize emotion
US20150338917A1 (en) Device, system, and method of controlling electronic devices via thought
KR20160034243A (en) Apparatus and methods for providing a persistent companion device
EP2333778A1 (en) Digital data reproducing apparatus and method for controlling the same
EP2658272A1 (en) System and Method for dynamic content modification based on user reaction
JP5500334B2 (en) Information processing apparatus and method, and program
US7698238B2 (en) Emotion controlled system for processing multimedia data
JP2006012171A (en) System and method for using biometrics to manage review
US20080246778A1 (en) Controlling image and mobile terminal
JP2019502991A (en) Apparatus, system and method for forming an interface with a user and / or an external device by detecting a stationary state
US9122752B2 (en) Personalizing content based on mood
US9582246B2 (en) Voice-command suggestions based on computer context
US7065711B2 (en) Information processing device and method, and recording medium
US9292887B2 (en) Reducing transmissions of measurements of affective response by identifying actions that imply emotional response
US8704760B2 (en) Image display apparatus capable of recommending contents according to emotional information
US9160773B2 (en) Mood-based organization and display of co-user lists
US8280827B2 (en) Multilevel semiotic and fuzzy logic user and metadata interface means for interactive multimedia system having cognitive adaptive capability

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20140401