JP2014075076A - Server device - Google Patents

Server device

Info

Publication number
JP2014075076A
JP2014075076A
Authority
JP
Japan
Prior art keywords
information
display
environment
acquired
server
Prior art date
Legal status
Granted
Application number
JP2012223032A
Other languages
Japanese (ja)
Other versions
JP6224308B2 (en)
Inventor
Kazuo Shimoda
Atsushi Sasaki
Original Assignee
Aoi Pro Inc
Priority date
Filing date
Publication date
Application filed by Aoi Pro Inc
Priority to JP2012223032A
Publication of JP2014075076A
Application granted
Publication of JP6224308B2
Status: Active

Abstract

Provided is a server device capable of causing a display device to display video that corresponds to the environment of the display device and improves user satisfaction.
A video distribution server includes an environment information acquisition unit 30a that acquires environment information relating to the environment of a display device 12, and an information processing unit 30b that searches for and acquires, based on a search condition, image information and text information held by devices on the Internet, generates video information that displays the acquired image information and text information in combination, and outputs the video information to the display device 12. The information processing unit 30b changes the contents of the search condition in accordance with the environment of the display device 12, based on the environment information acquired by the environment information acquisition unit 30a.
[Selected drawing] FIG. 2

Description

  The present invention relates to a server device connected to a display device via a network.

Conventionally, a system is known that includes a display device (video information receiving terminal) having a display panel and a server device (video distribution management server) that is connected to the display device via the Internet and distributes video to it (see, for example, Patent Document 1).

JP 2007-212487 A

The display device described above can be installed in various places, such as a coffee shop, a restaurant or bar, the waiting room of a financial institution, or the shared space of an office building. The atmosphere of the space in which the display device is installed, the number of people present in it, and so on vary with the time of day, the weather, and other factors, and are not constant. In other words, the environment of the display device changes for a variety of reasons. If the video displayed on the display device can be adapted to that environment, the satisfaction of the users viewing the video can be improved.
Furthermore, if the displayed video does not bore the user and appeals effectively to the user's sensibility, user satisfaction improves, and the attractiveness and commercial value of the system consisting of the display device and the server device improve as well.
The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a server device that can cause a display device to display video that corresponds to the environment and improves user satisfaction.

In order to achieve the above object, the present invention provides a server device connected to a display device via a network, comprising: environment information acquisition means for acquiring environment information relating to the environment of the display device; and information processing means for searching for and acquiring, based on a search condition, image information held by devices on the network, generating video information for displaying the acquired image information, and outputting the generated video information to the display device, wherein the information processing means changes the contents of the search condition in accordance with the environment of the display device, based on the environment information acquired by the environment information acquisition means.
Here, image information refers to information (data, or a group of data) for displaying a still image or a moving image on predetermined display means, such as a still image file or a moving image file. In recent years, with the development of the Internet and the expansion of services available through it, countless pieces of image information, such as still image files and video files of landscape images, animations, and promotional videos, exist on the network. Given this situation, according to the above configuration, the information processing means searches for and acquires image information stored in devices on the network based on the search condition, generates video information based on the acquired image information, and outputs the video information to the display device. The server device therefore does not repeatedly display the same video on the display device according to some fixed rule; instead, it draws on the countless pieces of image information on the network, selecting from an extremely wide range of options, and can thus display video that improves user satisfaction without boring the user.
Further, according to the above configuration, the environment information acquisition means acquires environment information relating to the environment in which the display device is installed, while the information processing means changes the contents of the search condition in accordance with that environment, based on the acquired environment information. The server device can therefore search for and collect image information that corresponds to the environment in which the display device is installed, and can thereby cause the display device to display video corresponding to that environment. Note that the environment information acquisition means may acquire the environment information by any method: for example, the environment information may be input by means that involve human operation, performed directly on the server device or indirectly via a network or the like, or it may be detected and acquired automatically without human intervention.

Further, according to the present invention, the information processing means searches for and acquires, based on the search condition, image information and text information held by devices on the network, generates video information for displaying the acquired image information and text information in combination, and outputs the generated video information to the display device.
Here, text information is information expressed in characters that has meaning as language. In recent years, with the development of the Internet and the expansion of services available through it, countless pieces of text information exist on the network, such as news, weather information, article text, and advertisements. Given this situation, according to the above configuration, the information processing means searches for and acquires image information and text information stored in devices on the network based on the search condition, generates video information for displaying the acquired image information and text information in combination, and outputs the video information to the display device. The server device therefore does not repeatedly display the same video according to some fixed rule; instead, it draws on the vast amount of image information and text information on the network, can display video selected from an extremely wide range of options, and can thus display video that improves user satisfaction without boring the user.
Furthermore, because the image information and text information are searched for based on the search condition rather than collected at random, information having a certain uniformity or relevance can be properly retrieved and collected. By displaying the collected image information and text information in combination, the images (moving or still) displayed from the image information appeal to the user's intuitive sensibility, while the text displayed from the text information stimulates the user's logical thinking; video that appeals to the user's sensibility in a multifaceted and effective manner can thus be displayed, improving user satisfaction.
In particular, according to the above configuration, the image information and the text information are searched for and collected independently, and the video information is generated by combining them. A wide variety of combinations of images and text can therefore be presented, which keeps users from becoming bored. Moreover, an unplanned combination of image and text may appeal to the user's sensibility and produce an unexpected synergistic effect; in this respect, highly entertaining video can be provided.

In the present invention, the search condition is a search method for metadata associated with the various types of information held by the devices on the network, and the information processing means changes the metadata search method based on the environment information acquired by the environment information acquisition means.
In general, metadata conforming to a predetermined format is attached to image information and text information on a network, and searches are performed using this metadata as a key. According to the above configuration, information corresponding to the environment of the display device can be searched for and collected by the simple method of changing the metadata search method based on the environment information.
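As a minimal sketch of this idea in Python (the environment categories and the key table are invented for illustration and are not part of the disclosure), changing the search condition amounts to swapping the metadata key derived from the environment information:

    # Illustrative only: the metadata search method is changed simply by
    # swapping the key looked up from the environment information.
    ENV_TO_METADATA_KEY = {
        "office district / morning": "calm landscape",
        "downtown / night": "neon cityscape",
    }

    def metadata_query(environment: str) -> str:
        # Fall back to a generic key when the environment is unknown.
        return ENV_TO_METADATA_KEY.get(environment, "general")

    print(metadata_query("office district / morning"))  # -> "calm landscape"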

Further, according to the present invention, the environment information acquisition means acquires, as the environment information, installation position information indicating the position where the display device is provided, and the information processing means changes the contents of the search condition in correspondence with the position where the display device is installed, based on the installation position information acquired by the environment information acquisition means, or based on a combination of the installation position information and other environment information.
The environment of the display device varies depending on where it is provided, for example, whether in a coffee shop in a business district or a pub in a downtown area. According to the above configuration, since the information processing means changes the search condition based on the installation position information indicating the position where the display device is provided, appropriate image information and text information corresponding to that position can be searched for and collected, and video corresponding to the environment of the display device can be displayed on the display device.

In the present invention, the information processing means acquires an attribute of the place where the display device is installed based on the installation position information, and changes the contents of the search condition in correspondence with that attribute.
Attributes of the place where the display device is installed include, for example, classifications according to the character of an area, such as office district, downtown, or residential district, and classifications according to the purpose of the space in which the display device is provided, such as coffee shop, tavern, or karaoke house; they are attributes that change depending on the location. The environment of the display device varies with the attribute of the place where it is installed. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the attribute of the place where the display device is provided, and the display device can display video corresponding to that attribute.

Further, according to the present invention, the display device has a function of acquiring information indicating its own position, and the environment information acquisition means acquires the information indicating the position of the display device through communication from the display device via the network, and acquires the installation position information based on the acquired information.
According to this configuration, the environment information acquisition means can reliably acquire the installation position information by using this function of the display device.

Further, the present invention is configured to be able to communicate, via the network, with an external device that is located in the vicinity of the display device and has a function of acquiring information indicating its own position; information indicating the position of the external device is acquired from the external device through communication via the network, and the installation position information is acquired based on the acquired information.
According to this configuration, the environment information acquisition means can communicate with the external device and can reliably acquire the installation position information by using the function of the external device.

Further, according to the present invention, the environment information acquisition means acquires, as the environment information, date and time information relating to the current date and time, and the information processing means changes the contents of the search condition in correspondence with the current date and time, based on the date and time information acquired by the environment information acquisition means, or based on a combination of the date and time information and other environment information.
Here, date and time information broadly refers to information involving the concept of time, such as the time, the date, the time zone, the day of the week, and the season. The environment of the display device varies with the current date and time; for example, the environment differs between morning and night, and the video suited to the environment differs accordingly. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the current date and time, and can cause the display device to display video corresponding to the current date and time.

In the present invention, the display device includes photographing means for photographing its surroundings to generate a photographed image, and the environment information acquisition means acquires the photographed image as the environment information through communication from the display device via the network; the information processing means changes the contents of the search condition in correspondence with the environment of the display device shown in the photographed image, based on the photographed image acquired by the environment information acquisition means, or based on a combination of the photographed image and other environment information.
According to this configuration, the information processing means changes the contents of the search condition based on a photographed image of the surroundings of the display device, so appropriate image information and text information corresponding to the actual environment of the display device, as analyzed from the photographed image, can be searched for and collected, and video corresponding to that environment can be displayed on the display device.

Further, the present invention is characterized in that the information processing means detects the brightness around the display device based on the photographed image and changes the contents of the search condition according to that brightness.
The atmosphere around the display device differs depending on its surrounding brightness, and the video suited to the environment differs accordingly. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the brightness around the display device, and the display device can display video corresponding to that brightness.

Further, according to the present invention, the information processing means detects the number of people present around the display device based on the photographed image, and changes the contents of the search condition according to that number.
The atmosphere around the display device differs depending on the number of people present around it, and the video suited to the environment differs accordingly. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the number of people present around the display device, and video corresponding to that number of people can be displayed.

Further, according to the present invention, the display device includes sound collection means for collecting sound and generating sound collection information, and the environment information acquisition means acquires the sound collection information as the environment information through communication from the display device via the network; the information processing means changes the contents of the search condition in correspondence with the environment of the display device indicated by the sound collection information, based on the sound collection information acquired by the environment information acquisition means, or based on a combination of the sound collection information and other environment information.
The atmosphere around the display device varies with the surrounding sound, for example the ambient sound, and the video suited to the environment varies accordingly. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the sound conditions around the display device, and the display device can display video corresponding to those conditions.

According to the present invention, the information processing means determines, based on the sound collection information, whether music is playing in the place where the display device is installed, and if music is playing, changes the contents of the search condition in correspondence with the music.
The video suited to the environment differs depending on what kind of music is playing in the place where the display device is installed. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the music playing in that place, and the display device can display video corresponding to it.

Further, according to the present invention, the environment information acquisition means acquires, as the environment information, weather information relating to the weather at the place where the display device is installed, and the information processing means changes the contents of the search condition in correspondence with that weather, based on the weather information acquired by the environment information acquisition means, or based on a combination of the weather information and other environment information.
The atmosphere around the display device differs depending on the weather at the place where it is installed, and the video suited to the environment differs accordingly. According to the above configuration, the information processing means can search for and collect appropriate image information and text information corresponding to the weather at that place, and the display device can display video corresponding to the surrounding weather.

The present invention is configured to be able to communicate with a terminal via the network, and the information processing means, upon an instruction from the terminal while the video information is being output to the display device, stores the text information included in that video information and information related to the text information, and outputs the stored text information and related information to the terminal in response to a request from the terminal.
According to this configuration, a user can operate as follows. A user viewing the video displayed on the display device operates his or her own terminal when a displayed text catches his or her interest. Predetermined data is then output from the terminal to the server device, and the information processing means of the server device stores the text information for that text and information related to it. When the user later wants to know more about the text, he or she accesses the server device and retrieves the text information and the related information.
As a result, the user does not merely view the text based on the text information but can actively acquire the text, or information related to it, whenever there is text he or she needs or wants to note, further improving convenience.

  According to the present invention, it becomes possible to cause a display device to display video that corresponds to the environment and improves user satisfaction.

FIG. 1 is a diagram showing the configuration of a video distribution system according to the present embodiment. FIG. 2 is a block diagram showing the functional configurations of a video distribution server and a display device. FIG. 3 is a diagram showing an example of video displayed on the display device. FIG. 4 is a flowchart showing the operation of the video distribution server. FIG. 5 is a diagram schematically showing the contents of the first to sixth search key tables. FIG. 6 is a flowchart showing the operation of the video distribution server. FIG. 7 is a diagram schematically showing the relationship between the definition emotion K, the psychological effect classification S, and the large tag D. FIG. 8 is a flowchart showing the operation of the video distribution server.

Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<First Embodiment>
FIG. 1 is a diagram showing a configuration of a video distribution system 1 according to the present embodiment.
As shown in FIG. 1, the video distribution system 1 includes a video distribution server 10 (server device).
The video distribution server 10 is a server device developed, maintained, and operated by a service provider that offers the video distribution service described later; the video distribution service is provided by functions realized through the cooperation of software and hardware in the video distribution server 10.
The video distribution server 10 is connected to the Internet 11 (network) and can communicate with other devices via the Internet 11. Where necessary, secure communication between the video distribution server 10 and another device is performed using existing technology such as communication conforming to a predetermined encryption protocol or a virtual private line.

As shown in FIG. 1, a plurality of display devices 12 are connected to the Internet 11. The display device 12 has (1) a function of communicating with the video distribution server 10 via a network such as the Internet 11, and (2) a function of displaying video on predetermined display means such as a liquid crystal display panel or an organic EL panel. For example, a TV with an Internet connection function (a so-called smart TV), a personal computer, a mobile terminal such as a tablet, or a mobile phone such as a smartphone can be used as the display device 12. In this example the Internet 11 is given as an example of the network, but the network between the video distribution server 10 and the display device 12 may be any existing network, including physical or virtual dedicated lines and other predetermined lines.
The video distribution service according to the present embodiment is intended to display video on the display device 12 in a new manner that has not existed before, by means of the functions of the video distribution server 10. The service is assumed to be applied, for example, in the following manner.

  As an example, the display device 12 is provided in a restaurant such as a coffee shop, restaurant, or pub, at a position where customers who visit the store (hereinafter, a person who views the display device 12 is generally referred to as a "user") can see it. As will be described in detail later, the video distribution server 10 then continuously displays video that suits the environment of the display device 12, is comfortable for users, and appeals to their sensibility. For example, a user enjoying conversation at a restaurant may glance at the display device 12 between exchanges and find a topic of conversation; a user creating a document on his or her own PC may look at the display device 12 between tasks and get some inspiration; and a user eating and drinking alone may find looking at the display device 12 somehow pleasant and a relief from boredom. The examples given here are only illustrations: the display device 12 can be provided widely in places where users go, such as shared spaces in office buildings, meeting rooms, waiting rooms of hospitals, financial institutions, and stations, classrooms, private rooms in karaoke houses, and private homes, and the video distribution server 10 displays on the display device 12 video appropriate to the environment, according to the place where the display device 12 is provided and other conditions.

  As shown in FIG. 1, a mobile phone 15 (terminal) is connected so that it can communicate with the video distribution server 10 via a telephone network 14 to which a base station 13 is connected, or via a wireless LAN access point. The mobile phone 15 is owned by a user who views the display device 12; for example, when the display device 12 is provided in a restaurant, the mobile phone 15 is owned by a user who visits the restaurant and views the video displayed on the display device 12.

As shown in FIG. 1, an image information providing server 18, a text information providing server 19, and a voice information providing server 20 are connected to the Internet 11.
The image information providing server 18 is a server that can provide image information to the video distribution server 10. Image information is information (data, or a group of data) for displaying a still image or a moving image on display means, such as a still image file in a predetermined format such as JPEG or a moving image file in a predetermined format such as MPEG.
For example, the image information providing server 18 may be a server for a video or still image posting site, in which case the image information is a posted moving image or still image file; it may be a server hosting a company home page or a personal blog, in which case the image information is an uploaded moving image or still image file; or it may be a server managed by a company or organization that distributes video content such as promotional videos as a business, in which case the image information is the content of the provided moving images or still images. In recent years, as technologies related to the Internet 11 have developed and services via the Internet 11 have expanded, countless pieces of image information have come to be stored on the countless server devices connected to the Internet 11. The image information providing server 18 represents, for convenience, any one of the server devices that can provide its stored image information to the video distribution server 10. Providing image information from the image information providing server 18 to the video distribution server 10 requires that copyright issues be cleared; in this embodiment, it is assumed that the necessary contracts have been concluded between the relevant entities and the service provider, and that copyright issues have been fully resolved. The same applies to the text information and audio information described later.
In the following description, still images and moving images are collectively referred to as "images" and are clearly distinguished from the "text" described later.

The text information providing server 19 is a server that can provide text information to the video distribution server 10. Text information is information expressed in characters that has meaning as language, for example news, articles, diaries, and advertisements expressed as text.
For example, the text information providing server 19 may be a server managed by an entity that distributes news, such as a news agency or a newspaper company, in which case the text information is news-related information. Text information also includes text written on home pages and blogs, text posted on short-text posting sites, and the like. In recent years, countless pieces of text information have come to be stored on the countless server devices connected to the Internet 11; the text information providing server 19 represents, for convenience, any one of the server devices that can provide its stored text information to the video distribution server 10.
The audio information providing server 20 is a server that can provide audio information to the video distribution server 10. Audio information is information (data, or a group of data) that holds sound, such as an audio file in a predetermined format such as MP3. The audio information providing server 20 may be, for example, a server for a music posting site, in which case the audio information is a posted audio file; a server hosting a company home page or a personal blog, in which case the audio information is an uploaded audio file; or a server managed by a company or organization that distributes music content as a business, in which case the audio information is the content of the provided music. In recent years, countless pieces of audio information have come to be stored on the countless server devices connected to the Internet 11; the audio information providing server 20 represents, for convenience, any one of the server devices that can provide its stored audio information to the video distribution server 10.

  As shown in FIG. 1, a weather information notification server 21 is connected to the Internet 11. The weather information notification server 21 is a server device with a function of providing weather-related information, such as the weather in each area. The necessary contracts are concluded between the entity that manages the weather information notification server 21 and the service provider, and the video distribution server 10 can access the weather information notification server 21 as needed to obtain the weather information it requires.

FIG. 2 is a block diagram illustrating functional configurations of the video distribution server 10 and the display device 12.
As shown in FIG. 2, the video distribution server 10 includes a control unit 30, a display unit 31, an input unit 32, a storage unit 33, and an interface unit 34.
The control unit 30 centrally controls each unit of the video distribution server 10, and includes a CPU as an arithmetic execution unit, a ROM that stores in a nonvolatile manner the various programs executed by the CPU, a RAM that temporarily stores the programs executed by the CPU and data related to them, and other peripheral circuits. The control unit 30 includes an environment information acquisition unit 30a (environment information acquisition means) and an information processing unit 30b (information processing means), described later. The functions of the environment information acquisition unit 30a and the information processing unit 30b are realized by the cooperation of hardware and software, with the CPU of the control unit 30 reading and executing a predetermined program.
The display unit 31 includes a display panel such as a liquid crystal display panel, and displays various types of information on the display panel under the control of the control unit 30.
The input unit 32 is connected to a predetermined input device, detects an operation on the input device, and outputs the operation to the control unit 30.
The storage unit 33 includes a nonvolatile storage device such as a hard disk, and stores various data in a rewritable manner.
The interface unit 34 communicates with various devices including the display device 12 according to the communication standard via the Internet 11 under the control of the control unit 30.
Note that the configuration of the video distribution server 10 is not limited to that shown in FIG. 2; for example, it may be a configuration in which a plurality of server devices cooperate, or part of its functions may be centralized while others are distributed. That is, the configuration is not limited as long as the various functions of the video distribution server described later can be realized.

As shown in FIG. 2, the display device 12 includes a device-side control unit 40, a device-side display unit 41, a device-side input unit 42, a position detection unit 43, a camera device 44, an audio processing unit 45, and an interface unit 46.
The device-side control unit 40 centrally controls each unit of the display device 12, and includes a CPU, a ROM, a RAM, other peripheral circuits, and the like.
The device-side display unit 41 includes a display panel 41a such as a liquid crystal display panel or an organic EL panel, and displays a predetermined image on the display panel 41a under the control of the device-side control unit 40.
The device-side input unit 42 is connected to operation switches provided on the display device 12, detects operations on the switches, and outputs them to the device-side control unit 40.
The position detection unit 43 includes an antenna that receives GPS radio waves transmitted from GPS (Global Positioning System) satellites; based on the GPS radio waves and with reference to map data stored in a storage unit (not shown), it generates installation position information indicating the coordinates (longitude, latitude) of the position where the display device 12 is provided, and outputs it to the device-side control unit 40.
The camera device 44 (photographing means) includes an image sensor such as a CCD or CMOS image sensor, a group of photographing lenses, and a lens driving unit that drives the lens group to adjust zoom, focus, and so on, and performs photographing under the control of the device-side control unit 40. The device-side control unit 40 generates a photographed image based on the output of the image sensor, using the function of a predetermined installed application.
The audio processing unit 45 (sound collection means) is connected to a speaker 48; it converts the audio signal input from the device-side control unit 40 from digital to analog and outputs it to the speaker 48, thereby emitting the corresponding sound. The audio processing unit 45 is also connected to a microphone 49; it converts the signal of the sound collected by the microphone 49 from analog to digital to generate sound collection information, and outputs the sound collection information to the device-side control unit 40.
The interface unit 46 performs communication based on communication standards with various devices capable of communication under the control of the apparatus-side control unit 40.
Note that the display device 12 in FIG. 2 is merely an example; it can have any configuration depending on its type (smart TV, personal computer, or the like).

Next, the video distribution service will be described in detail.
First, the basic operation of the video distribution server 10 when providing a video distribution service will be briefly described.
The video distribution service is basically a service in which the video distribution server 10 distributes video to the display device 12 (outputs video information) and causes the display device 12 to display it. Any existing distribution method can be applied, such as streaming playback or a method in which the display device 12 first downloads a file and then opens the downloaded file for playback.
In the video distribution service, the information processing unit 30b of the video distribution server 10 searches the image information provided by the image information providing server 18 using a predetermined search expression (search condition) built from search keys, and acquires (downloads) the image information that matches the search condition. That is, from among the countless pieces of image information on the Internet 11, the information processing unit 30b searches for those that match the search condition.
As is well known, image information normally stores metadata intended to be used as tags when searching. For example, the comment segment formed in a predetermined area of a JPEG file can store metadata usable as search tags, and in an MPEG-7 file, metadata can be described in XML. The information processing unit 30b uses the function of an existing search engine that performs searches using this image metadata to search for and acquire image information that matches the search condition. Note that the image search function may be implemented in the video distribution server 10 itself, or the function of an external server may be used.
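To make the JPEG case concrete, the following minimal sketch (illustrative only; it assumes tags are stored as comma-separated plain text in the COM segment, which the embodiment does not specify) scans a JPEG file for its comment (COM, 0xFFFE) segment and matches it against a search key:

    import struct

    def read_jpeg_comment(path: str) -> str:
        """Return the text of the first COM (0xFFFE) segment, or ''."""
        with open(path, "rb") as f:
            data = f.read()
        i = 2  # skip the SOI marker (0xFFD8)
        while i + 4 <= len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            if marker == 0xDA:          # SOS: compressed data follows
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            if marker == 0xFE:          # COM segment holds the comment
                return data[i + 4:i + 2 + length].decode("utf-8", "replace")
            i += 2 + length             # skip marker, length, and payload
        return ""

    def matches(path: str, search_key: str) -> bool:
        # Assumes tags are stored as comma-separated text, e.g. "luxury,calm".
        tags = [t.strip() for t in read_jpeg_comment(path).split(",")]
        return search_key in tags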

Similarly, the information processing unit 30b searches the text information provided by the text information providing server 19 using a search expression (search condition) built from search keys, and acquires (downloads) the text information that matches the search condition. In other words, from among the countless pieces of text information on the Internet 11, it searches the acquirable items for those that match the search condition.
For example, in the case of text information relating to news distributed by a server managed by a news agency or the like, tags (for example, current affairs, breaking news, sports, entertainment) are usually associated with each news item, so the information processing unit 30b uses the function of a predetermined search engine that searches by these tags to find text information matching the search condition. When the text information is text on a company or personal home page, the information processing unit 30b uses the function of a predetermined search engine that searches using index information as metadata to find text information matching the search condition.
Furthermore, the information processing unit 30b searches the audio information provided by the audio information providing server 20 using a search expression (search condition) built from search keys, and acquires (downloads) the audio information that matches the search condition. In other words, from among the countless pieces of audio information on the Internet 11, it searches the acquirable items for those that match the search condition. As is well known, music information stores metadata intended for use as search tags (for example, the metadata of an MP3 file), so the information processing unit 30b searches for and acquires matching audio information using the function of an existing search engine that searches by this metadata.
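The same tag-matching pattern applies across all three information types. A minimal sketch, assuming each candidate item carries a list of tag strings (the records and their layout are invented for illustration):

    # Hypothetical candidate records: (kind, identifier, tags).
    CANDIDATES = [
        ("image", "img-001", ["luxury", "interior"]),
        ("text",  "news-17", ["economic news", "breaking news"]),
        ("audio", "mp3-042", ["low tempo", "jazz"]),
    ]

    def search(kind: str, search_key: str):
        """Return identifiers of items of the given kind whose tags match."""
        return [ident for k, ident, tags in CANDIDATES
                if k == kind and search_key in tags]

    print(search("text", "economic news"))  # -> ['news-17']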

After searching for the image information, text information, and audio information as described above, the information processing unit 30b generates video information that displays, in combination, an image (moving or still) based on the image information and text based on the text information. The video information is, for example, an image file used for streaming playback when the video is displayed on the display device 12 by streaming.
FIG. 3 is a diagram illustrating an example of the video displayed on the display panel 41a of the display device 12 based on the video information.
As shown in FIG. 3, in the video based on the video information, an image G (moving or still) based on the image information is displayed over the entire area of the display panel 41a, and text B based on the text information is displayed as a string in a predetermined area below it. In this manner, the information processing unit 30b generates, based on the image information and text information, video information that displays a combination of the image and the text in a manner such as that shown in FIG. 3. The video of FIG. 3 is merely an example; the position of the text based on the text information is not limited to that shown in FIG. 3, and other layouts are possible. The video distribution server 10 has a function of combining image information and text information in a predetermined manner by performing predetermined image processing on the image information; video information is generated from the image information and text information by this function.
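As a concrete illustration of this compositing step, the following minimal sketch uses the Pillow imaging library (an illustrative choice; the embodiment does not name an implementation, and the file names are placeholders) to draw a text band over the lower area of a frame, in the manner of FIG. 3:

    from PIL import Image, ImageDraw

    def compose_frame(image_path: str, text: str) -> Image.Image:
        """Draw a text band over the bottom of the frame, as in FIG. 3."""
        frame = Image.open(image_path).convert("RGB")
        draw = ImageDraw.Draw(frame)
        w, h = frame.size
        band_top = int(h * 0.85)
        # Dark band so the text stays legible over any image.
        draw.rectangle([0, band_top, w, h], fill=(0, 0, 0))
        draw.text((10, band_top + 10), text, fill=(255, 255, 255))
        return frame

    frame = compose_frame("scene.jpg", "Today's weather: clear, high 18C")
    frame.save("frame_out.jpg")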
Next, the information processing unit 30b outputs the generated video information to the display device 12 in accordance with a predetermined protocol, together with the audio information, causing the display panel 41a to display the video indicated by the video information while sound is output based on the audio information. Here, "outputting the video information to the display device 12 to display the video" means, in the case of streaming playback, transferring and playing back the image file of the generated video information in a predetermined procedure, and, in the case of the download method, downloading the image file of the video information and then opening and playing it.

The information processing unit 30b displays video on the display device 12 continuously by repeatedly generating and outputting video information as described above. Since countless pieces of image information and text information exist on the Internet 11, a considerable number of them can be expected to match the search condition; the information processing unit 30b sequentially generates video information from the retrieved and collected items according to a predetermined priority order, and displays the video.
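A minimal sketch of this continuous, priority-ordered generation loop (the pairing scheme, the interval, and the output helper are assumptions, not from the embodiment):

    import itertools, time

    def distribute(images: list, texts: list, output_fn, interval_s: int = 30):
        """Cycle through collected material, pairing an image with a text and
        emitting one piece of video information per interval. The input
        lists are assumed to be pre-sorted by priority."""
        for img, txt in itertools.cycle(zip(images, texts)):
            video_info = {"image": img, "text": txt}  # stand-in for encoding
            output_fn(video_info)                     # deliver to the display device
            time.sleep(interval_s)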
As described above, in the present embodiment the video distribution server 10 does not repeatedly display the same video according to a predetermined rule from image files prepared in advance; rather, it searches for and collects image information and text information, generates video information from them in order, and displays it. It can therefore display varied video at random rather than the same video repeatedly, improving user satisfaction without boring the user.

Further, the information processing unit 30b searches for and acquires image information and text information on the Internet 11 based on the search condition, generates video information that displays them in combination, and outputs it to the display device 12. Because the image information and text information are searched for based on the search condition rather than collected at random, information having a certain uniformity or relevance can be properly retrieved. By displaying the collected image information and text information in combination, the images (moving or still) appeal to the user's intuitive sensibility while the text stimulates the user's logical thinking, so video that appeals to the user's sensibility in a multifaceted and effective manner can be displayed, improving user satisfaction.
In particular, since the image information and the text information are searched for and collected independently and the video information is generated by combining them, a wide variety of image-and-text combinations can be presented, which keeps users from becoming bored. Moreover, an unplanned combination of image and text may appeal to the user's sensibility and produce an unexpected synergistic effect; in this respect, highly entertaining video can be provided.

As described above, the display device 12 is assumed to be installed in various places, such as a coffee shop, a restaurant or bar, the waiting room of a financial institution, or the shared space of an office building. The atmosphere of the space in which the display device 12 is installed, the number of people present in it, and so on vary with the time of day, the weather, and other factors, and are not constant; that is, the environment of the display device 12 changes for a variety of reasons. If the video displayed on the display device 12 can be adapted to its environment, the video and the surrounding atmosphere will match, and the user's comfort and satisfaction can be improved without creating a sense of incongruity.
Accordingly, the video distribution server 10 according to the present embodiment performs the following operations to display video appropriately matched to the environment of the display device 12, according to the place where the display device 12 is provided and other conditions.

FIG. 4 is a flowchart of the operations performed when the video distribution server 10 provides the video distribution service. The process shown in the flowchart of FIG. 4 constructs a search condition appropriate to the environment of the display device 12; more specifically, it appropriately selects the search keys used when searching for image information, text information, and audio information. The process is performed at predetermined, preset timings, for example every hour.
Referring to FIG. 4, the environment information acquisition unit 30a of the control unit 30 of the video distribution server 10 communicates with the device-side control unit 40 of the display device 12 in accordance with a predetermined protocol and acquires the installation position information (step SA1). As described above, the installation position information indicates the coordinates (longitude, latitude) of the position where the display device 12 is provided, and is generated by the position detection unit 43 of the display device 12.
Next, the information processing unit 30b obtains the "land pattern" of the place where the display device 12 is installed (an attribute of that place) based on the installation position information acquired by the environment information acquisition unit 30a (step SA2). The process of step SA2 is described in detail below.
A land pattern is a characteristic or property of an area on a map, such as downtown, office district, residential district, urban area, or regional city. In the present embodiment, a map database is stored in the storage unit 33 of the video distribution server 10; the map is divided into a plurality of areas, and information indicating a land pattern is stored in association with each area. In step SA2, the information processing unit 30b determines, from the installation position information and the map database, which area on the map contains the installation position of the display device 12, and acquires the land pattern associated with that area as the land pattern of the place where the display device 12 is installed.
In the present embodiment, the video distribution server 10 itself holds the map database and acquires the land pattern based on the installation position information, but it may instead be configured to query an external server.

Next, the information processing unit 30b acquires the "space type" of the place where the display device 12 is installed (an attribute of that place) (step SA3).
The space type is a classification according to the use of the space or area in which the display device 12 is provided, such as coffee shop, pub, karaoke house, station, office building, classroom, or street. In the map database described above, a space type is stored in association with each building and facility on the map; based on the installation position information and the map database, the information processing unit 30b identifies the building or facility where the display device 12 is installed, and acquires the space type associated with it as the space type of the place where the display device 12 is installed.
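A minimal sketch of the lookups of steps SA2 and SA3, simplifying the map database to rectangular areas carrying land patterns and point entries carrying space types (all data and tolerances are invented for illustration):

    # Hypothetical map database: rectangular areas carry a land pattern;
    # facilities are point entries carrying a space type.
    AREAS = [
        # (lon_min, lat_min, lon_max, lat_max, land_pattern)
        (139.75, 35.65, 139.78, 35.68, "office district"),
        (139.69, 35.65, 139.71, 35.67, "downtown"),
    ]
    FACILITIES = [
        # (lon, lat, space_type)
        (139.76, 35.66, "coffee shop"),
        (139.70, 35.66, "pub"),
    ]

    def land_pattern(lon: float, lat: float) -> str:
        for x0, y0, x1, y1, pattern in AREAS:
            if x0 <= lon <= x1 and y0 <= lat <= y1:
                return pattern
        return "unknown"

    def space_type(lon: float, lat: float, tol: float = 0.005) -> str:
        # Pick the nearest registered facility within a small tolerance.
        best = min(FACILITIES, key=lambda f: (f[0] - lon) ** 2 + (f[1] - lat) ** 2)
        if (best[0] - lon) ** 2 + (best[1] - lat) ** 2 <= tol ** 2:
            return best[2]
        return "unknown"

    print(land_pattern(139.76, 35.66), space_type(139.76, 35.66))
    # -> office district coffee shop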
Next, the information processing unit 30b refers to the first search key table K1 and specifies the first search key based on the land pattern acquired in step SA2 and the space type acquired in step SA3 (step SA4).

FIG. 5A is a diagram schematically illustrating an example of the first search key table K1.
The first search key table K1 is a table that stores a search key in association with each combination of land pattern and space type. As shown in FIG. 5A, a separate search key exists for each of image information, text information, and audio information, and the first search key table K1 stores the search key for each type of information in association with each land pattern and space type combination; the same applies to the search keys in the other tables described later. The first search key table K1 may be defined within the program that realizes the function of the information processing unit 30b, or may be data stored in the storage unit 33 in a form the program can refer to; the same applies to each table described later.
Depending on the combination of land pattern and space type, the state and atmosphere (environment) of the place where the display device 12 is provided differ, and so do the images that suit that atmosphere. For example, a bar in a downtown area and a coffee shop in an office district have different atmospheres, and the images their customers find comfortable differ. The first search key table K1 therefore stores, for each combination of land pattern and space type, search keys for retrieving image information, text information, and audio information suited to the environment. The search key associated with each combination can be set freely by the person in charge.
In step SA4, the information processing unit 30b refers to the first search key table K1 and specifies, as the first search key, the keys associated with the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. For example, in the example of FIG. 5A, when the land pattern is "office district" and the space type is "coffee shop", the information processing unit 30b specifies "luxury" as the first search key for image information, "economic news" as the first search key for text information, and "low tempo" as the first search key for music information.
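A minimal sketch of the table K1 lookup, seeded with the single row given for FIG. 5A (any further rows and the fallback values are invented for illustration):

    # First search key table K1: (land_pattern, space_type) ->
    # search keys for image, text, and audio information.
    K1 = {
        ("office district", "coffee shop"):
            {"image": "luxury", "text": "economic news", "audio": "low tempo"},
        ("downtown", "pub"):  # illustrative additional row
            {"image": "lively", "text": "sports", "audio": "up tempo"},
    }

    def first_search_key(land_pattern: str, space_type: str) -> dict:
        return K1.get((land_pattern, space_type),
                      {"image": "general", "text": "news", "audio": "pop"})

    print(first_search_key("office district", "coffee shop"))
    # -> {'image': 'luxury', 'text': 'economic news', 'audio': 'low tempo'}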

In the present embodiment, the display device 12 includes the position detection unit 43, which detects its own position using GPS radio waves, but the display device 12 need not have a function of automatically detecting its own position. Even in that case, the environment information acquisition unit 30a can acquire the installation position information of the display device 12 by the following means. In recent years, mobile phones and mobile terminals (external devices) with GPS-based position detection functions have become widespread. Taking advantage of this, the video distribution server 10 publishes a dedicated page accessible from a mobile phone or the like; when, for example, an operator near the display device 12 accesses the dedicated page from a mobile phone and performs the necessary operation, the mobile phone outputs its position information to the video distribution server 10, and the environment information acquisition unit 30a acquires that position information as the installation position information of the display device 12.
Alternatively, a dedicated application that causes the mobile phone or similar device to output its position information to the video distribution server 10 via the Internet 11 may be installed, and the environment information acquisition unit 30a may acquire the installation position information through the function of this application.
Alternatively, the video distribution server 10 may provide a predetermined user interface to the display device 12 or to a device such as a mobile phone, and a user may enter information about the place where the display device 12 is installed; that is, the environment information acquisition unit 30a may acquire the installation position information by means that involve human operation.

Now, after the first search key is specified, the environment information acquisition unit 30a acquires date/time information indicating the current day of the week and time (step SA5). The day of the week and the time are acquired by existing techniques, for example using an RTC or the CPU clock.
Next, the information processing unit 30b refers to the second search key table K2 and specifies the second search key based on the date / time information acquired in Step SA5 (Step SA6).
FIG. 5B is a diagram schematically illustrating an example of the second search key table K2.
The second search key table K2 exists for each combination of land pattern and space type. In step SA6, the information processing unit 30b refers to the second search key table K2 corresponding to the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. FIG. 5B shows, as an example, the second search key table K2 for the case of land pattern: office district, space type: coffee shop.
The second search key table K2 is a table that stores search keys in association with each combination of the day of the week and the time zone to which the time belongs.
Here, depending on the combination of the day of the week and the time zone to which the current time belongs, the state and atmosphere of the place where the display device 12 is provided differ, and the video suited to the environment also differs. For example, the atmosphere of the place differs between a weekday morning and a holiday night, and so do the videos that customers find comfortable. The second search key table K2 therefore stores, in association with each combination of the day of the week and the time zone to which the current time belongs, search keys for searching for image information, text information, and audio information corresponding to the environment.
In step SA6, the information processing unit 30b refers to the second search key table K2 and specifies, as the second search key, the search key associated with the combination of the day of the week and the time acquired in step SA5. For example, in the example of FIG. 5B, when the day of the week is "Monday" and the time zone is "10:00 to 12:00", the information processing unit 30b specifies "morning" as the second search key for image information, "today's weather" as the second search key for text information, and "classic" as the second search key for audio information.
In this example, the date/time information is the day of the week and the time, but any information including the concept of "time", such as the date, the day of the week, or the season, can be widely used as date/time information.
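A minimal sketch of the step SA5-SA6 lookup follows, assuming the "office district"/"coffee shop" table of FIG. 5B; the time-zone boundaries and the second table entry are illustrative assumptions.

```python
import datetime
from typing import Optional

# Sketch of the second search key table K2 lookup (steps SA5-SA6).
SECOND_SEARCH_KEY_TABLE_K2 = {
    ("Monday", (10, 12)): {"image": "morning", "text": "today's weather", "audio": "classic"},
    ("Friday", (18, 22)): {"image": "night", "text": "weekend events", "audio": "jazz"},
}

def second_search_keys(now: datetime.datetime) -> Optional[dict]:
    day = now.strftime("%A")  # e.g. "Monday"
    for (table_day, (start, end)), keys in SECOND_SEARCH_KEY_TABLE_K2.items():
        if day == table_day and start <= now.hour < end:
            return keys
    return None  # no entry for this day/time zone

print(second_search_keys(datetime.datetime(2012, 10, 8, 11, 0)))  # a Monday, 11:00
# {'image': 'morning', 'text': "today's weather", 'audio': 'classic'}
```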

After specifying the second search key, the environment information acquisition unit 30a communicates with the device-side control unit 40 of the display device 12 in accordance with a predetermined protocol and acquires a captured image taken by the camera device 44 (step SA7). The captured image acquired by the environment information acquisition unit 30a is one taken at a time as close as possible to the current time.
Next, the information processing unit 30b acquires the brightness (bright/dark) around the display device 12 based on the captured image acquired by the environment information acquisition unit 30a (step SA8). Any existing method can be applied to this determination; for example, the average of the gradation values of the color components of some or all of the pixels constituting the captured image is calculated, and the calculated average is compared with a predetermined threshold value.
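The brightness determination described above (average gradation value versus a threshold) can be sketched as follows; the threshold value of 128 and the use of the Pillow library are assumptions for the example.

```python
from PIL import Image

BRIGHTNESS_THRESHOLD = 128  # midpoint of the 0-255 gradation range (assumed)

def brightness_of(captured_image_path: str) -> str:
    """Step SA8: average the gradation values of the pixels and compare."""
    img = Image.open(captured_image_path).convert("L")  # grayscale gradations
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    return "bright" if average >= BRIGHTNESS_THRESHOLD else "dark"
```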
Next, the information processing unit 30b refers to the third search key table K3, and specifies the third search key based on the brightness around the display device 12 acquired in step SA8 (step SA9).
FIG. 5C is a diagram schematically illustrating an example of the third search key table K3.
The third search key table K3 exists for each combination of land pattern and space type. In step SA9, the information processing unit 30b refers to the third search key table K3 corresponding to the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. FIG. 5C shows, as an example, the third search key table K3 for the case of land pattern: office district, space type: coffee shop.
The third search key table K3 is a table that stores search keys in association with each brightness (bright/dark).
Here, depending on whether the periphery of the display device 12 is bright or dark, the state and atmosphere of the place where the display device 12 is provided differ, and the video suited to the environment also differs. For example, when the surroundings are bright, the space in which the display device 12 is provided tends to be lively; when they are dark, the space tends to be calm. The third search key table K3 therefore stores, in association with each brightness (bright/dark) around the display device 12, search keys for searching for image information, text information, and audio information corresponding to the environment.
In step SA9, the information processing unit 30b refers to the third search key table K3 and specifies, as the third search key, the search key associated with the brightness (bright/dark) acquired in step SA8. For example, in the example of FIG. 5C, when the surroundings of the display device 12 are "bright", the information processing unit 30b specifies "bright" as the third search key for image information and audio information.

Now, after the third search key is specified, the information processing unit 30b acquires the number of humans present around the display device 12 (many/few) based on the captured image acquired by the environment information acquisition unit 30a in step SA7 (step SA10). Any existing method can be applied to the process of step SA10. For example, the information processing unit 30b performs existing face detection on the captured image, detects the number of human faces appearing in the captured image, and compares the detected number with a predetermined threshold value to determine whether the number of humans present around the display device 12 is large or small.
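As one example of the "existing face detection" mentioned above, the step SA10 count can be sketched with OpenCV's bundled Haar cascade detector; the threshold of 3 faces and the detector parameters are assumptions for the example.

```python
import cv2

FACE_COUNT_THRESHOLD = 3  # assumed boundary between "few" and "many"

def crowd_level(captured_image_path: str) -> str:
    """Step SA10: count detected faces and compare with a threshold."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(captured_image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return "many" if len(faces) >= FACE_COUNT_THRESHOLD else "few"
```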
Next, the information processing unit 30b refers to the fourth search key table K4, and specifies the fourth search key based on the number of people present around the display device 12 acquired in Step SA10 (Step SA11).
FIG. 5D is a diagram schematically illustrating an example of the fourth search key table K4.
The fourth search key table K4 exists for each combination of land pattern and space type. In step SA11, the information processing unit 30b refers to the fourth search key table K4 corresponding to the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. FIG. 5D shows, as an example, the fourth search key table K4 for the case of land pattern: office district, space type: coffee shop.
The fourth search key table K4 is a table that stores search keys in association with each number of humans (many/few) around the display device 12.
Here, depending on whether there are many or few people around the display device 12, the state and atmosphere of the place where the display device 12 is provided differ, and the video suited to the environment also differs. The fourth search key table K4 therefore stores, in association with each number of humans (many/few) around the display device 12, search keys for searching for image information, text information, and audio information corresponding to the environment.
In step SA11, the information processing unit 30b refers to the fourth search key table K4 and specifies, as the fourth search key, the search key associated with the number of humans (many/few) acquired in step SA10. For example, in the example of FIG. 5D, when the number of people around the display device 12 is "many", the information processing unit 30b specifies "lively" as the fourth search key for image information and audio information.

After specifying the fourth search key, the environment information acquisition unit 30a communicates with the device-side control unit 40 of the display device 12 in accordance with a predetermined protocol and acquires the sound collection information generated by the audio processing unit 45 (step SA12).
Next, the information processing unit 30b determines, based on the sound collection information acquired by the environment information acquisition unit 30a, whether music is playing in the place where the display device 12 is installed (step SA13). Any existing method can be used for this determination; for example, the state of a specific band of the sound waveform indicated by the sound collection information, the peak pitch of the waveform, and the like are analyzed, and whether music is playing is determined based on the analysis result.
If it is determined that music is not playing (step SA13: NO), the information processing unit 30b moves the processing procedure to step SA16.
On the other hand, if it is determined that music is playing (step SA13: YES), the information processing unit 30b further analyzes the sound collection information to determine whether the playing music is up-tempo or low-tempo (step SA14). Any existing method can be used for this determination; for example, the information processing unit 30b extracts a drum rhythm pattern from the sound waveform indicated by the sound collection information and determines whether the music is up-tempo or low-tempo based on the rhythm pattern.
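Steps SA13-SA14 can be sketched with an off-the-shelf tempo estimator; the embodiment does not specify a concrete implementation, so the use of librosa, the silence threshold, and the 100 BPM boundary below are all assumptions for the example.

```python
import librosa

TEMPO_THRESHOLD_BPM = 100.0  # assumed boundary between low tempo and up tempo

def classify_collected_sound(wav_path: str) -> str:
    """Rough sketch of steps SA13-SA14 on a recorded sound file."""
    y, sr = librosa.load(wav_path)
    if (y ** 2).mean() < 1e-4:           # nearly silent: treat as "no music"
        return "no music"
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)  # estimate BPM
    return "up tempo" if tempo >= TEMPO_THRESHOLD_BPM else "low tempo"
```

A production system would need a more robust music detector than this simple energy check, but the control flow mirrors the branch at step SA13.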

Next, the information processing unit 30b refers to the fifth search key table K5 and specifies the fifth search key based on the determination result of step SA14 (step SA15).
FIG. 5E is a diagram schematically illustrating an example of the fifth search key table K5.
The fifth search key table K5 exists for each combination of land pattern and space type. In step SA15, the information processing unit 30b refers to the fifth search key table K5 corresponding to the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. FIG. 5E shows, as an example, the fifth search key table K5 for the case of land pattern: office district, space type: coffee shop.
The fifth search key table K5 is a table that stores search keys in association with each music tempo (up tempo/low tempo).
Here, depending on the tempo of the music playing in the place where the display device 12 is installed, the state and atmosphere of the place differ, and the video suited to the environment also differs. For example, when the music being played is at a low tempo, the atmosphere around the display device 12 tends to be calm. The fifth search key table K5 therefore stores, in association with each music tempo (up tempo/low tempo), search keys for searching for image information, text information, and audio information corresponding to the environment.
In step SA15, the information processing unit 30b refers to the fifth search key table K5 and specifies, as the fifth search key, the search key associated with the music tempo determined in step SA14. For example, in the example of FIG. 5E, when the music is at a low tempo, the information processing unit 30b specifies "chic" as the fifth search key for image information.
In this example, the music tempo is determined in step SA14. However, the music genre may be determined, and the fifth search key corresponding to the genre may be specified.
If it is determined in step SA13 that music is playing, the information processing unit 30b sets a predetermined flag. Then, when providing the video distribution service, the information processing unit 30b does not search for, collect, or output audio information to the display device 12. This is because, if sound were output from the display device 12 while music is playing in the place where the display device 12 is installed, the sounds could overlap and give the user an unpleasant feeling.

Now, after specifying the fifth search key, the environment information acquisition unit 30a accesses the weather information notification server 21 via the Internet 11 and acquires weather information indicating the weather at the place where the display device 12 is installed (step SA16). In this example, the weather information is assumed to indicate one of rain/cloudy/sunny.
Next, the information processing unit 30b refers to the sixth search key table K6 and specifies the sixth search key based on the weather information acquired in step SA16 (step SA17).
FIG. 5F is a diagram schematically illustrating an example of the sixth search key table K6.
The sixth search key table K6 exists for each combination of land pattern and space type. In step SA17, the information processing unit 30b refers to the sixth search key table K6 corresponding to the combination of the land pattern acquired in step SA2 and the space type acquired in step SA3. FIG. 5F shows, as an example, the sixth search key table K6 for the case of land pattern: office district, space type: coffee shop.
The sixth search key table K6 is a table that stores search keys in association with each weather (rain/cloudy/sunny).
Here, depending on the weather at the place where the display device 12 is installed, the state and atmosphere of the place differ, and the video suited to the environment also differs. The sixth search key table K6 therefore stores, in association with each weather, search keys for searching for image information, text information, and audio information corresponding to the environment.
In step SA17, the information processing unit 30b refers to the sixth search key table K6 and specifies, as the sixth search key, the search key associated with the weather acquired in step SA16. For example, in the example of FIG. 5F, when the weather is "rain", the information processing unit 30b specifies "rain" as the sixth search key for each of image information, text information, and audio information.

Next, based on the first through sixth search keys specified by the above-described means, the information processing unit 30b generates a search expression (search condition) for searching for image information, text information, and audio information when providing the video distribution service (step SA18).
The search expression is generated, for example, as follows. Taking image information and text information as an example, the information processing unit 30b uses the logical sum of the specified first through sixth search keys as the search expression (search condition). In the example given in the description of the flowchart of FIG. 4, the search expression for image information is "luxury (first search key) or morning (second search key) or bright (third search key) or lively (fourth search key) or chic (fifth search key) or rain (sixth search key)", while the search expression for text information is "economic news (first search key) or today's weather (second search key) or rain (sixth search key)".
In the above example, the logical sum of the search keys is used as the search expression, but the form of the search expression is not limited to this. For example, a logical product may be taken over specific search keys, logical sums and logical products may be combined in a predetermined manner, a partial-match search may be used, or priorities or "weights" may be assigned to the search keys. That is, any method can be used as long as a search expression is generated from the specified search keys.
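The logical-sum form of step SA18 can be sketched as follows; the weighted variant at the end is one possible interpretation of the "weight" mentioned above, not behavior defined by the embodiment.

```python
# Sketch of step SA18: combine the specified search keys into a search expression.
def or_expression(keys: list[str]) -> str:
    return " or ".join(keys)

image_keys = ["luxury", "morning", "bright", "lively", "chic", "rain"]
text_keys = ["economic news", "today's weather", "rain"]

print(or_expression(image_keys))
# luxury or morning or bright or lively or chic or rain
print(or_expression(text_keys))
# economic news or today's weather or rain

# One assumed weighting scheme: earlier keys get higher priority.
weights = {key: len(image_keys) - i for i, key in enumerate(image_keys)}
```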
After generating the search expression in step SA18, the information processing unit 30b searches for and collects image information, text information, and audio information according to the generated search expression by the above-described method, generates video information based on the collected information, and causes the display device 12 to display a video based on the video information.

As described above, the video distribution server 10 according to the present embodiment acquires information related to the environment in which the display device 12 is installed (environment information) with the environment information acquisition unit 30a, specifies the search keys used for generating the search expression based on the acquired environment information, and generates the search expression using the specified search keys. That is, the search expression (search condition) is changed according to the environment in which the display device 12 is installed. For this reason, the video distribution server 10 can search for and collect image information, text information, and audio information corresponding to the environment in which the display device 12 is installed, and can cause the display device 12 to display a video corresponding to that environment.

Now, the video distribution server 10 according to the present embodiment further provides the following bookmark service.
That is, in the present embodiment, during the provision of the video distribution service, an image based on image information and text based on text information are displayed in combination on the display panel 41a of the display device 12 (see FIG. 3). The text based on the text information is, for example, news, an article, part of a diary recorded by another person, text posted by another person, an advertisement, or the like. A user viewing the display device 12 may become interested in the displayed text and want to know more, or may receive some inspiration from it and want to see it again later. The bookmark service is a service that addresses such cases.
A user who receives the bookmark service needs to install, in advance, a dedicated application having the functions described below on his or her mobile phone 15 (terminal). The application is provided by the service provider and is installed on the mobile phone 15 via an existing application download service or the like.

FIG. 6 is a flowchart showing the operation of the video distribution server 10 when the bookmark service is provided.
As a premise of the flowchart of FIG. 6, the following operation is performed by the user. A user who is viewing the video displayed on the display device 12 by the video distribution service starts the dedicated application on his or her mobile phone 15. If, while viewing the video, there is something of interest in the text displayed on the display panel 41a, the user performs a predetermined operation on the mobile phone 15 (for example, touching a predetermined icon displayed on the touch panel of the mobile phone 15). Then, by the function of the dedicated application, predetermined data including an identification code that uniquely identifies the mobile phone 15 is transmitted from the mobile phone 15 to the video distribution server 10 via the Internet (step SX1).
When the predetermined data is input, the information processing unit 30b of the control unit 30 of the video distribution server 10 acquires information related to the text information included in the video information currently being output to the display device 12 (step SB1). In the present embodiment, the information related to the text information is the text information itself and the URL of the text information provider. For example, when the text information is information related to news, the URL of the text information provider is the URL of a page describing that news (or of another page, such as a top page); when the text information is part of the text of a blog, it is the URL of the page of that blog. The URL is stored in a predetermined storage area in association with the text information when the information processing unit 30b searches for the text information. Note that the information related to the text information may also include information about the server that provided the text information, the providing company or individual, and the like.
Next, the information processing unit 30b stores information related to the text information in a predetermined storage area of the storage unit 33 in association with the identification code of the mobile phone 15 (step SB2).

Thereafter, the information processing unit 30b monitors whether there is a request from the mobile phone 15 to output the stored information related to the text information (step SB3). The application installed on the mobile phone 15 implements a function of transmitting, in accordance with a user instruction, data requesting the output of the information related to the text information together with the identification code of the mobile phone 15 to the video distribution server 10 via the Internet 11; the user starts the application and makes the above request through this function.
When there is a request to output the information related to the text information (step SB3: YES), the information processing unit 30b acquires the information related to the text information stored in the storage unit 33 using the identification code of the mobile phone 15 as a key (step SB4), outputs the acquired information to the mobile phone 15 via the Internet 11 in accordance with a predetermined protocol (step SB5), and causes the touch panel of the mobile phone 15 to display the text information and the information related to the text information.
As a result, when there is text that the user needs or text that catches the user's attention, the user can not only view the text based on the text information but also actively acquire the text and the information related to it, which further improves convenience.
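The storage and retrieval flow of steps SB1-SB5 can be sketched as follows; the in-memory dictionary stands in for the storage unit 33, and all names and values are assumptions for the example.

```python
# Sketch of the bookmark service: information related to text information is
# stored under the phone's identification code and returned on request.
bookmark_store: dict[str, list[dict]] = {}

def on_bookmark_instruction(identification_code: str, text: str, source_url: str) -> None:
    """Steps SB1-SB2: store the currently displayed text and its source URL."""
    bookmark_store.setdefault(identification_code, []).append(
        {"text": text, "url": source_url})

def on_output_request(identification_code: str) -> list[dict]:
    """Steps SB4-SB5: look up the stored information with the code as key."""
    return bookmark_store.get(identification_code, [])

on_bookmark_instruction("phone-001", "Stock prices rose ...", "http://example.com/news/123")
print(on_output_request("phone-001"))
# [{'text': 'Stock prices rose ...', 'url': 'http://example.com/news/123'}]
```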

As described above, the video distribution server 10 (server device) according to the present embodiment includes the environment information acquisition unit 30a (environment information acquisition means), which acquires environment information related to the environment in which the display device 12 is installed, and the information processing unit 30b (information processing means), which searches for and acquires image information and text information stored in devices on the Internet 11 based on the search condition, generates video information for displaying the acquired image information and text information in combination, and outputs the video information to the display device 12. The information processing unit 30b changes the content of the search condition in correspondence with the environment in which the display device 12 is installed, based on the environment information acquired by the environment information acquisition unit 30a.
According to this configuration, the video distribution server 10 does not repeatedly display the same video on the display device 12 according to some fixed rule; rather, it can display videos drawn from an extremely wide range of options, making suitable use of the innumerable pieces of image information and text information existing on the network, and can therefore display videos that improve user satisfaction without boring the user. Furthermore, by displaying appropriately searched and collected image information and text information in combination, the video distribution server 10 appeals to the user's sensibility with images (moving images and still images) based on the image information while stimulating the user's logical thinking with text displayed based on the text information, making it possible to display videos that appeal to the user's sensibility in a multifaceted and effective manner and thereby improve user satisfaction. Further, since the image information and the text information are searched for and collected independently and combined to generate the video information, combinations of images and text can be provided as video, so that the user does not tire of it in this respect either. Moreover, an unplanned combination of an image and text may appeal to the user's sensibility with an unexpected synergistic effect; in this respect, a highly entertaining video can be provided. Furthermore, according to the above configuration, the video distribution server 10 can search for and collect image information and text information corresponding to the environment in which the display device 12 is installed, and can thereby cause the display device 12 to display a video corresponding to that environment.

The search condition in the present embodiment is a search method for the metadata associated with the various types of information stored in devices on the Internet 11, and the information processing unit 30b changes the metadata search method based on the environment information acquired by the environment information acquisition unit 30a.
Generally, metadata is attached to image information and text information on a network in accordance with predetermined formats, and searches are performed using this metadata as a key. According to the above configuration, information corresponding to the environment of the display device 12 can be searched for and collected by the simple method of changing the metadata search method based on the environment information.

In the present embodiment, the environment information acquisition unit 30a acquires, as environment information, installation position information indicating the position where the display device 12 is provided, and the information processing unit 30b changes the content of the search condition in correspondence with the position where the display device 12 is installed, based on the installation position information acquired by the environment information acquisition unit 30a or based on a combination of the installation position information and other environment information.
According to this configuration, appropriate image information and text information corresponding to the position where the display device 12 is provided can be searched for and collected, and the display device 12 can display a video corresponding to its environment.

In the present embodiment, the information processing unit 30b acquires the attributes of the place where the display device 12 is installed (in this example, the land pattern and the space type) based on the installation position information, and changes the content of the search condition in correspondence with the attributes of the place where the display device 12 is installed.
According to this configuration, appropriate image information and text information corresponding to the attributes of the place where the display device 12 is provided can be searched for and collected, and the display device 12 can display a video corresponding to the attributes of its location.

In the present embodiment, the display device 12 includes the position detection unit 43, which has a function of acquiring information indicating its own position. The environment information acquisition unit 30a acquires the information indicating the position detected by the position detection unit 43 from the display device 12 through communication via the network and acquires the installation position information based on the acquired information.
According to this configuration, the environment information acquisition unit 30a can reliably acquire the installation position information by using the function of the display device 12.

When the display device 12 does not include the position detection unit 43, the environment information acquisition unit 30a acquires information indicating the position of an external device, such as a mobile phone, from the external device through communication via the Internet 11 and acquires the installation position information based on the acquired information.
According to this configuration, the environment information acquisition unit 30a can communicate with the external device, and can reliably acquire the installation position information by using the function of the external device.

In the present embodiment, the environment information acquisition unit 30a acquires date/time information regarding the current date and time as environment information, and the information processing unit 30b changes the content of the search condition in correspondence with the current date and time, based on the date/time information acquired by the environment information acquisition unit 30a or based on a combination of the date/time information and other environment information.
According to this configuration, it is possible to search and collect appropriate image information and text information corresponding to the current date and time, and to display a video corresponding to the current date and time on the display device 12.

In the present embodiment, the display device 12 includes the camera device 44 (photographing means), which photographs the periphery and generates a captured image. The environment information acquisition unit 30a acquires the captured image from the display device 12 through communication via the Internet 11 as environment information, and the information processing unit 30b changes the content of the search condition in correspondence with the environment in which the display device 12 is installed, as indicated by the captured image, based on the captured image acquired by the environment information acquisition unit 30a or based on a combination of the captured image and other environment information.
According to this configuration, appropriate image information and text information corresponding to the actual environment of the display device 12, as analyzed from the captured image, can be searched for and collected, and the display device 12 can display a video corresponding to that environment.

In the present embodiment, the information processing unit 30b detects the brightness around the display device 12 based on the photographed image, and changes the content of the search condition according to the brightness.
According to this configuration, appropriate image information and text information corresponding to the brightness around the display device 12 can be searched for and collected, and the display device 12 can display a video corresponding to the brightness around it.

Further, in the present embodiment, the information processing unit 30b detects the number of humans present around the display device 12 based on the captured image and changes the content of the search condition in correspondence with the number of humans.
According to this configuration, the information processing unit 30b can search for and collect appropriate image information and text information corresponding to the number of humans present around the display device 12, and the display device 12 can display a video corresponding to that number.

In the present embodiment, the display device 12 includes the audio processing unit 45 (sound collection means), which collects sound and generates sound collection information. The environment information acquisition unit 30a acquires the sound collection information from the display device 12 through communication via the Internet 11 as environment information, and the information processing unit 30b changes the content of the search condition in correspondence with the environment in which the display device 12 is installed, as indicated by the sound collection information, based on the sound collection information acquired by the environment information acquisition unit 30a or based on a combination of the sound collection information and other environment information.
According to this configuration, appropriate image information and text information corresponding to the state of the sound around the display device 12 can be searched for and collected, and the display device 12 can display a video corresponding to that state.

In the present embodiment, the information processing unit 30b determines, based on the sound collection information, whether music is playing in the place where the display device 12 is installed, and if music is playing, changes the content of the search condition in correspondence with the music.
According to this configuration, the information processing unit 30b can search for and collect appropriate image information and text information corresponding to the music playing in the place where the display device 12 is installed, and the display device 12 can display a video corresponding to that music.

Moreover, in the present embodiment, the environment information acquisition unit 30a acquires weather information regarding the weather at the place where the display device 12 is installed as environment information, and the information processing unit 30b changes the content of the search condition in correspondence with the weather at the place where the display device 12 is installed, based on the weather information acquired by the environment information acquisition unit 30a or based on a combination of the weather information and other environment information.
According to this configuration, the information processing unit 30b can search for and collect appropriate image information and text information corresponding to the weather at the place where the display device 12 is installed, and the display device 12 can display a video corresponding to the weather around it.

The video distribution server 10 according to the present embodiment can communicate with the mobile phone 15 (terminal) via the Internet 11. While video information is being output to the display device 12, the information processing unit 30b stores, in response to an instruction from the mobile phone 15, information related to the text information included in that video information, and, in response to a request from the mobile phone 15, outputs the stored information related to the text information to the mobile phone 15.
According to this configuration, when there is text that the user needs or text that catches the user's attention, the user can not only view the text based on the text information but also actively acquire the information related to it, which further improves convenience.

The above-described embodiment is merely an aspect of the present invention, and can be arbitrarily modified and applied within the scope of the present invention.
For example, in the above-described embodiment, the video distribution server 10 searches for three types of information (image information, text information, and audio information) according to the search condition and outputs video information based on a combination of image information and text information, or of image information, text information, and audio information. However, it is not always necessary to search for and output all three types of information; for example, image information and text information alone may be searched for, combined, and output, or image information alone may be searched for and output. Even with such a configuration, the same effects as those described in the above-described embodiment can be achieved.
Also, for example, in this embodiment the image information providing server 18, the text information providing server 19, and the audio information providing server 20 have been described as separate servers for convenience of explanation, but needless to say, these types of information may be stored in and provided by the same server.
In addition, for example, the method by which the environment information acquisition unit 30a acquires the environment information may take any form; for example, the environment information may be input to the video distribution server 10 directly, or indirectly via a network or the like, by means that include human intervention.

Second Embodiment
Next, a second embodiment will be described.
Here, a person who views the video displayed on the display device 12 based on the video information output from the video distribution server 10 (hereinafter simply "viewer") is a person with emotions. Therefore, if the display content of the display device 12 can be adapted to the viewer's emotion, the viewer's satisfaction can be dramatically improved, and the added value of the service provided by the video distribution server 10 can be further raised. Based on the above, the video distribution server 10 according to the present embodiment aims to cause the display device 12 to display video and the like corresponding as closely as possible to the viewer's emotion.
Note that the emotion of the viewer who views the display device 12 is a concept included in the environment of the display device 12.
The functional configuration of the video distribution server 10 according to the present embodiment is the same as that of the first embodiment (see FIG. 2), and detailed description of each component is omitted.

The inventors found that when "human emotion" is defined (expressed), a considerable number of definitions are possible. For example, human emotions can be defined in various ways, such as "I don't feel motivated", "I have nothing to look forward to", "I want to laugh", "I want to cry", and "I want to feel calm". Hereinafter, a defined human emotion is referred to as a "defined emotion K".
Further, the inventors discovered that the psychological effects (hereinafter "psychological efficacy") that image information, text information, and audio information (hereinafter simply "content") have on humans can be classified into roughly ten-odd categories. Psychological efficacy is a conceptual representation of the psychological influence and effect that a viewer who has viewed content receives from that content. For example, there is a psychological efficacy of "raising one's spirits", a psychological efficacy of "calming", a psychological efficacy of "amusing", and so on. Hereinafter, a classification of psychological efficacy is referred to as a "psychological efficacy type S".
In addition, the inventors found that, for a viewer who has a certain emotion, providing content belonging to a certain psychological efficacy type exerts the most effective psychological influence on that viewer. For example, given a psychological efficacy type S of "raising one's spirits", providing content belonging to that type to a viewer who has the emotion "I don't feel motivated" exerts the psychological influence most effectively.
Further, the inventors found that the psychological efficacy types S, classified into roughly ten-odd categories, can be grouped into several groups according to the "tendency" of their psychological efficacy. Based on this, the inventors grouped the psychological efficacy types into several groups and gave each group a tag that can be attached to content as metadata (hereinafter a "large tag D").

FIG. 7 is a diagram schematically illustrating the relationship between the defined emotions K, the psychological efficacy types S, and the large tags D.
In the example of FIG. 7, 100 defined emotions K, defined emotions K1 to K100, are defined. Furthermore, 15 psychological efficacy types S, psychological efficacy types S1 to S15, are defined. Each of the defined emotions K1 to K100 is associated with one of the psychological efficacy types S1 to S15 from the viewpoint of "most effectively exerting a psychological influence on the viewer". In the example of FIG. 7, the defined emotion K1 "I don't feel motivated" is associated with the psychological efficacy type S1 "raising one's spirits"; this association is made in consideration of the fact that providing content belonging to the psychological efficacy type related to "raising one's spirits" to a viewer who has the emotion "I don't feel motivated" exerts the most effective psychological influence.
In the example of FIG. 7, five large tags D, large tags D1 to D5, are defined. Each of the psychological efficacy types S1 to S15 is associated with one of the large tags D. As described above, a large tag D is a tag given to each group obtained by grouping the psychological efficacy types S according to the "tendency" of their psychological efficacy, and each of the psychological efficacy types S1 to S15 is associated with the large tag D corresponding to its own group.

The relationships between the defined emotions K, the psychological efficacy types S, and the large tags D shown in FIG. 7 are stored as data in the storage unit 33 of the video distribution server 10. These relationships can be changed at any time by human or automatic means, and information can also be added at any time (for example, adding a newly defined emotion K).
Hereinafter, the data stored in the storage unit 33 that indicates the relationships between the defined emotions K, the psychological efficacy types S, and the large tags D is referred to as "large tag relation data". As shown in FIG. 7, one defined emotion K is linked to one large tag D via the corresponding psychological efficacy type S, and by referring to the large tag relation data, the control unit 30 of the video distribution server 10 can identify the one large tag D corresponding to a given defined emotion K.
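The chain from a defined emotion K to its large tag D can be sketched as follows; the mappings shown are assumptions based on the K1/S1 example of FIG. 7.

```python
# Sketch of the large tag relation data: K -> S -> D.
EMOTION_TO_TYPE = {"K1": "S1"}    # K1 "I don't feel motivated" -> S1 "raising one's spirits"
TYPE_TO_LARGE_TAG = {"S1": "D1"}  # S1 belongs to the group tagged D1 (assumed)

def large_tag_for(defined_emotion: str) -> str:
    """Follow the chain K -> S -> D to identify the one large tag D."""
    return TYPE_TO_LARGE_TAG[EMOTION_TO_TYPE[defined_emotion]]

print(large_tag_for("K1"))  # D1
```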

In the present embodiment, the image information providing server 18 (see FIG. 1) stores and provides image information (content) with a large tag D as metadata. For example, the large tag D as metadata for image information is given as follows.
For example, based on the fact that each large tag D is a tag defined for each tendency of psychological efficacy of the psychological efficacy types S, an expert on content grasps the substance of the content and assigns an appropriate large tag D to the image information (content) by human means.
Also, for example, information about which image information (content) users access, by what means, and with what emotions may be collected, and a large tag D may be assigned to the image information (content) by human or automatic means based on the collected information. As one example, when a user accesses content, an online questionnaire about the user's emotion at that time is taken, the questionnaire results are accumulated in association with the content, the accumulated results are analyzed by human means or by automatic means using a program, and a large tag D is assigned to the image information (content) based on the analysis results. As another example, biological information (pulse, body temperature, blood pressure, etc.) of the person selecting the image information (content) is acquired at the time of access and accumulated in association with the content, the psychological state at the time the content was selected is analyzed based on the accumulated information, and a large tag D is assigned to the image information (content) based on the analysis results.
As with the image information in the image information providing server 18, the text information in the text information providing server 19 (see FIG. 1) and the audio information in the audio information providing server 20 (see FIG. 1) are also given appropriate large tags D as metadata.

As described above, the inventors found that while "human emotion" can be defined innumerably, content can be classified into roughly ten-odd types from the viewpoint of psychological efficacy, and further that the psychological efficacy types S can be grouped into several groups according to the "tendency" of their psychological efficacy. As a result, every defined emotion K is ultimately linked, via a psychological efficacy type S, to one of only about several (five, in this example) large tags D. By associating the defined emotions K with the large tags D in this way, human emotions, which can be defined innumerably, can be linked, on the basis of the characteristics of emotions and of content, to objective information usable for selecting content, namely tags serving as metadata.
The number of large tags D is about several (five, in this example). Therefore, the content is not subdivided per large tag D; in other words, a large number of pieces of content bearing a given large tag D can be secured. As a result, when content is searched for using one large tag D as a key, a large number of pieces of content are retrieved, and by displaying videos based on the many retrieved pieces of content on the display device 12, it is possible, as in the first embodiment, to display videos that improve user satisfaction without boring the user.

Next, the operation when the video distribution server 10 according to the present embodiment outputs video information to the display device 12 and displays the video on the display device 12 (during video distribution service) will be described.
FIG. 8 is a flowchart showing the above operation. In particular, FIG. 8 shows the operation of the video distribution server 10 when displaying on the display device 12 a video composed of an image based on image information and a text based on text information.
First, the environment information acquisition unit 30a of the control unit 30 of the video distribution server 10 acquires, as environment information, the defined emotion K corresponding to the emotion of the viewer of the display device 12 (step SC1).

The defined emotion K as environment information is acquired, for example, as follows.
For example, combinations of factors that affect human emotion (hereinafter "external factors"), such as the above-mentioned land pattern, space type, current time zone, day of the week, weather, season, temperature, and humidity, are stored in advance in association with defined emotions K. The correspondence between each combination of external factors and a defined emotion K is made appropriately after considering the influence that each external factor, and each combination of external factors, has on human psychology. For example, the combination land pattern = office district, space type = coffee shop, day of the week = Friday, weather = sunny, season = winter is associated with a defined emotion from the viewpoint that humans tend to have the feeling "I want to have fun" in such a situation. In step SC1, the environment information acquisition unit 30a acquires each of the external factors and specifies the defined emotion K associated with the acquired combination of external factors, thereby acquiring the defined emotion K as environment information; a minimal sketch of this lookup is shown below.
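This is a sketch only, assuming the factor combination given in the example above; the mapping entries are illustrative assumptions.

```python
# Sketch of acquiring a defined emotion K from a combination of external factors.
EXTERNAL_FACTORS_TO_EMOTION = {
    ("office district", "coffee shop", "Friday", "sunny", "winter"): "I want to have fun",
}

def defined_emotion_from_factors(land_pattern, space_type, day, weather, season):
    return EXTERNAL_FACTORS_TO_EMOTION.get(
        (land_pattern, space_type, day, weather, season))

print(defined_emotion_from_factors(
    "office district", "coffee shop", "Friday", "sunny", "winter"))
# I want to have fun
```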
Also, for example, the viewer of the display device 12 may be asked to install a dedicated application in advance on his or her mobile phone. This application has a function of providing a user interface through which the viewer selects his or her current emotion, and the viewer selects and confirms his or her emotion via the user interface. The selectable emotions in the user interface are prepared based on the defined emotions K. When the viewer's selection of an emotion is confirmed, information indicating the selected emotion is transmitted, by the function of the application, to the video distribution server 10 via the Internet 11 together with the position information of the mobile phone. The environment information acquisition unit 30a identifies the corresponding display device 12 based on the position information of the mobile phone and identifies the defined emotion K corresponding to the viewer's emotion based on the information indicating the selected emotion. In this way, the environment information acquisition unit 30a acquires the defined emotion K.
Further, for example, the environment information acquisition unit 30a may acquire, through communication, a captured image taken by the camera device 44 of the display device 12. The environment information acquisition unit 30a then analyzes the viewer's facial expression appearing in the captured image by optical methods, or analyzes the atmosphere of the space by optical methods in the same manner as in the first embodiment, thereby estimating the viewer's emotion and specifying the corresponding defined emotion K.
Further, for example, when the viewer wears a device that detects biological information, such as a heart rate monitor or a sphygmomanometer, the environment information acquisition unit 30a may acquire the biological information from the device through communication, estimate the viewer's emotion based on the acquired biological information, and specify the corresponding defined emotion K.
The method of acquiring the defined emotion K has been described above by example, but the method is not limited to these examples, and it goes without saying that any existing method can be applied. For example, the viewer's emotion may be estimated by taking into account both the combination of external factors and the analysis result of the captured image taken by the camera device 44, and the corresponding defined emotion K may then be identified.

When the defined emotion K as environment information is acquired by the environment information acquisition unit 30a, the information processing unit 30b refers to the above-described large tag relation data stored in the storage unit 33 (step SC2) and specifies the one large tag D corresponding to the defined emotion K acquired in step SC1 (step SC3). As described above, each defined emotion K is linked to one large tag D via a psychological efficacy type S, so by referring to the large tag relation data, the one large tag D corresponding to a given defined emotion K can be identified.
Next, the information processing unit 30b searches for and acquires image information and text information using the large tag D identified in step SC3 as a search key (step SC4). As described above, appropriate large tags D are assigned as metadata to image information and text information, so the image information and text information retrieved here reflect the viewer's emotion.
Next, the information processing unit 30b generates video information by combining the image information and the text information by the method described in the first embodiment (step SC5), outputs the generated video information to the display device 12, and causes it to display a video based on the video information (step SC6).
Since the video displayed on the display device 12 in this way is a video that effectively exerts a psychological influence on the viewer, it improves the viewer's satisfaction and further raises the added value of the video distribution service.
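The overall flow of steps SC1-SC6 can be sketched end to end as follows; search_content_by_tag() is a hypothetical stand-in for the metadata search against the content-providing servers, and all mappings and file names are assumptions for the example.

```python
# End-to-end sketch of the second embodiment's distribution flow.
EMOTION_TO_LARGE_TAG = {"K1": "D1"}  # defined emotion K -> large tag D (via type S)

def search_content_by_tag(kind: str, tag: str) -> list[str]:
    # Hypothetical stand-in for searching the providing servers by metadata.
    catalog = {("image", "D1"): ["img_a.jpg"], ("text", "D1"): ["cheerful news ..."]}
    return catalog.get((kind, tag), [])

def build_video_information(defined_emotion: str) -> dict:
    large_tag = EMOTION_TO_LARGE_TAG[defined_emotion]    # steps SC2-SC3
    images = search_content_by_tag("image", large_tag)   # step SC4
    texts = search_content_by_tag("text", large_tag)     # step SC4
    return {"images": images, "texts": texts}            # step SC5 (video information)

print(build_video_information("K1"))
# the result would then be output to the display device 12 (step SC6)
```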

1 Video distribution system
10 Video distribution server (server device)
11 Internet (network)
12 Display device
15 Mobile phone (terminal)
30 Control unit
30a Environment information acquisition unit (environment information acquisition means)
30b Information processing unit (information processing means)

Claims (15)

1. In a server device connected to a display device via a network, the server device comprising:
    environment information acquisition means for acquiring environment information relating to the environment of the display device; and
    information processing means for searching for and acquiring image information held by devices on the network based on a search condition, generating video information for displaying the acquired image information, and outputting the generated video information to the display device,
    wherein the information processing means changes the content of the search condition in correspondence with the environment of the display device based on the environment information acquired by the environment information acquisition means.
2. The server device according to claim 1, wherein the information processing means searches for and acquires image information and text information held by devices on the network based on the search condition, generates video information for displaying the acquired image information and text information in combination, and outputs the generated video information to the display device.
3. The server device according to claim 1, wherein the search condition is a search method for metadata associated with the various information held by devices on the network, and the information processing means changes the search method for the metadata based on the environment information acquired by the environment information acquisition means.
4. The server device according to claim 1, wherein the environment information acquisition means acquires, as the environment information, installation position information indicating the position where the display device is provided, and the information processing means changes the content of the search condition in correspondence with the position where the display device is installed, based on the installation position information acquired by the environment information acquisition means or based on a combination of the installation position information and other environment information.
5. The server device according to claim 4, wherein the information processing means acquires an attribute of the place where the display device is installed based on the installation position information and changes the content of the search condition in correspondence with the attribute of the place where the display device is installed.
6. The server device according to claim 4, wherein the display device has a function of acquiring information indicating its own position, and the environment information acquisition means acquires the information indicating the position of the display device from the display device through communication via the network and acquires the installation position information based on the acquired information.
7. The server device according to claim 4, configured to be able to communicate via the network with an external device that is located in the vicinity of the display device and has a function of acquiring information indicating its own position, wherein the environment information acquisition means acquires the information indicating the position of the external device from the external device through communication via the network and acquires the installation position information based on the acquired information.
  8. The server device according to claim 1, wherein the environment information acquisition means acquires, as the environment information, date and time information on the current date and time, and the information processing means changes the content of the search condition in correspondence with the current date and time, based on the date and time information acquired by the environment information acquisition means or on a combination of the date and time information and other environment information.
  9. The server device according to claim 1, wherein the display device includes photographing means for photographing its surroundings and generating a photographed image, the environment information acquisition means acquires, as the environment information, the photographed image from the display device through communication via the network, and the information processing means changes the content of the search condition in correspondence with the environment of the display device indicated by the photographed image, based on the photographed image acquired by the environment information acquisition means or on a combination of the photographed image and other environment information.
  10. The server device according to claim 9, wherein the information processing means detects the brightness of the surroundings of the display device based on the photographed image and changes the content of the search condition in accordance with the detected brightness.
  11. The server device, wherein the information processing means detects the condition of people present around the display device based on the photographed image and changes the content of the search condition in accordance with the detected condition of the people.
  12. The server device according to claim 1, wherein the display device includes sound collection means for collecting sound and generating sound collection information, the environment information acquisition means acquires, as the environment information, the sound collection information from the display device through communication via the network, and the information processing means changes the content of the search condition in correspondence with the environment of the display device indicated by the sound collection information, based on the sound collection information acquired by the environment information acquisition means or on a combination of the sound collection information and other environment information.
  13. The server device according to claim 12, wherein the information processing means determines, based on the sound collection information, whether music is playing in the place where the display device is installed and, if music is playing, changes the content of the search condition in correspondence with the music.
  14. The server device according to claim 1, wherein the environment information acquisition means acquires, as the environment information, weather information on the weather at the place where the display device is installed, and the information processing means changes the content of the search condition in correspondence with the weather at that place, based on the weather information acquired by the environment information acquisition means or on a combination of the weather information and other environment information.
  15. The server device according to claim 1, configured to be able to communicate with a terminal via the network, wherein the information processing means, while outputting the video information to the display device and in accordance with an instruction from the terminal, stores the text information included in the video information being output to the display device together with information related to the text information, and outputs the stored text information and the related information to the terminal in response to a request from the terminal.
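To illustrate the control flow recited in claims 1 and 2 (acquire environment information, adapt the search condition to the display device's environment, retrieve image and text information, then generate and output video information), the following is a minimal sketch in Python. It is not the patented implementation: all names (EnvironmentInfo, build_search_condition, distribute, and the injected search/render/output callables) are hypothetical, and the keyword heuristics are invented for illustration.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Callable, List, Optional

    @dataclass
    class EnvironmentInfo:
        # Hypothetical container for the environment information of claim 1;
        # the fields mirror the environment sources recited in claims 4 to 14.
        place_attribute: str = "coffee shop"                  # installation position (claims 4-5)
        now: datetime = field(default_factory=datetime.now)   # current date and time (claim 8)
        brightness: float = 0.5                               # from a photographed image (claims 9-10)
        people_count: int = 0                                 # people around the display (claim 11)
        music_genre: Optional[str] = None                     # from sound collection info (claims 12-13)
        weather: str = "clear"                                # weather at the place (claim 14)

    def build_search_condition(env: EnvironmentInfo) -> dict:
        # Change the content of the search condition to match the environment.
        keywords = [env.place_attribute, env.weather]
        if env.music_genre:
            keywords.append(env.music_genre)
        if env.brightness < 0.3 or env.now.hour >= 18:
            keywords.append("calm night scene")   # dim or evening: calmer imagery
        if env.people_count > 10:
            keywords.append("vivid")              # crowded space: eye-catching imagery
        return {"keywords": keywords, "max_results": 20}

    def distribute(env: EnvironmentInfo,
                   search: Callable[[dict], List[dict]],
                   render: Callable[[List[dict]], bytes],
                   output: Callable[[bytes], None]) -> None:
        # The three callables stand in for the information processing means:
        # retrieve image/text information, generate video information, output it.
        condition = build_search_condition(env)
        materials = search(condition)
        video = render(materials)
        output(video)

The environment-to-keyword rules here are deliberately simple; the claims leave the concrete mapping from environment information to search condition open.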
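Claim 10 does not specify how brightness is detected from the photographed image. One plausible approach, sketched here with the Pillow imaging library (an assumption; the claims name no library), is to take the mean luminance of a grayscale version of the image:

    from PIL import Image

    def detect_brightness(image_path: str) -> float:
        # Return the mean luminance of the photographed image, normalized to 0..1.
        img = Image.open(image_path).convert("L")   # 8-bit grayscale
        histogram = img.histogram()                 # 256 bins of pixel counts
        total_pixels = sum(histogram)
        mean_level = sum(level * count for level, count in enumerate(histogram)) / total_pixels
        return mean_level / 255.0

A threshold on this value (for example, below 0.3 meaning dim surroundings) could then drive a keyword change of the kind shown in build_search_condition above.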
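Claim 15's store-and-retrieve behavior (keep the text information shown on the display device when a terminal so instructs, and return it later on request) could be realized with a per-terminal store along the following lines; this is a hypothetical sketch, and TextClipStore and its methods are invented names:

    class TextClipStore:
        # Stores text information clipped from the distributed video, per terminal.

        def __init__(self) -> None:
            self._clips: dict = {}   # terminal id -> list of stored clips

        def store(self, terminal_id: str, text: str, related_info: dict) -> None:
            # Called on an instruction from the terminal while video is being output.
            self._clips.setdefault(terminal_id, []).append(
                {"text": text, "related": related_info})

        def fetch(self, terminal_id: str) -> list:
            # Called on a request from the terminal; returns the stored text
            # information together with its related information.
            return self._clips.get(terminal_id, [])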
JP2012223032A 2012-10-05 2012-10-05 Server device Active JP6224308B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012223032A 2012-10-05 2012-10-05 Server device

Publications (2)

Publication Number Publication Date
JP2014075076A (en) 2014-04-24
JP6224308B2 (en) 2017-11-01

Family

ID=50749173

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012223032A Active JP6224308B2 (en) 2012-10-05 2012-10-05 Server device

Country Status (1)

Country Link
JP (1) JP6224308B2 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002342642A (en) * 2001-05-14 2002-11-29 Olympus Optical Co Ltd System for displaying video
JP2003125379A (en) * 2001-10-17 2003-04-25 Nippon Telegraph & Telephone West Corp Information providing system, method thereof, information distributing server apparatus, contents distribution server apparatus, street television set, mobile communication terminal and program
JP2004220498A (en) * 2003-01-17 2004-08-05 Toppan Printing Co Ltd Advertisement delivery system and method
JP2007052295A (en) * 2005-08-18 2007-03-01 Daiei Inc Store with advertisement space, advertisement display method of store, and advertisement display program of store
JP2007193591A (en) * 2006-01-19 2007-08-02 Casio Comput Co Ltd Web search server and web search method
JP2007235992A (en) * 2007-05-03 2007-09-13 Hitachi Ltd Advertisement display control apparatus and method
JP2008048113A * 2006-08-15 Nippon Telegr & Teleph Corp (NTT) Dynamic image data distribution system, dynamic image data distribution method, dynamic image data providing method, dynamic image data distribution program, dynamic image data providing program, and computer-readable recording medium recorded with these programs
JP2008225315A (en) * 2007-03-15 2008-09-25 Konica Minolta Holdings Inc Advertisement display system
JP2010146412A * 2008-12-19 Nippon Telegr & Teleph Corp (NTT) Device, method and program for selecting content
JP2012053773A * 2010-09-02 Nippon Telegr & Teleph Corp (NTT) Peripheral information display system, and peripheral information display program
JP2012098598A (en) * 2010-11-04 2012-05-24 Yahoo Japan Corp Advertisement providing system, advertisement provision management device, advertisement provision management method, and advertisement provision management program


Legal Events

Date      Code  Title                                                                           Description
20151002  A621  Written request for application examination                                     JAPANESE INTERMEDIATE CODE: A621
20160808  A977  Report on retrieval                                                             JAPANESE INTERMEDIATE CODE: A971007
20160906  A131  Notification of reasons for refusal                                             JAPANESE INTERMEDIATE CODE: A131
20161107  A521  Written amendment                                                               JAPANESE INTERMEDIATE CODE: A523
20170509  A131  Notification of reasons for refusal                                             JAPANESE INTERMEDIATE CODE: A131
20170705  A601  Written request for extension of time                                           JAPANESE INTERMEDIATE CODE: A601
20170818  A711  Notification of change in applicant                                             JAPANESE INTERMEDIATE CODE: A711
20170904  A521  Written amendment                                                               JAPANESE INTERMEDIATE CODE: A523
20170818  A521  Written amendment                                                               JAPANESE INTERMEDIATE CODE: A821
          TRDD  Decision of grant or rejection written
20170919  A01   Written decision to grant a patent or to grant a registration (utility model)   JAPANESE INTERMEDIATE CODE: A01
20171005  A61   First payment of annual fees (during grant procedure)                           JAPANESE INTERMEDIATE CODE: A61
          R150  Certificate of patent or registration of utility model                          Ref document number: 6224308; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150