WO2010126042A1 - Content output system

Content output system

Info

Publication number
WO2010126042A1
WO2010126042A1 (PCT/JP2010/057464)
Authority
WO
WIPO (PCT)
Prior art keywords
content
unit
storage unit
search condition
terminal device
Prior art date
Application number
PCT/JP2010/057464
Other languages
French (fr)
Japanese (ja)
Inventor
淳 新谷
寺田 智
重幸 山中
英知 大槻
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2010126042A1 publication Critical patent/WO2010126042A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/951: Indexing; Web crawling techniques
    • G06F16/30: Information retrieval of unstructured textual data; Database structures therefor; File system structures therefor
    • G06F16/38: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/383: Retrieval characterised by using metadata automatically derived from the content

Definitions

  • The present invention relates to a content output system that displays and reproduces content such as images and sounds, as well as to a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program.
  • Patent Document 1 describes a method for enjoying content owned by a user in combination with content on the Internet.
  • In that method, a photographic image owned by the user and a desired photographic image selected by the user from among a large number of photographic images on the Internet are arranged in combination, and these photographic images are printed out.
  • In Patent Document 1, however, the user must specify all of the content and its arrangement positions, and the input operation for this is complicated. In particular, when a large amount of content exists on the server, the user must find and select the desired content from this large amount of content, which is extremely troublesome.
  • The present invention has been made in view of the above conventional problems, and its object is to provide a content output system, a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program that can efficiently search for and use useful content from among a large amount of content stored in a server device or the like on a network.
  • To achieve this object, the content output system of the present invention is a content output system that performs information communication between a terminal device and a server device through a network.
  • The terminal device includes: a first content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each content group, generates a search condition based on the accompanying information of the contents classified into that group; a communication unit that transmits the search condition generated for each content group to the server device and receives, as a response, content corresponding to the search condition from the server device; and an output unit that, for each content group, outputs both the content classified into that group and the content received from the server device.
  • In this configuration, the first content storage unit exists in a terminal device such as the user's own personal computer, while the second content storage unit exists in a server device on the network.
  • The contents are classified into one or more content groups based on the classification condition, and for each content group a search condition is generated based on the accompanying information of the contents classified into that group, so that content corresponding to the search condition can be retrieved from the contents stored in the server device.
  • Then, for each content group, the content obtained from the classification condition and the content obtained from the search condition can be output together. That is, if the classification condition is set appropriately on the personal computer side, the plurality of contents stored in the first content storage unit are classified into one or more content groups based on that condition.
  • For each content group, a search condition is generated based on the accompanying information of the contents classified into the group, content matching the search condition is retrieved on the server device side, and both the content classified into the group and the content obtained from the search condition are output together. Therefore, simply by setting the classification condition on the personal computer side, mutually related contents are selected on both the personal computer side and the server device side and output together. This makes it possible to efficiently retrieve useful content from a large amount of content stored in a server device or the like on the network and use it on the terminal device.
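As a concrete illustration of the flow just described (classify on the terminal, generate a search condition per group, search on the server, output both sets together), consider the following Python sketch. The grouping rule, the field names, and the bounding-box style of search condition are all illustrative assumptions; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class Content:
    name: str
    shot_at: float               # shooting time, in hours from some origin
    position: tuple              # (latitude, longitude)

def classify(contents, max_gap_hours=2.0):
    """Classification unit: sort by date/time and start a new content
    group whenever the gap to the previous content exceeds a threshold."""
    groups = []
    for c in sorted(contents, key=lambda c: c.shot_at):
        if groups and c.shot_at - groups[-1][-1].shot_at <= max_gap_hours:
            groups[-1].append(c)
        else:
            groups.append([c])
    return groups

def search_condition(group):
    """Search condition generation unit: summarise the accompanying
    information of one group as a time span plus a position bounding box."""
    times = [c.shot_at for c in group]
    lats = [c.position[0] for c in group]
    lons = [c.position[1] for c in group]
    return {"time": (min(times), max(times)),
            "lat": (min(lats), max(lats)),
            "lon": (min(lons), max(lons))}

def server_search(cond, server_contents):
    """Search unit on the server: return contents whose accompanying
    information falls inside the received search condition."""
    return [c for c in server_contents
            if cond["time"][0] <= c.shot_at <= cond["time"][1]
            and cond["lat"][0] <= c.position[0] <= cond["lat"][1]
            and cond["lon"][0] <= c.position[1] <= cond["lon"][1]]

# First content storage unit (terminal) and second (server); data is hypothetical.
local = [Content("a.jpg", 1.0, (35.00, 139.00)),
         Content("b.jpg", 2.0, (35.10, 139.10)),
         Content("c.jpg", 10.0, (34.00, 135.00))]
shared = [Content("s1.jpg", 1.5, (35.05, 139.05)),
          Content("s2.jpg", 50.0, (0.0, 0.0))]

# Output unit: for each group, local and server results are output together.
outputs = [group + server_search(search_condition(group), shared)
           for group in classify(local)]
```

Under these assumptions the first group (the two morning photos near 35°N, 139°E) automatically pulls in the shared photo taken at the same time and place, without the user selecting anything by hand.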
  • The output unit may set a display layout of the content classified into the content group and the content received from the server device, and display and output these contents in that display layout.
  • The output unit may output the content classified into the content group and the content received from the server device in a mutually identifiable manner.
  • The output unit may output the content classified into the content group and the content received from the server device together with the accompanying information of each content.
  • This configuration makes it possible to check the accompanying information of each output content.
  • The terminal device may include an input operation unit for inputting the classification condition.
  • This configuration makes it possible to input and set an arbitrary classification condition.
  • The classification condition may be set in advance, changed by an input operation of the input operation unit, or newly input and set by an input operation of the input operation unit. Various methods of setting the classification condition can thus be applied.
  • The accompanying information may be position information or date/time information.
  • The classification unit may compare the position information or date/time information of each content stored in the first content storage unit with a threshold value, and thereby classify the contents stored in the first content storage unit into one or more content groups.
  • Alternatively, the classification unit may first arrange the contents in time series using the date/time information of each content stored in the first content storage unit, and then classify the arranged contents into one or more content groups using the position information of each content.
  • The threshold value may be set in advance, changed by an input operation of the input operation unit, newly input and set by an input operation of the input operation unit, or changed based on the accompanying information of the contents.
  • Various methods of setting the threshold value can thus be applied.
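A minimal sketch of such threshold-based classification, with hypothetical field names and threshold values (the patent leaves both open), might look like this. It orders the contents by date/time and opens a new group whenever the time gap or the distance between consecutive shooting positions exceeds its threshold:

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def classify_two_stage(contents, time_gap_h=3.0, dist_km=5.0):
    """Arrange the contents in time series, then start a new content
    group whenever the time gap or the distance between consecutive
    shooting positions exceeds its threshold."""
    groups = []
    for c in sorted(contents, key=lambda c: c["hours"]):
        prev = groups[-1][-1] if groups else None
        if (prev is not None
                and c["hours"] - prev["hours"] <= time_gap_h
                and haversine_km(c["pos"], prev["pos"]) <= dist_km):
            groups[-1].append(c)
        else:
            groups.append([c])
    return groups

photos = [
    {"name": "p1", "hours": 0.0, "pos": (35.68, 139.76)},  # around Tokyo
    {"name": "p2", "hours": 1.0, "pos": (35.69, 139.77)},  # nearby, soon after
    {"name": "p3", "hours": 2.0, "pos": (34.69, 135.50)},  # far away (Osaka)
    {"name": "p4", "hours": 9.0, "pos": (34.70, 135.51)},  # same area, much later
]
grouped = classify_two_stage(photos)
```

With these thresholds, p1 and p2 form one group, while p3 (a large jump in position) and p4 (a large jump in time) each open a new group.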
  • The terminal device may transmit the accompanying information of each content stored in the first content storage unit to the server device, and the server device may derive a classification condition based on the accompanying information of each content received from the terminal device and transmit this classification condition to the terminal device.
  • The content transmitted from the server device to the terminal device may include an address on the Internet. In response to an input operation on the terminal device, the terminal device may send the address to the server device or to another server device; the server device that received the address then collects information based on the address, sends the information back to the terminal device, and the terminal device displays the information.
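This address-forwarding mechanism can be sketched as follows; the address, the information store, and all function names are hypothetical stand-ins for the network exchange the text describes:

```python
def collect_information(address, reachable):
    """Server side: collect the information stored at the received
    address.  `reachable` stands in for whatever the server can access."""
    return reachable.get(address)

def on_content_operated(content, send_address):
    """Terminal side: when the user operates on a content item that
    carries an Internet address, forward the address and return the
    information that comes back for display."""
    address = content.get("address")
    return send_address(address) if address else None

# Hypothetical address and information store.
reachable = {"http://example.com/works/42": "Details about work 42"}
content = {"name": "photo.jpg", "address": "http://example.com/works/42"}
shown = on_content_operated(content,
                            lambda a: collect_information(a, reachable))
```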
  • Alternatively, the content output system of the present invention is a content output system that performs information communication between a terminal device and a server device via a network, wherein the server device includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a communication unit that transmits the plurality of contents stored in the first content storage unit to the terminal device, receives a search condition from the terminal device as a response to the transmission of the contents, and transmits content meeting the search condition to the terminal device; and a search unit that searches for content meeting the received search condition from among the plurality of contents stored in the second content storage unit.
  • The terminal device includes: a classification unit that classifies the plurality of contents received from the server device (i.e., the contents of the first content storage unit) into one or more content groups based on a classification condition; a search condition generation unit that, for each content group, generates a search condition based on the accompanying information of the contents classified into that group; a communication unit that transmits the search condition generated for each content group to the server device and receives, as a response, content corresponding to the search condition from the server device; and an output unit that, for each content group, outputs both the content classified into that group and the content received from the server device.
  • In this configuration, both the first content storage unit and the second content storage unit exist in the server device on the network, and the plurality of contents stored in the first content storage unit of the server device are classified by the terminal device, such as a personal computer, into one or more content groups based on the classification condition. For each content group, a search condition is generated based on the accompanying information of the contents classified into that group, and content corresponding to the search condition can be retrieved from the contents stored in the second content storage unit of the server device.
  • Then, for each content group, the content obtained from the classification condition and the content obtained from the search condition can be output together.
  • That is, if the classification condition is set appropriately on the personal computer side, the plurality of contents stored in the first content storage unit of the server device are classified into one or more content groups based on that condition. Then, for each content group, a search condition is generated based on the accompanying information of the contents classified into the group, content matching the search condition is retrieved from the second content storage unit on the server device side, and both the content classified into the group and the content obtained from the search condition are output together. Therefore, simply by setting the classification condition on the personal computer side, mutually related contents are selected between the first and second content storage units of the server device and output together, so that the user can efficiently search for useful content from a large amount of content stored in a server device or the like on the network and use it on the terminal device.
  • Further, the content output system of the present invention may be a content output system that performs information communication between a terminal device and a server device via a network, wherein the server device includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each content group, generates a search condition based on the accompanying information of the contents classified into that group; and a search unit that, for each content group, searches the plurality of contents stored in the second content storage unit for content corresponding to the generated search condition.
  • In this configuration, the first and second content storage units exist in the server device on the network, and the server device classifies the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. The server device also generates a search condition for each content group based on the accompanying information of the contents classified into the group, and can retrieve content corresponding to the search condition from the contents stored in the second content storage unit. Then, for each content group, the content classified into the group and the content obtained from the search condition are transmitted from the server device to a terminal device such as a personal computer, and the terminal device outputs the contents.
  • Thus, mutually related contents are selected between the first and second content storage units of the server device and output on the terminal device, so that the user can efficiently search for useful content from the large amount of content stored in the server device and use it on the terminal device.
  • The server device of the present invention is a server device that performs information communication with a terminal device through a network, and includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each content group, generates a search condition based on the accompanying information of the contents classified into that group; a search unit that, for each content group, searches the plurality of contents stored in the second content storage unit for content corresponding to the generated search condition; and a communication unit that, for each content group, transmits both the content classified into that group and the content found by the search unit to the terminal device.
  • With this configuration, the server device can classify the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. The server device also generates a search condition for each content group based on the accompanying information of the contents classified into the group, retrieves content corresponding to the search condition from the contents stored in the second content storage unit, and can transmit both the content classified into the group and the content obtained from the search condition to the terminal device through the network. That is, according to the server device of the present invention, mutually related contents are selected between the first and second content storage units and transmitted to the terminal device, so that the user can efficiently search for useful content from a large amount of content stored in a server device or the like on the network and use it on the terminal device.
  • The content output device of the present invention includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each content group, generates a search condition based on the accompanying information of the contents classified into that group; a search unit that, for each content group, searches the plurality of contents stored in the second content storage unit for content corresponding to the generated search condition; and an output unit that, for each content group, outputs both the content classified into that group and the content found by the search unit.
  • With this configuration, the content output device can classify the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. The device also generates a search condition for each content group based on the accompanying information of the contents classified into the group, retrieves content corresponding to the search condition from the contents stored in the second content storage unit, and can output both together on the output unit. That is, according to the content output device of the present invention, mutually related contents are selected between the first and second content storage units and output on the output unit, so that the user can efficiently search for and use useful content from the large amount of content stored in the device.
  • The content output method of the present invention is a method for outputting content, comprising: a first content storage step of storing a plurality of contents; a second content storage step of storing a plurality of contents; a classification step of classifying the plurality of contents stored in the first content storage step into one or more content groups based on a classification condition; a search condition generation step of generating, for each content group, a search condition based on the accompanying information of the contents classified into that group; a search step of searching, for each content group, the plurality of contents stored in the second content storage step for content corresponding to the generated search condition; and an output step of outputting, for each content group, both the content classified into that group and the content found in the search step.
  • With this method, the plurality of contents stored in the first content storage step can be classified into one or more content groups based on the classification condition. For each content group, a search condition is generated based on the accompanying information of the contents classified into the group, content corresponding to the search condition is retrieved from the contents stored in the second content storage step, and the content classified into the group and the content obtained from the search condition can be output together. That is, according to the content output method of the present invention, mutually related contents are selected from the plurality of contents stored in the first content storage step and the plurality of contents stored in the second content storage step, and the mutually related contents are output. Therefore, by causing a terminal device such as a computer to execute this content output method, a user of the terminal device can efficiently search for and use useful content from among the large amount of content stored in a server device on the network.
  • Such a content output method can be realized as a content output program that causes a computer to execute each step, and this content output program can be provided recorded on a computer-readable recording medium.
  • The computer can implement the present invention by reading the program from the recording medium, or by receiving the program through a communication network, and executing it.
  • Moreover, a plurality of processes can be distributed among a plurality of terminals, so the program can be applied not only to a single terminal such as a computer but also to a system.
  • As described above, the present invention can provide a content output system, a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program that are capable of efficiently searching for and using useful content from among a large amount of content stored in a server device or the like on a network.
  • FIG. 1 is a block diagram showing an embodiment of the content output system of the present invention.
  • FIG. 2 is a flowchart showing content search and output processing in the terminal device of FIG. 1.
  • FIG. 3 is a diagram illustrating a list of accompanying information of photographic images displayed on the screen of the terminal device of FIG. 1.
  • FIG. 4 is a diagram illustrating a content group containing a photographic image whose shooting position falls within a wide area.
  • FIG. 5 is a flowchart showing processing for classifying photographic images into content groups in the terminal device of FIG. 1.
  • FIG. 6 is a diagram illustrating search conditions sent from the terminal device of FIG. 1 to the server device.
  • FIG. 7 is a diagram exemplifying accompanying information of a photographic image retrieved by the server device of FIG. 1.
  • FIG. 8 is a diagram showing an example of a display form of content on the screen of the terminal device of FIG. 1, in which (a) shows a display example of photographic images stored in the second content storage unit and (b) shows a display example of photographic images stored in the first content storage unit.
  • FIG. 9 is a diagram illustrating another display form of content on the screen of the terminal device of FIG. 1.
  • FIG. 10 is a diagram for explaining an operation for deleting content on the screen of the terminal device of FIG. 1.
  • FIG. 11 is a diagram for explaining a work purchase screen, another content display form in the terminal device of FIG. 1, showing a display example of a photographic image.
  • FIG. 12 is a flowchart showing processing for displaying the work purchase screen of FIG. 11.
  • FIG. 13 is a diagram illustrating still another display form of content in the terminal device of FIG. 1.
  • FIG. 14 is a diagram showing an example in which product images are displayed as content on the screen of the terminal device of FIG. 1, in which (a) shows a display example of product images stored in the first content storage unit and (b) shows a display example of product images stored in the second content storage unit.
  • FIG. 15 is a diagram showing another example in which product images are displayed as content on the screen of the terminal device of FIG. 1.
  • FIG. 16 is a block diagram showing a modification of the terminal device of FIG. 1.
  • FIG. 17 is a block diagram showing a modification of the system of FIG. 1.
  • FIG. 18 is a block diagram showing another modification of the system of FIG. 1.
  • FIG. 19 is a block diagram showing an embodiment of the content output device of the present invention.
  • FIG. 1 shows a block diagram of an embodiment of the content output system of the present invention.
  • The system of this embodiment includes a terminal device 101 on the user side, a server device 201, and a network N that connects the terminal device 101 and the server device 201 to each other.
  • The terminal device 101 includes a personal computer and its peripheral devices, and is provided with an input unit 102, a content management unit 103, a first content storage unit 104, a content classification unit 105, a search condition generation unit 106, a communication unit 107, a display generation unit 108, a display unit 109, and the like.
  • The input unit 102 is a keyboard, a mouse, or the like.
  • The content management unit 103, the content classification unit 105, the search condition generation unit 106, and the display generation unit 108 are realized by the CPU reading out programs from the ROM and executing them.
  • The first content storage unit 104 is a storage device such as a hard disk device.
  • The communication unit 107 is a communication interface or the like, and performs data communication with the server device 201 through the network N.
  • The display unit 109 is a display device such as a liquid crystal display device.
  • The first content storage unit 104 is not limited to a hard disk device, and may be an external storage medium readable by the terminal device 101, such as an SD card, a DVD medium, or a BD medium.
  • The server device 201 includes a computer and its peripheral devices, and is provided with a communication unit 202, a search unit 203, a conversion table storage unit 204, a second content storage unit 205, and the like.
  • The communication unit 202 is a communication interface or the like, and performs data communication with the terminal device 101 through the network N.
  • The search unit 203 is realized by the CPU reading out a program from the ROM and executing it.
  • The conversion table storage unit 204 and the second content storage unit 205 are storage devices such as hard disk devices.
  • A plurality of contents are input and stored in the first content storage unit 104 of the terminal device 101 through an interface (not shown) of the terminal device 101.
  • The contents in the first content storage unit 104 are the personal property of the user of the terminal device 101.
  • In the second content storage unit 205 of the server device 201, a large number of contents are input and stored through the network N and an interface (not shown) of the server device 201.
  • The contents in the second content storage unit 205 are shared material that can be used by an unspecified number of people.
  • These contents are still images such as photographic images, and have accompanying information indicating the shooting date/time, shooting position, and the like.
  • After the power switch of the terminal device 101 is turned on and the terminal device 101 is activated, when content classification is instructed by an input operation of the input unit 102 (step S301), the content management unit 103 responds by searching for the contents in the first content storage unit 104, generating a list of the found contents, and displaying the content list on the screen of the display unit 109 through the display generation unit 108 (step S302).
  • At this time, a list of all the contents in the first content storage unit 104 may be generated, or a list of only some contents may be generated, narrowed down by content creation period, age, folder hierarchy, specific tags, and the like.
  • For example, when the classification of contents is instructed by an input operation of the input unit 102 together with a creation period, age, folder hierarchy, specific tag, or the like, the content management unit 103 selects the contents corresponding to the instructed criteria and generates a list of the selected contents. The list shows, for example, content thumbnails and file names.
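The narrowed list generation of step S302 might be sketched as follows; the field names (`created`, `tags`) and the thumbnail naming scheme are assumptions made purely for illustration:

```python
from datetime import date

def make_content_list(contents, period=None, tag=None):
    """Build the list shown in step S302: optionally narrow the contents
    by creation period and/or a specific tag, then list each selected
    content's file name and a thumbnail reference."""
    selected = []
    for c in contents:
        if period and not (period[0] <= c["created"] <= period[1]):
            continue  # outside the instructed creation period
        if tag and tag not in c["tags"]:
            continue  # does not carry the instructed tag
        selected.append({"file": c["file"], "thumb": c["file"] + ".thumb"})
    return selected

contents = [
    {"file": "a.jpg", "created": date(2009, 4, 1), "tags": ["trip"]},
    {"file": "b.jpg", "created": date(2009, 6, 1), "tags": ["family"]},
]
listing = make_content_list(contents,
                            period=(date(2009, 1, 1), date(2009, 5, 1)),
                            tag="trip")
```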
  • The list of contents is displayed on the screen of the display unit 109 and simultaneously output to the content classification unit 105.
  • The content classification unit 105 refers to the accompanying information of each content in the list, and classifies all the listed contents into one or more content groups based on a classification condition (step S303).
  • This classification condition may be selected by an input operation of the input unit 102 from among a plurality of preset classification conditions, or may be set directly by an input operation of the input unit 102.
  • The search condition generation unit 106 receives the contents classified into each content group from the content classification unit 105 via the content management unit 103 and, for each content group, refers to the accompanying information of the contents classified into that group and generates a search condition from the accompanying information (step S304). At this time, a search condition may be generated each time the content classification unit 105 creates one content group, or each search condition may be generated per content group after the content classification unit 105 has finished classifying all the contents in the first content storage unit 104 and all the content groups have been created.
  • When the content management unit 103 receives the search condition generated by the search condition generation unit 106, it transmits the search condition from the communication unit 107 to the server device 201 through the network N, thereby requesting content corresponding to the search condition from the server device 201 (step S305).
  • In the server device 201, the search condition from the terminal device 101 is received by the communication unit 202 through the network N, and this search condition is input to the search unit 203.
  • The search unit 203 refers to the conversion table in the conversion table storage unit 204 and converts the search condition so that it matches the accompanying information of the contents in the second content storage unit 205. It then refers to the accompanying information of each content in the second content storage unit 205, retrieves the contents whose accompanying information corresponds to the converted search condition, and transmits the retrieved contents from the communication unit 202 to the terminal device 101 through the network N.
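The conversion step can be sketched as a simple key-mapping table; the table entries and the metadata schema below are hypothetical, since the patent does not specify what the conversion table contains:

```python
# Conversion table (conversion table storage unit 204): maps the field
# names used in the terminal's search condition onto the accompanying
# information schema of the second content storage unit.  Hypothetical.
CONVERSION_TABLE = {
    "date": "shooting_date",
    "place": "shooting_location",
}

def convert_condition(condition):
    """Rewrite a received search condition so its keys match the
    accompanying information of the server-side contents."""
    return {CONVERSION_TABLE.get(k, k): v for k, v in condition.items()}

def search(condition, storage):
    """Search unit 203: return contents whose accompanying information
    matches every field of the converted condition."""
    cond = convert_condition(condition)
    return [c for c in storage
            if all(c.get(k) == v for k, v in cond.items())]

# Second content storage unit 205, with hypothetical accompanying info.
storage = [
    {"name": "s1.jpg", "shooting_date": "2009-05-01",
     "shooting_location": "Kyoto"},
    {"name": "s2.jpg", "shooting_date": "2009-05-02",
     "shooting_location": "Kyoto"},
]
hits = search({"date": "2009-05-01", "place": "Kyoto"}, storage)
```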
  • The terminal device 101 waits for a response from the server device 201 (step S306). When contents of the second content storage unit 205 having accompanying information corresponding to the search condition are received by the communication unit 107 via the network N ("yes" in step S306), the received contents are input to the content management unit 103 (step S307).
  • The content management unit 103 outputs the received contents of the second content storage unit 205, together with the contents of the content group classified in step S303, to the display generation unit 108.
  • When these contents are input, the display generation unit 108 generates a display layout for them (step S308), and the contents are displayed and output together on the screen of the display unit 109 in that display layout (step S309).
  • If no response is received from the server device 201 and no content is received from the server device 201 ("no" in step S306), only the contents of the content group classified in step S303 are displayed and output on the screen of the display unit 109 (steps S308 and S309).
  • Thereafter, it is determined whether or not the processing of steps S304 to S309 has been completed for all content groups into which content was classified by the content classification unit 105 (step S310). If it has not been completed ("no" in step S310), the processes of steps S304 to S309 are repeated; if it has been completed ("yes" in step S310), the process of FIG. 2 ends.
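The per-group flow of steps S304 to S309 can be sketched as follows. This is a minimal illustration only, not the patented implementation: the dictionary keys, the `server.search` method, and the stand-in `display` function are all assumptions made for the example.

```python
def center_of(group):
    # Example search condition (step S304): the mean of the group's
    # shooting positions, each stored here as a (lat, lon) tuple.
    lats, lons = zip(*(img["gps"] for img in group))
    return (sum(lats) / len(lats), sum(lons) / len(lons))

def display(contents):
    # Stand-in for the display generation unit 108 / display unit 109
    # (steps S308-S309).
    print([img["id"] for img in contents])

def output_content_groups(groups, server):
    """For each content group: build a search condition, request related
    content from the server (steps S305-S306), and display the group
    together with whatever the server returned (steps S307-S309)."""
    for group in groups:
        condition = {"center": center_of(group)}
        related = server.search(condition)  # empty list when nothing matches
        display(group + related)
```

A server object with a `search` method standing in for the server device 201 is enough to exercise this loop.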
  • The process proceeds to the processing from step S302 onward and the content classification is started. Alternatively, for example when a recording medium such as a flash memory is attached to the terminal device 101, the process may proceed immediately to step S302 and subsequent steps.
  • Further, when a card such as an IC card, or a mobile phone having a card function, is held over the terminal device 101 while it is operating, the card is scanned to determine whether or not specific information is recorded on the card; when it is determined that the information is recorded, the process may proceed to step S302 and subsequent steps. As a result, the user can proceed to the processing from step S302 onward simply by scanning the card.
  • For example, the card is scanned by a reader (such as a card reader or a FeliCa reader), and it is determined whether an address such as a folder (C:\user) on the terminal device 101, a network path (\\10.23.45.67\japan) on the server device 201, or a URL on the Internet (http://pro.../) is recorded on the card; when it is determined that such an address is recorded, the process proceeds to step S302 and subsequent steps.
  • As the specific information recorded on the card, an address indicating the location of the first content storage unit 104 or the second content storage unit 205 may be set; this address is delivered to an application that executes the processing of the flowchart of FIG. 2, and the first content storage unit 104 and the second content storage unit 205 may be accessed based on this address.
  • Alternatively, when the card is scanned, an application for executing the processing of the flowchart of FIG. 2 may be activated, or this fact may be displayed on the screen of the display unit 109 and the application started in response to a subsequent instruction given by an input operation of the input unit 102.
  • Alternatively, the first content storage unit 104 or the second content storage unit 205 may be accessed based on this address, the content stored in these storage units confirmed, and then the application activated.
  • Alternatively, user information is set as the specific information recorded on the card, and a correspondence table between the user information and the addresses of the first content storage unit 104 and the second content storage unit 205 is stored in advance in the memory of the terminal device 101. Then, when the user information is read from the card, the addresses of the first content storage unit 104 and the second content storage unit 205 corresponding to the user information are obtained by referring to the table in the memory, these addresses are delivered to the application that executes the processing of the flowchart of FIG. 2, and the first content storage unit 104 and the second content storage unit 205 may be accessed based on these addresses. Further, a process for creating the correspondence table between the user information and the addresses of the first content storage unit 104 and the second content storage unit 205 may be provided so that the user can create the correspondence table. Furthermore, a password, personal name, membership number, fingerprint, or the like may be set as the user information; in the case of a fingerprint, it is necessary to recognize and identify the fingerprint.
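A minimal sketch of the user-information lookup described above; the table contents (the member ID, local path, and URL) are invented placeholders, not values from this specification.

```python
# Hypothetical correspondence table held in the terminal's memory,
# mapping user information read from a card to the addresses of the
# first and second content storage units.
STORAGE_ADDRESSES = {
    "member-0001": {
        "first": r"C:\user\contents",            # example local address
        "second": "http://example.com/photos/",  # example server address
    },
}

def resolve_addresses(user_info):
    """Return the storage addresses for the user information read from
    the card, or None if the user is not in the table."""
    return STORAGE_ADDRESSES.get(user_info)
```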
  • Further, content classification conditions may also be stored in the recording medium or card; when the card is held over, the classification conditions are read from the card and transferred to the content classification unit 105, and content classification by the content classification unit 105 may then be started.
  • the content classification will be explained.
  • the first content storage unit 104 stores a plurality of photographic images that are personal belongings acquired by the user of the terminal device 101.
  • the second content storage unit 205 stores a large number of photographic images provided by, for example, a photographic service company, which are shared materials that can be used by an unspecified number of people.
  • The content management unit 103 searches for photographic images in the first content storage unit 104, generates a list of the retrieved photographic images, and displays the photographic image list on the screen of the display unit 109 through the display generation unit 108.
  • FIG. 3 illustrates a list of photographic images.
  • other information may be included as accompanying information, and the types of information may be increased or decreased.
  • Although the accompanying information of the photographic image is described here in XML, it may be described in another description language, as binary data, or in a structured data format handled within the program.
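As an illustration only, one record of accompanying information might be parsed as below. The exact element layout is an assumption built from the tag names (`<gps-long>`, `<gps-lat>`, `<time>`, `<comment>`, `<picture id>`) that appear in this description, and the field values are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical accompanying-information record for one photographic image.
record = """
<picture id="4">
  <time>2010-04-01T09:00:00</time>
  <gps-lat>35.0</gps-lat>
  <gps-long>139.0</gps-long>
  <comment>cherry blossoms</comment>
</picture>
"""

elem = ET.fromstring(record)
info = {
    "id": elem.get("id"),
    "time": elem.findtext("time"),
    # Shooting position as a (lat, lon) pair.
    "gps": (float(elem.findtext("gps-lat")), float(elem.findtext("gps-long"))),
    "comment": elem.findtext("comment"),
}
```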
  • the list of photographic images is displayed on the screen of the display unit 109 and simultaneously output to the content classification unit 105.
  • the content classification unit 105 refers to the accompanying information of each photographic image in this list and classifies each photographic image into several content groups based on the classification condition.
  • For example, the content classification unit 105 focuses on the longitude information <gps-long> of the shooting position, which is accompanying information of the photographic image, refers to the longitude information <gps-long> of each photographic image, sets the shooting position of one photographic image as a base point, obtains the photographic images whose shooting positions fall within a certain area around that base point, and classifies the photographic images under a classification condition in which all the photographic images having shooting positions within this certain area form one content group.
  • More specifically, the separation distance between the shooting positions is calculated; for example, the separation distance between the shooting position of the photographic image serving as the base point and that of the photographic image with the identifier <picture id="4"> is 7.5 km.
  • The separation distance of the shooting position from the base point is then compared with a threshold value, and if the separation distance is less than the threshold value, the photographic image is classified into the same content group as the base point. In this way, the separation distance of the shooting position of each photographic image from each base point is obtained, compared with the threshold value, and it is decided whether to classify the photographic image into the same content group as the photographic image at the base point. Thereby, a plurality of content groups can be obtained.
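The position-based classification above can be sketched roughly as follows. The description allows longitude, latitude, or both to be used; this sketch assumes both are available as (lat, lon) pairs and uses a great-circle distance, which the specification itself leaves open.

```python
import math

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def group_by_position(images, threshold_km):
    """Assign each image to the first group whose base point lies within
    threshold_km of the image's shooting position; otherwise start a new
    group with this image's position as a new base point."""
    groups = []  # list of (base_point, [images])
    for img in images:
        pos = img["gps"]  # (lat, lon)
        for base, members in groups:
            if distance_km(base, pos) < threshold_km:
                members.append(img)
                break
        else:
            groups.append((pos, [img]))
    return groups
```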
  • Not only the longitude information <gps-long> but also the latitude information <gps-lat> may be used, or both may be used to determine the separation distance between the shooting positions of the photographic images.
  • the threshold value may be set and stored in advance in the memory of the terminal apparatus 101, or may be changed or set as appropriate by an input operation of the input unit 102 by the user. It is also possible to change the threshold according to the base point, shooting date and time, content type, and the like.
  • Alternatively, the entire shooting range may be obtained from the shooting positions of all the photographic images in the first content storage unit 104, and a threshold value corresponding to the entire shooting range calculated. For example, when the entire shooting range is wide, the threshold value is increased, and when it is narrow, the threshold value is set small. As a result, the number of photographic images included in each content group can be adjusted.
  • Alternatively, the terminal device 101 may inquire of the server device 201 about the threshold value. In this case, the server device 201 includes a data table in which areas and threshold values are associated with each other; the server device 201 obtains the area containing the shooting position received from the terminal device 101, searches the data table for the threshold value corresponding to this area, and transmits the threshold value to the terminal device 101. More specifically, the data table may be one in which a large threshold value is set corresponding to a large area such as Mt.
  • FIG. 4 is a diagram illustrating a content group including a photographic image at a shooting position that falls within a wide area.
  • When classifying the photographic images by shooting date and time, the content classification unit 105 refers to the shooting date/time information <time> of each photographic image in the content list of FIG. 3 and sorts the photographic images in order of shooting date and time. The content classification unit 105 then sequentially selects each photographic image from the top, calculates the shooting date/time difference between one photographic image and the next, and determines whether this difference is less than a threshold value. If it is less than the threshold, the next photographic image is included in the content group of the one photographic image; if it is greater than or equal to the threshold, a new content group is set and the next photographic image is included in that new content group. Thereby, photographic images with short shooting time intervals can be classified into one content group.
  • Alternatively, the next photographic image is selected, the time difference from a photographic image that does not yet belong to any content group is calculated, and if this time difference is less than the threshold, these photographic images are grouped into a new content group.
  • This threshold value can be set by various methods, similarly to the threshold value compared with the separation distance of the shooting positions described above. For example, the maximum time difference may be calculated over all the photographic images in the first content storage unit 104, with the threshold value set large when this maximum value is large and small when it is small. As a result, the number of photographic images included in a content group can be adjusted.
  • Further, content groups can be created in units of one day, one week, one month, or one year, and even by season or by morning and night.
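The date/time-based classification above amounts to sorting by shooting time and cutting a new group at every gap at or above the threshold. A sketch follows; the dictionary shape and the use of `datetime` objects are assumptions for the example.

```python
def group_by_time(images, gap_threshold_sec):
    """Sort images by shooting date/time, then start a new content group
    whenever the gap from the previous image reaches the threshold."""
    ordered = sorted(images, key=lambda im: im["time"])
    groups = []
    for img in ordered:
        if groups and (img["time"] - groups[-1][-1]["time"]).total_seconds() < gap_threshold_sec:
            groups[-1].append(img)   # short interval: same content group
        else:
            groups.append([img])     # long interval: new content group
    return groups
```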
  • Alternatively, the date/time information may be transmitted from the terminal device 101 to the server device 201, a list of contents whose dates and times are close to that of the date/time information created on the server device 201 side, and the list returned to the terminal device 101.
  • photographic images may be classified into content groups by using both the shooting date and the shooting position instead of selectively using the shooting date and the shooting position.
  • the photo images are rearranged in the order of their shooting date / time.
  • Then, the content classification unit 105 sequentially selects each photographic image from the top, obtains the separation distance between the shooting position of one photographic image and that of the next, and determines whether this separation distance is less than the threshold value. If it is less than the threshold, the next photographic image is classified into the same content group as the one photographic image; if it is greater than or equal to the threshold, a new content group is set and the next photographic image is classified into this new content group.
  • the photographic images can be classified into content groups using the shooting position and the shooting date / time.
  • When the content classification unit 105 receives the list, it refers to the accompanying information of each photographic image, arranges the photographic images listed in the list in order of shooting date and time, and then determines, for each photographic image, whether a shooting position is present (step S601). If there is a photographic image accompanied by a shooting position, the shooting position of that photographic image is set as a base point, and shooting date/time information is acquired from the accompanying information of that photographic image (step S602).
  • Next, the content classification unit 105 acquires shooting date/time information from the accompanying information of the photographic image whose shooting date/time is one position before the base-point photographic image (step S603), obtains the time difference between the shooting dates and times of these photographic images, and determines whether this time difference is less than a threshold value (step S604). If it is less than the threshold value ("yes" in step S604), the process returns to step S603, and shooting date/time information is acquired from the accompanying information of the photographic image whose shooting date/time is one position earlier still (step S603).
  • The time difference between the shooting date/time of this photographic image and that of the photographic image whose shooting date/time information was acquired in the immediately preceding step S603 is obtained, and it is determined whether this time difference is less than the threshold value (step S604). Thereafter, similarly, while the time difference is less than the threshold value ("yes" in step S604), the process returns to step S603, shooting date/time information is acquired from the accompanying information of the photographic image whose shooting date/time is one position earlier (step S603), the time difference between the shooting dates and times of the two consecutively arranged photographic images is obtained, and it is determined whether this time difference is less than the threshold value (step S604).
  • When the time difference is equal to or greater than the threshold value ("no" in step S604), the photographic image whose shooting date/time is one position after the photographic image whose shooting date/time information was acquired in the immediately preceding step S603 is set as the first photographic image of the content group (step S605). In this way, going back in shooting date/time order, the photographic images are sequentially included in one content group.
  • Next, the content classification unit 105 returns to the base-point photographic image (step S606), acquires shooting date/time information from the accompanying information of the photographic image whose shooting date/time is one position after that photographic image (step S607), obtains the time difference between the shooting dates and times of these photographic images, and determines whether this time difference is less than the threshold value (step S608). If it is less than the threshold ("yes" in step S608), the process returns to step S607, and shooting date/time information is acquired from the accompanying information of the photographic image with the next shooting date/time (step S607).
  • The time difference between the shooting date/time of this photographic image and that of the photographic image whose shooting date/time information was acquired in the immediately preceding step S607 is obtained, and it is determined whether this time difference is less than the threshold value (step S608). Thereafter, similarly, while the time difference is less than the threshold ("yes" in step S608), the process returns to step S607, shooting date/time information is acquired from the accompanying information of the photographic image with the next shooting date/time (step S607), the time difference between the shooting dates and times of two adjacent photographic images on the list is obtained, and it is determined whether this time difference is less than the threshold value (step S608).
  • When the time difference is equal to or greater than the threshold ("no" in step S608), the photographic image whose shooting date/time is one position before the photographic image whose shooting date/time information was acquired in the immediately preceding step S607 is set as the last photographic image of the content group (step S609).
  • In this way, advancing through the photographic images in shooting date/time order, they are sequentially included in one content group. The first photographic image and the last photographic image are thus obtained, and the photographic images from the first to the last are set as one content group (step S610).
  • In the above processing, the photographic images listed in the list are arranged in order of shooting date and time. However, if in the processing of steps S601 to S605 the shooting date/time information of the earlier photographic images can be acquired in order of closeness to the shooting date/time of the base-point photographic image, and in the processing of steps S606 to S609 the shooting date/time information of the later photographic images can likewise be acquired in order of closeness to the shooting date/time of the base-point photographic image, the step of arranging the listed photographic images in order of shooting date and time may be omitted.
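The base-point expansion of steps S601 to S610 can be sketched as follows, assuming a list already sorted by shooting date/time: walk backwards while consecutive gaps stay under the threshold to find the first image of the group (steps S603 to S605), then forwards to find the last (steps S607 to S609). The data layout is an assumption for the example.

```python
def group_around_base(images, base_index, gap_threshold_sec):
    """images: list of dicts with a datetime under 'time', sorted by
    shooting date/time. Returns the slice forming one content group."""
    first = base_index
    while (first > 0
           and (images[first]["time"] - images[first - 1]["time"]).total_seconds()
               < gap_threshold_sec):
        first -= 1  # steps S603-S604: keep going back while gaps are small
    last = base_index
    while (last < len(images) - 1
           and (images[last + 1]["time"] - images[last]["time"]).total_seconds()
               < gap_threshold_sec):
        last += 1   # steps S607-S608: keep going forward while gaps are small
    return images[first:last + 1]  # step S610: first through last image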
  • a classification condition may be used in which photographic images taken within a certain time centered on the photographing date and time of the base photographic image are used as one content group. For example, if the shooting date and time of the base photographic image is AM 9:00 and the fixed time is 2 hours, all the photographic images whose shooting date and time are in the range of AM 8:00 to AM 10:00 are combined into one content group. Put together.
  • the photographic image can be classified into a content group using these comments or marks.
  • the content classification unit 105 refers to the accompanying information of each photographic image in the list of photographic images, determines the presence or absence of comment information ⁇ comment> for each photographic image, and extracts only photographic images having the comment information ⁇ comment>. Into one content group.
  • each photographic image in the order up to this photographic image is collected into one content group, and another photographic image having comment information ⁇ comment> from the next photographic image.
  • Each photographic image in the order up to is collected into another content group.
  • the photographic images in the order one order before this photographic image are collected into one content group, and the comment information ⁇ comment> is obtained from this photographic image.
  • Each photographic image in the order up to one prior to another photographic image may be combined into another content group.
  • each photographic image can be grouped into a content group using a mark as an index.
  • the user may manually classify.
  • the manual classification method is, for example, that thumbnails of photographic images are displayed on the screen, and when the user selects a photographic image belonging to each content group or displays a slide show of photographic images, the user selects each content group. For example, it is possible to sequentially input and specify photographic images belonging to.
  • the classification condition may be set by default, or may be input or changed by an input operation of the input unit 102.
  • a plurality of types of classification conditions are set in advance, these classification conditions are displayed on the screen of the display unit 109, and one of the classification conditions on the screen is selected by an input operation of the input unit 102. It doesn't matter.
  • the process proceeds to step S303 after the classification condition is input or selected, or the process proceeds to step S303 unless the classification condition is input or selected. May be prohibited.
  • the search condition generation unit 106 uses the accompanying information of the photographic images included in the content group to search for photographic images related to the content group from among the photographic images in the second content storage unit 205 of the server device 201. Generate search conditions for. For example, each shooting position is acquired from the accompanying information of each photo image of the content group in FIG. 4, and the center position of these shooting positions is obtained as a search condition. This search condition is transmitted from the terminal device 101 to the server device 201.
  • the search condition from the terminal device 101 is received by the communication unit 202 and input to the search unit 203.
  • the search unit 203 refers to the correspondence table in the conversion table storage unit 204 and searches for the identifier of the photographic image corresponding to the center position as the search condition.
  • the conversion table storage unit 204 stores in advance a correspondence table in which areas including a large number of positions are associated with identifiers of photographic images. With reference to the correspondence table, the conversion table storage unit 204 corresponds to an area including a specified position. The identifier of a photographic image can be searched.
  • the search unit 203 searches for an identifier corresponding to the center position that is the search condition
  • the search unit 203 refers to the accompanying information of each photographic image in the second content storage unit 205 and searches for a photographic image including the searched identifier in the accompanying information. Search for.
  • the retrieved photographic image is transmitted from the server device 201 to the terminal device 101.
  • a plurality of photographic images corresponding to the search condition exist in the second content storage unit 205, all of these photographic images may be transmitted from the server device 201 to the terminal device 101, or the server device
  • the upper limit number of photographic images may be set on the 201 side, and photographic images equal to or smaller than the upper limit number may be transmitted to the terminal device 101.
  • the place name corresponding to the search condition may be searched on the server apparatus 201 side, and the place name may be added to the accompanying information of the photograph image, and then the photograph image may be transmitted to the terminal apparatus 101.
  • a place name including the center position may be set instead of the center position of the shooting position of each photo image of the content group.
  • a data table in which an area including a large number of positions and a place name are associated is provided on the terminal device 101 side, and a place name corresponding to the area including the center position is searched from the data table by the search condition generation unit 106.
  • the location name is transmitted from the terminal device 101 to the server device 201 as a search condition.
  • the search unit 203 searches the place name of the search condition from the accompanying information of each photographic image in the second content storage unit 205, and obtains a photographic image including the place name in the accompanying information.
  • the photographic image is transmitted to the terminal device 101.
  • the search condition generation unit 106 extracts all the shooting positions from the accompanying information of each photographic image of the content group, and generates a list of the shooting positions of each photographic image as shown in FIG. This list is transmitted to the server apparatus 201 as a search condition.
  • the shooting position of each photographic image in the list is compared with the shooting position of the accompanying information of each photographic image in the second content storage unit 205, and the photographic image at the shooting position that matches the list side is The photographic image stored in the second content storage unit 205 is searched for and acquired, and the acquired photographic image is transmitted to the terminal device 101.
  • a photo image of the shooting position within the area centered on the shooting position of the list may be retrieved from the second content storage unit 205. May be distinguished from a photographic image at a shooting position in the area and transmitted to the terminal device 101, and these photographic images may be displayed separately on the terminal device 101 side.
  • a match tag ⁇ match> indicating whether or not the photographing positions completely match is added as accompanying information, and such distinction display is performed based on the match tag ⁇ match>.
  • a photographic image retrieved from photographic images stored in the second content storage unit 205 is received by the communication unit 107, and the received photographic image in the second content storage unit 205 is received as a content management unit. 103.
  • the content management unit 103 receives the input photographic image in the second content storage unit 205 and the photographic image of the content group previously classified by the content classification unit 105, that is, the photographic image in the first content storage unit 104.
  • the data is output to the display generation unit 108.
  • the display generation unit 108 sets the display order and display layout of these photographic images, and displays these photographic images on the screen of the display unit 109.
  • the photographic image P1 in the second content storage unit 205 is displayed on the screen of the display unit 109, and then in the first content storage unit 104 as shown in FIG. 8B.
  • the photographic images P2 and P3 are sequentially displayed.
  • the place name 11 of the search condition is displayed together with the photographic image P1 in the second content storage unit 205, or the photographic image in the second content storage unit 205 of the server device 201.
  • a mark 12 indicating this, or a separation distance 13 of the shooting position of each photographic image may be displayed. This facilitates the distinction between the photograph image taken by the photographer and the photograph image provided by the photograph service company.
  • the photographic image P11 in the second content storage unit 205 and the photographic image P12 in the first content storage unit 104 may be laid out and displayed on the screen of the display unit 109.
  • the photographic images acquired from the first content storage unit 104 and the second content storage unit 205 are automatically selected or searched, they are not necessarily preferable for the user, and are not intended by the user. It may not be suitable. For this reason, an unintended photographic image can be selected by an input operation of the input unit 102, and this photographic image can be deleted from the screen of the display unit 109.
  • the photographic image P21 when the photographic image P21 is displayed on the screen of the display unit 109, the photographic image P21 on the screen is selected by the input operation of the input unit 102, and the photographic image P21 is deleted. Instruct. In response to this, the content management unit 103 deletes the selected photographic image from the photographic images acquired from the first content storage unit 104 and the second content storage unit 205.
  • information such as a URL for accessing the work purchase screen of the photographic image provider may be included.
  • a button B1 or the like for starting the browser is displayed on the screen of the display unit 109, and when the button B1 on the screen is operated by an input operation of the input unit 102,
  • the content management unit 103 activates a browser or the like, calls a work purchase screen corresponding to the URL via the Internet, and displays a work purchase screen as shown in FIG. 11B on the screen of the display unit 109. To do. This makes it possible to widely introduce and sell photographic images taken by the photographic image provider.
  • step S701 in FIG. 12A when the button B1 on the screen is operated by an input operation of the input unit 102 (step S701 in FIG. 12A), in response to this, the content management unit 103 activates a browser or the like. Then, a request message including information such as a URL for accessing the work purchase screen is created, and this request message is transmitted to the server apparatus 201 through the network N (step S702 in FIG. 12A). The server apparatus 201 waits for a response from the server apparatus 201 (step S703 in FIG. 12A).
  • the server apparatus 201 Upon receiving the request message (step S721 “yes” in FIG. 12B), the server apparatus 201 analyzes the request message and extracts information such as a URL included in the message (FIG. 12B). Step S722)), using the information such as the URL, the contents necessary for the work purchase screen are collected, and a response message including the contents necessary for the work purchase screen is created (FIG. 12B). Step S723), this response message is returned to the terminal device 101 through the network N (Step S724 in FIG. 12B).
  • Step S704 in FIG. 12A When the terminal device 101 receives the response message (step S704 in FIG. 12A), the response message is analyzed, and the content necessary for the work purchase screen included in the message is extracted (FIG. 12 ( Step S705 of a), a work purchase screen is created using this content or the like (step S706 of FIG. 12A), and this work purchase screen is displayed on the screen of the display unit 109 (FIG. 12A). Step S707).
  • the request message may be simply transmitted according to a protocol such as HTTP without activating the browser. Also in this case, it is possible to receive the work purchase screen or the content necessary for the work purchase screen as a response to this request message.
  • the server device that receives and responds to the request message from the terminal device 101 is not specified by the server device 201 and may be any server device on the Internet.
  • the second content storage unit 205 of the server device 201 when content in the first content storage unit 104 of the terminal device 101 is output, other content related to this content is stored in the second content storage unit 205 of the server device 201. It is possible to search from the stored contents and output the contents together on the terminal device 101 side. For example, when a photographic image of a trip taken by the user is stored in the first content storage unit 104 and a photographic image provided by a photo service company is stored in the second content storage unit 205, the user can use the second content storage unit. Even without selecting a photographic image in 205, a photographic image related to the photographic image in the first content storage unit 104 is selected from the contents stored in the second content storage unit 205 and taken by the user. Since both the photograph image and the photograph image of the photograph service company are displayed and output, it is possible to display a high-quality photograph image or a slide show.
  • Information such as the place name and the URL for accessing the work purchase screen of the photographic image provider is set as accompanying information, so the place name of the shooting position of the photographic image can be displayed, and the work purchase screen can be quickly called up to promote the purchase of works.
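As a rough sketch of how such a related-photo search might work — the matching rule and data layout here are assumptions for illustration, not taken from the patent — photographs in the second content storage unit could be filtered by distance from the shooting position recorded in the user's photo:

```python
import math

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points in km (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def related_photos(user_photo, provider_photos, max_km=10.0):
    """Select provider photos shot near the user's photo: a metadata
    filter standing in for the search of the second content storage unit."""
    return [p for p in provider_photos
            if distance_km(user_photo["pos"], p["pos"]) <= max_km]

# Hypothetical shooting positions, for illustration only.
user = {"id": "user1", "pos": (35.0116, 135.7681)}       # Kyoto
catalog = [
    {"id": "pro1", "pos": (35.0394, 135.7292)},          # a few km away
    {"id": "pro2", "pos": (43.0618, 141.3545)},          # Sapporo, far away
]
print([p["id"] for p in related_photos(user, catalog)])  # → ['pro1']
```

In the system described above this filtering runs on the server device 201; the selected provider photographs are then returned and displayed together with the user's own photographs.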
  • Photographic images and audio information such as BGM are stored in the second content storage unit 205.
  • These contents can be searched for and transmitted from the server device 201 to the terminal device 101.
  • The terminal device 101 can then reproduce the BGM or other audio when displaying a photographic image or a slide show.
  • Information such as the URL for accessing the work purchase screen of the provider of the BGM or the like is provided as accompanying information in the audio information, and, as shown in the figure, a button B2 and the like for starting the browser are displayed on the screen of the display unit 109.
  • The content management unit 103 may then activate a browser or the like and call up the work purchase screen corresponding to the URL through the Internet.
  • Artists such as semi-professionals and independent artists have lower public recognition of their music than professional artists, and can therefore promote and advertise a wide range of works in cooperation with such a photographic image providing service.
  • The system of this embodiment can be applied to various other information services.
  • For example, the system of this embodiment can be used for EC (electronic commerce) services.
  • The user's purchased product images and their accompanying information are stored in the first content storage unit 104, and the product images are classified into one or a plurality of content groups by the content classification unit 105 based on the accompanying information.
  • The search condition generation unit 106 generates a search condition from the accompanying information of the product images of each content group and transmits this search condition to the server device 201.
  • In the server device 201, product images corresponding to the search condition are searched from among the product images stored in the second content storage unit 205 and transmitted to the terminal device 101.
  • In the terminal device 101, as shown in FIG. 14(a), the product images in the first content storage unit 104 are displayed on the screen of the display unit 109 as already purchased products, and then, as shown in FIG. 14(b), the product images searched from among those stored in the second content storage unit 205 are displayed as recommended products. Alternatively, as shown in FIG. 15, the purchased product images and the recommended product images are laid out together and displayed on the screen of the display unit 109.
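A minimal sketch of this EC flow follows. The field names and the category-based search condition are assumptions for illustration; the patent only specifies that the search condition is derived from the accompanying information of the purchased products:

```python
def recommend(purchased, catalog):
    """Sketch of the EC flow: treat the categories found in the
    purchase history as the search condition, and return catalog items
    in those categories that have not yet been purchased."""
    wanted = {p["category"] for p in purchased}  # search condition
    owned = {p["id"] for p in purchased}
    return [c for c in catalog
            if c["category"] in wanted and c["id"] not in owned]

# Hypothetical purchase history and server-side catalog.
purchased = [{"id": "cam1", "category": "camera"},
             {"id": "bag1", "category": "bag"}]
catalog = [{"id": "cam2", "category": "camera"},
           {"id": "pen1", "category": "stationery"},
           {"id": "cam1", "category": "camera"}]
print([c["id"] for c in recommend(purchased, catalog)])  # → ['cam2']
```

The purchased items would be shown as in FIG. 14(a) and the returned items as the recommendations of FIG. 14(b) or FIG. 15.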
  • FIG. 16 is a block diagram showing a modification of the terminal device 101 of FIG. 1.
  • In this modification, a first content storage unit 111 that stores the user's personal content is provided in server storage or the like on the network N.
  • The terminal device 101 is provided with a first content acquisition unit 112 for accessing the first content storage unit 111 on the network N.
  • A second content storage unit 113 that stores content usable by an unspecified number of people is also provided in server storage or the like on the network N.
  • The first content acquisition unit 112 of the terminal device 101 accesses the first content storage unit 111 on the network N via the communication unit 107 and reads and acquires the content from the first content storage unit 111. Thereafter, the same processing as in the system of FIG. 1 is performed: the content in the first content storage unit 111 is classified into one or a plurality of content groups, and a search condition is generated for each content group.
  • Based on the search conditions, the content in the second content storage unit 113 is searched through the network N and taken into the terminal device 101, and the content in the first content storage unit 111 and the content in the second content storage unit 113 are output together.
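The patent does not fix the exact form of the per-group search condition. One plausible sketch (field names assumed for illustration) derives a date range and a position bounding box from a content group's accompanying information:

```python
def search_condition(group):
    """Derive a search condition from a content group's accompanying
    information: the date range plus a latitude/longitude bounding box
    covering the group's contents."""
    dates = [c["date"] for c in group]
    lats = [c["pos"][0] for c in group]
    lons = [c["pos"][1] for c in group]
    return {
        "date_from": min(dates), "date_to": max(dates),
        "lat": (min(lats), max(lats)),
        "lon": (min(lons), max(lons)),
    }

# Hypothetical content group (two photos from the same trip).
group = [{"date": "2009-05-01", "pos": (35.01, 135.76)},
         {"date": "2009-05-03", "pos": (35.04, 135.73)}]
print(search_condition(group)["date_to"])  # → 2009-05-03
```

Such a condition would be sent over the network N, and contents in the second content storage unit whose accompanying information falls inside the range would be returned.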
  • FIG. 17 shows a modification of the content output system of FIG. 1.
  • In this modification, the server device 201 is provided with the first content storage unit 104, and the terminal device 101 is provided with the first content acquisition unit 112 for accessing the first content storage unit 104 on the network N.
  • The first content acquisition unit 112 of the terminal device 101 accesses the first content storage unit 104 of the server device 201 via the communication unit 107 and reads and acquires the content from the first content storage unit 104. Thereafter, the same processing as in the system of FIG. 1 is performed: the content in the first content storage unit 104 is classified into one or a plurality of content groups, and a search condition is generated for each content group.
  • Based on the search conditions, the content in the second content storage unit 205 of the server device 201 is searched and taken into the terminal device 101, and the content in the first content storage unit 104 and the content in the second content storage unit 205 are output together.
  • FIG. 18 shows another modification of the content output system of FIG. 1.
  • In this modification, the server device 201 is provided with the first content storage unit 104, the content management unit 103, the content classification unit 105, and the search condition generation unit 106.
  • In the terminal device 101, the control unit 115 transmits a content classification instruction from the communication unit 107 to the server device 201 through the network N.
  • In the server device 201, the content classification instruction is received by the communication unit 202 and input to the content management unit 103. Thereafter, the same processing as in the system of FIG. 1 is performed: the content in the first content storage unit 111 is classified into content groups, search conditions are generated, and the content in the second content storage unit 113 is searched based on the search conditions. Then, for each content group, the content in the first content storage unit 111 and the content in the second content storage unit 113 are read out and returned to the terminal device 101 through the network N.
  • The terminal device 101 receives the plurality of contents for each content group and displays them on the screen of the display unit 109.
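The server-side flow of this modification can be sketched as follows. The tag-based matching here is an assumption for brevity; the patent matches on accompanying information such as position or date/time:

```python
def respond_per_group(groups, second_storage):
    """Server-side sketch: for every content group, pair the group's
    own contents with related contents found in the second content
    storage unit (matched here by a shared 'tag')."""
    response = []
    for group in groups:
        tags = {c["tag"] for c in group}  # search condition for this group
        related = [s for s in second_storage if s["tag"] in tags]
        response.append({"own": group, "related": related})
    return response

# Hypothetical classified groups and server-side storage.
groups = [[{"id": "u1", "tag": "kyoto"}], [{"id": "u2", "tag": "nara"}]]
storage = [{"id": "s1", "tag": "kyoto"}, {"id": "s2", "tag": "osaka"}]
reply = respond_per_group(groups, storage)
print([len(r["related"]) for r in reply])  # → [1, 0]
```

The per-group pairs in the reply correspond to what is returned to the terminal device 101 through the network N and displayed on the display unit 109.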
  • FIG. 19 is a block diagram showing an embodiment of the content output apparatus of the present invention.
  • The content output device 121 of this embodiment is configured by adding a second content storage unit 205 and a conversion table storage unit 204 to the terminal device 101 of FIG. 1.
  • The second content storage unit 205 stores a large number of contents, which are collected through the network N from other terminal devices and server devices.
  • The conversion table storage unit 204 performs the same function as the conversion table storage unit 204 in the server device 201 of FIG. 1.
  • Since such a content output device 121 includes the second content storage unit 205 and the conversion table storage unit 204, there is no need to access an external server device, unlike the terminal device 101 of FIG. 1. When outputting content in the first content storage unit 104, it is possible to search for other content related to this content from among the contents stored in the second content storage unit 205 and output the contents together.
  • Not only photographic images and audio information but also content such as graphics, other still images, and moving images can be handled in the present invention in the same manner as photographic images and audio information.
  • In the above description, the content is a still image such as a photographic image, the position information is the shooting position of the photographic image, and the date/time information is the shooting date/time, but the present invention is not limited to this.
  • The content may be a moving image, music, audio information, or the like, in addition to a still image such as a photographic image.
  • For a still image or a moving image, the output of the content is display.
  • For music or audio, the output of the content is reproduction.
  • The position information of the content may indicate, for example, the position where a still image or moving image such as a photographic image was captured, and the date/time information may indicate the date and time of capture. In such a case, contents whose capture positions or capture dates/times are mutually related are output.
  • Similarly, the position information of the content may indicate, for example, the recording position of music or audio information, and the date/time information may indicate its recording date/time or distribution date/time. In such a case, contents whose recording positions, recording dates/times, or distribution dates/times are mutually related are output.
  • The accompanying information of the content described above may be any information as long as it is attached to the content, and is not limited to date information and position information.
  • The accompanying information may indicate both date information and position information, or only one of them.
  • It is not necessary for the first and second content storage units each to be a single unit; a plurality of each can be provided, and a plurality of types of memory devices may be mixed.
  • For example, a plurality of first content storage units can be provided in one or both of the terminal device and the server device, or distributed on the network; the same applies to the second content storage units.
  • A TV image signal may be output from the terminal device 101 to a TV, and display content similar to that on the screen of the display unit 109 may be displayed on the TV screen.
  • Alternatively, the function of the terminal device 101 can be incorporated into the TV set. In this case, content can be viewed and EC services can be received in the same manner as viewing TV programs.
  • An image signal for another type of display device may also be output, and display content similar to that on the screen of the display unit 109 may be displayed on the screen of that display device.
  • Other types of display devices include portable terminals.
  • The present invention is not limited to a content output device or a content output system, and also includes a content output method, a content output program for causing a computer to execute each step of the content output method, and a recording medium storing the content output program.
  • The computer may be any device that can execute the program.
  • The computer can implement the present invention by reading the program from a recording medium or receiving it through a communication network, and executing it.
  • A plurality of processes can also be distributed to a plurality of terminals, so the program can be applied not only to a single terminal such as a computer but also to a system.
  • The present invention can be applied to a personal computer or the like that displays or reproduces content composed of images, sounds, and the like.


Abstract

Disclosed is a content output system that efficiently retrieves useful contents from among a large quantity of contents stored in a server device or the like on a network and uses the retrieved contents. Specifically disclosed is a content output system in which information communication is performed between a terminal device (101) and a server device (201) via a network. When content stored in a first content storage unit (104) of the terminal device (101) is output, other contents related to that content are retrieved from among the contents stored in a second content storage unit (205) of the server device (201), and the retrieved contents are output together on the terminal device (101) side. This eliminates the user's operations of browsing the large amount of content stored in the second content storage unit (205) of the server device (201) and selecting contents.

Description

[Title of the invention determined by the ISA under Rule 37.2] Content output system
The present invention relates to a content output system that displays or reproduces content composed of images, sounds, and the like, as well as a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program.
In recent years, with the spread of digital cameras and digital video cameras, users have become able to easily create content consisting of still images such as photographic images, moving images, music, and so on. Users can also store the content they have created on a recording medium at home or on a network, and easily access and enjoy it at any time. For example, photographic images and videos taken by the user while traveling can be combined with favorite music and enjoyed as a slide show.
On the other hand, a large amount of content such as still images, moving images, and music exists on the Internet, and users can access, acquire, and view this large amount of content at any time via the Internet.
A method for enjoying content owned by the user in combination with content on the Internet is described in Patent Document 1. In Patent Document 1, a photographic image owned by the user and desired photographic images selected by the user from among a large number of photographic images on the Internet are arranged in combination, and these photographic images are printed out.
JP 2007-323614 A (published December 13, 2007)
However, in Patent Document 1, the user must specify all of the contents and their arrangement positions, and the input operation required for this is cumbersome. In particular, when there is a large amount of content on the server, the desired content must be found and selected from this large amount of content, which is an extremely large burden for the user.
The present invention has therefore been made in view of the above conventional problems, and an object of the present invention is to provide a content output system, a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program, capable of efficiently searching for and using useful content from among a large amount of content stored in a server device or the like on a network.
In order to solve the above problems, a content output system of the present invention is a content output system that performs information communication between a terminal device and a server device through a network, wherein the terminal device includes: a first content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or a plurality of content groups based on a classification condition; a search condition generation unit that, for each of the content groups, generates a search condition based on the accompanying information of the contents classified into that content group; a communication unit that, for each of the content groups, transmits the search condition generated by the search condition generation unit to the server device and receives from the server device, as a response to the transmission of the search condition, contents corresponding to the search condition; and an output unit that, for each of the content groups, outputs together the contents classified into that content group and the contents received from the server device; and the server device includes: a second content storage unit that stores a plurality of contents; a communication unit that receives the search condition from the terminal device and transmits contents corresponding to the search condition to the terminal device; and a search unit that searches, from among the plurality of contents stored in the second content storage unit, for contents corresponding to the received search condition.
In this configuration, the first content storage unit exists in the user's own terminal device such as a personal computer, and the second content storage unit exists in a server device on the network. The plurality of contents stored in the user's personal computer are classified into one or a plurality of content groups based on the classification condition; for each content group, a search condition is generated based on the accompanying information of the contents classified into that group, and contents corresponding to the search condition can be searched from among the contents stored in the server device. Then, for each content group, the contents obtained from the classification condition and the contents obtained from the search condition can be output together. That is, if the classification condition is appropriately set on the personal computer side, the plurality of contents stored in the first content storage unit are classified into one or a plurality of content groups based on the classification condition. Then, for each content group, a search condition is generated based on the accompanying information of the contents classified into that group, contents matching the search condition are searched on the server device side, and the contents classified into that group and the contents obtained from the search condition are output together. Therefore, simply by setting the classification condition on the personal computer side, mutually related contents are selected on both the personal computer side and the server device side and output, so the user can efficiently search for useful contents from among the large amount of contents stored in a server device or the like on the network and use them on the terminal device.
In addition, the output unit may set a display layout for the contents classified into the content group and the contents received from the server device, and display and output these contents in that display layout.
According to this configuration, a preferable display of the contents can be realized.
Furthermore, the output unit may output the contents classified into the content group and the contents received from the server device in a mutually identifiable manner.
According to this configuration, the contents classified into the content group, that is, the contents stored in the first content storage unit, can be distinguished from the contents received from the server device.
In addition, the output unit may output the contents classified into the content group and the contents received from the server device together with the accompanying information of each of these contents.
According to this configuration, the accompanying information of each output content can be confirmed.
Furthermore, the terminal device may include an input operation unit for inputting the classification condition.
According to this configuration, an arbitrary classification condition can be input and set.
Further, the classification condition may be set in advance, changed by an input operation on the input operation unit, or input and set by an input operation on the input operation unit. Various methods for setting the classification condition can thus be applied.
The accompanying information may be position information or date/time information.
In this case, the classification unit may compare the position information or date/time information of each content stored in the first content storage unit with a threshold value to classify the plurality of contents stored in the first content storage unit into one or a plurality of content groups. Alternatively, the classification unit may arrange the contents in time series using the date/time information of each content stored in the first content storage unit, and then classify the arranged contents into one or a plurality of content groups using the position information of each content.
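A minimal sketch of the threshold-based classification described above follows. The thresholds, data layout, and degree-based distance comparison are illustrative assumptions: contents are first arranged in time series, and a new content group is started whenever the time gap or positional jump to the previous content exceeds a threshold:

```python
from datetime import datetime, timedelta

def classify(contents, max_gap=timedelta(hours=6), max_deg=0.5):
    """Classify contents into groups: arrange them in time series by
    date/time, then start a new group whenever the time gap or the
    positional jump to the previous content exceeds a threshold.
    (Positions are compared in raw degrees, for illustration only.)"""
    if not contents:
        return []
    ordered = sorted(contents, key=lambda c: c["date"])
    groups, current = [], [ordered[0]]
    for prev, cur in zip(ordered, ordered[1:]):
        jump = max(abs(cur["pos"][0] - prev["pos"][0]),
                   abs(cur["pos"][1] - prev["pos"][1]))
        if cur["date"] - prev["date"] > max_gap or jump > max_deg:
            groups.append(current)  # close the current content group
            current = []
        current.append(cur)
    groups.append(current)
    return groups

# Hypothetical photo metadata: two shots in Kyoto, then one in Sapporo.
photos = [
    {"date": datetime(2009, 5, 1, 10), "pos": (35.01, 135.76)},
    {"date": datetime(2009, 5, 1, 11), "pos": (35.02, 135.75)},
    {"date": datetime(2009, 5, 2, 9),  "pos": (43.06, 141.35)},
]
print([len(g) for g in classify(photos)])  # → [2, 1]
```

Each resulting group would then feed the search condition generation unit, as described above.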
Further, the threshold value may be set in advance, changed by an input operation on the input operation unit, input and set by an input operation on the input operation unit, or changed based on the accompanying information of the content. Various methods for setting the threshold value can thus be applied.
Further, the terminal device may transmit the accompanying information of each content stored in the first content storage unit to the server device, and the server device may determine a classification condition based on the accompanying information of each content received from the terminal device and transmit this classification condition to the terminal device.
Furthermore, the content transmitted from the server device to the terminal device may include an address on the Internet. In response to an input operation on the terminal device, the terminal device transmits the address to the server device or another server device; the server device that receives the address collects information based on the address and transmits this information to the terminal device, and the terminal device displays the information.
According to this configuration, various information can be provided to the user, starting from the content output on the terminal device.
Alternatively, a content output system of the present invention is a content output system that performs information communication between a terminal device and a server device through a network, wherein the server device includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a communication unit that transmits the plurality of contents stored in the first content storage unit to the terminal device, receives a search condition from the terminal device as a response to the transmission of the contents, and transmits contents corresponding to the search condition to the terminal device; and a search unit that searches, from among the plurality of contents stored in the second content storage unit, for contents corresponding to the received search condition; and the terminal device includes: a classification unit that receives the plurality of contents stored in the first content storage unit from the server device and classifies the received plurality of contents into one or a plurality of content groups based on a classification condition; a search condition generation unit that, for each of the content groups, generates a search condition based on the accompanying information of the contents classified into that content group; a communication unit that, for each of the content groups, transmits the search condition generated by the search condition generation unit to the server device and receives from the server device, as a response to the transmission of the search condition, contents corresponding to the search condition; and an output unit that, for each of the content groups, outputs the contents classified into that content group and the contents received from the server device.
In this configuration, the first content storage unit and the second content storage unit exist in a server device on the network. A terminal device such as the user's own personal computer receives the plurality of contents stored in the first content storage unit of the server device, classifies the received contents into one or a plurality of content groups based on the classification condition, generates, for each content group, a search condition based on the accompanying information of the contents classified into that group, and can have contents corresponding to the search condition searched from among the contents stored in the second content storage unit of the server device. Then, for each content group, the contents obtained from the classification condition and the contents obtained from the search condition can be output together. That is, if the classification condition is appropriately set on the personal computer side, the plurality of contents stored in the first content storage unit of the server device are classified into one or a plurality of content groups based on the classification condition. Then, for each content group, a search condition is generated based on the accompanying information of the contents classified into that group, contents matching the search condition are searched on the server device side from among the contents stored in the second content storage unit, and the contents classified into that group and the contents obtained from the search condition are output together. Therefore, simply by setting the classification condition on the personal computer side, mutually related contents are selected between the first content storage unit and the second content storage unit of the server device and output, so the user can efficiently search for useful contents from among the large amount of contents stored in a server device or the like on the network and use them on the terminal device.
Alternatively, a content output system of the present invention is a content output system that performs information communication between a terminal device and a server device through a network, wherein the server device includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or a plurality of content groups based on a classification condition; a search condition generation unit that, for each of the content groups, generates a search condition based on the accompanying information of the contents classified into that content group; a search unit that, for each of the content groups, searches, from among the plurality of contents stored in the second content storage unit, for contents corresponding to the search condition generated by the search condition generation unit; and a communication unit that, for each of the content groups, transmits together to the terminal device the contents classified into that content group and the contents found by the search unit; and the terminal device includes a communication unit that receives these contents from the server device and an output unit that outputs these contents.
 In this configuration, both the first content storage unit and the second content storage unit reside in a server device on the network, and the server device can classify the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. Furthermore, for each content group, the server device generates a search condition based on the accompanying information of the content classified into that group, and can search the contents stored in its second content storage unit for content matching the search condition. Then, for each content group, the content classified into that group and the content obtained from the search condition are transmitted from the server device to a terminal device such as a personal computer, so that these contents can be output on the terminal device. That is, mutually related content is selected between the first content storage unit and the second content storage unit of the server device and is output to the terminal device, enabling the user to efficiently find useful content among the large amount of content stored in server devices and the like on the network and use it on the terminal device.
 The server device of the present invention is a server device that performs information communication with a terminal device via a network, and includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each of the content groups, generates a search condition based on the accompanying information of the content classified into that content group; a search unit that, for each of the content groups, searches the plurality of contents stored in the second content storage unit for content matching the search condition generated by the search condition generation unit; and a communication unit that, for each of the content groups, transmits both the content classified into that content group and the content found by the search unit to the terminal device.
 In this configuration, the server device can classify the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. Furthermore, for each content group, the server device generates a search condition based on the accompanying information of the content classified into that group, searches the contents stored in the second content storage unit for content matching the search condition, and can transmit both the content classified into that group and the content obtained from the search condition to the terminal device via the network. That is, according to the server device of the present invention, mutually related content is selected between the first content storage unit and the second content storage unit and is transmitted to the terminal device, enabling the user to efficiently find useful content among the large amount of content stored in server devices and the like on the network and use it on the terminal device.
 Meanwhile, the content output device of the present invention includes: a first content storage unit that stores a plurality of contents; a second content storage unit that stores a plurality of contents; a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition; a search condition generation unit that, for each of the content groups, generates a search condition based on the accompanying information of the content classified into that content group; a search unit that, for each of the content groups, searches the plurality of contents stored in the second content storage unit for content matching the search condition generated by the search condition generation unit; and an output unit that, for each of the content groups, outputs both the content classified into that content group and the content found by the search unit.
 In this configuration, the output device can classify the plurality of contents stored in the first content storage unit into one or more content groups based on the classification condition. Furthermore, for each content group, the output device generates a search condition based on the accompanying information of the content classified into that group, searches the contents stored in the second content storage unit for content matching the search condition, and can output both the content classified into that group and the content obtained from the search condition to the output unit. That is, according to the output device of the present invention, mutually related content is selected between the first content storage unit and the second content storage unit and is output to the output unit, enabling the user to efficiently find and use useful content among the large amount of content stored in the output device.
 The content output method of the present invention is a content output method for outputting content, comprising: a first content storage step of storing a plurality of contents; a second content storage step of storing a plurality of contents; a classification step of classifying the plurality of contents stored in the first content storage step into one or more content groups based on a classification condition; a search condition generation step of, for each of the content groups, generating a search condition based on the accompanying information of the content classified into that content group; a search step of, for each of the content groups, searching the plurality of contents stored in the second content storage step for content matching the search condition generated in the search condition generation step; and an output step of, for each of the content groups, outputting both the content classified into that content group and the content found in the search step.
 In this configuration, the plurality of contents stored in the first content storage step can be classified into one or more content groups based on the classification condition. Furthermore, for each content group, a search condition is generated based on the accompanying information of the content classified into that group, content matching the search condition is retrieved from among the contents stored in the second content storage step, and the content classified into that group and the content obtained from the search condition can be output together. That is, according to the content output method of the present invention, mutually related content is selected from among the contents stored in the first content storage step and those stored in the second content storage step, and this mutually related content is output. Therefore, by causing a terminal device such as a computer to execute this content output method, the user of the terminal device can efficiently find useful content among the large amount of content stored in server devices and the like on the network and use it.
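As a concrete illustration only, the classify → generate-condition → search → output sequence of the method above might be sketched as follows in Python. The record layout, the choice of shooting place as the classification condition, and all helper names are assumptions made for this sketch, not part of the claimed method.

```python
from datetime import date

# Hypothetical content records: an identifier plus accompanying information.
first_store = [
    {"id": "p1", "date": date(2009, 5, 1), "place": "kyoto"},
    {"id": "p2", "date": date(2009, 5, 2), "place": "kyoto"},
    {"id": "p3", "date": date(2009, 8, 10), "place": "nara"},
]
second_store = [
    {"id": "s1", "date": date(2009, 5, 1), "place": "kyoto"},
    {"id": "s2", "date": date(2008, 1, 3), "place": "tokyo"},
]

def classify(contents):
    """Classification step: group by shooting place (one possible condition)."""
    groups = {}
    for c in contents:
        groups.setdefault(c["place"], []).append(c)
    return groups

def make_condition(group):
    """Search condition generation step: derive a condition from the
    accompanying information of the content classified into one group."""
    return {"place": group[0]["place"],
            "from": min(c["date"] for c in group),
            "to": max(c["date"] for c in group)}

def search(cond, contents):
    """Search step: match the condition against the second store."""
    return [c for c in contents
            if c["place"] == cond["place"]
            and cond["from"] <= c["date"] <= cond["to"]]

def output(groups, second):
    """Output step: emit each group's own content together with the matches."""
    result = {}
    for name, group in groups.items():
        cond = make_condition(group)
        result[name] = {"own": [c["id"] for c in group],
                        "found": [c["id"] for c in search(cond, second)]}
    return result

print(output(classify(first_store), second_store))
```

Running this outputs one entry per content group, pairing the group's own content with the related content found in the second store, which is the "output together" behavior described above.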
 Such a content output method can be realized as a content output program that causes a computer to execute each of its steps, and this content output program can be provided by recording it on a computer-readable recording medium.
 A computer can implement the present invention by reading the program from a recording medium, or by receiving the program through a communication network, and then executing it. In a system comprising a plurality of computers and the Internet, the processing can be distributed across a plurality of terminals. The program is therefore applicable not only to a single terminal such as a computer but also to such a system.
 According to the present invention, it is possible to provide a content output system, a server device, a content output device, a content output method, a content output program, and a recording medium storing the content output program, each capable of efficiently finding and using useful content among the large amount of content stored in server devices and the like on a network.
FIG. 1 is a block diagram showing an embodiment of the content output system of the present invention.
FIG. 2 is a flowchart showing content search and output processing in the terminal device of FIG. 1.
FIG. 3 is a diagram illustrating a list of accompanying information of photographic images displayed on the screen of the terminal device of FIG. 1.
FIG. 4 is a diagram illustrating a content group including photographic images whose shooting positions fall within a wide area.
FIG. 5 is a flowchart showing processing for classifying photographic images into content groups in the terminal device of FIG. 1.
FIG. 6 is a diagram illustrating search conditions given from the terminal device of FIG. 1 to the server device.
FIG. 7 is a diagram illustrating accompanying information of photographic images retrieved by the server device of FIG. 1.
FIG. 8 shows an example of a display form of content on the screen of the terminal device of FIG. 1: (a) shows a display example of photographic images stored in the second content storage unit, and (b) shows a display example of photographic images stored in the first content storage unit.
FIG. 9 is a diagram illustrating another display form of content on the screen of the terminal device of FIG. 1.
FIG. 10 is a diagram for explaining an operation for deleting content on the screen of the terminal device of FIG. 1.
FIG. 11 is a diagram for explaining a work purchase screen as another display form of content in the terminal device of FIG. 1: (a) shows a display example of a photographic image, and (b) shows an example of the work purchase screen displayed when, after the photographic image of (a) is displayed, a button for starting a browser is operated.
FIG. 12 is a flowchart showing processing for displaying the work purchase screen shown in FIG. 11: (a) shows the processing by the terminal device, and (b) shows the processing by the server device.
FIG. 13 is a diagram illustrating still another display form of content in the terminal device of FIG. 1.
FIG. 14 shows an example in which product images are displayed as content on the screen of the terminal device of FIG. 1: (a) shows a display example of product images stored in the first content storage unit, and (b) shows a display example of product images stored in the second content storage unit.
FIG. 15 is a diagram showing another example in which product images are displayed as content on the screen of the terminal device of FIG. 1.
FIG. 16 is a block diagram showing a modification of the terminal device of FIG. 1.
FIG. 17 is a block diagram showing a modification of the system of FIG. 1.
FIG. 18 is a block diagram showing another modification of the system of FIG. 1.
FIG. 19 is a block diagram showing an embodiment of the content output device of the present invention.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
 FIG. 1 is a block diagram showing an embodiment of the content output system of the present invention. The system of this embodiment includes a user-side terminal device 101, a server device 201, and a network N that interconnects the terminal device 101 and the server device 201.
 The terminal device 101 comprises a personal computer and its peripheral devices, and includes an input unit 102, a content management unit 103, a first content storage unit 104, a content classification unit 105, a search condition generation unit 106, a communication unit 107, a display generation unit 108, a display unit 109, and the like.
 The input unit 102 is a keyboard, a mouse, or the like. The content management unit 103, the content classification unit 105, the search condition generation unit 106, and the display generation unit 108 are realized by the CPU reading and executing programs stored in the ROM. The first content storage unit 104 is a storage device such as a hard disk drive. The communication unit 107 is a communication interface or the like that performs data communication with the server device 201 through the network N. The display unit 109 is a display device such as a liquid crystal display.
 Note that the first content storage unit 104 is not limited to a hard disk drive and may be an external storage medium readable by the terminal device 101, such as an SD card, DVD media, or BD media.
 The server device 201 comprises a computer and its peripheral devices, and includes a communication unit 202, a search unit 203, a conversion table storage unit 204, a second content storage unit 205, and the like.
 The communication unit 202 is a communication interface or the like that performs data communication with the terminal device 101 through the network N. The search unit 203 is realized by the CPU reading and executing a program stored in the ROM. The conversion table storage unit 204 and the second content storage unit 205 are storage devices such as hard disk drives.
 Here, a plurality of contents are input to and stored in the first content storage unit 104 of the terminal device 101 through an interface (not shown) of the terminal device 101. The content in the first content storage unit 104 is personal property acquired by the user of the terminal device 101.
 Likewise, a large number of contents are input to and stored in the second content storage unit 205 of the server device 201 through the network N and an interface (not shown) of the server device 201. The content in the second content storage unit 205 is shared content available to an unspecified number of people.
 These contents are still images such as photographic images, and each has accompanying information indicating the shooting date and time, the shooting position, and the like.
 In such a system, when content in the first content storage unit 104 of the terminal device 101 is output, other content related to it can be retrieved from among the contents stored in the second content storage unit 205 of the server device 201, and these contents can be output together on the terminal device 101 side.
 Next, such content search and output processing will be described with reference to the flowchart of FIG. 2.
 In the terminal device 101, after the power switch of the terminal device 101 is turned on and the terminal device 101 has started up, when classification of content is instructed by an input operation on the input unit 102 (step S301), the content management unit 103 responds by searching the content in the first content storage unit 104, generating a list of the retrieved content, and displaying this content list on the screen of the display unit 109 through the display generation unit 108 (step S302).
 At this time, a list of all the content in the first content storage unit 104 may be generated, or a list of only part of the content may be generated, for example content from a particular creation period or era, a particular folder hierarchy, or content bearing a specific tag. When generating a list of part of the content, the input operation on the input unit 102 that instructs the classification of content also specifies the creation period, era, folder hierarchy, specific tag, or the like. The content management unit 103 selects the content matching the specified creation period, era, folder hierarchy, specific tag, or the like, and generates a list of the selected content. For example, thumbnails and file names of the content are displayed as a list.
 This content list is displayed on the screen of the display unit 109 and is simultaneously output to the content classification unit 105. The content classification unit 105 refers to the accompanying information of each content item in the list and, based on a classification condition, classifies all the content on the list into one or more content groups (step S303). This classification condition may be selected by an input operation on the input unit 102 from among a plurality of preset classification conditions, or may be set by an input operation on the input unit 102.
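One plausible classification condition, assumed here purely for illustration, is to start a new content group whenever the gap between consecutive shooting times in the accompanying information exceeds a threshold (the classification processing actually used is the subject of FIG. 5):

```python
from datetime import datetime, timedelta

def classify_by_time(photos, gap=timedelta(hours=6)):
    """Group photos into content groups: a new group starts whenever
    consecutive shooting times are more than `gap` apart."""
    photos = sorted(photos, key=lambda p: p["time"])
    groups = []
    for p in photos:
        if groups and p["time"] - groups[-1][-1]["time"] <= gap:
            groups[-1].append(p)   # close enough in time: same group
        else:
            groups.append([p])     # gap exceeded: start a new group
    return groups

photos = [
    {"id": "a", "time": datetime(2009, 4, 29, 10, 0)},
    {"id": "b", "time": datetime(2009, 4, 29, 11, 30)},
    {"id": "c", "time": datetime(2009, 5, 3, 9, 0)},
]
groups = classify_by_time(photos)
print([[p["id"] for p in g] for g in groups])  # → [['a', 'b'], ['c']]
```

Here two photos taken 90 minutes apart land in one group, while the photo taken days later forms a group of its own; the 6-hour threshold stands in for the user-selectable classification condition described above.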
 For each content group, the search condition generation unit 106 receives the content classified into that group from the content classification unit 105 via the content management unit 103, refers to the accompanying information of that content, and generates a search condition from the accompanying information (step S304). At this time, a search condition may be generated from the accompanying information of the content in a content group each time the content classification unit 105 creates one content group, or the search conditions may instead be generated group by group after the content classification unit 105 has finished classifying all the content in the first content storage unit 104 and all the content groups have been created.
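For instance, a group's search condition might consist of the time span its photos cover plus a latitude/longitude bounding box around their shooting positions. The sketch below assumes such a condition format; the actual format sent to the server is illustrated in FIG. 6.

```python
def generate_condition(group):
    """Derive a search condition from the accompanying information of the
    photos classified into one content group: the time span they cover and
    a latitude/longitude bounding box around their shooting positions."""
    return {
        "time_from": min(p["time"] for p in group),
        "time_to": max(p["time"] for p in group),
        "lat_min": min(p["lat"] for p in group),
        "lat_max": max(p["lat"] for p in group),
        "long_min": min(p["long"] for p in group),
        "long_max": max(p["long"] for p in group),
    }

group = [
    {"id": "a", "time": "2009-04-29T10:00", "lat": 35.00, "long": 135.75},
    {"id": "b", "time": "2009-04-29T11:30", "lat": 35.02, "long": 135.78},
]
cond = generate_condition(group)
print(cond["lat_min"], cond["lat_max"])  # → 35.0 35.02
```

Because the condition is computed from the group's own accompanying information, any content matching it is, by construction, related in time and place to the content already in the group.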
 Upon receiving the search condition generated by the search condition generation unit 106, the content management unit 103 transmits it from the communication unit 107 to the server device 201 through the network N, thereby requesting the content matching the search condition from the server device 201 (step S305).
 The server device 201 receives the search condition from the terminal device 101 at the communication unit 202 through the network N and inputs it to the search unit 203.
 The search unit 203 refers to the conversion table in the conversion table storage unit 204 and converts the search condition so that it matches the accompanying information of the content in the second content storage unit 205. It then refers to the accompanying information of each content item in the second content storage unit 205, retrieves the content whose accompanying information matches the converted search condition, and transmits the retrieved content from the communication unit 202 to the terminal device 101 through the network N.
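The role of the conversion table can be pictured as a mapping from the field names in the terminal's search condition to the field names used by the accompanying information in the second content storage unit. All field names below are assumptions chosen for this sketch, not the actual table contents.

```python
# Hypothetical conversion table: terminal-side field -> server-side field.
CONVERSION = {"time_from": "shot_after", "time_to": "shot_before"}

def convert_condition(cond):
    """Rewrite the received search condition into the vocabulary used by
    the accompanying information in the second content storage unit."""
    return {CONVERSION.get(k, k): v for k, v in cond.items()}

def search_second_store(cond, store):
    """Return content whose accompanying info matches the converted condition.
    ISO-8601 timestamps compare correctly as plain strings."""
    c = convert_condition(cond)
    return [item for item in store
            if c["shot_after"] <= item["time"] <= c["shot_before"]]

store = [
    {"id": "s1", "time": "2009-04-29T10:30"},
    {"id": "s2", "time": "2009-06-01T09:00"},
]
hits = search_second_store({"time_from": "2009-04-29T00:00",
                            "time_to": "2009-04-30T00:00"}, store)
print([h["id"] for h in hits])  # → ['s1']
```

Keeping the conversion in a separate table, as the conversion table storage unit 204 does, lets the two storage units use different accompanying-information schemas without changing the search logic itself.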
 The terminal device 101 waits for a response from the server device 201 (step S306). When the content in the second content storage unit 205 whose accompanying information matches the search condition is received by the communication unit 107 through the network N ("yes" in step S306), the received content is input to the content management unit 103 (step S307). The content management unit 103 outputs the received content from the second content storage unit 205, together with the content of the content group classified in step S303, to the display generation unit 108.
 Upon receiving these contents, the display generation unit 108 generates a display layout for them (step S308) and displays them together on the screen of the display unit 109 in that layout (step S309).
 If there is no response from the server device 201 and no content is received from it ("no" in step S306), only the content of the content group classified in step S303 is displayed on the screen of the display unit 109 (steps S308, S309).
 Subsequently, it is determined whether the processing of steps S304 to S309 has been completed for all the content groups into which the content classification unit 105 classified the content (step S310). If not ("no" in step S310), the processing of steps S304 to S309 is repeated; if so ("yes" in step S310), the processing of FIG. 2 ends.
 Note that, in the above, the processing from step S302 onward begins after the power switch of the terminal device 101 is turned on and classification of content is instructed by an input operation on the input unit 102; instead, the processing may proceed to step S302 and onward immediately when the power switch of the terminal device 101 is turned on.
 Also, when a recording medium such as a flash memory is inserted into or connected to the terminal device 101 while it is operating, it may be determined whether content is recorded on the recording medium, and the processing may proceed to step S302 and onward when it is determined that content is recorded.
 Furthermore, when a card such as an IC card or a mobile phone with a card function is held over a reader while the terminal device 101 is operating, the card may be scanned to determine whether specific information is recorded on it, and the processing may proceed to step S302 and onward when it is determined that such information is present. In this way, the user can proceed to the processing from step S302 onward simply by having the card scanned. For example, the card is scanned by a reader (a card reader, FeliCa, or the like), and it is determined whether an address is recorded on the card, such as a location on the terminal device 101 (C:\user), a location on the server device 201 (\10.23.45.67\japan), or an Internet address such as a URL (http://pro/); when it is determined that such an address is recorded, the processing proceeds to step S302 and onward. In this case, an address indicating the location of the first content storage unit 104 or the second content storage unit 205 may be set, and this address may be handed over to the application that executes the processing of the flowchart of FIG. 2, which then accesses the first content storage unit 104 or the second content storage unit 205 based on the address.
 Also, when it is determined that this address is recorded, the application that executes the processing of the flowchart of FIG. 2 may be started. Alternatively, when it is determined that this address is recorded, a message to that effect may be displayed on the screen of the display unit 109, and the application may be started in response to a subsequent instruction given by an input operation on the input unit 102. Furthermore, after it is determined that this address is recorded, the first content storage unit 104 or the second content storage unit 205 may be accessed based on the address and the content stored therein confirmed before the application is started.
 Alternatively, user information may be set as the specific information recorded on the card, and a correspondence table between user information and the addresses of the first content storage unit 104 and the second content storage unit 205 may be stored in the memory of the terminal device 101. When the user information is read from the card, the table in memory is consulted to obtain the address of the first content storage unit 104 or the second content storage unit 205 corresponding to that user information, and this address is handed over to the application that executes the processing of the flowchart of FIG. 2, which then accesses the first content storage unit 104 or the second content storage unit 205 based on the address. A procedure for creating the correspondence table between user information and the addresses of the first content storage unit 104 and the second content storage unit 205 may also be provided so that the user can create the correspondence table. Furthermore, a password, a personal name, a membership number, a fingerprint, or the like may be set as the user information. In the case of a fingerprint or the like, the fingerprint must be recognized and identified.
 Content classification conditions may also be stored on the recording medium, card, or the like. When the card is held over the reader, the classification conditions are read from the card and delivered to the content classification unit 105, which may then start classifying the content.
 Next, specific examples of content classification, search, and display are described in turn.
 First, content classification is explained. For example, if the content is photographic images, the first content storage unit 104 stores a plurality of photographic images that are the personal property of the user of the terminal device 101. The second content storage unit 205 stores a large number of photographic images that are shared property available to an unspecified number of people, provided, for example, by a photo service company.
 When classification of photographic images is instructed at the terminal device 101 through an input operation of the input unit 102, the content management unit 103 responds by searching the photographic images in the first content storage unit 104, generating a list of the retrieved photographic images, and displaying this list on the screen of the display unit 109 through the display generation unit 108.
 FIG. 3 illustrates a list of photographic images. The list describes the accompanying information of each photographic image: the identifier of the photographic image <picture id="">, its URL <url>, the shooting date and time <time>, the latitude of the shooting position <gps-lat>, the longitude of the shooting position <gps-long>, a user comment <comment>, and so on. Of course, other information may be included as accompanying information, and the types of information may be increased or decreased. Although the accompanying information is described here in XML, it may instead be described in another description language, as binary data, or in the structured-data format handled inside a program.
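As an illustrative sketch (not part of the claimed embodiment), such an XML list of accompanying information might be read as follows. The sample data and URLs are hypothetical; only the tag names follow the fields named in the description of FIG. 3.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample list in the style of FIG. 3.
SAMPLE = """
<pictures>
  <picture id="1">
    <url>http://example.com/p1.jpg</url>
    <time>2009-04-01T09:00:00</time>
    <gps-lat>35.21</gps-lat>
    <gps-long>138.43</gps-long>
    <comment>Mt. Fuji</comment>
  </picture>
  <picture id="2">
    <url>http://example.com/p2.jpg</url>
    <time>2009-04-01T09:00:00</time>
    <gps-lat>35.22</gps-lat>
    <gps-long>138.47</gps-long>
  </picture>
</pictures>
"""

def parse_list(xml_text):
    """Return one dict per <picture>, with optional fields left as None."""
    photos = []
    for p in ET.fromstring(xml_text).findall("picture"):
        photos.append({
            "id": p.get("id"),
            "url": p.findtext("url"),
            "time": p.findtext("time"),
            "lat": p.findtext("gps-lat"),
            "long": p.findtext("gps-long"),
            "comment": p.findtext("comment"),  # None when no comment is set
        })
    return photos

photos = parse_list(SAMPLE)
```

The optional fields (here <comment>) coming back as None is what the classification examples below rely on when they test for the presence or absence of an item of accompanying information.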
 This list of photographic images is displayed on the screen of the display unit 109 and simultaneously output to the content classification unit 105. The content classification unit 105 refers to the accompanying information of each photographic image in the list and classifies the photographic images into several content groups according to a classification condition.
 For example, the content classification unit 105 focuses on the longitude of the shooting position <gps-long> in the accompanying information. Referring to the longitude <gps-long> of each photographic image, it determines which shooting positions fall within a fixed area whose base point is the shooting position of one photographic image, and classifies the photographic images under the classification condition that all photographic images whose shooting positions fall within this fixed area form one content group.
 When the shooting position of the photographic image with identifier <picture id="1"> is taken as the base point, the distance from the base point is calculated for each of the other photographic images, such as those with identifiers <picture id="2"> and <picture id="3">. The difference between the base point and the shooting position of the photographic image with identifier <picture id="2"> is 2 minutes of longitude, that is, a distance of about 3 km. Converting the longitude differences to distances in the same way, the distance between the base point and the shooting position of the photographic image with identifier <picture id="3"> is 6 km, the distance for identifier <picture id="4"> is 7.5 km, and the distance for identifier <picture id="5"> is 1.5 km.
 Then, for each of the other photographic images, such as those with identifiers <picture id="2"> and <picture id="3">, the distance of its shooting position from the base point is compared with a threshold. If the distance is less than the threshold, the photographic image is classified into the same content group as the photographic image with identifier <picture id="1">; if the distance is equal to or greater than the threshold, it is not.
 Taking the shooting positions of several photographic images as base points, the distance of each photographic image's shooting position from each base point is obtained, and this distance is compared with the threshold to decide whether the photographic image is classified into the same content group as the base-point photographic image. In this way, a plurality of content groups can be obtained.
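The position-based grouping just described can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the longitude-to-distance conversion assumes roughly 90 km per degree (consistent with the text's figure of about 3 km per 2 arc-minutes near Japan), and the sample longitudes are hypothetical values chosen to reproduce the distances given for FIG. 3.

```python
KM_PER_DEGREE_LONG = 90.0  # rough value near 35 deg N; 2 arc-minutes is about 3 km

def group_by_position(photos, threshold_km):
    """Greedy grouping: each not-yet-grouped photo becomes a base point, and
    every other ungrouped photo within threshold_km of it joins its group."""
    groups = []
    remaining = list(photos)
    while remaining:
        base = remaining.pop(0)
        group = [base]
        still_ungrouped = []
        for p in remaining:
            dist_km = abs(p["long"] - base["long"]) * KM_PER_DEGREE_LONG
            if dist_km < threshold_km:
                group.append(p)
            else:
                still_ungrouped.append(p)
        remaining = still_ungrouped
        groups.append(group)
    return groups

# Hypothetical longitudes reproducing the distances in the text:
# id1->id2 3 km, id1->id3 6 km, id1->id4 7.5 km, id1->id5 1.5 km.
photos = [
    {"id": "1", "long": 138.000},
    {"id": "2", "long": 138.000 + 3.0 / KM_PER_DEGREE_LONG},
    {"id": "3", "long": 138.000 + 6.0 / KM_PER_DEGREE_LONG},
    {"id": "4", "long": 138.000 + 7.5 / KM_PER_DEGREE_LONG},
    {"id": "5", "long": 138.000 + 1.5 / KM_PER_DEGREE_LONG},
]
groups = group_by_position(photos, threshold_km=5.0)
```

With a 5 km threshold, the first group collects the images with identifiers 1, 2, and 5, matching the content group illustrated in FIG. 4.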
 Of course, not only the longitude <gps-long> but also the latitude <gps-lat>, or both together, may be used to obtain the distance between the shooting positions of the photographic images.
 The threshold may be set and stored in advance in the memory of the terminal device 101, or it may be entered or changed as appropriate by an input operation of the input unit 102 by the user. The threshold may also be varied according to the base point, the shooting date and time, the type of content, and so on. Alternatively, the overall shooting range may be obtained from the shooting positions of all the photographic images in the first content storage unit 104, and a threshold calculated according to this overall range: when the overall shooting range is wide, the threshold is made large, and when it is narrow, the threshold is made small. This makes it possible to adjust the number of photographic images included in each content group.
 Furthermore, the terminal device 101 may query the server device 201 for the threshold. For example, the terminal device 101 transmits the shooting position of the photographic image with identifier <picture id="1"> to the server device 201 and requests a threshold. The server device 201 holds a data table that associates areas with thresholds; it determines the area containing the shooting position received from the terminal device 101, retrieves the threshold corresponding to that area from the data table, and transmits this threshold to the terminal device 101. More specifically, the data table sets a large threshold for a wide area such as Mt. Fuji and a small threshold for a narrow area such as an amusement park. If the shooting position falls within the wide Mt. Fuji area, the large threshold corresponding to that area is transmitted to the terminal device 101; if it falls within the narrow amusement-park area, the small threshold corresponding to that area is transmitted instead. In this way, photographic images whose shooting positions fall within a wide area can be included in one content group, and photographic images whose shooting positions fall within a narrow area can likewise be included in one content group. FIG. 4 illustrates a content group containing photographic images whose shooting positions fall within a wide area; here, the photographic images with identifiers <picture id="1">, <picture id="2">, and <picture id="5"> are collected into one content group.
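The server-side lookup of an area-dependent threshold could take a form like the following sketch. The bounding boxes, threshold values, and default fallback are all invented for illustration; the text specifies only that wide areas such as Mt. Fuji map to large thresholds and narrow areas such as an amusement park to small ones.

```python
# Hypothetical server-side data table: named areas as bounding boxes in
# degrees, each associated with a distance threshold in km.
AREA_TABLE = [
    # (name, lat_min, lat_max, long_min, long_max, threshold_km)
    ("Mt. Fuji",       35.20, 35.45, 138.60, 138.85, 10.0),  # wide area, large threshold
    ("amusement park", 35.63, 35.64, 139.88, 139.89,  0.5),  # narrow area, small threshold
]

DEFAULT_THRESHOLD_KM = 2.0  # invented fallback when no area matches

def threshold_for(lat, long):
    """Return the threshold of the first area containing the shooting position."""
    for name, lat0, lat1, long0, long1, th in AREA_TABLE:
        if lat0 <= lat <= lat1 and long0 <= long <= long1:
            return th
    return DEFAULT_THRESHOLD_KM
```

The terminal device would send a shooting position and receive `threshold_for(lat, long)` in reply, then apply it in the position-based grouping described above.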
 As another classification method, the content classification unit may focus on the shooting date and time rather than the shooting position: referring to the shooting date and time <time> of each photographic image, the photographic images can be classified under the condition that photographic images shot at short intervals of one another form one content group.
 For example, the content classification unit 105 refers to the shooting date and time <time> of each photographic image in the content list of FIG. 3 and sorts the photographic images in order of shooting date and time. It then selects the photographic images in order from the top, calculates the difference in shooting date and time between one photographic image and the next, and determines whether this difference is less than a threshold. If it is less than the threshold, the next photographic image is included in the same content group as the current one; if it is equal to or greater than the threshold, a new content group is created and the next photographic image is placed in it. In this way, photographic images shot at short intervals of one another can be classified into one content group.
 That is, the elapsed time from a base-point photographic image is calculated, and the photographic images that fall within a certain time of it are collected into one content group.
 For example, in FIG. 3, taking the photographic image with identifier <picture id="1"> as the base point, the time difference from the photographic image with <picture id="2"> is 0 minutes, that from <picture id="3"> is 7 minutes, that from <picture id="4"> is 17 minutes, and that from <picture id="5"> is 22 minutes. Each photographic image's time difference is compared with the threshold, and depending on whether the difference is less than the threshold, it is decided whether that photographic image is collected into the content group of the photographic image with <picture id="1">.
 With a threshold of 10 minutes, <picture id="1">, <picture id="2">, and <picture id="3"> are collected into one content group.
 When one content group has been completed, the next base-point photographic image is chosen, the time differences from the photographic images that do not yet belong to any content group are calculated, and the photographic images whose time differences are less than the threshold are collected into a new content group.
 There is also a method of shifting the base-point photographic image as the grouping proceeds.
 For example, in FIG. 3, taking the photographic image with identifier <picture id="1"> as the base point, the time difference from the photographic image with <picture id="2"> is obtained. If the difference is less than the threshold, the photographic image with <picture id="2"> is classified into the same content group as that with <picture id="1">; if it exceeds the threshold, a new content group is created and the photographic image with <picture id="2"> is classified into it. Next, taking the photographic image with <picture id="2"> as the base point, the time difference from the photographic image with <picture id="3"> is obtained. If it is less than the threshold, the photographic image with <picture id="3"> is classified into the same content group as that with <picture id="2">; if it exceeds the threshold, yet another content group is created and the photographic image with <picture id="3"> is classified into it. The same processing is repeated thereafter.
 With a threshold of 10 minutes, <picture id="1">, <picture id="2">, <picture id="3">, and <picture id="4"> are classified into the same content group, and <picture id="5"> into another. Photographic images shot at short intervals of one another can thus be classified into one content group.
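This sliding-base-point grouping amounts to sorting by shooting time and cutting the sequence wherever the gap between neighbouring photos reaches the threshold. A sketch under hypothetical shooting times follows (the minute offsets below are invented so that images 1 to 4 stay within 10-minute gaps of their neighbours while image 5 falls outside; they are not the FIG. 3 values).

```python
from datetime import datetime, timedelta

def group_by_time(photos, threshold):
    """Sort by shooting time, then start a new content group whenever the
    gap between consecutive photos reaches the threshold."""
    if not photos:
        return []
    ordered = sorted(photos, key=lambda p: p["time"])
    groups = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if cur["time"] - prev["time"] < threshold:
            groups[-1].append(cur)  # shifted base point: compare with previous photo
        else:
            groups.append([cur])
    return groups

t0 = datetime(2009, 4, 1, 9, 0)
# Hypothetical offsets in minutes from t0.
photos = [{"id": str(i + 1), "time": t0 + timedelta(minutes=m)}
          for i, m in enumerate([0, 0, 7, 16, 38])]
groups = group_by_time(photos, threshold=timedelta(minutes=10))
```

Under these assumed times the result is two content groups, one holding images 1 to 4 and one holding image 5, mirroring the outcome described above.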
 The threshold can be set in various ways, just like the threshold compared with the distance between shooting positions described above. For example, the maximum time difference across all the photographic images in the first content storage unit 104 may be calculated in advance, with the threshold set large when this maximum is large and small when it is small. This makes it possible to adjust the number of photographic images included in each content group.
 Moreover, content groups can be created not only from the shooting times of the photographic images but also in units of a day, a week, a month, or a year, or even by season or by morning, daytime, and night.
 Furthermore, instead of adjusting the threshold automatically, the terminal device 101 may transmit date and time information to the server device 201; the server device 201 then creates a list of content whose dates and times are close to that information and returns this content list to the terminal device 101.
 As yet another classification method, the shooting date and time and the shooting position may be used together, rather than selectively, to classify the photographic images into content groups.
 For example, referring to the shooting date and time <time> of each photographic image listed in the content list of FIG. 3, the photographic images are sorted in order of shooting date and time. The content classification unit 105 then selects the photographic images in order from the top, obtains the distance between the shooting position of one photographic image and that of the next, and determines whether this distance is less than a threshold. If it is less than the threshold, the next photographic image is classified into the same content group as the current one; if it is equal to or greater than the threshold, a new content group is created and the next photographic image is placed in it. In this way, photographic images can be classified into one content group in order of shooting date and time and with mutually close shooting positions. With this method, photographic images from a trip, for example, can be collected into content groups that follow the dates and times while keeping the images from each location together, allowing a classification that is more agreeable to the user.
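A sketch of this combined condition (time-sorted order, groups split on position gaps) follows. As in the earlier sketch, distances are approximated from longitude differences at roughly 90 km per degree, and the sample times and longitudes are invented for illustration.

```python
KM_PER_DEGREE_LONG = 90.0  # rough conversion near 35 deg N, as assumed earlier

def group_by_time_and_position(photos, threshold_km):
    """Sort by shooting time, then start a new content group whenever the
    next photo was shot farther than threshold_km from the previous one."""
    if not photos:
        return []
    ordered = sorted(photos, key=lambda p: p["time"])
    groups = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        dist_km = abs(cur["long"] - prev["long"]) * KM_PER_DEGREE_LONG
        if dist_km < threshold_km:
            groups[-1].append(cur)
        else:
            groups.append([cur])
    return groups

# Hypothetical data: times as simple sequence numbers, longitudes in degrees.
photos = [
    {"id": "1", "time": 1, "long": 138.00},
    {"id": "2", "time": 2, "long": 138.01},  # about 0.9 km from id 1
    {"id": "3", "time": 3, "long": 138.20},  # about 17 km away: new group
    {"id": "4", "time": 4, "long": 138.21},
]
groups = group_by_time_and_position(photos, threshold_km=5.0)
```

Because the split is applied along the time-sorted sequence, photos from one location on a trip stay together even if the traveller later returns to a nearby position.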
 Furthermore, when only some of the photographic images carry a shooting position, the photographic images can be classified into content groups using both the shooting position and the shooting date and time.
 The processing for classifying photographic images into content groups in this case is described with reference to the flowchart of FIG. 5.
 On receiving the list, the content classification unit 105 refers to the accompanying information of each photographic image, arranges the listed photographic images in order of shooting date and time, and then determines, for each photographic image, whether it carries a shooting position (step S601). If there is a photographic image accompanied by a shooting position, the shooting position of that photographic image is set as the base point, and the shooting date and time are acquired from its accompanying information (step S602).
 Subsequently, the content classification unit 105 acquires the shooting date and time from the accompanying information of the photographic image whose shooting date and time immediately precede those of the base-point photographic image (step S603), obtains the time difference between the shooting dates and times of these two photographic images, and determines whether this difference is less than a threshold (step S604). If it is less than the threshold ("yes" in step S604), the process returns to step S603, where the shooting date and time are acquired from the accompanying information of the photographic image one position earlier still (step S603); the time difference between this photographic image and the one whose shooting date and time were acquired in the immediately preceding step S603 is obtained, and it is determined whether this difference is less than the threshold (step S604). The same processing continues: as long as the difference remains below the threshold ("yes" in step S604), the process returns to step S603, the shooting date and time of the photographic image one position earlier are acquired (step S603), and the time difference between the two consecutively arranged photographic images is compared with the threshold (step S604). When the difference reaches or exceeds the threshold ("no" in step S604), the process steps back to the photographic image whose shooting date and time immediately follow those of the photographic image handled in the preceding step S603, and this photographic image is taken as the first photographic image of the content group (step S605).
 That is, as long as the time difference between shooting dates and times is less than the threshold, the photographic images are traced back one by one and included in the one content group.
 Next, the content classification unit 105 returns to the base-point photographic image (step S606), acquires the shooting date and time from the accompanying information of the photographic image whose shooting date and time immediately follow those of the base-point photographic image (step S607), obtains the time difference between the shooting dates and times of these two photographic images, and determines whether this difference is less than the threshold (step S608). If it is less than the threshold ("yes" in step S608), the process returns to step S607, where the shooting date and time are acquired from the accompanying information of the photographic image one position later still (step S607); the time difference between this photographic image and the one whose shooting date and time were acquired in the immediately preceding step S607 is obtained, and it is determined whether this difference is less than the threshold (step S608). The same processing continues: as long as the difference remains below the threshold ("yes" in step S608), the process returns to step S607, the shooting date and time of the photographic image one position later are acquired (step S607), and the time difference between the two photographic images adjacent on the list is compared with the threshold (step S608). When the difference reaches or exceeds the threshold ("no" in step S608), the process steps back to the photographic image whose shooting date and time immediately precede those of the photographic image handled in the preceding step S607, and this photographic image is taken as the last photographic image of the content group (step S609).
 That is, as long as the time difference between shooting dates and times is less than the threshold, the photographic images are advanced one by one and included in the one content group.
 Having thus determined the first and last photographic images, the photographic images from the first to the last are taken as one content group (step S610).
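The FIG. 5 procedure, which expands backwards (steps S603 to S605) and then forwards (steps S607 to S609) from a geotagged base-point photo until a time gap reaches the threshold, might be sketched as follows. This is an illustrative sketch: `ordered` is assumed already sorted by shooting date and time, and the shooting times are given as minute offsets for brevity.

```python
def group_around_base(ordered, base_index, threshold):
    """Expand a content group around the photo at base_index (one that
    carries a shooting position): walk backwards while consecutive time
    gaps stay below the threshold, then forwards, and return the span
    from the first to the last photo (steps S601-S610 of FIG. 5)."""
    first = base_index
    while first > 0 and ordered[first]["time"] - ordered[first - 1]["time"] < threshold:
        first -= 1
    last = base_index
    while (last < len(ordered) - 1
           and ordered[last + 1]["time"] - ordered[last]["time"] < threshold):
        last += 1
    return ordered[first:last + 1]

# Shooting times in minutes; assume only the photo at index 2 is geotagged.
ordered = [{"id": "a", "time": 0}, {"id": "b", "time": 30},
           {"id": "c", "time": 35}, {"id": "d", "time": 40},
           {"id": "e", "time": 70}]
group = group_around_base(ordered, base_index=2, threshold=10)
```

In this sample the gaps on either side of the base (5 minutes each) are under the threshold while the outer gaps (30 minutes) are not, so the group spans the three middle photos.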
 In the processing of FIG. 5, the photographic images on the received list are first arranged in order of shooting date and time. This sorting may be omitted, however, provided that the shooting dates and times of the photographic images shot before the base-point photographic image can be acquired in order of closeness to the base point's shooting date and time for the processing of steps S601 to S605, and the shooting dates and times of the photographic images shot after the base-point photographic image can likewise be acquired in order of closeness for the processing of steps S606 to S609.
 Also, in the processing of FIG. 5, photographic images are collected into one content group as long as the time difference between two consecutively arranged photographic images is less than the threshold. Alternatively, a classification condition may be used under which the photographic images shot within a fixed time span centered on the shooting date and time of the base-point photographic image form one content group. For example, if the base-point photographic image was shot at 9:00 a.m. and the fixed span is 2 hours, all photographic images whose shooting dates and times fall in the range 8:00 a.m. to 10:00 a.m. are collected into one content group.
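The fixed-window variant reduces to a filter around the base-point photo's shooting time. In this sketch the 2-hour span is expressed as a half-window of 1 hour on each side of the base time; the sample photos are hypothetical.

```python
from datetime import datetime, timedelta

def within_window(photos, base_time, half_window):
    """Collect the photos shot within +/- half_window of base_time into one
    content group (e.g. a 9:00 base with a 2-hour span covers 8:00-10:00)."""
    return [p for p in photos if abs(p["time"] - base_time) <= half_window]

base = datetime(2009, 4, 1, 9, 0)
photos = [{"id": "1", "time": datetime(2009, 4, 1, 8, 30)},
          {"id": "2", "time": datetime(2009, 4, 1, 9, 59)},
          {"id": "3", "time": datetime(2009, 4, 1, 10, 30)}]
group = within_window(photos, base, timedelta(hours=1))
```

Unlike the gap-based expansion of FIG. 5, this condition bounds the group's total time extent regardless of how densely the photos were shot.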
 When a comment or a mark is set as the accompanying information of a photographic image, the photographic images can also be classified into content groups using these comments or marks.
 For example, in the list of photographic images in FIG. 3, some photographic images have comment information <comment> set in their accompanying information and some do not. The content classification unit 105 refers to the accompanying information of each photographic image in the list, determines for each whether comment information <comment> is present, and extracts only the photographic images that have comment information <comment> into one content group.
 Alternatively, the accompanying information of the photographic images may be read in turn, in list order or in the order in which they are arranged by shooting date and time, determining for each whether comment information <comment> is present. When a photographic image with comment information <comment> is found, the photographic images up to and including it are collected into one content group, and the photographic images from the next one up to and including the next image with comment information <comment> are collected into another content group. Or, when a photographic image with comment information <comment> is found, the photographic images up to the one immediately before it may be collected into one content group, and the photographic images from it up to the one immediately before the next image with comment information <comment> into another content group. Of course, marks can serve as the index for collecting photographic images into content groups in the same way as comments.
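The comment-delimited grouping might look like the following sketch. The text describes two variants (the commented photo either ends its group or starts the next); this version implements the first, and the sample comments are invented.

```python
def split_on_comments(ordered):
    """Walk the photos in order and close the current content group each
    time a photo carrying comment information is reached (the commented
    photo ends its own group)."""
    groups, current = [], []
    for p in ordered:
        current.append(p)
        if p.get("comment") is not None:
            groups.append(current)
            current = []
    if current:  # trailing photos after the last commented one
        groups.append(current)
    return groups

ordered = [{"id": "1"}, {"id": "2", "comment": "lunch"},
           {"id": "3"}, {"id": "4"}, {"id": "5", "comment": "sunset"}]
groups = split_on_comments(ordered)
```

The second variant would simply move the `current.append(p)` to after the comment check, so the commented photo opens the next group instead.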
 Several methods for classifying photographic images have been described, but others exist as well. For example, the user may classify them manually: thumbnails of the photographic images may be displayed on the screen so that the user selects those belonging to each content group, or the user may specify the photographic images belonging to each content group one by one while a slide show of them is being displayed.
Note that the classification condition may be set by default, or may be entered or changed through an input operation on the input unit 102. Alternatively, a plurality of classification conditions may be set in advance and displayed on the screen of the display unit 109, and one of them selected through an input operation on the input unit 102. Furthermore, after the content list is displayed in step S302, the process may proceed to step S303 only once a classification condition has been entered or selected; that is, the transition to step S303 may be prohibited until a classification condition is entered or selected.
Next, content search will be described. In the terminal device 101, when a content group is created by the content classification unit 105, the content group is passed to the search condition generation unit 106 via the content management unit 103. Using the accompanying information of the photographic images included in the content group, the search condition generation unit 106 generates a search condition for retrieving, from among the photographic images in the second content storage unit 205 of the server device 201, photographic images related to that content group. For example, the shooting positions are obtained from the accompanying information of the photographic images in the content group of FIG. 4, and the center of these shooting positions is computed as the search condition. This search condition is transmitted from the terminal device 101 to the server device 201.
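One possible computation of this center position is a simple arithmetic mean of the shooting positions, sketched below; this is an assumption for illustration (a real implementation would need care with GPS coordinates near the antimeridian):

```python
# Center of the shooting positions of a content group, used as the
# search condition sent from the terminal device to the server device.

def center_position(positions):
    """positions: list of (latitude, longitude) tuples from accompanying info."""
    n = len(positions)
    lat = sum(p[0] for p in positions) / n
    lon = sum(p[1] for p in positions) / n
    return (lat, lon)

group_positions = [(35.0, 135.0), (35.2, 135.4), (34.8, 135.2)]
print(center_position(group_positions))  # approximately (35.0, 135.2)
```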
In the server device 201, the search condition from the terminal device 101 is received by the communication unit 202 and input to the search unit 203. Upon receiving the search condition, the search unit 203 refers to the correspondence table in the conversion table storage unit 204 and looks up the identifier of the photographic image corresponding to the center position given as the search condition. The conversion table storage unit 204 stores in advance a correspondence table that associates areas, each containing many positions, with photographic-image identifiers; by referring to this table, the identifier corresponding to the area containing the specified position can be retrieved. Having obtained the identifier corresponding to the center position, the search unit 203 refers to the accompanying information of each photographic image in the second content storage unit 205 and retrieves the photographic images whose accompanying information contains that identifier. The retrieved photographic images are transmitted from the server device 201 to the terminal device 101.
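A minimal sketch of this two-step lookup follows; representing each area as a bounding box, and the table contents themselves, are assumptions made for illustration:

```python
# Conversion table: area (bounding box) -> photo-image identifier.
CORRESPONDENCE_TABLE = [
    # (lat_min, lat_max, lon_min, lon_max), identifier
    ((34.9, 35.1, 135.6, 135.9), "kyoto-001"),
    ((35.6, 35.8, 139.6, 139.9), "tokyo-001"),
]

def identifiers_for_position(lat, lon):
    """Return identifiers of all areas containing the given position."""
    return [ident
            for (lat_min, lat_max, lon_min, lon_max), ident in CORRESPONDENCE_TABLE
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max]

def search_images(lat, lon, stored_images):
    """Retrieve stored images whose accompanying info carries a matching id."""
    idents = set(identifiers_for_position(lat, lon))
    return [img for img in stored_images if img.get("identifier") in idents]

stored = [{"name": "s1.jpg", "identifier": "kyoto-001"},
          {"name": "s2.jpg", "identifier": "tokyo-001"}]
print(search_images(35.0, 135.76, stored))  # matches the kyoto-001 image
```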
Note that if a plurality of photographic images corresponding to the search condition exist in the second content storage unit 205, all of them may be transmitted from the server device 201 to the terminal device 101; alternatively, an upper limit on the number of images may be set on the server device 201 side, and no more than that number of images transmitted to the terminal device 101. In addition, a place name or the like corresponding to the search condition may be looked up on the server device 201 side and added to the accompanying information of the photographic images before they are transmitted to the terminal device 101.
As the search condition, a place name containing the center position may be set instead of the center position itself of the shooting positions of the photographic images in the content group. In this case, a data table associating areas containing many positions with place names is provided on the terminal device 101 side; the search condition generation unit 106 looks up in this data table the place name corresponding to the area containing the center position, and this place name is transmitted from the terminal device 101 to the server device 201 as the search condition. Upon receiving the place name as the search condition, the server device 201 uses the search unit 203 to search the accompanying information of each photographic image in the second content storage unit 205 for that place name, obtains the photographic images whose accompanying information contains it, and transmits these photographic images to the terminal device 101.
Alternatively, instead of narrowing the search condition down to one, a plurality of search conditions may be used. For example, in the terminal device 101, the search condition generation unit 106 extracts all the shooting positions from the accompanying information of the photographic images in the content group, generates a list of the shooting positions of the images as shown in FIG. 6, and transmits this list to the server device 201 as the search condition. The server device 201 compares the shooting position of each photographic image on the list against the shooting positions in the accompanying information of the photographic images in the second content storage unit 205, retrieves from the second content storage unit 205 the photographic images whose shooting positions match those on the list, and transmits the retrieved images to the terminal device 101. In this case, even photographic images whose shooting positions do not exactly match those on the list may be retrieved from the second content storage unit 205, provided their shooting positions fall within an area centered on a position on the list. Images whose positions match exactly and images whose positions merely fall within such an area may then be distinguished when transmitted to the terminal device 101 and displayed distinctly on the terminal device 101 side. For example, as shown in FIG. 7, a match tag <match> indicating whether the shooting positions match exactly may be added as accompanying information, and the distinct display performed on the basis of this match tag <match>.
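The list-based matching with an area tolerance and a match tag might be sketched as follows; the tolerance radius (expressed in degrees for simplicity) and the field names are assumptions for illustration:

```python
# Compare each stored image against every shooting position on the list;
# exact matches and near matches (within `radius`) are returned, each
# tagged with a boolean "match" flag corresponding to the <match> tag.

def match_against_list(list_positions, stored_images, radius=0.01):
    results = []
    for img in stored_images:
        lat, lon = img["position"]
        for ref_lat, ref_lon in list_positions:
            exact = (lat, lon) == (ref_lat, ref_lon)
            near = abs(lat - ref_lat) <= radius and abs(lon - ref_lon) <= radius
            if exact or near:
                results.append({**img, "match": exact})
                break
    return results

refs = [(35.00, 135.20)]
stored = [{"name": "a.jpg", "position": (35.00, 135.20)},    # exact match
          {"name": "b.jpg", "position": (35.005, 135.195)},  # within the area
          {"name": "c.jpg", "position": (36.0, 140.0)}]      # no match
for r in match_against_list(refs, stored):
    print(r["name"], r["match"])
```

The terminal device could then render the two kinds of result differently based on the `match` flag.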
Next, content display will be described. In the terminal device 101, the photographic images retrieved from among those stored in the second content storage unit 205 are received by the communication unit 107 and input to the content management unit 103. The content management unit 103 outputs to the display generation unit 108 both the received photographic images from the second content storage unit 205 and the photographic images of the content group previously classified by the content classification unit 105, that is, the photographic images in the first content storage unit 104. The display generation unit 108 sets the display order and display layout of these photographic images and then displays them on the screen of the display unit 109.
For example, as shown in FIG. 8(a), the photographic image P1 from the second content storage unit 205 is displayed on the screen of the display unit 109, after which the photographic images P2 and P3 from the first content storage unit 104 are displayed in sequence, as shown in FIG. 8(b).
Further, as shown in FIG. 8(a), together with the photographic image P1 from the second content storage unit 205, the place name 11 used as the search condition may be displayed, as may a mark 12 indicating that the image comes from the second content storage unit 205 of the server device 201, the distance 13 between the shooting positions of the images, and so on. This makes it easy to distinguish photographic images taken by the user from photographic images provided by a photo service company.
Further, as shown in FIG. 9, the photographic image P11 from the second content storage unit 205 and the photographic image P12 from the first content storage unit 104 may be laid out and displayed together on the screen of the display unit 109.
Further, since the photographic images acquired from the first content storage unit 104 and the second content storage unit 205 are selected or retrieved automatically, they are not necessarily to the user's liking and may not match the user's intent. For this reason, the system allows a photographic image that does not match the user's intent to be selected through an input operation on the input unit 102 and deleted from the screen of the display unit 109.
For example, as shown in FIG. 10, when the photographic image P21 is displayed on the screen of the display unit 109, the user selects the image P21 on the screen through an input operation on the input unit 102 and instructs its deletion. In response, the content management unit 103 deletes the selected photographic image from among the photographic images acquired from the first content storage unit 104 and the second content storage unit 205.
In addition, the accompanying information of a photographic image in the second content storage unit 205 may include information such as a URL for accessing the work purchase screen of the image's provider. As shown in FIG. 11(a), a button B1 or the like for launching a browser is displayed on the screen of the display unit 109; when the button B1 is operated through an input operation on the input unit 102, the content management unit 103 responds by launching a browser or the like, calling up the work purchase screen corresponding to the URL via the Internet, and displaying a work purchase screen such as that shown in FIG. 11(b) on the screen of the display unit 109. This allows the provider of the photographic images to introduce and sell images of the provider's own taking to a wide audience.
Here, with reference to the flowcharts of FIGS. 12(a) and 12(b), the processing procedure for calling up a screen using a URL contained in the accompanying information of a photographic image will be described.
First, in the terminal device 101, when the button B1 on the screen is operated through an input operation on the input unit 102 (step S701 in FIG. 12(a)), the content management unit 103 responds by launching a browser or the like, creating a request message containing information such as the URL for accessing the work purchase screen, transmitting this request message to the server device 201 through the network N (step S702 in FIG. 12(a)), and waiting for a response to the request message from the server device 201 (step S703 in FIG. 12(a)).
Upon receiving the request message (step S721 "yes" in FIG. 12(b)), the server device 201 analyzes it and extracts the information it contains, such as the URL (step S722 in FIG. 12(b)); using this information, the server device 201 collects the content and other data needed for the work purchase screen, creates a response message containing them (step S723 in FIG. 12(b)), and returns this response message to the terminal device 101 through the network N (step S724 in FIG. 12(b)).
Upon receiving the response message (step S704 in FIG. 12(a)), the terminal device 101 analyzes it, extracts the content and other data for the work purchase screen contained in the message (step S705 in FIG. 12(a)), creates the work purchase screen using this content (step S706 in FIG. 12(a)), and displays the work purchase screen on the screen of the display unit 109 (step S707 in FIG. 12(a)).
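The exchange of steps S701–S707 and S721–S724 can be sketched as plain functions rather than real networking; the message fields and the example URL below are assumptions for illustration, not the patent's wire format:

```python
# Terminal side (S702): build a request message carrying the URL of the
# work purchase screen.
def build_request(url):
    return {"type": "request", "url": url}

# Server side (S721-S724): extract the URL, collect the content needed for
# the work purchase screen, and build a response message.
def server_handle(request, content_db):
    url = request["url"]
    return {"type": "response", "content": content_db.get(url, [])}

# Terminal side (S705-S707): extract the content and compose the screen.
def terminal_render(response):
    return "purchase-screen:" + ",".join(response["content"])

content_db = {"http://example.invalid/works/42": ["title", "price", "preview"]}
req = build_request("http://example.invalid/works/42")
resp = server_handle(req, content_db)
print(terminal_render(resp))  # purchase-screen:title,price,preview
```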
Note that although a browser is launched in response to the operation of the button B1, the request message may instead simply be transmitted according to a protocol such as HTTP without launching a browser. In this case too, the work purchase screen, or the content needed for it, can be received as the response to the request message. Furthermore, the server device that receives and responds to the request message from the terminal device 101 is not limited to the server device 201 and may be any server device on the Internet.
As described above, in the system of the present embodiment, when content in the first content storage unit 104 of the terminal device 101 is output, other content related to it can be retrieved from among the content stored in the second content storage unit 205 of the server device 201, and the terminal device 101 can output both together. For example, when travel photographs taken by the user are stored in the first content storage unit 104 and photographic images provided by a photo service company are stored in the second content storage unit 205, photographic images related to those in the first content storage unit 104 are selected from the content stored in the second content storage unit 205 without the user having to choose them, and the user's own photographs and the photo service company's photographs are displayed together, enabling the display of high-quality photographic images or slide shows.
In addition, since information such as a place name and a URL for accessing the work purchase screen of the image's provider is set as accompanying information, the place name of the shooting position can be displayed and the work purchase screen can be called up promptly, promoting the purchase of works.
Note that if not only photographic images but also audio information such as BGM is stored in the second content storage unit 205 together with the photographic images, then both the photographic images and the audio information can be retrieved from among the content stored in the second content storage unit 205 and transmitted from the server device 201 to the terminal device 101; the terminal device 101 can then play back the BGM or other audio when displaying the photographic images or a slide show. In this case too, information such as a URL for accessing the work purchase screen of the BGM provider may be attached to the audio information as accompanying information, and a button B2 or the like for launching a browser displayed on the screen of the display unit 109 as shown in FIG. 13. When the button B2 is operated through an input operation on the input unit 102, the content management unit 103 may respond by launching a browser or the like and calling up the work purchase screen corresponding to the URL via the Internet. This allows BGM providers to introduce and sell music of their own composition to a wide audience. In particular, since semi-professional and independent artists enjoy less recognition for their music than professional artists, cooperation with such a photo-image providing service enables them to offer and advertise their works widely.
The system of the present embodiment can also be applied to various other information services, for example to an EC service. In the terminal device 101, images of products the user has already purchased, together with their accompanying information, are stored in the first content storage unit 104; the content classification unit 105 classifies the product images into one or more content groups on the basis of the accompanying information; the search condition generation unit 106 generates a search condition from the accompanying information of the product images in each content group; and this search condition is transmitted to the server device 201. The server device 201 retrieves product images matching the search condition from among the product images stored in the second content storage unit 205 and transmits them to the terminal device 101. In the terminal device 101, the product images from the first content storage unit 104 are displayed on the screen of the display unit 109 as purchased items, as shown in FIG. 14(a), after which the product images retrieved from the second content storage unit 205 are displayed as recommendations, as shown in FIG. 14(b). Alternatively, as shown in FIG. 15, the purchased product images and the recommended product images are laid out together and displayed on the screen of the display unit 109.
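The EC-service flow above can be sketched as follows; using a product category as the accompanying information that drives both classification and the search condition is an assumption made for illustration:

```python
from collections import defaultdict

# Terminal side: classify purchased product records into content groups
# by an attribute of their accompanying information (here, "category").
def classify_by_category(purchased):
    groups = defaultdict(list)
    for item in purchased:
        groups[item["category"]].append(item)
    return groups

# Server side: each group's category acts as the search condition; return
# catalog products in that category that the user has not already bought.
def recommend(groups, server_catalog):
    recs = []
    for category, owned in groups.items():
        recs += [p for p in server_catalog
                 if p["category"] == category and p not in owned]
    return recs

purchased = [{"name": "mug", "category": "kitchen"}]
catalog = [{"name": "kettle", "category": "kitchen"},
           {"name": "lamp", "category": "decor"}]
print(recommend(classify_by_category(purchased), catalog))
```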
When such an EC service is implemented, items such as paintings, various amateur works, and the various goods on the shelves of supermarkets and other stores can be treated as content, shown in a slide show, and offered for sale.
Furthermore, not only photographic images and audio information but also moving images, text, and the like may be handled as content.
FIG. 16 is a block diagram showing a modification of the terminal device 101 of FIG. 1. In this modification, a first content storage unit 111 that stores the user's personal content is provided in server storage or the like on the network N, and the terminal device 101 is provided with a first content acquisition unit 112 for accessing the first content storage unit 111 on the network N. A second content storage unit 113 that stores content available to an unspecified number of people is likewise provided in server storage or the like on the network N.
The first content acquisition unit 112 of the terminal device 101 accesses the first content storage unit 111 on the network N via the communication unit 107 and reads and acquires content from it. Thereafter, processing similar to that of the system of FIG. 1 is performed: the content in the first content storage unit 111 is classified into one or more content groups, a search condition is generated for each content group, the content in the second content storage unit 113 is searched through the network N on the basis of this search condition and taken into the terminal device 101, and the content from the first content storage unit 111 and the content from the second content storage unit 113 are output together.
FIG. 17 shows a modification of the content output system of FIG. 1. Here, the server device 201 is provided with the first content storage unit 104, and the terminal device 101 is provided with a first content acquisition unit 112 for accessing the first content storage unit 104 over the network N.
The first content acquisition unit 112 of the terminal device 101 accesses the first content storage unit 104 of the server device 201 via the communication unit 107 and reads and acquires content from it. Thereafter, processing similar to that of the system of FIG. 1 is performed: the content in the first content storage unit 104 is classified into one or more content groups, a search condition is generated for each content group, the content in the second content storage unit 205 of the server device 201 is searched on the basis of this search condition and taken into the terminal device 101, and the content from the first content storage unit 104 and the content from the second content storage unit 205 are output together.
FIG. 18 shows another modification of the content output system of FIG. 1. Here, the server device 201 is provided with the first content storage unit 104, the content management unit 103, the content classification unit 105, and the search condition generation unit 106.
In the terminal device 101, when content classification is instructed through an input operation on the input unit 102, the control unit 115 responds by transmitting a content classification instruction from the communication unit 107 to the server device 201 through the network N.
In the server device 201, the content classification instruction is received by the communication unit 202 and input to the content management unit 103. Thereafter, processing similar to that of the system of FIG. 1 is performed: the content in the first content storage unit 111 is classified into content groups, search conditions are generated, and the content in the second content storage unit 113 is searched on the basis of these search conditions. Then, for each content group, the content from the first content storage unit 111 and the content from the second content storage unit 113 are read out and returned to the terminal device 101 through the network N.
The terminal device 101 receives the plurality of content items for each content group and displays them on the screen of the display unit 109.
FIG. 19 is a block diagram showing an embodiment of the content output device of the present invention. The content output device 121 of this embodiment is configured by adding a second content storage unit 205 and a conversion table storage unit 204 to the terminal device 101 of FIG. 1. The second content storage unit 205 stores a large amount of content, collected through the network N from other terminal devices, server devices, and the like. The conversion table storage unit 204 performs the same function as the conversion table storage unit 204 in the server device 201 of FIG. 1.
Since such a content output device 121 incorporates the second content storage unit 205 and the conversion table storage unit 204, there is no need to access an external server device as with the terminal device 101 of FIG. 1; when content in the first content storage unit 104 is output, other content related to it can be retrieved from among the content stored in the second content storage unit 205 and output together with it.
Preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but it goes without saying that the present invention is not limited to these examples. It is evident that those skilled in the art can conceive of various changes and modifications within the scope set forth in the claims, and these naturally belong to the technical scope of the present invention.
For example, not only photographic images and audio information but also other still images such as graphics, moving images, and other content can be handled by the present invention in the same manner as photographic images and audio information.
That is, in the embodiments described above, the content is a still image such as a photographic image, the position information is the shooting position of the photographic image, and the date-and-time information is the shooting date and time; however, the content, its position information, and its date-and-time information in the present invention are not limited to these. For example, the content may be a moving image, music, or audio information as well as a still image such as a photographic image. When the content is a still image or a moving image, outputting the content means displaying the still or moving image; when the content is music or audio information, outputting the content means playing back the music or audio.
When the content is a still image such as a photographic image or a moving image, its position information may indicate, for example, the position at which the still or moving image was captured, and its date-and-time information may indicate the date and time of capture. In such a case, content with mutually related capture positions or capture dates and times is output.
When the content is music or audio information, its position information may indicate, for example, the position at which the music or audio information was recorded, and its date-and-time information may indicate the date and time of recording or of distribution. In such a case, content with mutually related recording positions, recording dates and times, or distribution dates and times is output.
 Furthermore, the accompanying information of the content described above may be any information attached to the content and is not limited to date/time information and position information. For example, the accompanying information may indicate both date/time information and position information, or only one of the two.
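As a rough illustration of how accompanying information of either kind might be represented and turned into a search condition, the following Python sketch builds a date range and a location bounding box from whatever metadata a content group actually carries. All names here (`Content`, `make_search_condition`) are illustrative only and do not appear in the patent:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class Content:
    """A content item with optional accompanying information
    (date/time information, position information, or both)."""
    data: bytes
    taken_at: Optional[datetime] = None   # date/time information
    latitude: Optional[float] = None      # position information
    longitude: Optional[float] = None


def make_search_condition(group: List[Content]) -> dict:
    """Derive a search condition from whatever accompanying
    information the group's contents actually carry."""
    cond = {}
    times = [c.taken_at for c in group if c.taken_at is not None]
    if times:
        cond["from"], cond["to"] = min(times), max(times)
    lats = [c.latitude for c in group if c.latitude is not None]
    lons = [c.longitude for c in group if c.longitude is not None]
    if lats and lons:
        # bounding box: (min lat, min lon, max lat, max lon)
        cond["bbox"] = (min(lats), min(lons), max(lats), max(lons))
    return cond
```

A group whose contents carry only date/time information yields a condition containing only the date range, consistent with the paragraph above allowing either kind of accompanying information alone.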
 Neither the first nor the second content storage unit needs to be a single unit; a plurality of units may be provided, and a plurality of types of memory devices may be used in combination. For example, a plurality of first content storage units may be provided in one or both of the terminal device and the server device, or may be distributed over the network. The same applies to the second content storage unit.
 Furthermore, instead of using the display unit 109 of the terminal device 101, the terminal device 101 may output a TV image signal to a TV so that the same display content as on the screen of the display unit 109 is shown on the TV screen. Alternatively, the functions of the terminal device 101 may be built into the TV set. In this case, the user can view content and receive EC (electronic commerce) services in the same way as watching TV programs.
 Alternatively, an image signal for another type of display device may be output so that the same display content as on the screen of the display unit 109 is shown on the screen of that display device. Examples of other types of display devices include portable terminals.
 The present invention is not limited to a content output device or a content output system; it also encompasses a content output method, a content output program for causing a computer to execute the steps of the content output method, and a recording medium storing the content output program. The computer may be any device capable of executing the program.
 The computer can implement the present invention by reading the program from a recording medium or receiving it over a communication network, and then executing it. In a system composed of a plurality of computers and the Internet, processing can be distributed across a plurality of terminals. The program is therefore applicable not only to a single terminal such as a computer but also to such a system.
 The present invention can be embodied in various other forms without departing from its spirit or principal features. The embodiments described above are therefore merely illustrative in every respect and must not be interpreted restrictively. The scope of the present invention is defined by the claims and is in no way bound by the text of the specification. Furthermore, all variations and modifications within the scope equivalent to the claims fall within the scope of the present invention.
 This application claims priority based on Japanese Patent Application No. 2009-107232, filed in Japan on April 27, 2009, the entire contents of which are incorporated herein by reference.
 The present invention is applicable to personal computers and similar devices that display or play back content composed of images, audio, and the like.
101 Terminal device
102 Input unit
103 Content management unit
104 First content storage unit
105 Content classification unit
106 Search condition generation unit
107 Communication unit
108 Display generation unit
109 Display unit
201 Server device
202 Communication unit
203 Search unit
204 Conversion table storage unit
205 Second content storage unit

Claims (19)

  1.  A content output system that performs information communication between a terminal device and a server device through a network, wherein:
     the terminal device comprises
     a first content storage unit storing a plurality of contents,
     a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition,
     a search condition generation unit that, for each of the content groups, generates a search condition based on accompanying information of the contents classified into that content group,
     a communication unit that, for each of the content groups, transmits the search condition generated by the search condition generation unit to the server device and, as a response to the transmission of the search condition, receives content matching the search condition from the server device, and
     an output unit that, for each of the content groups, outputs together the contents classified into that content group and the content received from the server device; and
     the server device comprises
     a second content storage unit storing a plurality of contents,
     a communication unit that receives the search condition from the terminal device and transmits content matching the search condition to the terminal device, and
     a search unit that searches the plurality of contents stored in the second content storage unit for content matching the received search condition.
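The terminal-side flow recited in claim 1 — classify local contents into groups, query the server once per group, and output the local and received contents together — might be outlined as follows. This is a hypothetical sketch only; the claim does not prescribe any particular implementation, and these function names are not from the patent:

```python
def terminal_output_flow(local_contents, classify, generate_condition,
                         query_server, output):
    """Claim 1, terminal side: classify local contents into groups,
    query the server once per group, and output local and received
    contents together."""
    for group in classify(local_contents):
        condition = generate_condition(group)   # from accompanying info
        received = query_server(condition)      # server-side search
        output(group + received)                # output both together
```

The server side of the claim reduces to `query_server`: receive a condition, search the second content storage unit, and return the matching contents.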
  2.  The content output system according to claim 1, wherein the output unit sets a display layout for the contents classified into the content group and the content received from the server device, and displays these contents in the display layout.
  3.  The content output system according to claim 1, wherein the output unit outputs the contents classified into the content group and the content received from the server device in a mutually distinguishable manner.
  4.  The content output system according to claim 1, wherein the output unit outputs the contents classified into the content group and the content received from the server device together with the accompanying information of each of these contents.
  5.  The content output system according to claim 1, wherein the terminal device comprises an input operation unit for inputting the classification condition.
  6.  The content output system according to claim 1, wherein the classification condition is set in advance, is changed by an input operation on an input operation unit, or is entered and set by an input operation on an input operation unit.
  7.  The content output system according to claim 1, wherein the accompanying information is position information or date/time information.
  8.  The content output system according to claim 7, wherein the classification unit compares the position information or date/time information of each content stored in the first content storage unit with a threshold, thereby classifying the plurality of contents stored in the first content storage unit into one or more content groups.
  9.  The content output system according to claim 7, wherein the classification unit arranges the contents stored in the first content storage unit in time series using the date/time information of each content, and then classifies the arranged contents into one or more content groups using the position information of each content.
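Claims 8 and 9 together suggest one concrete classification strategy: order the contents by date/time, then cut the ordered sequence into groups wherever the position jumps by more than a threshold. A minimal sketch, assuming contents are `(timestamp, latitude, longitude)` tuples and a distance threshold in kilometres (both representations are assumptions for illustration, not taken from the patent):

```python
import math


def classify_time_then_location(contents, distance_threshold_km=5.0):
    """Order contents by date/time information, then start a new
    group whenever the position moves farther than the threshold
    from the previous content (claims 8-9, one possible reading)."""
    ordered = sorted(contents, key=lambda c: c[0])  # time-series arrangement
    groups = []
    for item in ordered:
        if groups:
            _, plat, plon = groups[-1][-1]
            # crude equirectangular distance in km; enough for a sketch
            dx = (item[2] - plon) * math.cos(math.radians(plat)) * 111.32
            dy = (item[1] - plat) * 110.57
            if math.hypot(dx, dy) <= distance_threshold_km:
                groups[-1].append(item)
                continue
        groups.append([item])
    return groups
```

Per claim 10, the threshold itself may be fixed, user-adjusted, or derived from the accompanying information (for example, widened when positions are sparse).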
  10.  The content output system according to claim 8, wherein the threshold is set in advance, is changed by an input operation on an input operation unit, is entered and set by an input operation on an input operation unit, or is changed based on accompanying information of the contents.
  11.  The content output system according to claim 1, wherein the terminal device transmits the accompanying information of each content stored in the first content storage unit to the server device, and
     the server device obtains a classification condition based on the accompanying information of each content received from the terminal device and transmits the classification condition to the terminal device.
  12.  The content output system according to claim 1, wherein the content transmitted from the server device to the terminal device includes an address on the Internet, and
     in response to an input operation on the terminal device, the address is transmitted from the terminal device to the server device or another server device; the server device that received the address collects information based on the address and transmits this information to the terminal device, and the terminal device displays this information.
  13.  A content output system that performs information communication between a terminal device and a server device through a network, wherein:
     the server device comprises
     a first content storage unit storing a plurality of contents,
     a second content storage unit storing a plurality of contents,
     a communication unit that transmits the plurality of contents stored in the first content storage unit to the terminal device, receives a search condition from the terminal device as a response to the transmission of the contents, and transmits content matching the search condition to the terminal device, and
     a search unit that searches the plurality of contents stored in the second content storage unit for content matching the received search condition; and
     the terminal device comprises
     a classification unit that receives from the server device the plurality of contents stored in the first content storage unit and classifies the received contents into one or more content groups based on a classification condition,
     a search condition generation unit that, for each of the content groups, generates a search condition based on accompanying information of the contents classified into that content group,
     a communication unit that, for each of the content groups, transmits the search condition generated by the search condition generation unit to the server device and, as a response to the transmission of the search condition, receives content matching the search condition from the server device, and
     an output unit that, for each of the content groups, outputs the contents classified into that content group and each content received from the server device.
  14.  A content output system that performs information communication between a terminal device and a server device through a network, wherein:
     the server device comprises
     a first content storage unit storing a plurality of contents,
     a second content storage unit storing a plurality of contents,
     a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition,
     a search condition generation unit that, for each of the content groups, generates a search condition based on accompanying information of the contents classified into that content group,
     a search unit that, for each of the content groups, searches the plurality of contents stored in the second content storage unit for content matching the search condition generated by the search condition generation unit, and
     a communication unit that, for each of the content groups, transmits together to the terminal device the contents classified into that content group and the content found by the search unit; and
     the terminal device comprises
     a communication unit that receives each of the contents from the server device, and
     an output unit that outputs these contents.
  15.  A server device that performs information communication with a terminal device through a network, comprising:
     a first content storage unit storing a plurality of contents;
     a second content storage unit storing a plurality of contents;
     a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition;
     a search condition generation unit that, for each of the content groups, generates a search condition based on accompanying information of the contents classified into that content group;
     a search unit that, for each of the content groups, searches the plurality of contents stored in the second content storage unit for content matching the search condition generated by the search condition generation unit; and
     a communication unit that, for each of the content groups, transmits together to the terminal device the contents classified into that content group and the content found by the search unit.
  16.  A content output device comprising:
     a first content storage unit storing a plurality of contents;
     a second content storage unit storing a plurality of contents;
     a classification unit that classifies the plurality of contents stored in the first content storage unit into one or more content groups based on a classification condition;
     a search condition generation unit that, for each of the content groups, generates a search condition based on accompanying information of the contents classified into that content group;
     a search unit that, for each of the content groups, searches the plurality of contents stored in the second content storage unit for content matching the search condition generated by the search condition generation unit; and
     an output unit that, for each of the content groups, outputs together the contents classified into that content group and the content found by the search unit.
  17.  A content output method comprising:
     a first content storage step of storing a plurality of contents;
     a second content storage step of storing a plurality of contents;
     a classification step of classifying the plurality of contents stored in the first content storage step into one or more content groups based on a classification condition;
     a search condition generation step of, for each of the content groups, generating a search condition based on accompanying information of the contents classified into that content group;
     a search step of, for each of the content groups, searching the plurality of contents stored in the second content storage step for content matching the search condition generated in the search condition generation step; and
     an output step of, for each of the content groups, outputting together the contents classified into that content group and the content found in the search step.
  18.  A content output program for causing a computer to execute each step of the content output method according to claim 17.
  19.  A recording medium storing the content output program according to claim 18.
PCT/JP2010/057464 2009-04-27 2010-04-27 Content output system WO2010126042A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009107232A JP2010257266A (en) 2009-04-27 2009-04-27 Content output system, server device, device, method, and program for outputting content, and recording medium storing the content output program
JP2009-107232 2009-04-27

Publications (1)

Publication Number Publication Date
WO2010126042A1 true WO2010126042A1 (en) 2010-11-04

Family

ID=43032186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/057464 WO2010126042A1 (en) 2009-04-27 2010-04-27 Content output system

Country Status (2)

Country Link
JP (1) JP2010257266A (en)
WO (1) WO2010126042A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2011202609B2 (en) * 2011-05-24 2013-05-16 Canon Kabushiki Kaisha Image clustering method
JP6168882B2 (en) * 2013-07-04 2017-07-26 キヤノン株式会社 Display control apparatus, control method thereof, and control program
JP2016031439A (en) * 2014-07-28 2016-03-07 ソニー株式会社 Information processing apparatus and information processing method, computer program, and image display system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004222056A (en) * 2003-01-16 2004-08-05 Fuji Photo Film Co Ltd Method, device, and program for preserving image
JP2007034403A (en) * 2005-07-22 2007-02-08 Nikon Corp Image display device and image display program
JP2008102790A (en) * 2006-10-19 2008-05-01 Kddi Corp Retrieval system
JP2009516951A (en) * 2005-11-21 2009-04-23 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ System and method for finding related audio companions using digital image content features and metadata
JP2009086727A (en) * 2007-09-27 2009-04-23 Fujifilm Corp Image display device and program

Also Published As

Publication number Publication date
JP2010257266A (en) 2010-11-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10769740

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10769740

Country of ref document: EP

Kind code of ref document: A1