WO2014083608A1 - Computer, computer system, and data management method - Google Patents


Info

Publication number
WO2014083608A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
recognition
unit
structure data
unstructured
Prior art date
Application number
PCT/JP2012/080591
Other languages
French (fr)
Japanese (ja)
Inventor
Yusuke Fujita
Nobuo Nukaga
Original Assignee
Hitachi, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd.
Priority to PCT/JP2012/080591 (WO2014083608A1)
Priority to JP2014549661A (JP5891313B2)
Publication of WO2014083608A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Definitions

  • The present invention relates to a computer, a system, and a method for executing recognition processing on unstructured data stored in a storage device and generating, in the storage device, metadata including the result of the recognition processing.
  • Automating information extraction from unstructured data is required by many businesses that handle large amounts of data.
  • To achieve this, techniques such as image recognition, speech recognition, and document structure recognition are required.
  • In addition, a mechanism for linking a large-scale storage device and a recognition system is also important.
  • However, the system of Patent Document 1 is dedicated to video data and audio data, and it is difficult to configure it to recognize data of other types, such as documents, in conjunction with a storage device that stores such data.
  • Moreover, the mechanism for linking a storage device and a recognition system is generally complicated. This is because many items must be considered, such as a database for storing recognition results, a function for notifying that data has been recognized, throughput when a large amount of data is input simultaneously, and linkage among multiple recognition systems.
  • The present invention has been made in view of these points, and an object thereof is to provide an apparatus, a system, and a method capable of flexibly linking a storage device and an arbitrary recognition system.
  • A typical example of the invention disclosed in the present application is as follows: a computer that manages unstructured data having no fixed data structure and structured data having a fixed data structure, the computer comprising a processor, a memory connected to the processor, a storage device connected to the processor, and an I/O interface connected to the processor; at least one recognition unit that executes, on the unstructured data, recognition processing of a predetermined data type using a predetermined dictionary; and a structured data generation unit that generates, as the result of the recognition processing executed by the recognition unit, structured data including identification information of the recognition unit and identification information of the dictionary used by the recognition unit.
  • According to the present invention, structured data including identification information of the recognition processing and identification information of the dictionary used for the recognition processing is generated.
  • This enables various controls using the results of recognition processing on unstructured data, such as simultaneous operation of recognition systems, suppression of unnecessary recognition processing, and integration of recognition results output from a plurality of recognition systems.
  • Brief description of the drawings: an explanatory diagram showing an example of a structured recognition result in Embodiment 1 of the present invention; a flowchart explaining the structured data association processing in Embodiment 1; an explanatory diagram showing an example of structured data in which a plurality of structured recognition results are reflected in Embodiment 1; a flowchart explaining the recognition function registration processing in Embodiment 1; and a block diagram explaining the configuration of the unstructured data storage apparatus in Embodiment 2.
  • FIG. 1 is an explanatory diagram showing a configuration example of a computer system according to the first embodiment of the present invention.
  • the computer system includes a storage server 31, a management server 32, a video server 33, and an audio server 34.
  • the storage server 31, the management server 32, the video server 33, and the audio server 34 are connected to each other via the relay device 38.
  • the computer system may include a terminal used by a user or the like.
  • the storage server 31, the management server 32, the video server 33, and the audio server 34 are also referred to as servers.
  • the storage server 31 of this embodiment includes a CPU 35, a memory 36, a communication device 37, and a storage device 39.
  • As the storage device 39, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive) can be used.
  • the storage server 31 may be connected to an external storage apparatus having a control unit, an I / O interface, and a plurality of storage devices.
  • management server 32, the video server 33, and the audio server 34 of the present embodiment have the same hardware configuration.
  • the management server 32, the video server 33, and the audio server 34 include a CPU 35, a memory 36, and a communication device 37.
  • the CPU 35 executes a program stored in memory 36.
  • the functions of the server can be realized by the CPU 35 executing the program.
  • the memory 36 stores a program executed by the CPU 35 and various information necessary for executing the program.
  • the communication device 37 is a device for communicating with other servers.
  • the communication device 37 may be a network interface, for example.
  • the program executed by the CPU 35 transmits / receives data to / from each other by communicating with other servers using the communication device 37.
  • the relay device 38 receives data from an arbitrary device and relays data transmission / reception between devices by transmitting the received data to other devices.
  • the relay device 38 includes a CPU (not shown), a memory (not shown), and a communication device (not shown).
  • the storage server 31 is a computer that stores various data.
  • the memory 36 of the storage server 31 stores programs for realizing the data receiving unit 2, the storage unit 3, the data reference unit 4, and the structural data reference unit 5. Further, the storage device 39 of the storage server 31 stores unstructured data 50, structured data 51, and related information 52.
  • the data receiving unit 2 receives data stored in the storage server 31 from a user or the like.
  • the storage unit 3 stores the received data in the storage device 39.
  • the data reference unit 4 returns the unstructured data 50 stored in the storage device 39 as a response in accordance with an instruction from the user or the like.
  • the structure data reference unit 5 returns the structure data 51 stored in the storage device 39 as a response in accordance with an instruction from the user or the like.
  • The unstructured data 50 is data whose structure is not defined and that cannot easily be managed in a database.
  • The structured data 51 is data whose structure is defined and that can easily be managed in a database.
  • the structural data 51 corresponds to the metadata of the unstructured data 50.
  • the related information 52 is information for managing the correspondence relationship between the non-structure data 50 and the structure data 51.
  • the management server 32 is a computer that manages data stored in the storage server 31.
  • The memory 36 of the management server 32 stores programs for realizing the crawling processing unit 6, the data distribution unit 7, the voice filter unit 8, the voice recognition unit 9, the voice post-processing unit 10, the video filter unit 11, the video recognition unit 12, the video post-processing unit 13, the recognition result receiving unit 14, the structured data association processing unit 15, the data distribution management unit 16, and the recognition function registration unit 17.
  • the crawling processing unit 6 extracts the unstructured data 50 to be processed from the unstructured data 50 stored in the storage device 39.
  • the data distribution unit 7 transmits the extracted unstructured data 50 to a predetermined recognition function unit or device.
  • the voice filter unit 8 determines whether or not to execute voice data recognition processing on the unstructured data 50.
  • the voice recognition unit 9 performs voice data recognition processing on the unstructured data 50. As a result, the recognition result of the voice data is output.
  • the speech post-processing unit 10 converts the recognition result of the speech data output from the speech recognition unit 9 into data in a format that can be added to the structure data 51.
  • the video filter unit 11 determines whether to perform video data recognition processing on the unstructured data 50.
  • the video recognition unit 12 executes video data recognition processing on the unstructured data 50. Thereby, the recognition result of the video data is output.
  • the video post-processing unit 13 converts the recognition result of the video data output from the video recognition unit 12 into data in a format that can be added to the structure data 51.
  • the recognition result receiving unit 14 receives and temporarily holds the recognition results output from the audio post-processing unit 10 and the video post-processing unit 13.
  • the structural data association processing unit 15 reflects the recognition result for the non-structural data 50 in the structural data 51 currently stored.
  • the data distribution management unit 16 manages information for determining a recognition function unit to which the data distribution unit 7 distributes data.
  • the recognition function registration unit 17 executes processing for newly adding a recognition function unit.
  • the video server 33 is a computer that executes video data recognition processing.
  • the memory 36 of the video server 33 stores programs for realizing the video dictionary unit 19 and the video recognition processing unit 42.
  • the video dictionary unit 19 manages a dictionary used for video data recognition processing.
  • the video recognition processing unit 42 executes video data recognition processing. Note that the video data recognition process may be performed using a known technique, and a description thereof will be omitted.
  • the voice server 34 is a computer that executes voice data recognition processing.
  • the memory 36 of the voice server 34 stores a program for realizing the voice dictionary unit 18 and the voice recognition processing unit 43.
  • the voice dictionary unit 18 manages a dictionary used for voice data recognition processing.
  • the voice recognition processing unit 43 executes voice data recognition processing.
  • Note that the voice data recognition processing may also be performed using a known technique, and a description thereof is omitted.
  • FIG. 2 is an explanatory diagram showing an example of the related information 52 in the first embodiment of the present invention.
  • the related information 52 stores information for managing the unstructured data 50 and the structured data 51 associated with the unstructured data 50 in an integrated manner.
  • the related information 52 includes a URL 61, an unstructured data path 62, a structured data path 63, and an update time 64.
  • the URL 61 stores a URL (Uniform Resource Locator) used when accessing the unstructured data 50 or the structured data 51 stored in the storage server 31.
  • the unstructured data path 62 stores the path name of the storage area in which the unstructured data 50 is stored.
  • the structure data path 63 stores the path name of the storage area in which the structure data 51 is stored.
  • By holding the related information 52, the storage server 31 can manage each URL, the corresponding unstructured data 50, and the corresponding structured data 51 in association with one another.
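As a rough sketch, the related information 52 can be modeled as a lookup table keyed by URL. The field names and the epoch-based update time below are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RelatedInfoEntry:
    url: str                         # URL 61: the common access key
    unstructured_path: str           # unstructured data path 62
    structured_path: Optional[str]   # structured data path 63 (None until structured data exists)
    update_time: float               # update time 64, used later by crawling

# the related information 52 as an in-memory index keyed by URL
related_info: dict[str, RelatedInfoEntry] = {}

def register(entry: RelatedInfoEntry) -> None:
    related_info[entry.url] = entry

def lookup(url: str) -> Optional[RelatedInfoEntry]:
    return related_info.get(url)

register(RelatedInfoEntry(
    url="http://server/wav/20120401.wav",
    unstructured_path="/data/wav/20120401.wav",
    structured_path=None,            # blank at storage time, as described above
    update_time=1333238400.0,
))
```

Both the data reference processing and the structured data reference processing then reduce to a `lookup` by URL followed by reading the respective path.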
  • the processing of this system is divided into seven processes: data storage processing, data reference processing, structural data reference processing, data crawling processing, data recognition processing, structural data association processing, and recognition function registration processing.
  • a predetermined recognition process is performed on the stored unstructured data 50.
  • the storage server 31 and the management server 32 cooperate with each other to generate structure data using the recognition processing result.
  • the management server 32 reflects the newly generated structure data in the structure data 51 that has a corresponding relationship with the non-structure data 50.
  • FIG. 3 is a flowchart for explaining data storage processing according to the first embodiment of the present invention.
  • FIG. 4 is an explanatory diagram showing an example of the structure data in the first embodiment of the present invention.
  • When the storage server 31 receives unstructured data from an external device, such as an external PC or server, it starts the data storage processing.
  • the data receiving unit 2 receives unstructured data transmitted from the external device via the relay device 38 (step S101).
  • the data reception unit 2 receives unstructured data transmitted using, for example, HTTP (HyperText Transfer Protocol).
  • the present invention is not limited to the type of unstructured data, and the data receiving unit 2 can receive arbitrary files (unstructured data) such as documents, images, sounds, and moving images.
  • the data receiving unit 2 generates a URL for accessing the received unstructured data (step S102).
  • For example, a method of using the URL specified in the HTTP request as-is can be considered.
  • Alternatively, the data receiving unit 2 may generate a unique URL based on the type of the data and the time of reception.
  • In that case, a URL such as "http://server/wav/20120401.wav" is generated.
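The URL-generation step might be sketched as follows, assuming (as the example above suggests) that the path is built from the data type and the reception date; the server name is taken from the example:

```python
import datetime

def generate_url(server: str, data_type: str,
                 received_at: datetime.datetime) -> str:
    # Build a unique access URL from the data type and the reception date,
    # mirroring the "http://server/wav/20120401.wav" example above.
    return (f"http://{server}/{data_type}/"
            f"{received_at.strftime('%Y%m%d')}.{data_type}")

url = generate_url("server", "wav", datetime.datetime(2012, 4, 1))
```

A real implementation would also need to disambiguate multiple files received on the same day (e.g. with a sequence number), which is omitted here.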
  • the storage unit 3 stores the received unstructured data in the storage device 39 (step S103), and updates the related information 52 (step S104). Thereafter, the storage server 31 ends the process. Specifically, the following processing is executed.
  • the storage unit 3 adds a new entry to the related information 52, and stores the URL generated in step S102 in the URL 61 of the entry.
  • The storage unit 3 stores, in the unstructured data path 62 of the added entry, the path name under which the received unstructured data is stored, and stores the time at which the unstructured data was stored as the update time 64.
  • the structure data path 63 remains blank. This is because the structure data is usually not included when the non-structure data is stored.
  • the data accepting unit 2 can accept any structural data as well as unstructured data.
  • structural data including information such as the owner of unstructured data as shown in FIG. 4 is added to the unstructured data.
  • the storage unit 3 stores the unstructured data and the structured data in the storage device 39, respectively.
  • the storage unit 3 stores the path name in which the structure data is stored in the structure data path 63 of the added entry.
  • the storage unit 3 stores the unstructured data 50 in association with the URL. Therefore, the following data reference process and structural data reference process are possible.
  • the data reference unit 4 searches the entry corresponding to the specified URL with reference to the URL 61 of the related information 52 based on the URL specified by the user. Further, the data reference unit 4 refers to the unstructured data path 62 of the retrieved entry, acquires the unstructured data 50, and returns the acquired unstructured data 50 to the user.
  • the structure data reference unit 5 searches the entry corresponding to the specified URL with reference to the URL 61 of the related information 52 based on the URL specified by the user. Further, the structure data reference unit 5 refers to the structure data path 63 of the retrieved entry, acquires the structure data 51, and returns the acquired structure data 51 to the user.
  • the system can be configured to return the unstructured data 50 or the structured data 51 acquired based on the requested URL to the user using HTTP.
  • For example, when the unstructured data 50 is returned to the user using HTTP, the system can be configured so that the data reference unit 4 returns the unstructured data 50 together with an HTTP header to which the content type (data type) of the unstructured data 50 has been added.
  • Further, when only the HTTP header is requested, the data reference unit 4 may return only the content type without returning the entire unstructured data 50.
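The reference behavior (full body for a normal request, content type only for a header-only request) can be sketched as follows. The extension-to-content-type mapping is an assumption for illustration:

```python
# assumed mapping; a real server would carry a fuller table
CONTENT_TYPES = {".wav": "audio/wav", ".mpg": "video/mpeg"}

def reference_data(url: str, store: dict, head_only: bool = False):
    """Return (content_type, body) for stored unstructured data.
    For a header-only (HEAD-style) request, the body is omitted."""
    body = store[url]
    content_type = next(
        (ct for ext, ct in CONTENT_TYPES.items() if url.endswith(ext)),
        "application/octet-stream",
    )
    if head_only:
        return content_type, None   # only the header information
    return content_type, body

store = {"http://server/wav/20120401.wav": b"RIFF...WAVE..."}
ct, body = reference_data("http://server/wav/20120401.wav", store)
ct_only, no_body = reference_data("http://server/wav/20120401.wav",
                                  store, head_only=True)
```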
  • FIG. 5 is a flowchart for explaining data crawling processing according to the first embodiment of the present invention.
  • Management server 32 repeatedly executes data crawling processing. For example, the management server 32 executes the data crawling process periodically or when receiving an instruction from a user or the like.
  • the crawling processing unit 6 inquires of the storage unit 3 of the storage server 31 and acquires a list of URLs 61 of the related information 52 (step S201). That is, the unstructured data 50 to be processed is extracted.
  • the crawling processing unit 6 makes an inquiry including the target time.
  • the storage unit 3 refers to the update time 64 stored in the related information 52, lists only the URL 61 of the latest data, and transmits the list of URLs 61 to the crawling processing unit 6.
  • The crawling processing unit 6 temporarily holds the latest update time 64 among the listed URLs 61, and on the next inquiry requests only the URLs 61 whose update time is later than that time.
  • Note that when a large amount of unstructured data is stored at once, the list of URLs 61 may become very large.
  • In that case, the storage unit 3 may list only a predetermined number of URLs 61 in order from the oldest update time 64. As will be described later, since the data crawling processing is executed repeatedly after waiting a predetermined time, it is not necessary to list all target URLs 61 at once.
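The cursor-style crawl described above (hold the newest update time seen, ask only for newer entries, cap the batch size) can be sketched as follows; the entry fields are illustrative:

```python
def crawl_urls(related_info: list[dict], since: float,
               limit: int = 100) -> tuple[list[str], float]:
    """Return up to `limit` URLs updated after `since`, oldest first,
    together with the newest update time seen (the next cursor)."""
    fresh = sorted(
        (e for e in related_info if e["update_time"] > since),
        key=lambda e: e["update_time"],
    )[:limit]
    urls = [e["url"] for e in fresh]
    cursor = max((e["update_time"] for e in fresh), default=since)
    return urls, cursor

entries = [
    {"url": "http://server/a.wav", "update_time": 10.0},
    {"url": "http://server/b.mpg", "update_time": 20.0},
    {"url": "http://server/c.wav", "update_time": 30.0},
]
urls, cursor = crawl_urls(entries, since=10.0, limit=2)
```

A second call with `since=cursor` returns nothing until new data arrives, which is exactly why the repeated, time-spaced crawl need not list everything at once.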
  • the data distribution unit 7 distributes the list of URLs 61 acquired by the crawling processing unit 6 to a predetermined recognition function unit (step S202).
  • the recognition function unit is a function unit that executes recognition processing, and includes a filter unit, a recognition unit, a dictionary unit, and a post-processing unit.
  • the filter unit determines whether or not the unstructured data 50 is a recognition target based on the URL 61.
  • the recognition unit acquires the unstructured data 50 from the storage server 31 based on the URL 61, and executes recognition processing on the acquired unstructured data 50 using the dictionary data held by the dictionary unit.
  • The post-processing unit generates structured data using the recognition result. That is, the post-processing unit corresponds to a functional unit (structured data generation unit) that generates structured data. Specifically, the post-processing unit converts the recognition result, which indicates the contents of the unstructured data 50, into data having a fixed structure, and generates structured data by assigning to that data an ID unique to the recognition process and the ID of the dictionary used.
  • the recognition result is converted to XML format data, but the present invention is not limited to this. It suffices if it can be converted into a data format having at least a certain structure.
  • The voice recognition function unit that performs voice recognition processing includes the voice filter unit 8, the voice recognition unit 9, the voice recognition processing unit 43, the voice dictionary unit 18, and the voice post-processing unit 10.
  • Similarly, the video recognition function unit that performs video recognition processing includes the video filter unit 11, the video recognition unit 12, the video recognition processing unit 42, the video dictionary unit 19, and the video post-processing unit 13.
  • a publish / subscribe model is used as a message model for distributing the URL 61.
  • an audio filter unit 8 and a video filter unit 11 that distribute messages are registered in advance in the data distribution management unit 16 as subscriber information.
  • the data distribution unit 7 distributes the list of URLs 61 as messages to the audio filter unit 8 and the video filter unit 11 based on the subscriber information registered in the data distribution management unit 16.
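The publish/subscribe distribution of the URL list can be sketched as follows. Subscriber information held by the data distribution management unit 16 is modeled simply as a list of filter callbacks; this is an illustrative simplification:

```python
from typing import Callable

class DataDistributionManager:
    """Holds subscriber information (here: filter callbacks)."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[list[str]], None]] = []

    def subscribe(self, filter_fn: Callable[[list[str]], None]) -> None:
        self.subscribers.append(filter_fn)

def distribute(manager: DataDistributionManager, urls: list[str]) -> None:
    # Every registered filter unit receives the same list of URLs,
    # so several recognition processes can act on the same data.
    for fn in manager.subscribers:
        fn(list(urls))

received: dict[str, list[str]] = {"audio": [], "video": []}
manager = DataDistributionManager()
manager.subscribe(lambda urls: received["audio"].extend(urls))  # audio filter unit 8
manager.subscribe(lambda urls: received["video"].extend(urls))  # video filter unit 11
distribute(manager, ["http://server/wav/20120401.wav",
                     "http://server/news.mpg"])
```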
  • After distributing the list, the crawling processing unit 6 waits for a predetermined time (step S203), and then returns to step S201 to execute the same processing.
  • the URL 61 associated with the unstructured data 50 stored in the storage device 39 is notified to each recognition function unit by the data crawling process described above. In addition, by this processing, the URL 61 can be repeatedly delivered every time new unstructured data is stored in the storage server 31.
  • FIG. 6 is a flowchart for explaining data recognition processing in the first embodiment of the present invention.
  • FIG. 7 is an explanatory diagram showing an example of the structure data reflecting the structured recognition result in the first embodiment of the present invention.
  • 8 and 9 are explanatory diagrams illustrating examples of structured recognition results according to the first embodiment of the present invention.
  • Each recognition function unit starts processing upon receiving the list of URLs 61.
  • the voice recognition function unit and the video recognition function unit will be described as examples.
  • the audio filter unit 8 and the video filter unit 11 receive the list of URLs 61 transmitted from the data distribution unit 7 (step S301).
  • Since the list of URLs 61 is distributed using the publish/subscribe model, each filter unit receives the same list of URLs 61. Thus, for example, a plurality of recognition processes, such as voice recognition processing and video recognition processing, can be executed on the same moving image data.
  • the audio filter unit 8 and the video filter unit 11 select one URL 61 included in the list of URLs 61, and execute the following processing on the selected URL 61.
  • First, the audio filter unit 8 and the video filter unit 11 determine whether the unstructured data 50 is a recognition target, based on the type of the unstructured data 50 corresponding to the selected URL 61 (step S302).
  • the audio filter unit 8 and the video filter unit 11 can determine the content type (data type) of the unstructured data 50 based on the extension of the URL 61.
  • For example, the audio filter unit 8 determines unstructured data 50 whose URL 61 ends with ".wav" or ".mpg" to be a recognition target, and the video filter unit 11 determines unstructured data 50 whose URL 61 ends with ".mpg" to be a recognition target.
  • Alternatively, the audio filter unit 8 and the video filter unit 11 may acquire the content type of the unstructured data 50 by executing the data reference processing based on the URL 61, and determine whether the unstructured data 50 is a recognition target based on the acquired content type.
  • As yet another alternative, the audio filter unit 8 and the video filter unit 11 may acquire the unstructured data 50 itself by executing the data reference processing based on the URL 61, and determine whether the unstructured data 50 is a recognition target based on an analysis of the acquired data.
  • As a method of analyzing the acquired unstructured data 50, determining its content type by examining the head of the data can be considered.
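The two filtering strategies above, extension-based and content-based, might look like this. The RIFF/WAVE header check is one standard way to sniff WAV data; treating it as the embodiment's exact method is an assumption:

```python
def is_audio_target(url: str) -> bool:
    # audio filter unit 8: URLs ending in ".wav" or ".mpg" are targets
    return url.endswith((".wav", ".mpg"))

def is_video_target(url: str) -> bool:
    # video filter unit 11: only URLs ending in ".mpg" are targets
    return url.endswith(".mpg")

def sniff_is_wav(head: bytes) -> bool:
    # content-based check on the head of the data: WAV files begin with a
    # RIFF chunk whose format field (bytes 8-11) is "WAVE"
    return head[:4] == b"RIFF" and head[8:12] == b"WAVE"
```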
  • When it is determined in step S302 that the unstructured data 50 corresponding to the URL 61 is not a recognition target, the recognition function unit ends the processing.
  • When it is determined in step S302 that the unstructured data 50 corresponding to the URL 61 is a recognition target, the audio filter unit 8 and the video filter unit 11 acquire the structured data 51 corresponding to the URL 61 (step S303). This can be realized by the structured data reference processing described above.
  • Next, the audio filter unit 8 and the video filter unit 11 analyze the content of the acquired structured data 51 and determine whether the unstructured data 50 has already been recognized (step S304).
  • FIG. 7 shows the structured data after the structured data association processing described later has been executed on the structured data shown in FIG. 4. Comparing FIG. 4 and FIG. 7, it can be seen that a "metainfo" tag has been added.
  • The structured recognition result is added to the metainfo tag portion.
  • Therefore, one possible method is for the filter unit to detect the aforementioned tag.
  • However, since the aforementioned tag may have been added by another recognition process, detecting the tag alone is not sufficient for a correct determination.
  • Therefore, an ID unique to the recognition process is given to the processor_url tag inside the metainfo tag.
  • A method is conceivable in which the filter unit determines whether the data has been recognized based on this ID. That is, when the structured data 51 includes the ID unique to the corresponding recognition process, the filter unit determines that the unstructured data 50 has already been recognized.
  • Furthermore, a method of giving the time at which the recognition processing was completed to the processed tag inside the metainfo tag is conceivable.
  • In this case, the filter unit determines that the unstructured data 50 is a recognition target only when the completion time of the recognition processing is before the update time of the recognition function unit.
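The "already recognized?" check can be sketched against the structured data as follows. The metainfo/processor_url/processed tag names come from this embodiment; the owner value, the sample processor IDs, and the epoch-number format of the processed time are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

def already_recognized(structured_xml: str, processor_id: str,
                       function_updated_at: float = 0.0) -> bool:
    """True if the structured data already carries a metainfo entry whose
    processor_url matches this recognition process and whose completion
    time is not older than the recognition function's last update."""
    root = ET.fromstring(structured_xml)
    for meta in root.iter("metainfo"):
        proc = meta.findtext("processor_url")
        done = meta.findtext("processed")
        if proc == processor_id and done is not None:
            if float(done) >= function_updated_at:
                return True
    return False

doc = """<data><owner>suzuki</owner>
<metainfo><processor_url>http://sound.hitachi.com/tvnews</processor_url>
<processed>1700000000</processed></metainfo></data>"""
```

Note the third case below: if the recognition function (or its dictionary) was updated after the stored completion time, the data is treated as unrecognized and is processed again.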
  • If it is determined in step S304 that the unstructured data 50 has already been recognized, the recognition function unit ends the processing.
  • When it is determined in step S304 that the unstructured data 50 has not been recognized, the voice recognition unit 9 and the video recognition unit 12 execute recognition processing on the unstructured data 50 corresponding to the URL 61 (step S305).
  • the voice recognition unit 9 executes voice recognition processing on the unstructured data 50 in cooperation with the voice recognition processing unit 43 and the voice dictionary unit 18.
  • the video recognition unit 12 executes video recognition processing on the unstructured data 50 in cooperation with the video recognition processing unit 42 and the video dictionary unit 19.
  • In the voice recognition processing, voice data is received, and the words included in the voice data, the start time and end time of each word, and the like are output as the recognition result.
  • In the video recognition processing, video data is received, and the names of persons appearing in the video data, their appearance times and appearance positions, and the like are output as the recognition result.
  • In this embodiment, voice recognition processing and video recognition processing are taken as examples, but the present invention can apply various recognition processes to unstructured data acquired from documents, images, voice, acceleration sensors, and the like.
  • In this embodiment, the video recognition unit 12 of the management server 32 and the video recognition processing unit 42 of the video server 33 cooperate to execute the video recognition processing, and the voice recognition unit 9 of the management server 32 and the voice recognition processing unit 43 of the voice server 34 cooperate to execute the voice recognition processing.
  • However, the present invention is not limited to this system configuration; for example, the management server 32 itself may execute the recognition processing.
  • the voice recognition unit 9 of the management server 32 executes data reference processing to acquire the unstructured data 50 corresponding to the URL 61, and transmits the acquired unstructured data 50 to the voice server 34.
  • the voice recognition processing unit 43 on the voice server 34 generates a recognition result using the voice dictionary unit 18, and returns the generated recognition result to the management server 32.
  • the voice recognition unit 9 of the management server 32 receives the recognition result.
  • the video recognition unit 12 cooperates with the video server 33, and the video recognition processing unit 42 generates a recognition result using the video dictionary unit 19.
  • The audio post-processing unit 10 and the video post-processing unit 13 execute post-processing on the confirmed recognition result (step S306).
  • the audio post-processing unit 10 and the video post-processing unit 13 generate structured data including a structured recognition result, an ID unique to the recognition process, and an ID unique to the dictionary used for the recognition process. Further, the audio post-processing unit 10 and the video post-processing unit 13 can include the recognition processing completion time in the structure data.
  • the URL of the server that executes the recognition process is used as the ID unique to the recognition process.
  • For example, the URL of the audio server 34 is "http://sound.hitachi.com/", and the URL of the video server 33 is "http://video.hitachi.com/".
  • the ID unique to the recognition process may include an ID unique to the dictionary used for the recognition process.
  • For example, the ID unique to the recognition process that includes "tvnews", the ID of the dictionary held by the voice dictionary unit 18, is "http://sound.hitachi.com/tvnews".
  • By reflecting the generated structured data in the original structured data 51, it becomes possible to determine in step S304 whether the unstructured data 50 has already been recognized.
  • The recognition result output by each recognition processing unit may be in any format, but each post-processing unit generates structured data in a unified XML format in order to simplify the configuration of the structured data association processing unit 15 described later.
  • FIG. 8 shows an example of the XML-format structured data generated by the voice post-processing unit 10, and FIG. 9 shows an example of the XML-format structured data generated by the video post-processing unit 13.
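The post-processing step can be sketched as follows: a raw recognition result is wrapped in unified XML carrying the process ID (server URL plus dictionary ID), and the completion time. The word/start/end layout of the result and the epoch-number completion time are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

def to_structured_xml(words: list[dict], server_url: str,
                      dictionary_id: str, completed_at: int) -> str:
    """Convert a speech recognition result into unified XML structured data."""
    meta = ET.Element("metainfo")
    # ID unique to the recognition process: server URL + dictionary ID
    ET.SubElement(meta, "processor_url").text = server_url + dictionary_id
    # completion time of the recognition processing
    ET.SubElement(meta, "processed").text = str(completed_at)
    result = ET.SubElement(meta, "result")
    for w in words:
        word_el = ET.SubElement(result, "word",
                                start=str(w["start"]), end=str(w["end"]))
        word_el.text = w["text"]
    return ET.tostring(meta, encoding="unicode")

xml_out = to_structured_xml(
    [{"text": "hello", "start": 0.0, "end": 0.4}],
    server_url="http://sound.hitachi.com/",
    dictionary_id="tvnews",
    completed_at=1700000000,
)
```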
  • the audio post-processing unit 10 and the video post-processing unit 13 transmit the structure data to the recognition result receiving unit 14 (step S307).
  • the recognition result receiving unit 14 includes a queue so that the structure data can be received from a plurality of recognition function units.
  • the audio post-processing unit 10 and the video post-processing unit 13 each transmit a message including structure data to the queue.
  • a URL 61 corresponding to the unstructured data 50 that is the recognition target in the recognition process is assigned to the header of the message transmitted to the queue.
  • the structural data including the recognition result of the unstructured data 50 stored in the storage server 31 is accumulated in the queue of the recognition result receiving unit 14 by the data recognition process described above.
  • each of the plurality of recognition function units includes a filter unit, so that only necessary recognition processing is executed.
  • FIG. 10 is a flowchart for explaining the structure data associating process according to the first embodiment of the present invention.
  • FIG. 11 is an explanatory diagram illustrating an example of structure data reflecting a plurality of structured recognition results according to the first embodiment of the present invention.
  • the recognition result receiving unit 14 acquires the structure data accumulated in the queue (step S401).
  • the structure data including the recognition result of the audio data is received earlier than the structure data including the recognition result of the video data.
  • XML format structure data as shown in FIG. 9 is acquired from the queue.
  • the structure data association processing unit 15 executes the structure data reference process to identify the URL 61 corresponding to the recognized unstructured data 50, and acquires the structure data 51 corresponding to the identified URL 61 from the storage server 31 (step S402). Here, as shown in FIG. 5, structure data 51 that does not yet include a recognition result is acquired.
  • the structure data association processing unit 15 integrates the structure data 51 acquired from the storage server 31 and the acquired structure data (step S403).
  • specifically, the structure data association processing unit 15 embeds the received structure data in the structure data 51 acquired from the storage server 31, thereby generating a single piece of XML-format structure data as shown in FIG. 7. The embedded portion is the recognition result of the audio data.
  • the structure data association processing unit 15 analyzes the structure data 51 acquired from the storage server 31 to identify the position at which the received structure data is to be embedded. For example, a method of identifying the embedding position using a predetermined tag as a key is conceivable. This method is merely an example, and the present invention is not limited to it.
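The tag-keyed embedding could look like the following sketch (the anchor tag name "results" is an assumption; the text leaves the key tag unspecified):

```python
import xml.etree.ElementTree as ET

def embed_result(existing_structure_data, received_structure_data,
                 anchor_tag="results"):
    # Parse the structure data 51, locate the predetermined tag that
    # marks where recognition results belong, and append the received
    # structured recognition result there.
    root = ET.fromstring(existing_structure_data)
    root.find(anchor_tag).append(ET.fromstring(received_structure_data))
    return ET.tostring(root, encoding="unicode")

merged = embed_result("<data><title>news</title><results/></data>",
                      "<recognition dictionary='tvnews'/>")
```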
  • the structural data association processing unit 15 transmits the generated structural data to the storage unit 3 of the storage server 31 (step S404), and ends the process.
  • the storage unit 3 overwrites the existing structure data 51 with the received structure data as new structure data.
  • the recognition result of the recognition process for the non-structure data 50 is stored as the structure data 51 associated with the URL 61 by the above-described structure data association process.
  • the process is repeatedly executed, so that a plurality of recognition results can be included in one structure data 51.
  • when the recognition result of the video data is received after the recognition result of the audio data, the following processing is executed.
  • in step S401, the structure data association processing unit 15 acquires XML-format structure data as shown in FIG. 9 from the queue.
  • in step S402, the structure data association processing unit 15 acquires the structure data 51 that includes the recognition result of the audio data, as shown in FIG. 7.
  • in step S403, the structure data association processing unit 15 integrates the existing structure data and the acquired structure data to generate XML-format structure data as shown in FIG. 11. The portion indicated by the dotted frame in FIG. 11 is the embedded recognition result of the video data.
  • step S404 the structural data association processing unit 15 transmits the structural data in which the recognition result of the video data is embedded to the storage server 31. At this time, the storage server 31 overwrites the existing structure data 51 with the received structure data.
  • a plurality of recognition results are integrated into the structure data 51 by repeatedly executing the structure data association process.
  • FIG. 12 is a flowchart illustrating the recognition function registration process according to the first embodiment of the present invention.
  • the recognition function registration unit 17 receives the recognition function unit to be added (step S501). Specifically, the recognition function registration unit 17 receives a program for realizing a predetermined recognition unit.
  • the recognition function unit is realized by the same configuration as the above-described voice recognition function unit and video recognition function unit. That is, the recognition function unit includes a filter unit, a recognition processing unit, a dictionary unit, and a post-processing unit.
  • the recognition function registration unit 17 adds a recognition processing unit by storing the received program in the memory 36 of the management server 32 (step S502).
  • the recognition function registration unit 17 notifies the data distribution management unit 16 of the identification information of the received program, registers the recognition function unit realized by the program as a subscriber to the messages distributed from the data distribution unit 7 (step S503), and ends the process.
  • the recognition function registration unit 17 can add an arbitrary recognition function unit to the computer system. At this time, by using the publish / subscribe model for the message processing of the data distribution unit 7, it can be ensured that the processing of the existing recognition processing unit is not affected.
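The decoupling that the publish/subscribe model provides can be sketched as follows (a toy in-process hub; class and variable names are illustrative):

```python
class DataDistributionHub:
    """Toy publish/subscribe hub standing in for the data distribution
    unit 7; the subscriber list plays the role of the data distribution
    management unit 16."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        # Recognition function registration: existing subscribers are untouched.
        self.subscribers.append(callback)

    def publish(self, url):
        # Distribute a URL to every registered recognition function unit;
        # each filter unit decides independently whether to process it.
        for callback in self.subscribers:
            callback(url)

hub = DataDistributionHub()
audio_seen, video_seen = [], []
hub.subscribe(audio_seen.append)   # audio filter unit 8
hub.subscribe(video_seen.append)   # video filter unit 11
hub.publish("u1")
hub.subscribe(lambda url: None)    # a new unit is added later
hub.publish("u2")                  # existing units still receive everything
```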
  • the post-processing unit generates structure data, but the present invention is not limited to this. For example, the following modifications can be considered.
  • step S306 the post-processing unit generates a structured recognition result from the recognition result received from the recognition unit, and transmits a message including the structured recognition result to the recognition result receiving unit 14.
  • URL 61 ID unique to the recognition process, ID unique to the dictionary, and recognition process completion time are added to the header of the message.
  • the recognition result receiving unit 14 or the structure data association processing unit 15 generates structure data from the received message.
  • the structure data association processing unit 15 integrates the structure data and the existing structure data 51.
  • in the above description, the structure data association process is started each time structure data is received, but the present invention is not limited to this.
  • a plurality of target recognition function units may be registered in advance, and the structure data association process may be started when structure data is received from all the recognition function units.
  • the structure data association processing unit 15 integrates a plurality of structure data and the existing structure data 51 at a time.
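This variant can be sketched as buffering results per URL until every registered recognition function unit has reported (a minimal sketch; the unit names and data values are illustrative):

```python
def collect_then_integrate(expected_units, incoming):
    # Buffer structure data per unstructured-data URL and yield the
    # complete set for integration only once all registered recognition
    # function units have delivered their results for that URL.
    pending = {}
    for unit, url, data in incoming:
        pending.setdefault(url, {})[unit] = data
        if set(pending[url]) == set(expected_units):
            yield url, pending.pop(url)

events = [("audio", "u1", "<audio-result/>"),
          ("video", "u1", "<video-result/>")]
groups = list(collect_then_integrate({"audio", "video"}, events))
```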
  • as described above, the storage server 31 stores the received unstructured data, and further stores the recognition result indicating the contents of the unstructured data, together with the information unique to the recognition process and the information on the dictionary, as structure data accompanying the unstructured data.
  • the recognition result for the unstructured data can be managed as structure data associated with the same URL that is used when referring to the unstructured data.
  • the database function for storing the recognition result and the function for determining the completion of the recognition process can be realized only by the access process to the storage server 31 using the URL.
  • recognition results output from a plurality of recognition function units can be integrated into a single XML structure data for a single unstructured data.
  • the storage processing of unstructured data is realized as the entire computer system.
  • the second embodiment is different in that the storage processing of unstructured data is realized using one apparatus.
  • the second embodiment will be described focusing on differences from the first embodiment.
  • FIG. 13 is a block diagram illustrating the configuration of the unstructured data storage device 1 according to the second embodiment of the present invention.
  • the hardware configuration of the unstructured data storage device 1 is the same as that of the storage server 31, the management server 32, and the like, and includes a CPU (not shown), a memory (not shown), a communication device (not shown), and a storage device (not shown).
  • the unstructured data storage device 1 includes a data reception unit 2, a storage unit 3, a data reference unit 4, a structural data reference unit 5, a crawling processing unit 6, a data distribution unit 7, a voice filter unit 8, a voice recognition unit 9, Audio post-processing unit 10, video filter unit 11, video recognition unit 12, video post-processing unit 13, recognition result reception unit 14, structural data association processing unit 15, data distribution management unit 16, recognition function registration unit 17, audio dictionary unit 18 and a video dictionary unit 19.
  • the video recognition unit 12 has a function realized by the video recognition unit 12 of the management server 32 and the video recognition processing unit 42 of the video server 33.
  • the voice recognition unit 9 has a function realized by the voice recognition unit 9 of the management server 32 and the voice recognition processing unit 43 of the voice server 34.
  • the unstructured data storage device 1 provides a user interface for operating the data receiving unit 2, the data reference unit 4, the structural data reference unit 5, and the recognition function registration unit 17 to the user.
  • when the data receiving unit 2 receives unstructured data from the user, it executes the data storage process in cooperation with the storage unit 3. The data reference unit 4 executes the data reference process when it receives a reference request from the user.
  • the crawling processing unit 6 and the data distribution unit 7 execute the data crawling process periodically or upon receiving an instruction from the user.
  • the crawling processing unit 6 generates a URL list, and inputs the generated URL list to the data distribution unit 7.
  • the data distribution unit 7 inputs a list of URLs to a filter unit that constitutes a predetermined recognition function unit.
  • a URL list is input to at least one of the audio filter unit 8 and the video filter unit 11. Thereby, the data recognition process is started.
  • the audio filter unit 8 and the video filter unit 11 determine whether the unstructured data 50 corresponding to each URL is a recognition target, and whether the recognition process for the unstructured data 50 has already been executed.
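One way a filter unit could make the second check — has this recognition already run? — is to look for the recognition-process ID in the existing structure data 51, as in this hedged sketch (element and attribute names are assumptions):

```python
import xml.etree.ElementTree as ET

def needs_recognition(structure_data_51, process_id):
    # If the ID unique to the recognition process already appears in
    # the structure data 51, the unstructured data 50 has been
    # recognized and the filter unit can skip it.
    root = ET.fromstring(structure_data_51)
    done = {e.get("process") for e in root.iter("recognition")}
    return process_id not in done

sample = '<data><recognition process="http://sound.hitachi.com/tvnews"/></data>'
```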
  • the audio filter unit 8 and the video filter unit 11 request the audio recognition unit 9 and the video recognition unit 12 to execute processing based on the determination result.
  • the voice recognition unit 9 performs a voice data recognition process on the unstructured data 50 in cooperation with the voice dictionary unit 18 and inputs a recognition result to the voice post-processing unit 10.
  • the video recognition unit 12 performs a video data recognition process on the unstructured data 50 in cooperation with the video dictionary unit 19, and inputs a recognition result to the video post-processing unit 13.
  • the speech post-processing unit 10 generates structure data including the recognition result, the ID unique to the recognition process of the speech data, and the process completion time, and inputs the structure data to the recognition result receiving unit 14. Further, the video post-processing unit 13 generates structural data including the recognition result, the ID unique to the recognition processing of the video data, and the completion time of the processing, and inputs the structural data to the recognition result receiving unit 14.
  • the recognition result receiving unit 14 executes the structural data association processing in cooperation with the structural data association processing unit 15.
  • the structure data association processing unit 15 inputs new structure data into which the input structure data is integrated into the storage unit 3.
  • the storage unit 3 updates the input structure data by overwriting the existing structure data 51.
  • the recognition function registration unit 17 adds a new recognition function unit to the unstructured data storage device 1 by executing the recognition function registration process, and registers, in the data distribution management unit 16, subscriber information for distributing URLs to the added recognition function unit.
  • the configurations of the computers, processing units, and processing means described in the present invention may be partially or entirely realized by dedicated hardware.
  • the various software exemplified in the present embodiment can be stored in various recording media, such as electromagnetic, electronic, and optical recording media (for example, non-transitory storage media), and can be downloaded to a computer through a communication network such as the Internet.
  • the present invention is not limited to the above-described embodiment, and includes various modifications.
  • a computer system that stores unstructured data is assumed.
  • for example, a portable information management system is conceivable in which a portable device has the functions of the management server 32 and the storage server 31, and the recognition servers are placed on the cloud.
  • the present invention can be applied to apparatuses and systems having various configurations.

Abstract

A framework that links a storage computer with a recognition system is generally complex, because many matters must be considered: a database that stores recognition results, a function that reports that recognition of given data has completed, throughput when a large quantity of data is input at the same time, and coordination among a plurality of recognition systems. The present invention is a computer that manages unstructured data and structure data, the computer comprising: a recognition unit that performs recognition processing of predetermined data types on the unstructured data using predetermined dictionaries; and a structure data generation unit that generates structure data including the result of the recognition processing performed by the recognition unit, identification information of the recognition unit, and identification information of the dictionaries used by the recognition unit.

Description

Computer, computer system, and data management method

The present invention relates to a computer, a system, and a method for executing recognition processing on unstructured data stored in a storage device and generating, in the storage device, metadata that includes the result of the recognition processing.

Automating the extraction of information from unstructured data is demanded by many businesses that handle large amounts of data. Extracting information from unstructured data requires techniques such as image recognition, speech recognition, and document structure recognition. In addition, a mechanism for linking a large-scale storage device with a recognition system is also important.

As an example of a mechanism for linking a storage device with a recognition system, a method is disclosed in which video data and audio data are processed individually, and object data and metadata are associated with each other and stored in a database (see, for example, Patent Document 1).

JP 2001-167099 A

However, the system disclosed in Patent Document 1 is dedicated to video data and audio data, and it is difficult to configure it to also recognize documents in conjunction with a storage device that stores data of other types, such as documents.

Moreover, a mechanism that links a storage device with a recognition system is generally complex, because many matters must be considered: a database that stores recognition results, a function that notifies that recognition of given data has completed, throughput when a large amount of data is input simultaneously, and coordination among a plurality of recognition systems.

The present invention has been made in view of these points, and an object thereof is to provide an apparatus, a system, and a method capable of flexibly linking a storage device with an arbitrary recognition system.

A representative example of the invention disclosed in the present application is as follows: a computer that manages unstructured data having no fixed data structure and structure data having a fixed data structure, the computer comprising a processor, a memory connected to the processor, a storage device connected to the processor, and an I/O interface connected to the processor, and including at least one recognition unit that executes recognition processing of a predetermined data type on the unstructured data using a predetermined dictionary, and a structure data generation unit that generates the structure data including the result of the recognition processing executed by the recognition unit, identification information of the recognition unit, and identification information of the dictionary used by the recognition unit.

According to the present invention, generating structure data that includes the result of recognition processing on unstructured data, identification information of the recognition processing, and identification information of the dictionary used for the recognition processing enables various kinds of control based on the recognition results, such as linkage with a search system, simultaneous operation of a plurality of recognition systems, suppression of unnecessary recognition processing, and integration of recognition results output from a plurality of recognition systems.

Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
FIG. 1 is an explanatory diagram showing a configuration example of the computer system according to the first embodiment of the present invention.
FIG. 2 is an explanatory diagram showing an example of related information according to the first embodiment.
FIG. 3 is a flowchart explaining the data storage process according to the first embodiment.
FIG. 4 is an explanatory diagram showing an example of structure data according to the first embodiment.
FIG. 5 is a flowchart explaining the data crawling process according to the first embodiment.
FIG. 6 is a flowchart explaining the data recognition process according to the first embodiment.
FIG. 7 is an explanatory diagram showing an example of structure data reflecting a structured recognition result according to the first embodiment.
FIG. 8 is an explanatory diagram showing an example of a structured recognition result according to the first embodiment.
FIG. 9 is an explanatory diagram showing an example of a structured recognition result according to the first embodiment.
FIG. 10 is a flowchart explaining the structure data association process according to the first embodiment.
FIG. 11 is an explanatory diagram showing an example of structure data reflecting a plurality of structured recognition results according to the first embodiment.
FIG. 12 is a flowchart explaining the recognition function registration process according to the first embodiment.
FIG. 13 is a block diagram explaining the configuration of an unstructured data storage device according to the second embodiment of the present invention.
Hereinafter, embodiments will be described with reference to the drawings.

This embodiment describes an example of a storage device that stores unstructured data including images and sound.

FIG. 1 is an explanatory diagram showing a configuration example of the computer system according to the first embodiment of the present invention.

The computer system of the first embodiment comprises a storage server 31, a management server 32, a video server 33, and an audio server 34, which are connected to one another via a relay device 38. The computer system may also include a terminal used by a user or the like.

Hereinafter, when the storage server 31, the management server 32, the video server 33, and the audio server 34 need not be distinguished, they are simply referred to as servers.

The storage server 31 of this embodiment has a CPU 35, a memory 36, a communication device 37, and a storage device 39. The storage device 39 may be, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive). The storage server 31 may also be connected to an external storage apparatus having a controller, an I/O interface, and a plurality of storage devices.

The management server 32, the video server 33, and the audio server 34 of this embodiment have the same hardware configuration: each includes a CPU 35, a memory 36, and a communication device 37.

The CPU 35 executes programs stored in the memory 36; the functions of each server are realized by the CPU 35 executing these programs. The memory 36 stores the programs executed by the CPU 35 and the various information necessary for executing them. The communication device 37 is a device for communicating with other servers, for example, a network interface.

The programs executed by the CPU 35 exchange data with one another by communicating with other servers through the communication device 37.

The software configurations of the storage server 31, the management server 32, the video server 33, and the audio server 34 will be described later.

The relay device 38 relays the exchange of data between devices by receiving data from one device and transmitting it to another. The relay device 38 has a CPU (not shown), a memory (not shown), and a communication device (not shown).

The storage server 31 is a computer that stores various data. The memory 36 of the storage server 31 stores programs that realize a data receiving unit 2, a storage unit 3, a data reference unit 4, and a structure data reference unit 5. The storage device 39 of the storage server 31 stores unstructured data 50, structure data 51, and related information 52.

The data receiving unit 2 receives, from a user or the like, data to be stored in the storage server 31. The storage unit 3 stores the received data in the storage device 39.

The data reference unit 4 returns the unstructured data 50 stored in the storage device 39 in response to an instruction from a user or the like. The structure data reference unit 5 returns the structure data 51 stored in the storage device 39 in response to an instruction from a user or the like.

The unstructured data 50 is data whose structure is not defined and which cannot easily be managed in a database. The structure data 51 is data whose structure is defined and which is in a format that can easily be managed in a database. The structure data 51 corresponds to metadata of the unstructured data 50.

The related information 52 is information for managing the correspondence between the unstructured data 50 and the structure data 51.
The management server 32 is a computer that manages the data stored in the storage server 31. The memory 36 of the management server 32 stores programs that realize a crawling processing unit 6, a data distribution unit 7, an audio filter unit 8, an audio recognition unit 9, an audio post-processing unit 10, a video filter unit 11, a video recognition unit 12, a video post-processing unit 13, a recognition result receiving unit 14, a structure data association processing unit 15, a data distribution management unit 16, and a recognition function registration unit 17.

The crawling processing unit 6 extracts the unstructured data 50 to be processed from the unstructured data 50 stored in the storage device 39. The data distribution unit 7 transmits the extracted unstructured data 50 to a predetermined recognition function unit or device.

The audio filter unit 8 determines whether to execute audio data recognition processing on the unstructured data 50. The audio recognition unit 9 executes the audio data recognition processing on the unstructured data 50, whereby a recognition result of the audio data is output. The audio post-processing unit 10 converts the recognition result output from the audio recognition unit 9 into data in a format that can be added to the structure data 51.

The video filter unit 11 determines whether to execute video data recognition processing on the unstructured data 50. The video recognition unit 12 executes the video data recognition processing on the unstructured data 50, whereby a recognition result of the video data is output. The video post-processing unit 13 converts the recognition result output from the video recognition unit 12 into data in a format that can be added to the structure data 51.

The recognition result receiving unit 14 receives and temporarily holds the recognition results output from the audio post-processing unit 10 and the video post-processing unit 13.

The structure data association processing unit 15 reflects the recognition results for the unstructured data 50 in the currently stored structure data 51.

The data distribution management unit 16 manages the information used by the data distribution unit 7 to determine the recognition function units to which data is distributed.

The recognition function registration unit 17 executes processing for adding a new recognition function unit.

The video server 33 is a computer that executes video data recognition processing. The memory 36 of the video server 33 stores programs that realize a video dictionary unit 19 and a video recognition processing unit 42.

The video dictionary unit 19 manages the dictionary used for video data recognition processing. The video recognition processing unit 42 executes the video data recognition processing; since known techniques may be used for this processing, its description is omitted.

The audio server 34 is a computer that executes audio data recognition processing. The memory 36 of the audio server 34 stores programs that realize an audio dictionary unit 18 and an audio recognition processing unit 43.

The audio dictionary unit 18 manages the dictionary used for audio data recognition processing. The audio recognition processing unit 43 executes the audio data recognition processing; since known techniques may be used for this processing, its description is omitted.
 図2は、本発明の実施例1における関連情報52の一例を示す説明図である。 FIG. 2 is an explanatory diagram showing an example of the related information 52 in the first embodiment of the present invention.
The related information 52 stores information for centrally managing the unstructured data 50 and the structured data 51 associated with the unstructured data 50. Specifically, the related information 52 includes a URL 61, an unstructured data path 62, a structured data path 63, and an update time 64.

The URL 61 stores a URL (Uniform Resource Locator) used to access the unstructured data 50 or the structured data 51 stored in the storage server 31.

The unstructured data path 62 stores the path name of the storage area in which the unstructured data 50 is stored. The structured data path 63 stores the path name of the storage area in which the structured data 51 is stored.
In the present invention, by holding the related information 52, the storage server 31 can manage a single URL in association with both the unstructured data 50 and the structured data 51.
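The related information 52 can be pictured as a small table keyed by URL. A minimal sketch follows, assuming an in-memory Python representation; the field names mirror the URL 61, unstructured data path 62, structured data path 63, and update time 64, and all concrete path and time values are hypothetical:

```python
# Minimal in-memory sketch of the related information 52.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RelatedInfoEntry:
    url: str                        # URL 61
    unstructured_path: str          # unstructured data path 62
    structured_path: Optional[str]  # structured data path 63 (blank until structured data exists)
    update_time: str                # update time 64

related_info = {}  # keyed by URL so one URL resolves to both kinds of data

def register(entry: RelatedInfoEntry) -> None:
    related_info[entry.url] = entry

register(RelatedInfoEntry(
    url="http://server/wav/20120401.wav",
    unstructured_path="/data/wav/20120401.wav",
    structured_path=None,           # normally no structured data at storage time
    update_time="2012-04-01T10:00:00",
))

# A single URL looks up both the unstructured and the structured data.
entry = related_info["http://server/wav/20120401.wav"]
```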
Next, the processing of the computer system in this embodiment will be described. The processing of the system is divided into seven processes: data storage processing, data reference processing, structured data reference processing, data crawling processing, data recognition processing, structured data association processing, and recognition function registration processing.

The characteristic processing of this embodiment is as follows.
In the data recognition processing, predetermined recognition processing is executed on the stored unstructured data 50. At this time, the storage server 31 and the management server 32 cooperate with each other to generate structured data using the result of the recognition processing.

In the structured data association processing, the management server 32 reflects the newly generated structured data in the structured data 51 that corresponds to the unstructured data 50.
First, the data storage processing in this embodiment will be described.

FIG. 3 is a flowchart illustrating the data storage processing in the first embodiment of the present invention. FIG. 4 is an explanatory diagram illustrating an example of the structured data in the first embodiment of the present invention.

When the storage server 31 receives unstructured data from an external device such as an external PC or server, the storage server 31 starts the data storage processing.
The data reception unit 2 receives unstructured data transmitted from the external device via the relay device 38 (step S101). The data reception unit 2 receives, for example, unstructured data transmitted using HTTP (HyperText Transfer Protocol). Note that the present invention is not limited to any particular type of unstructured data; the data reception unit 2 can receive arbitrary files (unstructured data) such as documents, images, audio, and video.

Next, the data reception unit 2 generates a URL for accessing the received unstructured data (step S102).

One way to generate the URL is to use the URL specified in the HTTP request as it is. Alternatively, the data reception unit 2 may generate the URL from the name, extension, and reception time of the transmitted file as needed. In this case, a URL such as "http://server/wav/20120401.wav" is generated, for example.
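As an illustration of step S102, a URL can be derived from the file name, its extension, and the reception time. The host name and path layout in this sketch are assumptions made for the example, not part of the embodiment:

```python
# Hypothetical sketch of URL generation from file name, extension, and time.
from datetime import datetime

def generate_url(filename: str, received_at: datetime,
                 base: str = "http://server") -> str:
    # Take the extension from the file name; fall back to a generic one.
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else "bin"
    stamp = received_at.strftime("%Y%m%d")   # e.g. 20120401
    return f"{base}/{ext}/{stamp}.{ext}"

url = generate_url("meeting.wav", datetime(2012, 4, 1))
# e.g. "http://server/wav/20120401.wav"
```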
Next, the storage unit 3 stores the received unstructured data in the storage device 39 (step S103) and updates the related information 52 (step S104). Thereafter, the storage server 31 ends the processing. Specifically, the following processing is executed.

The storage unit 3 adds a new entry to the related information 52 and stores the URL generated in step S102 in the URL 61 of the entry. The storage unit 3 also stores, in the unstructured data path 62 of the added entry, the path name under which the received unstructured data is stored, and stores, in the update time 64, the time at which the unstructured data was stored.

At this point, the structured data path 63 remains blank. This is because, normally, no structured data exists at the time the unstructured data is stored.

However, the data reception unit 2 can also accept arbitrary structured data together with the unstructured data. For example, structured data including information such as the owner of the unstructured data, as shown in FIG. 4, may be attached to the unstructured data. In this case, in step S103, the storage unit 3 stores both the unstructured data and the structured data in the storage device 39, and in step S104, the storage unit 3 stores, in the structured data path 63 of the added entry, the path name under which the structured data is stored.
As described above, in the data storage processing, the storage unit 3 stores the unstructured data 50 in association with a URL, which makes the following data reference processing and structured data reference processing possible.
In the data reference processing, the data reference unit 4 searches the URL 61 of the related information 52 for an entry corresponding to the URL specified by the user. The data reference unit 4 then refers to the unstructured data path 62 of the retrieved entry, acquires the unstructured data 50, and returns the acquired unstructured data 50 to the user.

In the structured data reference processing, the structured data reference unit 5 searches the URL 61 of the related information 52 for an entry corresponding to the URL specified by the user. The structured data reference unit 5 then refers to the structured data path 63 of the retrieved entry, acquires the structured data 51, and returns the acquired structured data 51 to the user.

For example, the system can be configured to use HTTP to return, to the user, the unstructured data 50 or the structured data 51 acquired based on the requested URL. In the data reference processing, when the unstructured data 50 is returned to the user using HTTP, the system can be configured such that the data reference unit 4 returns the unstructured data 50 together with an HTTP header carrying the content type (data type) of the unstructured data 50. Further, when only the HTTP header is requested, the data reference unit 4 may return only the content type without returning the whole of the unstructured data 50.
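The GET/HEAD behavior just described can be sketched as follows; the in-memory store, paths, and content-type mapping are illustrative assumptions rather than the embodiment's actual implementation:

```python
# Sketch of HTTP-style data reference: GET returns the data plus a
# Content-Type header, HEAD returns only the header.
import mimetypes

STORE = {"/wav/20120401.wav": b"RIFF...audio bytes..."}  # hypothetical store

def handle(method: str, path: str):
    body = STORE.get(path)
    if body is None:
        return 404, {}, b""
    ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
    headers = {"Content-Type": ctype}
    if method == "HEAD":   # header only: the content type without the data
        return 200, headers, b""
    return 200, headers, body

status, headers, body = handle("HEAD", "/wav/20120401.wav")
```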
Next, the data crawling processing in this embodiment will be described.

FIG. 5 is a flowchart illustrating the data crawling processing in the first embodiment of the present invention.

The management server 32 executes the data crawling processing repeatedly, for example, periodically or upon receiving an instruction from a user or the like.
The crawling processing unit 6 queries the storage unit 3 of the storage server 31 and acquires a list of URLs 61 from the related information 52 (step S201). That is, the unstructured data 50 to be processed is extracted.

In this embodiment, only the URLs 61 associated with newly stored unstructured data 50 are extracted. The crawling processing unit 6 therefore issues a query that includes the target time. Upon receiving the query, the storage unit 3 refers to the update times 64 stored in the related information 52, lists only the URLs 61 of the latest data, and transmits the list of URLs 61 to the crawling processing unit 6.

To issue this query, the crawling processing unit 6 temporarily holds the latest update time 64 found in the list of URLs 61 and queries for URLs 61 whose update time is at or after that update time 64.

Note that when a large amount of unstructured data is stored within a certain period, the list of URLs 61 may become very large. In this case, the storage unit 3 may list only a predetermined number of URLs 61, in order from the oldest update time 64. As described later, the data crawling processing is executed repeatedly after waiting for a fixed time, so it is not necessary to list all target URLs 61 at once.
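The incremental query of step S201, including the cap on the list size, can be sketched as follows; the entry layout, times, and the limit value are hypothetical:

```python
# Sketch of the incremental crawl query: return only URLs whose update time
# is at or after the remembered latest time, oldest first, capped at a
# predetermined count.
def crawl_urls(related_info, since, limit=2):
    fresh = [e for e in related_info if e["update_time"] >= since]
    fresh.sort(key=lambda e: e["update_time"])   # oldest update time first
    return [e["url"] for e in fresh[:limit]]

entries = [
    {"url": "u1", "update_time": "2012-04-01T09:00"},
    {"url": "u2", "update_time": "2012-04-01T11:00"},
    {"url": "u3", "update_time": "2012-04-01T10:00"},
]
urls = crawl_urls(entries, since="2012-04-01T10:00")
# u3 and u2 qualify; oldest first -> ["u3", "u2"]
```

Because the crawl repeats after a fixed wait, URLs beyond the cap are simply picked up on a later iteration.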
Next, the data distribution unit 7 distributes the list of URLs 61 acquired by the crawling processing unit 6 to predetermined recognition function units (step S202).

Here, a recognition function unit is a functional unit that executes recognition processing, and includes a filter unit, a recognition unit, a dictionary unit, and a post-processing unit.

The filter unit determines, based on the URL 61, whether the unstructured data 50 is a recognition target.

The recognition unit acquires the unstructured data 50 from the storage server 31 based on the URL 61 and executes recognition processing on the acquired unstructured data 50 using the dictionary data held by the dictionary unit.

The post-processing unit generates structured data from the recognition result; that is, the post-processing unit corresponds to a functional unit (structured data generation unit) that generates structured data. Specifically, based on the recognition result indicating the contents of the unstructured data 50, the post-processing unit converts the result into data having a fixed structure, and generates the structured data by attaching to that data an ID unique to the recognition processing and the ID of the dictionary that was used.

In this embodiment, the recognition result is converted into XML-format data, but the present invention is not limited to this; it suffices that the result can be converted into a data format having at least a fixed structure.
Specifically, the voice recognition function unit, which performs the voice recognition processing, includes the voice filter unit 8, the voice recognition unit 9, the voice recognition processing unit 43, the voice dictionary unit 18, and the voice post-processing unit 10. The video recognition function unit, which performs the video recognition processing, includes the video filter unit 11, the video recognition unit 12, the video recognition processing unit 42, the video dictionary unit 19, and the video post-processing unit 13.

In this embodiment, a publish/subscribe model is used as the messaging model for distributing the URLs 61. Specifically, the voice filter unit 8 and the video filter unit 11, to which messages are to be distributed, are registered in advance in the data distribution management unit 16 as subscriber information. The data distribution unit 7 distributes the list of URLs 61 as a message to the voice filter unit 8 and the video filter unit 11 based on the subscriber information registered in the data distribution management unit 16.
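A minimal sketch of this publish/subscribe distribution follows, with simple callbacks standing in for the voice filter unit 8 and the video filter unit 11; the class and callback names are illustrative only:

```python
# Minimal publish/subscribe sketch: filter units register as subscribers,
# and the distribution unit publishes the URL list to every subscriber.
class DataDistributionManager:
    def __init__(self):
        self.subscribers = []        # registered subscriber information

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, url_list):
        for deliver in self.subscribers:
            deliver(list(url_list))  # every subscriber receives the same list

received = {}
manager = DataDistributionManager()
manager.subscribe(lambda urls: received.setdefault("audio_filter", urls))
manager.subscribe(lambda urls: received.setdefault("video_filter", urls))
manager.publish(["http://server/wav/a.wav", "http://server/mpg/b.mpg"])
```

Because every subscriber sees the same message, one stored file can trigger several independent recognition processes.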
Finally, the crawling processing unit 6 waits for a fixed time (step S203) and then returns to step S201 to execute the same processing.

Through the data crawling processing described above, the URLs 61 associated with the unstructured data 50 stored in the storage device 39 are notified to each recognition function unit. Moreover, through this processing, the URLs 61 can be distributed repeatedly each time new unstructured data is stored in the storage server 31.
Next, the data recognition processing in this embodiment will be described.

FIG. 6 is a flowchart illustrating the data recognition processing in the first embodiment of the present invention. FIG. 7 is an explanatory diagram illustrating an example of structured data in which a structured recognition result is reflected in the first embodiment of the present invention. FIGS. 8 and 9 are explanatory diagrams illustrating examples of structured recognition results in the first embodiment of the present invention.

Each recognition function unit starts processing upon receiving the list of URLs 61. The voice recognition function unit and the video recognition function unit are described below as examples.
The voice filter unit 8 and the video filter unit 11 receive the list of URLs 61 transmitted from the data distribution unit 7 (step S301).

In the data crawling processing described above, the list of URLs 61 is distributed using the publish/subscribe model, so each filter unit receives the same list of URLs 61. This makes it possible, for example, to execute a plurality of recognition processes, such as voice recognition processing and video recognition processing, on moving image data.

The voice filter unit 8 and the video filter unit 11 each select one URL 61 from the list of URLs 61 and execute the following processing on the selected URL 61.
Next, the voice filter unit 8 and the video filter unit 11 determine, based on the type of the unstructured data 50 corresponding to the selected URL 61, whether the unstructured data 50 is a recognition target (step S302).

For example, the voice filter unit 8 and the video filter unit 11 can determine the content type (data type) of the unstructured data 50 based on the extension in the URL 61. In this case, the voice filter unit 8 determines that unstructured data 50 whose URL 61 ends with ".wav" or ".mpg" is a recognition target, and the video filter unit 11 determines that unstructured data 50 whose URL ends with ".mpg" is a recognition target.
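The extension-based judgment of step S302 can be sketched as follows, using the ".wav" and ".mpg" rules given above; the URLs are hypothetical:

```python
# Sketch of extension-based filtering: the audio filter accepts ".wav" and
# ".mpg" URLs, the video filter accepts ".mpg" URLs only.
def is_audio_target(url: str) -> bool:
    return url.endswith(".wav") or url.endswith(".mpg")

def is_video_target(url: str) -> bool:
    return url.endswith(".mpg")

urls = [
    "http://server/wav/a.wav",
    "http://server/mpg/b.mpg",
    "http://server/doc/c.pdf",
]
audio_targets = [u for u in urls if is_audio_target(u)]  # a.wav and b.mpg
video_targets = [u for u in urls if is_video_target(u)]  # b.mpg only
```

Note that the ".mpg" file passes both filters, which is how a single moving-image file receives both voice and video recognition.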
As another method, the voice filter unit 8 and the video filter unit 11 may acquire the content type of the unstructured data 50 by executing the data reference processing based on the URL 61, and determine, based on the acquired content type, whether the unstructured data 50 is a recognition target.

As yet another method, the voice filter unit 8 and the video filter unit 11 may acquire the unstructured data 50 itself by executing the data reference processing based on the URL 61, and determine, based on an analysis result of the acquired unstructured data 50, whether the unstructured data 50 is a recognition target. One conceivable analysis method is to analyze the head portion or the like of the acquired unstructured data 50 to determine its content type.
If it is determined in step S302 that the unstructured data 50 corresponding to the URL 61 is not a recognition target, the recognition function unit ends the processing.

If it is determined in step S302 that the unstructured data 50 corresponding to the URL 61 is a recognition target, the voice filter unit 8 and the video filter unit 11 acquire the unstructured data 50 corresponding to the URL 61 (step S303). This can be realized by the structured data reference processing described above.

Next, the voice filter unit 8 and the video filter unit 11 analyze the contents of the acquired data and determine whether the unstructured data 50 has already been recognized (step S304).
Here, an example of a method of determining whether the data has already been recognized will be described with reference to FIG. 7. FIG. 7 shows the structured data after the structured data association processing, described later, has been executed on the structured data shown in FIG. 4. Comparing FIG. 4 with FIG. 7, it can be seen that a tag named metainfo has been added. In this embodiment, the structured recognition result is added under the metainfo tag.

The simplest conceivable method of determining whether the data has already been recognized is for the filter unit to detect the above-described tag. However, since the tag may have been added by a different recognition process, this alone is not sufficient for a correct determination.

Therefore, in this embodiment, an ID unique to the recognition processing is assigned to a processor_url tag inside the metainfo tag, and the filter unit determines, based on this ID, whether the data has already been recognized. That is, when the structured data 51 contains the ID unique to the corresponding recognition processing, the filter unit determines that the unstructured data 50 has already been recognized.

As another method, the time at which the recognition processing was completed may be recorded in a processed tag inside the metainfo tag. With this method, for example, when the recognition processing is to be executed again following an update of the recognition function unit, the filter unit determines that the unstructured data 50 is a recognition target only when the completion time of the recognition processing is earlier than the update time of the recognition function unit.
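Assuming the structured data 51 is XML containing a metainfo element with a processor_url tag as described, the already-recognized check of step S304 can be sketched as follows; the document layout and ID values are illustrative assumptions:

```python
# Sketch of the "already recognized" check: look for this recognition
# process's unique ID inside the metainfo/processor_url element.
import xml.etree.ElementTree as ET

def already_recognized(structured_xml: str, processor_id: str) -> bool:
    root = ET.fromstring(structured_xml)
    for elem in root.iter("processor_url"):
        if elem.text == processor_id:   # this recognition process already ran
            return True
    return False

doc = """<data><owner>alice</owner>
  <metainfo><processor_url>http://sound.hitachi.com/tvnews</processor_url>
  <processed>2012-04-01T12:00</processed></metainfo></data>"""

skip = already_recognized(doc, "http://sound.hitachi.com/tvnews")  # True
run = already_recognized(doc, "http://video.hitachi.com/faces")    # False
```

Matching on the processor ID rather than on the mere presence of metainfo avoids skipping data that was processed only by a different recognition function unit.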
If it is determined in step S304 that the unstructured data 50 has already been recognized, the recognition function unit ends the processing.

If it is determined in step S304 that the unstructured data 50 has not been recognized, the voice recognition unit 9 and the video recognition unit 12 execute recognition processing on the unstructured data 50 corresponding to the URL 61 (step S305).

Specifically, the voice recognition unit 9 executes voice recognition processing on the unstructured data 50 in cooperation with the voice recognition processing unit 43 and the voice dictionary unit 18, and the video recognition unit 12 executes video recognition processing on the unstructured data 50 in cooperation with the video recognition processing unit 42 and the video dictionary unit 19.

Here, the voice recognition processing receives voice data and outputs, as the recognition result, the words contained in the voice data together with the start time and end time of each word. The video recognition processing receives video data and outputs, as the recognition result, the names of persons appearing in the video data together with the appearance time and appearance position of each person.
Voice recognition processing and video recognition processing are taken up here as examples, but the present invention can apply various kinds of processing for recognizing unstructured data acquired from documents, images, audio, acceleration sensors, or the like.

In this embodiment, as described above, the video recognition unit 12 of the management server 32 and the video recognition processing unit 42 of the video server 33 cooperate to execute the video recognition processing, and the voice recognition unit 9 of the management server 32 and the voice recognition processing unit 43 of the voice server 34 cooperate to execute the voice recognition processing.

In general, video recognition processing and voice recognition processing take longer than processing such as message transfer. The system is therefore configured as described above, with separate servers executing the recognition processing, so that the processing performance of the system as a whole is not degraded. Note that a system configuration in which the management server 32 itself executes the recognition processing is also possible.

In the system configuration described above, the voice recognition unit 9 of the management server 32 executes the data reference processing to acquire the unstructured data 50 corresponding to the URL 61 and transmits the acquired unstructured data 50 to the voice server 34. The voice recognition processing unit 43 on the voice server 34 then generates a recognition result using the voice dictionary unit 18 and returns the generated recognition result to the management server 32, where it is received by the voice recognition unit 9. Similarly, the video recognition unit 12 cooperates with the video server 33, and the video recognition processing unit 42 generates a recognition result using the video dictionary unit 19.
Next, the voice post-processing unit 10 and the video post-processing unit 13 execute post-processing on the results of the recognition processing (step S306).

Specifically, the voice post-processing unit 10 and the video post-processing unit 13 generate structured data including the structured recognition result, the ID unique to the recognition processing, and the ID unique to the dictionary used for the recognition processing. The voice post-processing unit 10 and the video post-processing unit 13 can also include the completion time of the recognition processing in the structured data.

In this embodiment, the URL of the server that executes the recognition processing is used as the ID unique to the recognition processing. Here, the URL of the voice server 34 is "http://sound.hitachi.com/" and the URL of the video server 33 is "http://video.hitachi.com/". The ID unique to the recognition processing can also include the ID unique to the dictionary used for the recognition processing. When the system is configured so that the dictionary used for the recognition processing is also specified by a URL, the ID unique to the recognition processing that includes "tvnews", the ID of the dictionary held by the voice dictionary unit 18, is determined as "http://sound.hitachi.com/tvnews".

As described later, by reflecting the generated structured data in the original structured data 51, it becomes possible to determine in step S304 whether the unstructured data 50 has already been recognized.

The recognition result output from each recognition processing unit may be in any format, but each post-processing unit generates structured data in a unified XML format in order to simplify the configuration of the structured data association processing unit 15 described later. FIG. 8 shows an example of the XML-format structured data generated by the voice post-processing unit 10, and FIG. 9 shows an example of the XML-format structured data generated by the video post-processing unit 13.
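A sketch of the post-processing of step S306 for a voice recognition result follows. The element names and the word/time values are assumptions patterned on the description above; the actual layouts correspond to FIGS. 8 and 9, which are not reproduced here:

```python
# Sketch of post-processing: wrap a recognition result in a unified XML
# structure carrying the processor ID (server URL plus dictionary ID) and
# the completion time of the recognition processing.
import xml.etree.ElementTree as ET

def build_result_xml(words, processor_url, dictionary_id, completed_at):
    meta = ET.Element("metainfo")
    # Processor ID includes the dictionary ID, e.g. .../tvnews.
    ET.SubElement(meta, "processor_url").text = f"{processor_url}{dictionary_id}"
    ET.SubElement(meta, "processed").text = completed_at
    result = ET.SubElement(meta, "result")
    for word, start, end in words:
        w = ET.SubElement(result, "word", start=start, end=end)
        w.text = word
    return ET.tostring(meta, encoding="unicode")

xml_text = build_result_xml(
    [("weather", "00:01.2", "00:01.9")],        # word with start/end times
    processor_url="http://sound.hitachi.com/",
    dictionary_id="tvnews",
    completed_at="2012-04-01T12:00",
)
```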
Next, the voice post-processing unit 10 and the video post-processing unit 13 transmit the structured data to the recognition result receiving unit 14 (step S307).

Here, the recognition result receiving unit 14 is provided with a queue so that it can receive structured data from a plurality of recognition function units. In this case, the voice post-processing unit 10 and the video post-processing unit 13 each transmit a message containing the structured data to the queue. The header of each message transmitted to the queue carries the URL 61 corresponding to the unstructured data 50 that was the target of the recognition processing.
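The enqueueing of step S307 can be sketched with a simple queue whose message headers carry the URL 61 of the recognized unstructured data; `queue.Queue` and the message layout stand in for the embodiment's unspecified queue implementation:

```python
# Sketch of step S307: post-processing units enqueue messages whose header
# carries the URL 61, so the association step can match each result to the
# unstructured data it belongs to.
import queue

result_queue = queue.Queue()

def send_result(url: str, structured_data: str) -> None:
    result_queue.put({"header": {"url": url}, "body": structured_data})

send_result("http://server/wav/20120401.wav", "<metainfo>...</metainfo>")

msg = result_queue.get()
target_url = msg["header"]["url"]   # which unstructured data this result is for
```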
Through the data recognition processing described above, structured data containing the recognition results for the unstructured data 50 stored in the storage server 31 is accumulated in the queue of the recognition result receiving unit 14.

Moreover, in this embodiment, since each of the plurality of recognition function units is provided with a filter unit, only the necessary recognition processes are executed.
Next, the structured data association processing in this embodiment will be described.

FIG. 10 is a flowchart illustrating the structured data association processing in the first embodiment of the present invention. FIG. 11 is an explanatory diagram illustrating an example of structured data in which a plurality of structured recognition results are reflected in the first embodiment of the present invention.

First, the recognition result receiving unit 14 acquires structured data accumulated in the queue (step S401). Here, it is assumed that the structured data containing the recognition result of the voice data is received earlier than the structured data containing the recognition result of the video data. In this case, XML-format structured data such as that shown in FIG. 8 is acquired from the queue.
Next, the structured data association processing unit 15 identifies the URL 61 corresponding to the unstructured data 50 that was the recognition target, and acquires the structured data 51 corresponding to the identified URL 61 from the storage server 31 by executing the structured data reference processing (step S402). Here, structured data 51 that does not yet contain a recognition result, such as that shown in FIG. 4, is acquired.

Next, the structured data association processing unit 15 integrates the structured data 51 acquired from the storage server 31 with the structured data acquired from the queue (step S403).

Specifically, the structured data association processing unit 15 embeds the received structured data inside the structured data 51 acquired from the storage server 31, thereby generating a single piece of XML-format structured data as shown in FIG. 7. The portion enclosed by the dotted frame in FIG. 7 is the embedded recognition result of the voice data.

As a method of embedding the received structured data, the structured data association processing unit 15 analyzes the structured data 51 acquired from the storage server 31 to identify the position at which the received structured data is to be embedded. For example, a method of identifying the embedding position using a predetermined tag as a key is conceivable. Note that this method is merely an example, and the present invention is not limited to it.
 次に、構造データ関連づけ処理部15は、記憶サーバ31の記憶部3に、生成された構造データを送信し(ステップS404)、処理を終了する。 Next, the structural data association processing unit 15 transmits the generated structural data to the storage unit 3 of the storage server 31 (step S404), and ends the process.
 このとき、記憶部3は、受信した構造データを、新たな構造データとして、既存の構造データ51に上書きする。 At this time, the storage unit 3 overwrites the existing structure data 51 with the received structure data as new structure data.
 前述の構造データ関連づけ処理によって、非構造データ50に対する認識処理の認識結果が、URL61に対応づけられた構造データ51として格納される。また、認識機能部から認識結果を受信するたびに、当該処理が繰り返し実行されるため、複数の認識結果を一つの構造データ51に含めることが可能となる。 The recognition result of the recognition process for the non-structure data 50 is stored as the structure data 51 associated with the URL 61 by the above-described structure data association process. In addition, each time a recognition result is received from the recognition function unit, the process is repeatedly executed, so that a plurality of recognition results can be included in one structure data 51.
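The fetch-embed-write-back cycle described above can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: the tag names (`structureData`, `recognitionResults`, `result`) and the merge-by-tag strategy are assumptions standing in for the unspecified XML schema.

```python
import xml.etree.ElementTree as ET

def merge_recognition_result(base_xml: str, result_xml: str) -> str:
    """Embed one structured recognition result into the base structure data,
    locating the insertion point by a predetermined tag, as in step S403."""
    base = ET.fromstring(base_xml)
    result = ET.fromstring(result_xml)
    container = base.find("recognitionResults")
    if container is None:
        # First result for this unstructured data: create the container.
        container = ET.SubElement(base, "recognitionResults")
    container.append(result)
    return ET.tostring(base, encoding="unicode")

base = "<structureData><url>http://example.com/video.mp4</url></structureData>"
speech = '<result unitId="speech" dictId="dict-ja">meeting audio transcript</result>'
video = '<result unitId="video" dictId="dict-face">two persons detected</result>'

merged = merge_recognition_result(base, speech)   # first recognition result arrives
merged = merge_recognition_result(merged, video)  # a later result is appended, not lost
```

Running the merge twice, as the repeated association process does, leaves both recognition results side by side in one XML document, which mirrors how FIG. 11 accumulates the audio and video results.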
If the recognition result of the video data is received after the recognition result of the audio data, the following processing is executed.
In step S401, the structure data association processing unit 15 acquires XML-format structure data as shown in FIG. 10 from the queue.
In step S402, the structure data association processing unit 15 acquires, from the storage server 31, the structure data 51 including the recognition result of the audio data as shown in FIG. 8.
In step S403, the structure data association processing unit 15 integrates the existing structure data with the acquired structure data to generate XML-format structure data as shown in FIG. 11. The portion enclosed by the dotted frame in FIG. 11 is the embedded recognition result of the video data.
In step S404, the structure data association processing unit 15 transmits the structure data in which the recognition result of the video data is embedded to the storage server 31. At this time, the storage server 31 overwrites the existing structure data 51 with the received structure data.
As described above, by repeatedly executing the structure data association process, a plurality of recognition results are integrated into the structure data 51.
Next, the recognition function registration process in this embodiment will be described.
FIG. 12 is a flowchart illustrating the recognition function registration process according to the first embodiment of the present invention.
The recognition function registration unit 17 receives a recognition function unit to be added (step S501). Specifically, the recognition function registration unit 17 receives a program for implementing a predetermined recognition unit.
Here, the recognition function unit is implemented with the same configuration as the above-described speech recognition function unit and video recognition function unit. That is, the recognition function unit includes a filter unit, a recognition processing unit, a dictionary unit, and a post-processing unit.
Next, the recognition function registration unit 17 adds a recognition processing unit by storing the received program in the memory 36 of the management server 32 (step S502).
Next, the recognition function registration unit 17 notifies the data distribution management unit 16 of the identification information of the received program, registers the recognition function processing unit implemented by that program as a subscriber to messages distributed by the data distribution unit 7 (step S503), and ends the process.
Through the above processing, the recognition function registration unit 17 can add an arbitrary recognition function unit to the computer system. Here, by using a publish/subscribe model for the message processing of the data distribution unit 7, it can be guaranteed that the processing of the existing recognition processing units is not affected.
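The publish/subscribe property that makes this registration safe can be sketched with a toy distributor. The class and unit names are illustrative assumptions; the point is only that registering a new subscriber (step S503) never alters delivery to existing subscribers.

```python
class DataDistributor:
    """Toy publish/subscribe sketch: URL lists are published to every
    registered recognition unit; adding a subscriber leaves delivery to
    the existing subscribers untouched."""

    def __init__(self):
        self.subscribers = {}

    def register(self, name, handler):
        # Corresponds to step S503: record the new unit as a subscriber.
        self.subscribers[name] = handler

    def publish(self, urls):
        # Every current subscriber receives the same URL list.
        for handler in self.subscribers.values():
            handler(list(urls))

received = {"speech": [], "video": []}
dist = DataDistributor()
dist.register("speech", received["speech"].extend)

dist.publish(["http://example.com/a.wav"])        # only "speech" is registered yet
dist.register("video", received["video"].extend)  # a unit added later (steps S501-S503)
dist.publish(["http://example.com/b.mp4"])        # now both units receive the list
```

The speech unit sees both publications while the late-registered video unit sees only the second one, so existing processing is unaffected by the addition.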
In the data recognition process described above, the post-processing unit generates the structure data, but the present invention is not limited to this. For example, the following modification is also conceivable.
In step S306, the post-processing unit generates a structured recognition result from the recognition result received from the recognition unit, and transmits a message including the structured recognition result to the recognition result receiving unit 14. At this time, the URL 61, an ID unique to the recognition process, an ID unique to the dictionary, and the recognition process completion time are added to the header of the message. In this case, the recognition result receiving unit 14 or the structure data association processing unit 15 generates the structure data from the received message.
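The modified flow for step S306 can be sketched as a message with the four header fields named above. The JSON envelope and the field names are assumptions for illustration; the patent fixes the header contents but not a wire format.

```python
import json
import time

def build_result_message(url, process_id, dict_id, structured_result):
    """Sketch of the step S306 variant: the post-processing unit sends the
    structured recognition result as a message whose header carries the URL,
    an ID unique to the recognition process, an ID unique to the dictionary,
    and the recognition process completion time."""
    return {
        "header": {
            "url": url,
            "processId": process_id,
            "dictId": dict_id,
            "completedAt": time.strftime("%Y-%m-%dT%H:%M:%S"),
        },
        "body": structured_result,
    }

msg = build_result_message(
    "http://example.com/a.wav", "speech-01", "dict-ja",
    "<result>meeting audio transcript</result>",
)
payload = json.dumps(msg)  # what the post-processing unit would transmit
```

On the receiving side, the recognition result receiving unit 14 or the structure data association processing unit 15 would parse such a message and build the structure data from the header fields and body.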
In the structure data association process described above, the structure data association processing unit 15 integrates the received structure data with the existing structure data 51 each time structure data is stored in the queue, but the present invention is not limited to this. For example, a plurality of target recognition function units may be registered in advance, and the structure data association process may be started once structure data has been received from all of the registered recognition function units. In this case, the structure data association processing unit 15 integrates the plurality of pieces of structure data with the existing structure data 51 at one time.
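This wait-for-all variant behaves like a barrier: nothing is merged until every pre-registered unit has reported. A minimal sketch, with illustrative unit names and a plain dict standing in for the actual integration of structure data:

```python
class BarrierMerger:
    """Variant sketch: structure data association starts only after structure
    data has arrived from every pre-registered recognition function unit."""

    def __init__(self, expected_units):
        self.expected = set(expected_units)  # units registered in advance
        self.pending = {}

    def receive(self, unit, structure_data):
        self.pending[unit] = structure_data
        if set(self.pending) == self.expected:
            merged = dict(self.pending)  # integrate all results at one time
            self.pending.clear()
            return merged
        return None  # still waiting for the remaining units

merger = BarrierMerger(["speech", "video"])
first = merger.receive("speech", "<result unitId='speech'/>")  # merge not triggered
final = merger.receive("video", "<result unitId='video'/>")    # last arrival triggers it
```

With this design the existing structure data 51 is read and overwritten once per unstructured data item instead of once per recognition result, at the cost of delaying all results until the slowest unit finishes.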
According to the first embodiment, the storage server 31 stores the received unstructured data and, in addition, stores the recognition result indicating the content of the unstructured data, associated with information unique to the recognition process and with information on the dictionary, as structure data accompanying the unstructured data. As a result, the recognition result for the unstructured data can be managed as structure data associated with the same URL that is used when referring to the unstructured data.
Therefore, a database function for storing recognition results and a function for determining the completion of recognition processing can be realized solely through access to the storage server 31 using the URL.
Furthermore, when a plurality of recognition function units are linked, it becomes unnecessary to design storage locations for the recognition results or the correspondence relationships between the recognition function units. In addition, the processing performance of the computer system when a plurality of recognition function units execute processing simultaneously can easily be controlled according to the performance of each recognition function unit.
Furthermore, unnecessary execution of recognition processing can be avoided when unstructured data is moved or duplicated, and likewise when a recognition function unit is added or updated.
Furthermore, when a plurality of recognition function units are linked, the recognition results output from the plurality of recognition function units for a single piece of unstructured data can be integrated into a single piece of XML-format structure data.
In the first embodiment, the storage processing of unstructured data was realized by the computer system as a whole; the second embodiment differs in that the storage processing of unstructured data is realized using a single apparatus. The second embodiment will be described below, focusing on the differences from the first embodiment.
FIG. 13 is a block diagram illustrating the configuration of the unstructured data storage device 1 according to the second embodiment of the present invention.
The hardware configuration of the unstructured data storage device 1 is the same as that of the storage server 31, the management server 32, and the like: it has a CPU (not shown), a memory (not shown), a communication device (not shown), and a storage device (not shown).
The unstructured data storage device 1 includes a data reception unit 2, a storage unit 3, a data reference unit 4, a structure data reference unit 5, a crawling processing unit 6, a data distribution unit 7, an audio filter unit 8, a speech recognition unit 9, a speech post-processing unit 10, a video filter unit 11, a video recognition unit 12, a video post-processing unit 13, a recognition result receiving unit 14, a structure data association processing unit 15, a data distribution management unit 16, a recognition function registration unit 17, a speech dictionary unit 18, and a video dictionary unit 19.
Here, the video recognition unit 12 has the functions realized by the video recognition unit 12 of the management server 32 and the video recognition processing unit 42 of the video server 33. Similarly, the speech recognition unit 9 has the functions realized by the speech recognition unit 9 of the management server 32 and the speech recognition processing unit 43 of the audio server 34.
The remaining configuration is the same as in the first embodiment, so its description is omitted.
The unstructured data storage device 1 provides the user with a user interface for operating the data reception unit 2, the data reference unit 4, the structure data reference unit 5, and the recognition function registration unit 17.
Upon receiving unstructured data from the user, the data reception unit 2 executes the data storage process in cooperation with the storage unit 3. Upon receiving from the user a reference request for unstructured data including a URL, the data reference unit 4 executes the data reference process. Upon receiving from the user a reference request for structure data including a URL, the structure data reference unit 5 executes the structure data reference process.
The crawling processing unit 6 and the data distribution unit 7 execute the data crawling process periodically or upon receiving an instruction from the user. Specifically, the crawling processing unit 6 generates a list of URLs and inputs the generated URL list to the data distribution unit 7. Based on the subscriber information stored in the data distribution management unit 16, the data distribution unit 7 inputs the URL list to the filter units that constitute the predetermined recognition function units. In the example shown in FIG. 13, the URL list is input to at least one of the audio filter unit 8 and the video filter unit 11. The data recognition process is thereby started.
The audio filter unit 8 and the video filter unit 11 determine whether the unstructured data 50 corresponding to a URL is a recognition target, and also determine whether recognition processing for that unstructured data 50 has already been executed. Based on these determination results, the audio filter unit 8 and the video filter unit 11 request the speech recognition unit 9 and the video recognition unit 12, respectively, to execute processing.
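The filter units' already-executed check can be sketched by inspecting the structure data associated with each URL. A minimal sketch, assuming a hypothetical `unitId` attribute that records which recognition unit produced each embedded result:

```python
def already_recognized(structure_data, unit_id):
    """Filter check: the structure data records which recognition units
    have already processed this unstructured data (the attribute name is
    an assumption for illustration)."""
    return f'unitId="{unit_id}"' in structure_data

# Toy store: URL -> structure data, as kept by the storage unit 3.
store = {
    "http://example.com/a.wav": '<structureData><result unitId="speech"/></structureData>',
    "http://example.com/b.wav": "<structureData/>",
}

def run_filter(urls, unit_id):
    # Only URLs whose structure data shows no prior result from this unit
    # are forwarded to the corresponding recognition unit.
    return [u for u in urls if not already_recognized(store[u], unit_id)]

todo = run_filter(sorted(store), "speech")
```

Here the speech filter forwards only `b.wav`, because `a.wav` already carries a speech result; this is how re-crawling the same URLs avoids redundant recognition processing.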
The speech recognition unit 9 executes audio data recognition processing on the unstructured data 50 in cooperation with the speech dictionary unit 18, and inputs the recognition result to the speech post-processing unit 10. Similarly, the video recognition unit 12 executes video data recognition processing on the unstructured data 50 in cooperation with the video dictionary unit 19, and inputs the recognition result to the video post-processing unit 13.
The speech post-processing unit 10 generates structure data including the recognition result, an ID unique to the audio data recognition process, and the process completion time, and inputs the structure data to the recognition result receiving unit 14. Similarly, the video post-processing unit 13 generates structure data including the recognition result, an ID unique to the video data recognition process, and the process completion time, and inputs the structure data to the recognition result receiving unit 14.
When the structure data is input, the recognition result receiving unit 14 executes the structure data association process in cooperation with the structure data association processing unit 15. At this time, the structure data association processing unit 15 inputs to the storage unit 3 new structure data into which the input structure data has been integrated. The storage unit 3 updates the existing structure data 51 by overwriting it with the input structure data.
The recognition function registration unit 17 executes the recognition function registration process to add a new recognition function unit to the unstructured data storage device 1, and registers in the data distribution management unit 16 the subscriber information for distributing URLs to that recognition function unit.
The specific content of each process is the same as in the first embodiment, so its description is omitted.
The configurations of the computers, processing units, processing means, and the like described in the present invention may be partially or entirely realized by dedicated hardware. In addition, the various software exemplified in the embodiments can be stored in various recording media (for example, non-transitory storage media) such as electromagnetic, electronic, and optical media, and can be downloaded to a computer through a communication network such as the Internet.
The present invention is not limited to the embodiments described above and includes various modifications. The embodiments assume a computer system that stores unstructured data, but the invention can be applied to devices and systems of various configurations, for example, a portable information management system in which a portable device provides the functions of the management server 32 and the storage server 31 while the recognition server is placed on the cloud.

Claims (15)

  1.  A computer that manages unstructured data having no fixed data structure and structure data having a fixed data structure, wherein
     the computer comprises a processor, a memory connected to the processor, a storage device connected to the processor, and an I/O interface connected to the processor, and comprises:
     at least one recognition unit that executes recognition processing of a predetermined data type on the unstructured data using a predetermined dictionary; and
     a structure data generation unit that generates the structure data including a result of the recognition processing executed by the recognition unit, identification information of the recognition unit, and identification information of the dictionary used by the recognition unit.
  2.  The computer according to claim 1, wherein
     the structure data generation unit generates structure data having a data structure that can be integrated with the structure data managed by the computer.
  3.  The computer according to claim 2, wherein
     the computer comprises a structure data association processing unit that generates new structure data by integrating the structure data related to the unstructured data with the structure data generated by the structure data generation unit.
  4.  The computer according to claim 3, wherein
     the computer comprises a first structure data generation unit and a second structure data generation unit,
     the first structure data generation unit generates first structure data,
     the second structure data generation unit generates second structure data, and
     the structure data association processing unit:
     acquires, when the first structure data is input from the first structure data generation unit, third structure data related to the unstructured data;
     generates fourth structure data by integrating the acquired third structure data with the input first structure data;
     acquires, when the second structure data is input from the second structure data generation unit after the fourth structure data has been stored, the fourth structure data; and
     generates fifth structure data by integrating the acquired fourth structure data with the input second structure data.
  5.  The computer according to claim 3, wherein
     a plurality of the recognition units are provided according to the types of data targeted by the recognition processing,
     the computer comprises a plurality of filter units that refer to the structure data related to the unstructured data to determine whether the unstructured data is a target of recognition processing of a predetermined data type, and
     each of the plurality of filter units:
     is associated with one of the plurality of recognition units;
     refers to the structure data to determine whether the unstructured data is data of the predetermined data type targeted by the associated recognition unit; and
     refers to the structure data to determine whether the associated recognition unit has completed the recognition processing on the unstructured data.
  6.  The computer according to claim 5, wherein the computer comprises:
     a data input management unit that manages input information on the at least one recognition unit, among the plurality of recognition units, to which the unstructured data to be processed is input; and
     a data input unit that refers to the input information to identify the at least one recognition unit to which the unstructured data to be processed is input, and inputs the unstructured data to be processed to the identified recognition unit.
  7.  The computer according to claim 3, comprising
     a storage unit that manages the unstructured data and the structure data related to the unstructured data in association with each other, wherein
     the storage unit stores the new structure data input by the structure data association processing unit in association with the unstructured data.
  8.  A computer system comprising a plurality of computers, wherein
     each of the plurality of computers comprises a processor, a memory connected to the processor, a storage device connected to the processor, and an I/O interface connected to the processor,
     the plurality of computers include a storage server that manages unstructured data having no fixed data structure and structure data having a fixed data structure, and a management server that generates structure data including a result of predetermined processing on the unstructured data, and
     the management server comprises:
     at least one recognition unit that executes recognition processing of a predetermined data type on the unstructured data using a predetermined dictionary; and
     a structure data generation unit that generates the structure data including a result of the recognition processing executed by the recognition unit, identification information of the recognition unit, and identification information of the dictionary used by the recognition unit.
  9.  The computer system according to claim 8, wherein
     the structure data generation unit generates structure data having a data structure that can be integrated with the structure data managed by the storage server.
  10.  The computer system according to claim 9, wherein
     the management server comprises a structure data association processing unit that generates new structure data by integrating the structure data related to the unstructured data with the structure data generated by the structure data generation unit.
  11.  The computer system according to claim 10, wherein
     the management server comprises a first structure data generation unit and a second structure data generation unit,
     the first structure data generation unit generates first structure data,
     the second structure data generation unit generates second structure data, and
     the structure data association processing unit:
     acquires, when the first structure data is input from the first structure data generation unit, third structure data related to the unstructured data;
     generates fourth structure data by integrating the acquired third structure data with the input first structure data;
     acquires, when the second structure data is input from the second structure data generation unit after the fourth structure data has been stored, the fourth structure data; and
     generates fifth structure data by integrating the acquired fourth structure data with the input second structure data.
  12.  The computer system according to claim 10, wherein
     a plurality of the recognition units are provided according to the types of data targeted by the recognition processing,
     the management server comprises a plurality of filter units that refer to the structure data related to the unstructured data to determine whether the unstructured data is a target of recognition processing of a predetermined data type, and
     each of the plurality of filter units:
     is associated with one of the plurality of recognition units;
     refers to the structure data to determine whether the unstructured data is data of the predetermined data type targeted by the associated recognition unit; and
     refers to the structure data to determine whether the associated recognition unit has completed the recognition processing on the unstructured data.
  13.  The computer system according to claim 12, wherein the management server comprises:
     a data input management unit that manages input information on the at least one recognition unit to which the unstructured data to be processed is input; and
     a data input unit that refers to the input information to identify the at least one recognition unit to which the unstructured data to be processed is input, and inputs the unstructured data to be processed to the identified recognition unit.
  14.  The computer system according to claim 10, wherein
     the storage server comprises a storage unit that manages the unstructured data and the structure data related to the unstructured data in association with each other, and
     the storage unit stores, when the new structure data is input from the structure data association processing unit, the new structure data in association with the unstructured data.
  15.  A data management method in a computer that manages unstructured data having no fixed data structure and structured data having a fixed data structure, wherein
     the computer includes a processor, a memory coupled to the processor, a storage device coupled to the processor, and an I/O interface coupled to the processor,
     the method including:
     a step in which the processor executes, on the unstructured data, a plurality of recognition processes that each use a predetermined dictionary for each data type;
     a step in which the processor generates, for each of the plurality of recognition processes, structured data in an integrable data structure based on the result of the recognition process, the identification information of the recognition process, and the identification information of the dictionary used in the recognition process;
     a step in which the processor generates new structured data by integrating a plurality of pieces of the structured data; and
     a step in which the processor stores the unstructured data and the new structured data in association with each other.
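The flow claimed in claim 15 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the recognizer functions, registry, and all identifiers below are hypothetical placeholders. It shows the claimed shape of the method: each recognition process runs with its own dictionary, emits structured data in a common (integrable) form carrying the result, the recognition-process ID, and the dictionary ID; the per-process records are then integrated into one new structured-data record, which is stored in association with the original unstructured data.

```python
def recognize_image(data, dictionary_id):
    # Placeholder recognition process: a real one would apply image
    # recognition using the dictionary identified by dictionary_id.
    return f"image-result({len(data)} bytes)"

def recognize_speech(data, dictionary_id):
    # Placeholder for a speech-recognition process.
    return f"speech-result({len(data)} bytes)"

# Registry of recognition processes: (process id, dictionary id, function).
# Hypothetical names for illustration only.
RECOGNIZERS = [
    ("img-rec-01", "dict-image-v1", recognize_image),
    ("spc-rec-01", "dict-speech-v1", recognize_speech),
]

def run_recognitions(unstructured: bytes):
    """Run each recognition process on the unstructured data and wrap its
    output in the common integrable structure: result + recognition-process
    ID + dictionary ID."""
    records = []
    for proc_id, dict_id, fn in RECOGNIZERS:
        records.append({
            "result": fn(unstructured, dict_id),
            "process_id": proc_id,
            "dictionary_id": dict_id,
        })
    return records

def integrate(records):
    """Integrate the per-process structured data into one new
    structured-data record (here simply keyed by process ID)."""
    return {r["process_id"]: r for r in records}

# Store the unstructured data and the new structured data in association
# with each other, as in the final step of the claimed method.
store = {}
data = b"\x00\x01\x02"
store["object-001"] = {
    "unstructured": data,
    "structured": integrate(run_recognitions(data)),
}
```

Because every recognizer emits the same record shape, new recognition processes (e.g. document-structure recognition) can be added to the registry without changing the integration or storage steps, which is the point of the integrable data structure in the claim.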
PCT/JP2012/080591 2012-11-27 2012-11-27 Computer, computer system, and data management method WO2014083608A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2012/080591 WO2014083608A1 (en) 2012-11-27 2012-11-27 Computer, computer system, and data management method
JP2014549661A JP5891313B2 (en) 2012-11-27 2012-11-27 Computer, computer system, and data management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/080591 WO2014083608A1 (en) 2012-11-27 2012-11-27 Computer, computer system, and data management method

Publications (1)

Publication Number Publication Date
WO2014083608A1 true WO2014083608A1 (en) 2014-06-05

Family

ID=50827284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/080591 WO2014083608A1 (en) 2012-11-27 2012-11-27 Computer, computer system, and data management method

Country Status (2)

Country Link
JP (1) JP5891313B2 (en)
WO (1) WO2014083608A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2508791A1 (en) * 2002-12-06 2004-06-24 Attensity Corporation Systems and methods for providing a mixed data integration service
CN101086741A (en) * 2006-06-09 2007-12-12 索尼株式会社 Information processing apparatus and information processing method
EP1883026A1 (en) * 2006-07-26 2008-01-30 Xerox Corporation Reference resolution for text enrichment and normalization in mining mixed data
US20080114725A1 (en) * 2006-11-13 2008-05-15 Exegy Incorporated Method and System for High Performance Data Metatagging and Data Indexing Using Coprocessors
US20080114724A1 (en) * 2006-11-13 2008-05-15 Exegy Incorporated Method and System for High Performance Integration, Processing and Searching of Structured and Unstructured Data Using Coprocessors
WO2008063974A2 (en) * 2006-11-13 2008-05-29 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006509307A (en) * 2002-12-06 2006-03-16 アテンシティ コーポレーション Providing system and providing method for mixed data integration service
WO2004053645A2 (en) * 2002-12-06 2004-06-24 Attensity Corporation Systems and methods for providing a mixed data integration service
AU2003297732A1 (en) * 2002-12-06 2004-06-30 Attensity Corporation Systems and methods for providing a mixed data integration service
US20040167883A1 (en) * 2002-12-06 2004-08-26 Attensity Corporation Methods and systems for providing a service for producing structured data elements from free text sources
US20040167870A1 (en) * 2002-12-06 2004-08-26 Attensity Corporation Systems and methods for providing a mixed data integration service
EP1588277A2 (en) * 2002-12-06 2005-10-26 Attensity Corporation Systems and methods for providing a mixed data integration service
CA2508791A1 (en) * 2002-12-06 2004-06-24 Attensity Corporation Systems and methods for providing a mixed data integration service
CN101655867A (en) * 2006-06-09 2010-02-24 索尼株式会社 Information processing apparatus, information processing method
CN101086741A (en) * 2006-06-09 2007-12-12 索尼株式会社 Information processing apparatus and information processing method
EP1865426A2 (en) * 2006-06-09 2007-12-12 Sony Corporation Information processing apparatus, information processing method, and computer program
KR20070118038A (en) * 2006-06-09 2007-12-13 소니 가부시끼 가이샤 Information processing apparatus, information processing method, and computer program
JP2007328675A (en) * 2006-06-09 2007-12-20 Sony Corp Information processor, information processing method, and computer program
US20080010060A1 (en) * 2006-06-09 2008-01-10 Yasuharu Asano Information Processing Apparatus, Information Processing Method, and Computer Program
EP1883026A1 (en) * 2006-07-26 2008-01-30 Xerox Corporation Reference resolution for text enrichment and normalization in mining mixed data
JP2008033931A (en) * 2006-07-26 2008-02-14 Xerox Corp Method for enrichment of text, method for acquiring text in response to query, and system
US20080027893A1 (en) * 2006-07-26 2008-01-31 Xerox Corporation Reference resolution for text enrichment and normalization in mining mixed data
US20080114725A1 (en) * 2006-11-13 2008-05-15 Exegy Incorporated Method and System for High Performance Data Metatagging and Data Indexing Using Coprocessors
US20080114724A1 (en) * 2006-11-13 2008-05-15 Exegy Incorporated Method and System for High Performance Integration, Processing and Searching of Structured and Unstructured Data Using Coprocessors
WO2008063974A2 (en) * 2006-11-13 2008-05-29 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
WO2008063973A2 (en) * 2006-11-13 2008-05-29 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
EP2092440A2 (en) * 2006-11-13 2009-08-26 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
EP2092419A2 (en) * 2006-11-13 2009-08-26 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
JP2010509691A (en) * 2006-11-13 2010-03-25 エクセジー・インコーポレイテツド High-performance data metatagging and data indexing method and system using a coprocessor
JP2010511925A (en) * 2006-11-13 2010-04-15 エクセジー・インコーポレイテツド Method and system for high performance integration, processing and search of structured and unstructured data using coprocessors
US20100094858A1 (en) * 2006-11-13 2010-04-15 Exegy Incorporated Method and System for High Performance Integration, Processing and Searching of Structured and Unstructured Data Using Coprocessors

Also Published As

Publication number Publication date
JP5891313B2 (en) 2016-03-22
JPWO2014083608A1 (en) 2017-01-05

Similar Documents

Publication Publication Date Title
CN105516233B (en) Method and system for application deployment portable on one or more cloud systems
KR101777392B1 (en) Central server and method for processing of voice of user
JP5172714B2 (en) RSS data processing object
US10306022B2 (en) Facilitating the operation of a client/server application while a client is offline or online
CN101089856A (en) Method for abstracting network data and web reptile system
JP4880376B2 (en) Support apparatus, program, information processing system, and support method
CN111901294A (en) Method for constructing online machine learning project and machine learning system
CN101090337A (en) System and method for scalable distribution of semantic web updates
CN102971707A (en) Configuring a computer system for a software package installation
WO2014120467A1 (en) Database shard arbiter
CN110321544B (en) Method and device for generating information
CN110851681A (en) Crawler processing method and device, server and computer readable storage medium
KR20110008179A (en) Generating sitemaps
CN108701130A (en) Hints model is updated using auto-browsing cluster
US9128886B2 (en) Computer implemented method, computer system, electronic interface, mobile computing device and computer readable medium
CN105653360A (en) Method and system for cross-app function acquisition
JP6192423B2 (en) Information processing apparatus, information processing method, information processing system, and program
Biörnstad et al. Let it flow: Building mashups with data processing pipelines
CN102640126A (en) Management apparatus and method therefor
JP5891313B2 (en) Computer, computer system, and data management method
CN112948733B (en) Interface maintenance method, device, computing equipment and medium
JP2007026296A (en) Integrated retrieval processing method and device
US9465876B2 (en) Managing content available for content prediction
CN108073638B (en) Data diagnosis method and device
CN108491448B (en) Data pushing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12889234

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014549661

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12889234

Country of ref document: EP

Kind code of ref document: A1