EP1590748A2 - Identifying similarities and history of modification within large collections of unstructured data - Google Patents
Identifying similarities and history of modification within large collections of unstructured data
- Publication number
- EP1590748A2 (application EP04704049A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- document
- documents
- clusters
- coefficients
- similar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
Definitions
- a refinement of this approach enables the user to input the information in a more user-friendly, human language form (as opposed to a set of words or word combinations such as "truck AND Boston AND sale").
- These so-called “natural language” interfaces permit a user to input a query such as "Which truck dealer in Boston area is currently advertising a sale?".
- Other techniques such as image pattern recognition and mathematical correlation can be used for finding information in non-textual data collections, such as in pictures (e.g. to find if a person whose face is captured by a security camera is located in a database of known criminals).
- a method and system for efficient discovery of the similarity between data from a large document collection and a given piece of data (which may be new or which may belong to that collection) is provided.
- the system can be implemented as a software program that is distributed across the computers in an organization.
- a client-side monitor process reports digital asset related activities of computer users (e.g., sensitive user documents being copied, modified, removed or transmitted).
- a data security application can maintain a Document Distribution Path (DDP) as a directional graph that is a representation of the historic dependencies between the documents.
- the system also preferably maintains a very much reduced (“lossy") hierarchical representation of the user data files, indexed in a way that allows for fast queries for similar (but not necessarily equivalent) information.
- the system can thus respond to queries such as "find documents similar to a given document". This information is then used in further completing the DDP graph in instances when certain operations are not visible to the client monitor process.
- Document similarity queries can originate manually from users, or can be implemented as part of a distributed data processing system service.
- a document similarity service called the Similarity Detection Engine (SDE) can be used to provide an organization-wide security solution that is capable of finding existing files "which contain data similar to a new file", and applying the appropriate controls to new files automatically.
- the SDE uses sparse representations of documents to speed up the similarity determination.
- the sparse representation preferably consists of a hierarchy of solicited Fourier coefficients determined from selected portions or "chunks" of the file. Algorithms are used to selectively choose Fourier coefficient components that are a best representation of the document.
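As a rough illustration of this idea, a chunk's sparse signature might be computed as follows. The function names, the 10% retention fraction, and the one-byte quantization scheme here are assumptions for the sketch, not the patent's actual implementation:

```python
import cmath

def dft(chunk):
    """Naive discrete Fourier transform of a byte sequence (O(n^2), for clarity)."""
    n = len(chunk)
    return [sum(chunk[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def sparse_signature(chunk, keep_fraction=0.1):
    """Keep only the largest-magnitude Fourier coefficients (about 10% of the
    full set), quantized to one byte each, as a lossy chunk representation."""
    mags = [abs(c) for c in dft(chunk)]
    k = max(1, int(len(mags) * keep_fraction))
    # indices of the k largest coefficient magnitudes
    top = sorted(range(len(mags)), key=lambda i: mags[i], reverse=True)[:k]
    peak = max(mags) or 1.0
    # store (coefficient index, low-accuracy magnitude) pairs
    return {i: int(255 * mags[i] / peak) for i in sorted(top)}

sig = sparse_signature(bytes(range(64)))
```

Because only coefficient magnitudes are kept, the signature is insensitive to exactly where the data sits inside the chunk, which is what makes approximate matching possible.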
- the system is transparent to an end user and exploits only a small fraction of available resources of a modern computer workstation.
- the system may require a dedicated server or a server cluster to support a large number of client workstations.
- the system can thus be used to provide a data management application that has the ability to automatically maintain and/or reconstruct a document distribution path.
- This path identifies: 1) the origin of a document, 2) its distribution path from its point of origin, and 3) the name of the user who altered the document and the time the alterations occurred.
- An organization can apply this ability of the present invention to a number of end uses.
- the invention can be used to monitor document flow and streamline corporate practices by identifying and resolving critical information exchange bottlenecks that impact work flow.
- This feature can also be implemented in information security applications by enabling automatic identification of similar documents in real time, even across large collections of documents in an enterprise.
- Document similarity analysis can be utilized to determine document sensitivity, which is a necessary data security function, to prevent improper access or the distribution of sensitive data without interfering with the exchange of non-sensitive documents.
- Fig. 1 illustrates the components of a Similarity Discovery System, according to the present invention.
- the SDE monitors system events related to document management and maintains a hierarchical structure representative of document files.
- the elements of the hierarchical structure of a given file are referenced as Fourier components of data "chunks", whose identifiers (IDs) as well as locations within the original source file are stored in a built-in Document Chunk Database.
- the client-side database also stores a Document Distribution Path (DDP).
- An optional enterprise-wide server can be used to collect the data from client-based SDEs and to service queries which cannot be serviced by the local SDE.
- Fig 2 illustrates one example scenario of the paths of document flow within a computer system.
- the SDE has no information on the origin of the documents at time t0 and scans the file system in order to generate the built-in hierarchical structure, as well as the Document Distribution Path (DDP).
- the similarity of new versions of documents with the sources of their origin can sometimes be uncovered by monitoring the activity of the computer system (e.g. when a document is renamed or copied or merged). In other cases (e.g. when a document is received from a network) this similarity can best be revealed by querying the SDE.
- Fig. 3 is an example of entries in a relational database of representation of the Document Distribution Path (DDP), which records the relationships between documents and how they were created.
- Fig. 4 is a high level flow diagram of the algorithm that the SDE uses to identify similar documents.
- Fig. 5 illustrates a convolution of two vectors, which might each represent the components of a lowest level in a document chunk hierarchy. The convolution here has two relatively offset common parts, a quarter of the vector length each, as well as two peaks on top of random noise.
- Fig. 6 illustrates the architecture of a hierarchical structure used by the SDE to represent a data file.
- the structure represents the space of vectors of Fourier coefficients of data stored in chunks of documents.
- Each higher-level cluster holds a reference to a collection of lower-level clusters.
- the bottom level clusters host the elements of the above-mentioned Fourier coefficient space.
- Fig. 7 is a flow chart of operations used to query the hierarchical structure for clusters, similar to a given element (referred to as "the base of the query").
- FIG. 1 A high level conceptual representation of a data Similarity Discovery System 100 is presented in Fig. 1.
- Client 102 and server 104 computers constantly monitor user activity and collect information on data files or other "digital assets" such as document files that contain valuable information.
- the monitored events only include detecting and recording information about documents being modified (created, copied, moved, deleted, edited, or merged) by the computer operating system (OS) as well as its users.
- This information is represented as a data structure referred to as the Document Distribution Path (DDP) 150, which is typically implemented as a directed graph where the vertices represent documents and edges describe historic relationships between the documents.
- the DDP 150 is stored in the database, together with other information on files and their chunks.
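A minimal sketch of such a DDP graph, where vertices are document identifiers and edges record how one document was derived from another. The class and method names are illustrative, not from the patent:

```python
class DDP:
    """Document Distribution Path: child -> list of (parent, relation) edges."""

    def __init__(self):
        self.edges = {}

    def record(self, parent, child, relation):
        """Record that `child` was derived from `parent` (copy, rename, merge, ...)."""
        self.edges.setdefault(child, []).append((parent, relation))

    def origins(self, doc):
        """Walk edges backwards to the documents with no recorded parent."""
        parents = self.edges.get(doc)
        if not parents:
            return {doc}
        result = set()
        for parent, _ in parents:
            result |= self.origins(parent)
        return result

ddp = DDP()
ddp.record("Doc A", "Doc A'", "copy")
ddp.record("Doc A'", "Doc A''", "rename")
ddp.record("Doc B", "Doc BC", "merge")
ddp.record("Doc C", "Doc BC", "merge")
```

Tracing `origins("Doc A''")` back through the rename and copy edges recovers "Doc A" as the point of origin, and a merged document reports both of its sources.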
- Limitations of OS and networking protocol architectures prevent the system 100 from reconstructing historic relationships between all documents.
- existing email protocols do not support applications that track the file back to its origin on another workstation on the organizational network (document origin).
- the system 100 can use a Similarity Detection Engine (SDE) 160 (to be described later in detail) to query the received document against the database of existing documents. The system will then use the query results to initially construct the DDP 150.
- the SDE 160 maintains a database of "chunks" of documents available on the system. It converts data in these chunks into a highly-compressed hierarchical structure representation 170, which is an optimal form to use to approximately measure similarity between chunks. It also maintains chunk source information within Document Chunk Database 175.
- the system may be configured to run on a single standalone local machine 102 in which case the DDP 150, SDE 160, and hierarchical structure 170 all reside therein.
- the system can also be implemented as an enterprise- wide data management or security solution.
- client devices 102 and servers 104 are connected via local area network and/or internetwork connections 106. Connections to an outside network, such as the Internet 108, can also be made in such systems, so that files may originate and/or be distributed outside the enterprise.
- the DDP 150, SDE 160, and hierarchical structure 170 components will typically be distributed among multiple clients 102 and servers 104 and/or server clusters.
- the SDE 160 can thus maintain the hierarchical database 170 representation of documents on a local machine 102, on a server 104, in a distributed fashion, and/or a cluster of servers 104 in the same compressed representation.
- a local SDE 160 queries a server SDE 104 when it cannot respond to a query against a newly received document.
- the local SDE 160 updates the server SDE 104 when a user creates a new document or modifies an existing document.
- Once the update reaches the server SDE 104, it is immediately available for queries by other local SDEs 160 running on other client workstations. In a situation where the client 102 is disconnected from the network 106 (e.g. a laptop user is out of the office on a trip), communication requests are postponed and queued until the network connection is restored.
- the DDP 150 and SDE 160 can be used in a number of different applications 120.
- a data security application can be used to establish a perimeter of accountability for document usage at the point of use.
- the accountability model can not only track authorized users' access to documents but, more importantly, can monitor attempts to access or move copies of sensitive documents to peripherals or over network connections.
- the SDE-dependent security application 120 can be used to control or thwart attempts to distribute or record sensitive intellectual property or other information, or other possible abuse of authority events.
- a system component called the transparent system event monitor 180 acts as an agent of the application 120.
- the monitor 180 is interposed between an Operating System (OS) running on the client 102 and end user applications 190.
- the monitor process 180 has sensors or shims to detect read or write operations to file system 192, network interfaces 194, ports 196, and/or system clipboard 198.
- the sensors in the monitor process 180 may be used to detect possible abuse events that may occur whenever a user accesses devices which are not visible to or controllable by a local file server. These events may include writing documents to uncontrolled media such as Compact Disk-Read Write (CD-RW) drives, Personal Digital Assistants (PDA), Universal Serial Bus (USB) storage devices, wireless devices, digital video recorders, or printing them.
- suspect events can also be detected by the network sensors 194, such as running external Peer-to-Peer (P2P) applications, sending documents via external e-mail applications, running Instant Messaging (IM) applications, uploading documents to web sites via the Internet 108, and the like.
- Data typically collected with an event depends on the event type and the type of information which is desired to be maintained in the DDP 150.
- Such information can include:
- For file operations: source/destination file name, operation type (open, write, delete, rename, move to recycle bin), device type, and first and last access times
- For user operations, such as log on or log off: the time and user identification (ID)
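The collected records might be modeled as below. The field names and types are assumptions for illustration; the patent does not specify a schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileEvent:
    source: str
    destination: str
    operation: str       # open, write, delete, rename, move to recycle bin
    device_type: str
    first_access: datetime
    last_access: datetime

@dataclass
class UserEvent:
    user_id: str
    operation: str       # "logon" or "logoff"
    time: datetime

fe = FileEvent("C:/docs/plan.doc", "E:/plan.doc", "write", "USB storage",
               datetime(2004, 1, 22, 9, 0), datetime(2004, 1, 22, 9, 5))
ue = UserEvent("alice", "logon", datetime(2004, 1, 22, 8, 55))
```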
- the monitor process 180 may also be used to receive and enforce access policies as defined by the security application 120, such as by restricting access to local documents, forbidding writes to removable media, or limiting network traffic.
- the event monitor 180 process may include heuristics to limit processing by the application 120, DDP 150 and/or SDE 160.
- a typical heuristic may include an approved file filter to automatically filter the dozens of inconsequential events generated by standard calls to system files. For example, it is quite common for many different executable and dynamic library operating system files, font files, etc. to be opened and accessed repeatedly from the same application.
- the system typically creates a Document Distribution Path (DDP) 150 representation of the historical events concerning document flow within a system.
- the DDP may typically be a directed graph where the nodes or vertices are document identifiers and edges describe historic relationships between the documents. By maintaining such a graph, security policies can be applied to documents, in real time, as they are created, modified, and/or accessed.
- the similarity of new versions of documents with the sources of their origin can also sometimes be uncovered by monitoring the activity of the computer system (e.g. whenever a document is renamed or copied or merged). In other cases (e.g. when a document is received from a network 108) this similarity can only be revealed by determining whether a document is similar to an existing document in the database. That is another example of a situation where the SDE 160 becomes an important part of the security application 120.
- Fig. 2 illustrates one example scenario of the paths of document flow within a computer system, and how the representative DDP 150 might be constructed.
- the system has no information on the origin of three documents (labeled "Doc" A, B, and C in Fig. 2) in the database.
- the security application can however use the SDE 160 to run a comparison of Documents A, B, and C, and to establish an initial conclusion that Documents A and C are similar. This result is then stored as an entry 301 in a set of relational data entries in the DDP 150, as shown in Fig. 3.
- a copy event 202 is detected by the event monitor 180 (Fig. 1), reporting that Document A has been copied and stored as Document A'. This is recorded in the DDP 150 as another entry 302 (see Fig. 3). Since this was a simple copy operation, the similarity of the documents is assumed, and the SDE 160 does not need to be used to complete the relation between the two documents.
- Time t3 sees a file merge event 203, where Document B and Document C have been merged into a new Document BC. Since Document C has carried a high security label, one result might be that such a label is then applied automatically to the merged Document BC.
- the event monitor 180 reports a rename 204 of Document A to Document A". This event is stored in the DDP 150 as entry 304 (see Fig. 3).
- Event 205-1 reports that the sensitive Document A has been loaded into an editing program (such as Microsoft Word).
- Event 205-3 reports that Document D has been received from the Internet and also opened in the editor.
- the SDE 160 does not presently know the origin of Document D (in fact, in this example, the user is working on Document D as a personal birthday party invitation, and to make a correct decision, the system should not classify it as a sensitive document).
- Time t6 sees a cut-and-paste operation event 206 with the clipboard.
- Document E is sent over the Internet. Has the user stored and sent information from the sensitive Document A" as Document E, compromising security? Or has she just created a birthday invitation Document E from Document D?
- the results of the SDE 160, requesting a comparison of Document A" to E and Document D to E, can greatly improve the accuracy of the security classifications. If Document E is reported back as being very similar to D, then this is a low security event, no breach has occurred and the Internet transfer operation can be permitted to continue (and/or not reported). However, if Document E is similar to Document A", then a possible violation has occurred, and the security application can take appropriate steps, as stipulated by the enterprise security policy. It is generally not satisfactory to misclassify a low-risk event as a high-risk event, since that error leads to many false alerts, which significantly raise the cost of operating the security system.
- a save event 209 is detected from some application, with different data being saved to a new file having the same name as an old file, Document C.
- the SDE 160 engine can be used to classify Document C by comparing its contents against the database, rather than simply assuming that files with the same filename should be assigned the same security classification.
- a Forensic Investigation was required because the security department of the company received a report of a leak of proprietary information. Such an investigation can be substantially simplified and made more accurate if DDP 150 information is available to the investigators. Therefore, even if the system is not configured to block distribution of sensitive information outside the enterprise, the forthcoming investigation may detect such leaks and take legal measures against violators, once appropriate logging and reporting are provided.
- the SDE 160 can also report a degree of similarity (a real number) as a result of a comparison of two files. That number can then be used and/or carried into the DDP. So, for example, if the SDE 160 reports that a new Document E is 60% similar to Document A" and 32% similar to Document D, this information can also be important in reconstructing forensics of how documents were created.
- the document-to-document degree of similarity is preferably calculated on the basis of the number of similar "chunks" in two documents, relative to the overall number of chunks in the documents. (A detailed discussion of one such algorithm is contained below.) Formulae common to probability theory might be used as an estimate when one of the files is unavailable and similarity to it should be calculated on the basis of known similarities to other files: e.g. if the similarity of an unavailable Document A to B is known to be S_AB, and similarity of Document B to
- a desirable chunk size is an adjustable parameter with a typical value of one (1) KiloByte (KBt). This number is a parameter of the system and can be made larger or smaller, depending on the desired speed versus accuracy tradeoffs of the SDE 160, amount of information it has to hold, typical size of a document, etc.
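A sketch of this chunking step under those parameters; the handling of the final partial chunk here is an assumption for illustration:

```python
def chunks(data, size=1024):
    """Split a data stream into fixed-size chunks; the last chunk may be shorter.
    The 1 KB default mirrors the typical chunk size mentioned in the text."""
    return [data[i:i + size] for i in range(0, len(data), size)]

parts = chunks(bytes(2500))  # a 2500-byte stream yields two full chunks and a remainder
```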
- a typical operational scenario thus involves a stream of data containing more than one chunk, and, separately, a (possibly large) set of chunks that this data stream must be matched against.
- the goal is to find out whether a chunk similar to one from the stream is present in the data set.
- Classical algorithms such as "substring search” or "number of edits” are not practical because they query every chunk of the stream, starting from every character position, against the dataset of chunks. If classical algorithms are improved to query only non-intersecting chunks from the given stream, they will very rarely find a pair of similar chunks, because when they break the data stream, they cannot properly guess the positional shift or "phase" of the break.
- the SDE 160 instead matches the absolute values of the Fourier coefficients of chunks, and even detects the similarity between chunks that are phase-shifted with respect to one another by a substantial amount.
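The key property can be checked directly: the magnitudes of the Fourier coefficients of a sequence are exactly unchanged under a circular shift (and remain stable under substantial positional shifts of shared content), so matching on absolute coefficient values tolerates the unknown "phase" of a chunk boundary. A small pure-Python demonstration:

```python
import cmath

def dft_mags(x):
    """Absolute values of the discrete Fourier coefficients of x."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
shifted = data[3:] + data[:3]   # same content, phase-shifted by three positions

m1, m2 = dft_mags(data), dft_mags(shifted)
# m1 and m2 agree coefficient by coefficient although the raw sequences differ
```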
- the SDE 160 only needs about 10% of the whole set of Fourier coefficients to identify a correct match, and can maintain them in low accuracy form (byte, or even half-byte per each).
- the compressed internal representation of data, which can be effectively used for data comparison purposes, is a subset of absolute values of Fourier coefficients of short chunks of the data, kept in low accuracy form.
- Fig. 4 is a representative flow chart of the SDE 160 process at a high level.
- a first step 400 is thus to receive a stream of data, and then to determine its chunks in 410.
- the Fourier coefficients of the chunks are calculated; only a few of them are retained, while the rest are discarded (more on this later).
- a sequence of steps 430 is performed to compare the Fourier coefficients of the chunks against Fourier coefficients of chunks of files in the database, in an ordered fashion, to determine a degree of similarity in step 440.
- the number of chunks a typical file system is broken down into is very large, and an efficient query mechanism into the database of their Fourier coefficients and a way to maintain the data in its compressed format is needed.
- simple SQL-based queries cannot locate similar data chunks because they will treat a great disparity in only a few Fourier coefficients, even when outweighed by a good match of the others, as a mismatch.
- the SDE 160 exploits a so-called nearest neighbor search, and does not regard a mismatch of a small number of Fourier coefficients as a critical disparity.
- an efficient representation of the set of vectors comprised of chunk coefficients is a tree-like structure of large clusters of coefficients, split into smaller clusters until the cluster size is small enough to represent a group of sufficiently similar chunks.
- the clustering algorithm implements a concept of a hash function on the sets of Fourier coefficients, playing a role somewhat similar to indexing a database.
- the SDE 160 first searches the clusters at the highest level to find the cluster that contains the chunk being queried. It continues this process until it reaches a matching chunk (or set of chunks) at the bottom of the cluster hierarchy or discovers that a similar chunk does not exist.
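A hedged sketch of that top-down search; the correlation measure, the per-level thresholds, and the data layout are assumptions for illustration, not the patent's implementation:

```python
def correlation(a, b):
    """Normalized dot product between two coefficient vectors."""
    na = sum(v * v for v in a) ** 0.5
    nb = sum(v * v for v in b) ** 0.5
    return sum(p * q for p, q in zip(a, b)) / (na * nb)

def search(clusters, query, thresholds, level=0):
    """clusters: one level's list of {"center": [...], "children": [...]}.
    At each level keep only clusters whose centers correlate well enough
    with the query, then descend into their children."""
    hits = [c for c in clusters if correlation(c["center"], query) >= thresholds[level]]
    if level == len(thresholds) - 1:
        return hits
    children = [child for c in hits for child in c["children"]]
    return search(children, query, thresholds, level + 1)

bottom1 = {"center": [1.0, 0.0], "children": []}
bottom2 = {"center": [0.0, 1.0], "children": []}
bottom3 = {"center": [-1.0, 0.0], "children": []}
hierarchy = [
    {"center": [0.7, 0.7], "children": [bottom1, bottom2]},
    {"center": [-1.0, 0.1], "children": [bottom3]},
]
result = search(hierarchy, [0.9, 0.1], thresholds=[0.5, 0.8])
```

Only the branch whose top-level center correlates with the query is explored, and of its two bottom clusters only the genuinely similar one survives the tighter bottom-level threshold.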
- the SDE 160 can thus map similar documents into the same sets of clusters; a high level of data compression is achieved by keeping only the coordinates of the clusters themselves, rather than of all the chunks which fit into them.
- the SDE 160 query finds the correct matches in only a majority of cases, as opposed to all cases, and returns a formally erroneous mismatch or "not found" response in others. In an environment of such relaxed requirements, the query can be significantly optimized for speed.
- the clusters within the hierarchy have a substantial degree of intersection, so that going down all the branches of the tree where the similar clusters might possibly be found drives the query down most of the branches and eliminates the benefit of having a hierarchy (as compared to a simple set of clusters).
- the query uses probabilistic estimates to determine which clusters are most likely the hosts of the given chunk and proceeds to explore only the branches of the hierarchy passing through these clusters.
- This multi-branch, probabilistic search provides a configurable balance between the required accuracy and performance that is vital to determine document similarity in real time.
- Query accuracy in step 440 can be significantly improved if, besides the original query, the SDE 160 initiates two more similar queries. In these queries only the data from either the first or the last half of the original chunk is used for Fourier-transforming, while the data from the other half is set to zero. If a chunk similar to the one being queried exists on the system, it would include (rather than intersect with) one of the half-chunks being queried and their similarity would be significantly larger. Of the three queries, the query that retrieves the set of the most similar chunks will generate the most reliable result. A single chunk query is unable to determine which document contains a chunk similar to the given one, because many chunks from the file system may, and typically do, fall into a single cluster.
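The three query bases described above can be formed as follows (a sketch; the surrounding query machinery is omitted and the function name is illustrative):

```python
def query_variants(chunk):
    """Return the three query bases: the full chunk, the first half with the
    last half zeroed out, and the last half with the first half zeroed out."""
    n = len(chunk)
    full = list(chunk)
    first_half = list(chunk[:n // 2]) + [0] * (n - n // 2)
    last_half = [0] * (n // 2) + list(chunk[n // 2:])
    return [full, first_half, last_half]

variants = query_variants(bytes([10, 20, 30, 40]))
```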
- the query interpreting procedure 440 thus integrates results from a number of queries 430 the SDE 160 executes for several consecutive chunks of a given file or stream, and outputs the names (or IDs) of a few files that are most similar to the given one.
- the SDE 160 also outputs a probabilistic measure of its result to support the accuracy of the query result. This measure is used as a similarity estimate within a document distribution path, or a certainty factor within an information security system.
- Some common types of files carry information of a different nature separately, in different streams. There are methods that separate this information on a stream-by-stream basis. These tools can be leveraged for the purpose of faster chunk database lookups. For example, text information does not need to be matched against a database of pictures, and a given implementation may decide to not consider certain types of information (e.g. downloaded web pages) as sensitive.
- the aim in designing the comparison process using a sparse representation of the Fourier coefficients was to design an algorithm capable of matching data from a stream to a pre-defined database that contains all the chunks from all documents available to the SDE 160.
- the convolution of the vectors is defined as: conv(x, y) ≡ x ⊗ y, where (x ⊗ y)_q = Σ_p x_p · y_(q−p)
- Fig. 5 is an example convolution result, generated by a short script from synthetic vectors of the kind described above.
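The script itself is not reproduced in this text; the following Python sketch shows the underlying mechanism on a small deterministic example (Fig. 5 additionally uses random noise and two common parts). Convolving one vector with the reversal of the other, i.e., cross-correlating them, produces a peak whose position encodes the relative offset of the shared content:

```python
def convolve(x, y):
    """Circular convolution: conv(x, y)_q = sum_p x_p * y_(q - p)."""
    n = len(x)
    return [sum(x[p] * y[(q - p) % n] for p in range(n)) for q in range(n)]

n = 16
common = [1, 2, 3, 4]
x = [0] * n
y = [0] * n
x[0:4] = common       # common part at offset 0 in x
y[5:9] = common       # the same part at offset 5 in y

peak = convolve(x, y[::-1])   # reversing one vector turns convolution into correlation
# the maximum lands at q = (n - 1) - 5, revealing the relative offset of 5
```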
- this structure 600 supports substantially more efficient queries for chunks similar to a given one than the "check against every" method, i.e., an exhaustive search.
- the queries must drill down into the branches of a structure 600 that pass through the centers of the clusters that correlate with the vector being queried.
- Every cluster changes its location in space as new elements are deposited into it, while such a deposition takes place only when an element falls within the cluster (if there is no such cluster, another one is automatically created by the structure).
- the clusters we use in our structure are of spherical shape with a predefined radius. The radii of clusters at the same level of hierarchy are the same, and they decrease from top to bottom of the hierarchy. Several branches of the hierarchy may originate from a single cluster of any non-bottom level. All the branches reach the common bottom. The elements are registered at the bottom level of the structure. To build our theory, we will use the expression: "a cluster is similar to an element" in place of the more rigorous one: "a cluster with a center, which is similar to an element.” The radius of a cluster is associated with the minimal correlation coefficient its member has with its center.
- When a cluster contains only a few elements, it moves substantially, and "learns" its appropriate position in space as elements are deposited into it. The steps the cluster makes become smaller as it grows, and eventually, the cluster will become practically immobile. We chose to update the coordinate of the center of the cluster as new elements are deposited into it in such a way that the center is always the mean of all the elements the cluster hosts. Once a cluster moves from its original position, it can no longer be guaranteed that its elements stay within the cluster. It follows from the Central Limit Theorem of statistics, however, that the total distance the center of a cluster drifts from its initial location as new chunks are deposited into it is finite, regardless of how many chunks it hosts. For this reason, elements infrequently fall outside their cluster.
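The running-mean update can be sketched incrementally (class and method names are illustrative): after the k-th deposit the center moves by (x − c)/k, so the steps shrink as the cluster grows, matching the behavior described above:

```python
class Cluster:
    """Cluster whose center is always the mean of the deposited elements."""

    def __init__(self, first_element):
        self.center = list(first_element)
        self.count = 1

    def deposit(self, element):
        # incremental mean: c_k = c_{k-1} + (x - c_{k-1}) / k
        self.count += 1
        self.center = [c + (x - c) / self.count
                       for c, x in zip(self.center, element)]

cl = Cluster([0.0, 0.0])
cl.deposit([2.0, 2.0])
cl.deposit([4.0, 4.0])
# the center is now the mean of the three elements: [2.0, 2.0]
```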
- the algorithm periodically examines the motion of the clusters in the hierarchical structure 600 and estimates the probability of the elements of each cluster falling outside their host. It then automatically re-checks into itself the elements of those clusters for which that probability exceeds a certain threshold (typically 10^-3).
- Clusters 610 in our structure 600 appear to have a large degree of intersection with each other.
- When an element (i.e., a set of Fourier coefficients) is being deposited, there are often several clusters 610, all of which exhibit a degree of similarity to the element which is sufficiently high for depositing the element into any of them. We are therefore often required to decide which cluster among those is the most appropriate host for the element being deposited. We define this logic further in this section.
- Our hierarchical structure 600 has several issues that are common to all tree-like structures. First, these structures perform well only when they are properly balanced, i.e., the number of elements in each branch, starting from a given level, is roughly the same. Simple tree structures allow on-the-fly balancing (as the elements are deposited), whereas more complex tree structures require periodic rebalancing procedures. Our structure also requires such procedures, and the SDE 160 invokes the appropriate methods while the workstation 102 is idle (see Fig. 1).
- the next stage 703 of the procedure computes the correlation coefficients of q with all top-level cluster centers c^0.
- step 705 is to sort the clusters according to the values of these coefficients.
- in step 707 the procedure selects a subset of clusters {C_j}.
- Parameter P is the probability that the procedure will not report an element exhibiting a high similarity with q.
- the procedure automatically calculates the correlation threshold r^0 corresponding to P at the top-most level of the hierarchical structure.
- the procedure selects specifies the subset of the branches in
- the procedure then examines the subsequent (lower) level of the hierarchical structure. It collects all of the clusters at that level which also belong to the subset of branches found to be worth penetrating at the first stage of the procedure. A subset of clusters {Cₗ} is thus formed at step 709, and the analysis is further reduced to this subset.
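The level-by-level narrowing described in steps 703 through 709 can be sketched as follows. This is a hypothetical reading of the procedure (the function name, the list-of-levels representation, and the single threshold `r` are assumptions; the patent derives level-specific thresholds from P): at each level, only clusters whose centers correlate with the query base q above the threshold survive, and only their children are examined at the next level.

```python
import math

def corr(a, b):
    """Normalized correlation (cosine similarity) of two coefficient vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def similarity_query(levels, q, r):
    """Descend the hierarchy, keeping only branches whose cluster centers
    correlate with the query base q above the threshold r.

    levels: list of levels, each a list of clusters of the form
            {"centroid": [...], "children": [indices into the next level]}.
    Returns the indices of similar bottom-level clusters, best match first."""
    candidates = list(range(len(levels[0])))
    for depth, level in enumerate(levels):
        # keep surviving clusters at this level, ranked by correlation with q
        ranked = sorted(
            (i for i in candidates if corr(q, level[i]["centroid"]) > r),
            key=lambda i: corr(q, level[i]["centroid"]),
            reverse=True,
        )
        if depth == len(levels) - 1:
            return ranked  # bottom-level clusters similar to q
        # form the subset of branches worth penetrating at the next level
        candidates = [child for i in ranked for child in level[i]["children"]]
    return []
```

The pruning at each level is what keeps the query cost sublinear in the total number of clusters: branches whose top-level center is dissimilar to q are never descended into.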
- Cluster selection. As mentioned above, when an element q is being deposited into the hierarchical structure, there often exists more than one cluster at a level l of the structure that can host the element. These clusters are such that cor(q, cⱼˡ) > rˡ, where rˡ is a correlation threshold that defines the cluster radius at level l. Out of this subset of clusters suitable for hosting q, we have to choose the cluster that will be the most appropriate host for q. We now describe how we determine cluster selection.
- consider the bottom-level cluster of the hierarchical structure Cⱼ which, together with the other clusters on its branch, hosts the element q (L here designates the bottom level of the hierarchy).
- the following criterion specifies the bottom-level cluster that is the most appropriate host for q: it is the cluster in which subsequent similarity queries will be able to find the same element with the highest degree of certainty. Note that "greedy" depositing logic, according to which the cluster most similar to q is located at each level of the hierarchy and its branch is chosen as the host for q, does not necessarily satisfy this criterion.
- an element similarity query at step 714 then typically returns a set of clusters similar to the element being queried (the query base). Each cluster within this set contains data chunks from different documents; therefore, a single query is not sufficient to determine which single document hosts the chunk being queried.
- the SDE 160 can execute a number of similarity queries with subsequent chunks from a document taken as bases and then deduce which document hosts the desired chunk based on the results of these queries. To meet this goal, the SDE 160 maintains a database of chunks of documents, which maps the chunks to the clusters of the hierarchy they fall into.
- This procedure accesses the document chunk database and retrieves the documents whose subsequent chunks fall into the same clusters as those discovered by the similarity query, and do so in the same order. These documents are reported as being similar to the unknown document being queried.
- the accuracy of post-processing increases exponentially with the number of chunks of the unknown document being queried, so that only a few subsequent chunks of that document need to be examined in order to discover the similarity of the document with one of the pre-processed documents with a high degree of certainty.
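The chunk-database post-processing described above can be sketched as follows. This is a simplified, hypothetical model (the function name and the flat dict representation are assumptions): each pre-processed document is recorded as the ordered sequence of cluster ids its successive chunks fall into, and a document matches when the clusters hit by the unknown document's successive chunks appear in it contiguously and in the same order.

```python
def find_similar_documents(chunk_db, query_cluster_seq):
    """Return ids of documents whose successive chunks fall into the same
    clusters, in the same order, as the chunks of the unknown document.

    chunk_db: {doc_id: [cluster_id, ...]} mapping each document's successive
              chunks to the clusters of the hierarchy they fall into.
    query_cluster_seq: cluster ids hit by successive chunks of the query."""
    n = len(query_cluster_seq)
    hits = []
    for doc_id, seq in chunk_db.items():
        # look for the query sequence as a contiguous run of chunk clusters
        if any(seq[i:i + n] == query_cluster_seq
               for i in range(len(seq) - n + 1)):
            hits.append(doc_id)
    return hits
```

Because each additional chunk must land in the right cluster in the right position, the chance of a coincidental match falls off rapidly with the number of chunks examined, consistent with the accuracy claim above.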
- this parameter specifies the similarity threshold of the base of the query with the clusters our procedure retrieves.
- this parameter must have a high enough value to prevent the procedure from retrieving clusters that are merely coincidentally similar to the base.
- the parameter cannot be too high, however, since that might prevent the procedure from retrieving the cluster hosting a chunk similar to the element, which is the ultimate goal of the query. Therefore, this parameter depends on how the query post-processing procedure is implemented, as well as on the dimensionality of the hierarchical structure's space (i.e., the number of Fourier modes involved). In our experiments, we found a dimensionality of 70 to be adequate for our purposes, and the parameter r was chosen so that about one percent of cluster retrievals were coincidental.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Storage Device Security (AREA)
- Document Processing Apparatus (AREA)
Abstract
Description
Claims
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US738919 | 1985-05-29 | ||
US738924 | 1991-07-31 | ||
US44246403P | 2003-01-23 | 2003-01-23 | |
US442464P | 2003-01-23 | ||
US10/738,919 US6947933B2 (en) | 2003-01-23 | 2003-12-17 | Identifying similarities within large collections of unstructured data |
US10/738,924 US7490116B2 (en) | 2003-01-23 | 2003-12-17 | Identifying history of modification within large collections of unstructured data |
PCT/US2004/001530 WO2004066086A2 (en) | 2003-01-23 | 2004-01-21 | Identifying similarities and history of modification within large collections of unstructured data |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1590748A2 true EP1590748A2 (en) | 2005-11-02 |
EP1590748A4 EP1590748A4 (en) | 2008-07-30 |
Family
ID=32777026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP04704049A Withdrawn EP1590748A4 (en) | 2003-01-23 | 2004-01-21 | Identifying similarities and history of modification within large collections of unstructured data |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1590748A4 (en) |
JP (1) | JP4667362B2 (en) |
CA (1) | CA2553654C (en) |
WO (1) | WO2004066086A2 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4695388B2 (en) * | 2004-12-27 | 2011-06-08 | 株式会社リコー | Security information estimation apparatus, security information estimation method, security information estimation program, and recording medium |
JP2006338147A (en) * | 2005-05-31 | 2006-12-14 | Ricoh Co Ltd | Document management device, document management method and program |
JP4791776B2 (en) * | 2005-07-26 | 2011-10-12 | 株式会社リコー | Security information estimation apparatus, security information estimation method, security information estimation program, and recording medium |
US20070239802A1 (en) * | 2006-04-07 | 2007-10-11 | Razdow Allen M | System and method for maintaining the genealogy of documents |
JP4895696B2 (en) * | 2006-06-14 | 2012-03-14 | 株式会社リコー | Information processing apparatus, information processing method, and information processing program |
JP5003131B2 (en) | 2006-12-04 | 2012-08-15 | 富士ゼロックス株式会社 | Document providing system and information providing program |
JP5023715B2 (en) * | 2007-01-25 | 2012-09-12 | 富士ゼロックス株式会社 | Information processing system, information processing apparatus, and program |
JP2008305094A (en) * | 2007-06-06 | 2008-12-18 | Canon Inc | Documentation management method and its apparatus |
JP5294002B2 (en) * | 2008-07-22 | 2013-09-18 | 株式会社日立製作所 | Document management system, document management program, and document management method |
JP5213758B2 (en) * | 2009-02-26 | 2013-06-19 | 三菱電機株式会社 | Information processing apparatus, information processing method, and program |
JP2011022705A (en) | 2009-07-14 | 2011-02-03 | Hitachi Ltd | Trail management method, system, and program |
JP5264643B2 (en) * | 2009-07-28 | 2013-08-14 | 日本電信電話株式会社 | Content distribution monitoring method and system, and apparatus and program used in this system |
JP5621490B2 (en) * | 2010-10-08 | 2014-11-12 | 富士通株式会社 | Log management program, log management apparatus, and log management method |
JP5630193B2 (en) * | 2010-10-08 | 2014-11-26 | 富士通株式会社 | Operation restriction management program, operation restriction management apparatus, and operation restriction management method |
US20120215908A1 (en) * | 2011-02-18 | 2012-08-23 | Hitachi, Ltd. | Method and system for detecting improper operation and computer-readable non-transitory storage medium |
JP5701096B2 (en) * | 2011-02-24 | 2015-04-15 | 三菱電機株式会社 | File tracking apparatus, file tracking method, and file tracking program |
US9384177B2 (en) | 2011-05-27 | 2016-07-05 | Hitachi, Ltd. | File history recording system, file history management system and file history recording method |
CN112199936B (en) * | 2020-11-12 | 2024-01-23 | 深圳供电局有限公司 | Intelligent analysis method and storage medium for repeated declaration of scientific research projects |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5940830A (en) * | 1996-09-05 | 1999-08-17 | Fujitsu Limited | Distributed document management system |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0581096A (en) * | 1991-09-19 | 1993-04-02 | Matsushita Electric Ind Co Ltd | Page deletion system for electronic filing device |
JP3584540B2 (en) * | 1995-04-20 | 2004-11-04 | 富士ゼロックス株式会社 | Document copy relation management system |
JPH0944432A (en) * | 1995-05-24 | 1997-02-14 | Fuji Xerox Co Ltd | Information processing method and information processor |
JPH0950410A (en) * | 1995-06-01 | 1997-02-18 | Fuji Xerox Co Ltd | Information processing method and information processor |
US5926812A (en) * | 1996-06-20 | 1999-07-20 | Mantra Technologies, Inc. | Document extraction and comparison method with applications to automatic personalized database searching |
JPH10133934A (en) * | 1996-09-05 | 1998-05-22 | Fujitsu Ltd | Distributed document managing system and program storing medium realizing it |
JP3832077B2 (en) * | 1998-03-06 | 2006-10-11 | 富士ゼロックス株式会社 | Document management device |
JP3689593B2 (en) * | 1999-07-02 | 2005-08-31 | シャープ株式会社 | Content distribution management device and program recording medium |
JP2001136363A (en) * | 1999-11-02 | 2001-05-18 | Nippon Telegraph & Telephone West Corp | Contents use acceptance managing method and its device |
US6633882B1 (en) * | 2000-06-29 | 2003-10-14 | Microsoft Corporation | Multi-dimensional database record compression utilizing optimized cluster models |
-
2004
- 2004-01-21 EP EP04704049A patent/EP1590748A4/en not_active Withdrawn
- 2004-01-21 CA CA2553654A patent/CA2553654C/en not_active Expired - Lifetime
- 2004-01-21 WO PCT/US2004/001530 patent/WO2004066086A2/en active Application Filing
- 2004-01-21 JP JP2006501066A patent/JP4667362B2/en not_active Expired - Lifetime
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5940830A (en) * | 1996-09-05 | 1999-08-17 | Fujitsu Limited | Distributed document management system |
Non-Patent Citations (2)
Title |
---|
BRIN S ET AL: "COPY DETECTION MECHANISMS FOR DIGITAL DOCUMENTS*" SIGMOD RECORD, ACM, NEW YORK, NY, US, vol. 24, no. 2, 1 June 1995 (1995-06-01), pages 398-409, XP000527686 ISSN: 0163-5808 * |
See also references of WO2004066086A2 * |
Also Published As
Publication number | Publication date |
---|---|
CA2553654A1 (en) | 2004-08-05 |
WO2004066086A3 (en) | 2005-01-20 |
JP4667362B2 (en) | 2011-04-13 |
EP1590748A4 (en) | 2008-07-30 |
WO2004066086A2 (en) | 2004-08-05 |
JP2006516775A (en) | 2006-07-06 |
CA2553654C (en) | 2014-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7490116B2 (en) | Identifying history of modification within large collections of unstructured data | |
CA2553654C (en) | Identifying similarities and history of modification within large collections of unstructured data | |
US11561931B2 (en) | Information source agent systems and methods for distributed data storage and management using content signatures | |
Singh et al. | Probabilistic data structures for big data analytics: A comprehensive review | |
US11188657B2 (en) | Method and system for managing electronic documents based on sensitivity of information | |
US7617231B2 (en) | Data hashing method, data processing method, and data processing system using similarity-based hashing algorithm | |
US6898592B2 (en) | Scoping queries in a search engine | |
EP2248062B1 (en) | Automated forensic document signatures | |
US7401080B2 (en) | Storage reports duplicate file detection | |
US8463815B1 (en) | System and method for access controls | |
US8965941B2 (en) | File list generation method, system, and program, and file list generation device | |
US10417265B2 (en) | High performance parallel indexing for forensics and electronic discovery | |
US20120131001A1 (en) | Methods and computer program products for generating search results using file identicality | |
US9064119B2 (en) | Information scanning across multiple devices | |
US8095540B2 (en) | Identifying superphrases of text strings | |
US20230252140A1 (en) | Methods and systems for identifying anomalous computer events to detect security incidents | |
Moia et al. | Similarity digest search: A survey and comparative analysis of strategies to perform known file filtering using approximate matching | |
JP2005539334A (en) | Searchable information content for pre-selected data | |
Wang et al. | A novel hash-based approach for mining frequent itemsets over data streams requiring less memory space | |
US9734195B1 (en) | Automated data flow tracking | |
CN114461762A (en) | Archive change identification method, device, equipment and storage medium | |
SalahEldeen et al. | Reading the correct history? Modeling temporal intention in resource sharing | |
US11928135B2 (en) | Edge computing data reproduction and filtering gatekeeper | |
Hua et al. | Locality-sensitive Bloom filter for approximate membership query | |
AU2014202526A1 (en) | Automated forensic document signatures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20050823 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK |
|
DAX | Request for extension of the european patent (deleted) | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1083375 Country of ref document: HK |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: SMOLSKY, MICHAEL Inventor name: BUCCELLA, DONATO Inventor name: CARSON, DWAYNE, A. |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: VERDASYS, INC. |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20080626 |
|
17Q | First examination report despatched |
Effective date: 20090310 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20120821 |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: WD Ref document number: 1083375 Country of ref document: HK |