WO2001052109A1 - Process and system for gathering and handling information

Process and system for gathering and handling information

Info

Publication number: WO2001052109A1
Authority: WIPO (PCT)
Prior art keywords: booklet, level, items, item, value
Application number: PCT/US2001/001013
Other languages: French (fr)
Inventors: Stanley Benjamin Smith, Paul Charles Elliott, Barbara Jordan Elliott, Dominique Louis Proudhon
Original Assignee: Stanley Benjamin Smith, Paul Charles Elliott, Barbara Jordan Elliott, Dominique Louis Proudhon
Application filed by: Stanley Benjamin Smith, Paul Charles Elliott, Barbara Jordan Elliott, Dominique Louis Proudhon
Priority: AU2001227862A (published as AU2001227862A1)
Publication of: WO2001052109A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/10: Office automation; Time management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A process for capturing information, calculating and scoring the captured information to provide useful feedback, and applying the feedback, utilizing a unique means of grouping and handling data that is collected. In particular, the invention (figure 2) provides a method of collecting and handling observations related to a researchable model or a theory or practice, such as performance appraisal, cost accounting, outcome measurement, hiring and selection, job classification and compensation, project management, or other process.

Description

PROCESS AND SYSTEM FOR GATHERING AND HANDLING INFORMATION
BACKGROUND OF THE INVENTION
Field of the Invention
This invention relates to methods and systems for data handling and analysis.
Description of the Related Art
There are many fields where it is desirable to be able to collect and analyze large amounts of sparse data, and to organize and make calculations based on the data for immediate feedback to various users or subscribers. Immediate calculation and feedback enable a researcher or other user to adjust a research design or to modify an experiment on the basis of available data, without having to complete the total cycle of research. There is also a need in fields where diagnosis plays a significant role for selecting and automating the chain of questions used to reach or refine a diagnosis. This pragmatic approach to adjusting an experiment or research process has broad and diverse potential applicability to any business process that also accumulates sparse data and benefits from rapid adjustment to feedback. The lack of efficient, accurate and cost-effective methods for collecting, handling, scoring and reporting data is a problem that crosses many disciplines. Similar problems occur, for example, in diagnostic and outcome research for medical or social services, quality control systems in manufacturing, or research on drug interactions for pharmaceutical products.
For example, employers, such as businesses, police departments, schools, and the like, have a procedure for performance evaluation of employees. Often, however, performance information is collected in a haphazard manner. Sometimes there is no opportunity for input by actual observers of performance behavior. Sometimes there is no specification of performance standards or expectations; this leads to inconsistent or unjustified performance ratings. Sometimes there is no method for accumulation of calculations of units of performance; as a result global or arbitrary final ratings are assigned. Sometimes there is no feedback to employees about performance and neither appraisers nor employees remember the event(s) used to justify the appraisal at the time of the appraisal.
If a researcher or employee obtains timely, calculated or scored feedback on experimental or performance information, informed corrective adjustments and behavioral changes can be made. Immediate, accurately proportioned and scored specific feedback will generate business process improvement information and, therefore, impact cost and quality of service. It is an object of the invention to provide a method of data collection and calculation to organize research protocols and improve research process efficiency.
It is also an object of the invention to provide a method for rapidly designing information gathering and research routines for sparse as well as dense data, continuously capturing information and research observations, immediately calculating the information or research observations, and providing feedback based on the information and research observations.
It is further an object of the invention to provide a method that weights observations so as to select or identify additional paths for research.
It is a further object of the invention to provide a method to increase the efficiency and decrease the cost of accumulating and handling information and scoring and applying this information.
It is further an object of the invention to categorize the data collected so it can be applied to multiple fields of inquiry with little or no loss in statistical validity.
It is further an object of the invention to organize and distribute the data collected so cross-organizational benchmarking can be easily and efficiently implemented.
It is further an object of the invention to provide a method for rapid feedback and distribution of data and reports on the data to users for immediate application to improve processes, behavior, outcomes, or research results.
Other objects and advantages will be more fully apparent from the following disclosure and appended claims.
SUMMARY OF THE INVENTION
The invention herein is a process for capturing information, using the captured information to provide useful feedback, utilizing a unique means of grouping, handling and calculating the data that is collected. In particular, the invention provides a method of collecting and handling observations related to a researchable model or a theory or practice, such as biosynthesis, a sales process, a production assembly line, performance appraisal, cost accounting, outcome measurement, hiring and selection, project management, diagnosis, or other process.
Other objects and features of the invention will be more fully apparent from the following disclosure and appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic diagram of how the invention may be used to structure a research process. It is titled Research Process Chart.
Figure 2 is a schematic diagram of how components of the invention link together. It is titled Application Cluster Design.
Figure 3 is a schematic diagram of the preferred organization of the components of the invention herein. It is titled Business Structure Chart.
Figure 4 is a schematic diagram of an overview of the calculation sequence and method of the invention herein. It is titled Algorithm Rollup Overview.
Figure 5 is a schematic diagram of detailed steps of the calculation sequence and method of the invention herein. It is titled Algorithm Rollup Detail.
Figure 6 is a schematic diagram of the implementation of the algorithm for "expert" or "artificial intelligence" uses. It is titled Dynamic Element Inclusion Overview.
DETAILED DESCRIPTION OF THE INVENTION AND PREFERRED EMBODIMENTS THEREOF
The present invention is a process and system for capturing information, calculating captured information, using calculated captured information to provide feedback, determining alternate paths for capturing additional information, and applying and distributing that feedback. In particular, the invention provides a method for collecting and handling observations related to a researchable model or a theory or practice.
Because the various components of the invention, once defined, can be easily applied to data from any source, and because these components are interrelated and can be redefined at the user's option or redefined automatically, the process and system of the invention are extraordinarily flexible and enable rapid corrective adjustments in experimental design, data accumulation, and data analysis.
Following is a list of definitions of terms used in this description of the invention:
[Definitions table rendered as images in the original document; not reproduced here.]
Because data accumulated through the invention is attached to an evolving list of "root elements" (see definition above) that are relevant across activities, industries and disciplines, the process and system of the invention allow for cross-correlations between any dimensions in any field. This enables benchmarking research to be performed rapidly and easily. The steps of the present invention are best carried out through a number of customizable computer screens that enable users or subscribers to accumulate, route, research, and calculate data for a specific application. A unique roll-up algorithm (see definition above) is used in the calculation process, a detailed example of the use of which is provided herein.
Prior to providing more details on the invention, following is a discussion of the invention where important terms as related to this invention are defined (see Figure 3).
As used herein, the term "application cluster" (see definition above) means the business process or research process to which the invention is being applied. The application cluster enables the accumulation of observations about a definable set of "hubs" (see definition above) by or about which data can be created that are in identifiable relationships to one another, for example: machine for, catalyst for, supervisor of, therapist to, patient of, design stage of.
The application cluster provides the specifications for the behavior of the information-handling process. An application cluster is operationalized by a set of computer screens and screen labels. These screens and labels have selectable items or sets of items that enable entry or review of observations.
"Observations" (see definition above) are data or comments about hubs defined by the application cluster, which in the method of the invention are entered into the computer by an authorized user or subscriber. Each observation contains one or more "atoms" (see definition above), which are the discrete data entries that constitute the observation. Each atom in the invention must be defined by two criteria. First, because data collection is essentially accumulation of information about "elements" (see definition above) being researched by an application cluster, e.g., how long has the person been employed, what is the mass of a compound, how much time was involved in the sale, etc., each atom is attached to one of the elements being researched by that application cluster through a "booklet item" (see definition above). Second, the atom includes the value entered by the user or subscriber for the booklet item. This value is called the atom's "input value" (see definition above).
When creating lists of elements that will become booklet items, the user of the invention uses screens in the application cluster setup routine to determine the style or type of input that can be entered by the person making the entry. For example, the user of the invention determines any of a number of "input specifications" (see definition above) for the experiment or research process, such as numeric; Boolean — present or not present; scalar — 0 to 100%; typological — with or without another amino acid attached; or many other possible options. The input specification forces the hub entering an observation to "score" the booklet item in a way that the research design requires.
As used herein, a "booklet item" is the phrasing or label of an element that is relevant to the research topic of the application cluster, together with a specific position in a "booklet" (see definition above). The booklet is a list of element labels organized in a hierarchical tree structure. A booklet item is the element label along with its location in the tree and branch hierarchy of a booklet. The information handling process can thereby locate and correlate all atoms related to the same booklet item. Each booklet item is also given properties, for example, a default value, a possible range, a root element trigger range (see definition above), and the coefficient (see definition above) or relative weight as compared with other booklet items at the same level in the tree, as discussed below. The type and properties of booklet items within booklets can be defined by the user or subscriber or the provider of the invention as needed. This enables the invention to organize observations for calculation and make the calculations immediately available as feedback for in-course adjustment by the user of the invention or subscriber, or by calculations that generate inclusion of additional booklet items. All calculations are also available for data management that requires long-term research and data handling for multiple datasets.
A "booklet" as used herein, is a list of element labels arranged into a tree and branch hierarchy about a given subject or content area called "booklet style" (see definition above). Examples of booklet styles are job descriptions or functions, equipment specifications, personal or individual demographics, questionnaire items, candidate selection criteria, organizational goals and the like. Each member of an element label list in the hierarchy takes on the properties of a booklet item as its "position" (see definition above) in the hierarchical tree structure is set. As defined herein, the "level" of a booklet item in the hierarchy within a booklet is based on where the booklet item is in the hierarchy. Thus, booklet items at the highest level in the hierarchy are said to be at level 1, and there are no higher (parent) booklet items in that booklet. Each level 1 booklet item has zero to many "children" booklet items at level 2, and so on, through levels 3, 4, etc., as is deemed appropriate for the type of information being handled and the exigencies of the situation.
As an example of the hierarchical arrangement of the booklet and booklet items of the invention, a booklet related to job performance might have several different major required job functions associated with the job, each of which functions might have one or more aspects that may be separately followed and analyzed. One way of looking at booklets is to think of the characteristics of the particular subject being outlined in a standard outline format. The extent of the specificity of these tree-structures, in other words, the number of levels, is limited only by practical considerations.
An application cluster can use any number of booklets and any number of booklets can be attached to a hub. For example, employees can have duties, goals, and/or project booklets in a performance appraisal cluster; a chain of production can have quality, process, resource, and/or time management booklets. Subscribers can assign weights to booklets to account for the relative importance of the various booklets attached to a hub. For example, the quality booklet attached to a chain of production hub could have a weight of 0.5, while the weight of the process booklet is 0.2 and the time management booklet is 0.3. The capability of the invention to assign weights to individual booklet items as well as booklets themselves enables the user or subscriber to put a precise emphasis on key aspects of the content area that is researched. Weights can be modified over time to allow the research design to evolve as the emphasis shifts with the changes in the research or the organization. Weights can also be modified at any time by the user or subscriber in order to alter calculation parameters and compare calculation results.
As defined above, an "element" is a unit about which data may be gathered or calculated. Examples include the mass of an object, the amount of lysine in a cell, the preference of an individual to dominate others, the rating of an employee on a work assignment, and an employee's home telephone number. An element is independent from booklets: it does not belong to a hierarchy but is listed with all other existing elements in a database or "element list" from which a subscriber can pick, modify or create element labels to be inserted as booklet items in booklets.
Therefore, booklet items are created from elements and the labels assigned to those elements and placed in a hierarchy within a booklet. For example, in a performance appraisal cluster, the element "Performs regular checkups" can become a booklet item in a "Mechanic" booklet, a "Safety Officer" booklet, or a "Police Officer" booklet. The crucial difference between a booklet item and an element is that elements are context-independent while booklet items have a context, which is defined by the booklet.
The subscriber controls at which booklet level observations can be entered in an application cluster. Users may be forced to select booklet items at the lowest level of a booklet tree before being able to enter observations, or, if "Higher-level Input" is enabled by the subscriber, users may enter observations about an element that is at the next to lowest level in the tree and branch structure of a booklet. The Higher-level Input weight affects the relative impact of these observations compared to observations made at the lowest level of the tree and branch structure of a booklet. It can be set from 0 to 1.
As the invention is used and the number of booklets increases, each element gets linked to a growing number of booklet items. This design enables research to be done on elements, not only booklet items, thus allowing for cross-correlations between booklets, between content areas (booklet styles) within a given application cluster, across application clusters, and across research or business processes. As an example, information about the element "respond to customer requests" can be used for research on receptionists, engineers, and department heads; or information about that same element in a performance appraisal cluster can be used for research in a quality management cluster.
The use of root elements and booklet styles also enables cross-industry research and benchmarking. Information gathered in a given industry can be used for research in another industry. The invention links each element to a single root element, which is the generic expression of that element. For example, "budget knowledge" is the root element for the element "develops a budget" used in a performance appraisal application cluster. In a job classification cluster, this same root element might be expressed as "budget experience". A root element can cut across activities, industries and disciplines. A root element can, therefore, have several elements linked to it, each of which is expressed in a discipline-specific style, jargon, or language. In other words, all elements may have multiple phrasings that allow them to have the "look and feel" relevant to the particular application cluster.
The use of root elements also enables the summation of accumulated observations to determine whether a trigger range has been reached that appends a set number (Element sub-set size indicator; see definition above) of additional elements (variants of the root element) to a research booklet. This is called Dynamic Element Inclusion (see Figure 6). In effect, this scoring method combines the advantages of an expert system or decision tree system for research with the advantages of a weighted and scored system. This method functions as a form of artificial intelligence.
This unique architecture of the invention ensures data cleanliness and, by design, makes data on root elements readily available for benchmarking studies, with minimal data cleaning or organizing and with rapid reporting. The link between root elements and elements can be established by the subscriber and/or by the "publisher" (see definition above). Elements and root elements are listed in the same element list, functioning as the central database gathering all research items across application clusters and across "clients" (see definition above). The status of an element evolves through various stages of validation as data accumulates about it: from hypothetical (i.e., too few observations of the element for a statistical analysis to be run on the data to confirm validity) to validated, once statistical operations and standards of validity are applied and the statistical target level set by the user of the invention or the research design is reached.
The Roll-up and Dynamic Element Inclusion algorithms enable calculation of the cumulative weight of observations upon booklet items of differing correlations with a root element. If an observation about an item in a booklet reaches the "root element trigger range" (see definition above), another sub-set of elements associated with that root element is retrieved to generate additional booklet items that can be inserted into the list of booklet items until the cumulative calculations fail to be within the root element trigger range. The user of the invention controls whether the additional sub-set of elements is to be inserted into the booklet as booklet items and if and how the additional items are presented to the subject. The user of the invention may also reset the root element calculations (previous root calculation results will be ignored) or add to previous root calculation results (previous root calculation results are retrieved and added to new results).
To reiterate, the information handling process of the invention facilitates the validation of root elements through correlation of observations from different booklets, insofar as their booklet items refer to the same root element. Research is possible across booklets.
As defined above, a "hub" is determined by the application cluster, and is an entity by or about which observations can be created. A single position, a single department, a single enzyme, a single car, a single user or subscriber, any entity that plays a role in a given business or research process can be designated as a hub. When hubs are grouped into classes, they become a "hub category" (see definition above). Thus, there are various categories of hubs, for example, the position category, department category, enzyme category, and so on. A hub may be a source and/or target of observations within an application cluster, and booklets can be attached to it. A hub entering an observation is the "source" (see definition above) of the observation, while the hub about which the observation is entered is the "target" (see definition above) of the observation. A hub can be both the source and the target of an observation; for example, an employee enters an observation about herself. Hubs can have any number of booklets attached to them, containing the booklet items that are connected to elements that are part of the field of inquiry relevant to the application cluster. Examples of hubs and related booklets are:

Hub: Equipment; Booklet: Specifications
Hub: Manager; Booklet: Goals
Hub: Personnel or other employees; Booklet: Job description
Hub: Patients; Booklet: Medications
Hub: Manufacturing processes; Booklet: Welding specifications

When an observation is entered about a research question, the observation retains the identity of the source and target hub. Each atom is related to the hub that created the observation and the hub about which the observation was written. The type of relationship that existed between the two hubs at the time of the observation is also retained. As described above, the observation also retains atoms that have been entered. Thus the information in each observation completely specifies the context, through the hubs and the relationship between them, and the content, the atoms, of all data. There is no requirement for observations to have similar structures; rather, the atoms present in each observation are entirely dependent on the booklet items to which the atoms relate. Because of this design feature, the information handling process of the invention can thereby research elements by source hub, target hub, or type of relationship. See Figure 2 for a schematic of the relationship between hubs, atoms, booklets, and elements.
A hub can also be assigned a weight for observations for which it serves as the source. This is called the "source input weight" (see definition above) and enables the impact of an observation to be retained and scored by the algorithm based on the relative impact of the source or observing hub upon the target hub of the observation. An example of the practical application of this feature is the differential weighting that might be assigned to input from a trained observer versus an untrained observer. The user of the invention may determine that the input from the trained observer is four times more accurate than that from an untrained observer. The user of the invention might then decide that four observations from untrained observers are equivalent to one observation from a trained observer and weight observations from these two hubs on identical booklet items to reflect this difference.
To reiterate, one can evaluate all atoms of data written by or about any hub based on a) the related booklet items, b) the hub itself, c) the hub's booklets, d) the hub's relationship with other hubs, and e) the application cluster to which the hub belongs through a hub category.
The relationship between each hub and the booklet(s) related to it specifies the behavior of the information handling process when operating on the hub in a specific application cluster. An application cluster may have a root element iterative process enabled that will append additional booklet items to a booklet for additional observations until the correlation with the root element no longer holds at a determined level (input item trigger value). As these additional booklet items are appended, their correlations with other/different root elements may also reach a trigger value, thus enabling the process of the invention to uncover relationships among root elements. Likewise, the relationship of the hub and its booklets specifies the information that can be gathered about the hub. Only one category of hub, the "pivot hub category" (see definition above), can be the object of research for a given application cluster. For example, "positions" is the hub category that is the object of research in a performance appraisal cluster; "steps" is the hub category that is the object of research in a cluster that tracks a process; "employees" is the hub category that is the object of research in a goal management cluster.
The pivot hub category is the anchor of an application cluster. It is the category about which results are calculated and feedback produced. In effect, the pivot hub category serves as the framework for the research being performed through the application cluster; for example, for a cost accounting application cluster: what are the proportional costs of this configuration of resources, where the pivot hub equals resources; for a performance appraisal cluster: what are the strengths and weaknesses of this employee in this position, where the pivot hub equals position.
A hub category can be the pivot hub category for a given application cluster and simply a hub for another application cluster. For example, the pivot hub for a performance appraisal application cluster is usually the job or position, and the job or position is also the pivot hub for a job classification application cluster - but the pivot hub for a quality control application cluster is the stage in the quality cycle that is being measured. Some of the same hubs, in this case employees, may be entering observations, but the pivot hub is different. Any hub belonging to a pivot hub category is called a pivot hub. In other words, the pivot hub is a hub that becomes the object of analysis and calculations.
The invention uses three types of relationship between hubs. These three types of relationship are so generic that they apply to any system that can be studied, whether it is organic, inorganic, or conceptual. Hub relationships define the rights and entitlements of hubs to and with one another. These may be viewed as a diagram or chart that defines the levels of hierarchy and the directionality of vectors in a system. Thus, an example of a one-way vector is where the manager directs the employee, and a two-way vector is where peers exchange information with one another. The three possible types of hub relationship are: a) inclusion, where a hub (e.g., a neighborhood or department) includes one or more other hubs (e.g., street corners or divisions); b) assignment, where a hub can be assigned to one or more other hubs (e.g., an employee assigned to a position); and c) entitlement, where a hub can be the source or target of some action by another hub.
The latter relationship includes types of possible action: 1) information, where the hub is subject to being a source or target of accretion or accumulation of information, facts or features without weights or scores, for example, a series of police officers enters narratives about a hub that is a particular street corner; 2) influence, where a hub is subject to being the source or target of an influence, weight or score that can change its nature or composition or characteristics, for example, a series of police officers enter scored observations that cumulatively change the status of the hub that is the particular street corner; and 3) decision, where the hub is subject to being the source or target of a decision about it that changes its nature or composition or characteristics, for example, a police sergeant decides to take action upon the accumulated information in the narratives provided by the police officers.
Because all application clusters are built on the same design using hubs, hub relationships, booklet styles, booklets, booklet items, elements, and root elements, regardless of the field and process involved, the invention can easily convert screen labels from one application cluster to another. By simply translating the labels used by an application cluster to designate hubs, relationships, and booklets into the specific language or jargon of another field or process, the set of computer screens can be cloned into a different application cluster with no structural design changes and only minimal screen customization. As a result of this software interface, the set of screens and menus that are presented to the user or subscriber to the invention can handle any business or research process. This is in contrast to current software configurations that are designed to serve only one or a few applications. The organization of booklet items, elements, and root elements in the invention also makes the data itself readily researchable across languages, dialects or jargons. This level of isomorphism, including both the software interface and the researched data, is unique to this invention.
To further describe the benefit and application of isomorphism: it enables the substitution of one set of descriptors in a field of enquiry, study, research or business practice for those of another field. For example, a performance appraisal cluster and a job classification cluster both use booklet items from the element list that have root elements in common. For performance appraisal, the booklet item for an Accountant 1 hub might be "makes accurate general ledger entries", and for job classification of an Accountant 1 hub, the booklet item might be "knowledge of general ledger procedures". The root element for both application clusters may be "general ledger competence", and a user of the invention can compare the number of observations of "general ledger competence" in performance appraisals to determine how important it might be to include "knowledge of general ledger procedures" in an Accountant 1 position description. If a training application cluster is added later and also has a root element of "general ledger competence" stated as "general ledger refresher training course", then the training official can determine which job role needs the training (Accountant 1 or Accountant 2) and also which particular employees need the training.
The use of the invention is based on a publisher-subscriber relationship between the licensee of the invention and the subscriber (see definition above). This type of business relationship maximizes the amount of information available for analysis using the method herein. The "publisher" (see definition above) is the licensed vendor of the invention, who provides the software to accomplish the method of the invention with at least one application cluster, provides training in using the system, and organizes and maintains the data. A "subscriber" is an organization or a user of the invention who purchases the right to use the software in at least one application cluster.
Each subscriber may have several "clients" (see definition above), who are sub-sets of a subscriber and that use at least one application cluster. The subscriber to an application cluster entitles a hub category and individual hubs to make entries into the software of observations about any booklet item relevant to that application cluster. For example, a hub can enter observations about a stage in a quality cycle (quality management application cluster) and observations about the performance of a supervisor (performance appraisal cluster), but another hub can enter information only about the subordinate's performance. Screens are made available to the hub to access the software and to select from menus that open the booklet and display the appropriate booklet items to enter observations or change the status of booklet items in a structured and ordered fashion.
Using standard encryption technology and data transport utilities, the software provided to implement the method of the invention ports non-confidential information between subscribers and publisher, while maintaining the security of the information. Subscribers can choose to tag confidential fields and to entitle the publisher to serve as a warehouse for data ported into the publisher's computers from their site. As subscribers accumulate observations about the elements and root elements in the element list through their booklets, the observations are uploaded to the publisher's data warehouse. Other subscribers also upload observations about the same elements from identical or similar booklets to the data warehouse. As the accumulated observations on elements undergo statistical analysis and fine-tuning, they are modified for greater validity and may be, depending on authorization and subscription rights, downloaded back to the subscribers with better wording and stronger statistical relevance. For example, data from only one police department would not necessarily provide a sufficient sample size or cross-section of police-related performance events to inform a decision, but if 100 police departments all use the same elements to appraise their patrol officers, the analysis of the data from all the departments might indicate that some elements need to be replaced, reworded, scored differently or changed.
In the preferred operation of the invention herein, depending on the agreement of the parties, subscribers may purchase a license to an application cluster with a specified number of attached booklets. The subscriber may also purchase rights to additional application clusters and to add additional sub-sets of their system that are their own clients, such as a large corporation with multiple national divisions. Subscribers in the preferred method of the invention pay consulting fees for the configuration, installation, design, and service and/or maintenance of the software, and a licensing fee for use of the software.
Subscribers may subscribe to updated items/elements and modified booklets just as one would subscribe to a magazine or a newspaper. Benchmarking results and reports regarding differences among subscribers and their clients can also be purchased on a subscription basis. As new application clusters are developed along with their hubs, elements and booklets, subscribers can choose to subscribe to the additional application clusters and have these seamlessly and effortlessly downloaded into their network servers.
By design, application clusters run in an integrated fashion, allowing users to expand the use of the invention to any number of business or research processes with little setup work. For example, a job description and a classification and compensation application cluster can run simultaneously with an employee selection application cluster and with a performance management and appraisal application cluster. The data collected in one application cluster is readily available, if needed, for the other clusters.
Referring in greater detail to the Figures, Figure 1 shows how the invention structures a typical research process. A subscriber provides one or more clients with the ability to use software that implements the use of the method of the invention. Each client formulates research questions that an application cluster will be used to research.
In order to design the application cluster, the publisher creates or renames hub categories; structures, labels, and defines the hub relationship categories; and determines what set of access and data entry rights are needed for each hub category. The publisher then renames computer screen labels and determines the pivot hub category that will be the target of observations and feedback for that application cluster. The publisher also builds booklets to be attached to each pivot hub and sets up booklet and booklet item weights. Finally, the publisher sets up the calculation algorithm parameters (see below) in order to provide the client with the measurement needed by the client. Once these steps have been completed, the new application cluster is uploaded to the client site and is ready for use. From that point, the client can accumulate observations, produce reports, and develop new booklets as needed.
The calculation algorithm of the invention is a unique mathematical method that takes advantage of the unique structure of the invention to calculate values from a number of observations associated with tree-structures or outlines made of booklet items. To keep the following description of the algorithm simple, all examples will be drawn from a performance appraisal application cluster.
The algorithm calculates from the bottom up, i.e., it retrieves the input values for the booklet items at the lowest level of the tree (level n) and averages them proportionately by applying the booklet item weights if assigned, or distributing the weight evenly among all booklet items at this level if no specific weight was given. This generates the calculated values at the next level up (level n-1). The algorithm then averages this first series of calculated values proportionately, generating the next level values (level n-2), and so on, until the level at which the client needs the final result is reached. Because of this step-by-step calculation from the bottom up, the algorithm is called the "roll-up algorithm". When several input sources are used to enter observations about pivot hubs, the roll-up algorithm performs the roll-up calculation described above in parallel for each input source, and merges them at the level where the client needs consolidated results showing the average from all input sources (see the algorithm example below).
A number of client-defined parameters control what is calculated and how the results are displayed for a given application cluster. Some of these parameters can be changed by the user of the invention when producing reports to test how variations in the research hypothesis alter the results of the calculation. A number of these parameters are set up, as discussed previously, when the booklet is created, as follows:
The "booklet item default value" (see definition aboye) determines the value to be used for a given element if no score was entered for the corresponding booklet item, such as 2.5 on a 5 point scale.
The "booklet item weight" (see definition above) determines the proportional weight to be attributed to the input value for a booklet item compared with all other booklet items at the same level in the booklet tree, such as 25% of the total weight for that level.
The "booklet weight" (see definition above) determines the proportional weight to be attributed to the calculated results for each booklet attached to a given pivot hub, such as 0.3 for a "goals" booklet and 0.7 for a "functions" booklet.
All other calculation parameters are set up when the cluster is created, as follows: "Input source weight" (see definition above), as previously described, determines the proportional weight to be attributed to each input source for a given pivot hub, such as 0.9 for a supervisor and 0.1 for a peer.
"Cluster default value" (see definition above) as used herein is the value to be used in the calculation for a given booklet item if no input value was entered and no default value was set up for the booklet item.
"Missing replacement level" (see definition above) as used herein is the booldet level at which the default value is to be inserted if no input value exists at that level or below. A missing replacement level of "1" replaces missing values (no observation retrieved for the booklet item and the items below) with the default value chosen by the client only at level 1 in the booklets. A missing replacement value of "3" replaces missing values with the default value at level 3 in the booklets. If a default value has not been assigned to the booklet item by the client, the cluster default value is used (set by the client at the cluster level, see definition above). The cluster default value is sometimes mid-range if such a value is considered a typical score, or may be any other value as is appropriate for the application cluster (e.g., in biological research if there is no observation, an appropriate default value would be likely to be zero), or as is considered useful for the particular application.
"Roll-up level" (see definition above) as used herein is the level at which roll-up calculations stop. The algorithm can roll-up several booklets (roll-up level 0), or roll-up only to a given booklet level (roll-up level 1, 2,..). In the first case, an average value for all booklets attached to a pivot hub is produced. For example, the average value for the "job duties booklet" and the "goals booklet" attached to an Accountant I position (pivot hub) is calculated for Joan (hub) who is an incumbent in that position. This enables the client to compare Joan with other employees. In the second case, an average value for all booklet items at the roll-up level in the booklets attached to a pivot hub is calculated. For example, for a roll-up level of 1 , the average value for "performs general ledger entries" and "maintains the filing system", the two level 1 booklet items in the "job duties" booklet attached to the Accountant I position is calculated. To do so, the algorithm rolled up all level 3 booklet items, then all level 2 booldet items below each of the two level 1 booklet items. The same thing is done for the level 1 booklet items in the "goals booldet". This enables the client to compare the results for specific booklet items across employees or across departments.
"Display level" (see definition above) as used herein is the level at which weighted averages from several different input sources are calculated and displayed. A display level of "1" merges all input sources such as the supervisor, the peers, the self, and the subordinates of a single employee at level 1 in the booklets attached to the position that the employee occupies. The display level provides the user with a detailed analysis of the results for the given pivot hub. It may also be appropriate, depending on the application cluster, to calculate more than one result by doing calculations on multiple sets of data in parallel. The calculation result itself is considered an observation and becomes part of the observation pool, and can also be used to produce a number of reports about the pivot hubs (macro level research).
All the algorithm parameters, except Higher-level Input, can be changed by the user of the invention after observations have been accumulated in order to alter the result of the calculation. For example, booklet weights or display level can be modified to perform a different calculation on the same data. The user of the invention can also select which booklet styles are to be included or excluded from the calculation, leading to the capacity to produce reports about a specific subject or content area (goals, job functions, equipment, and the like). The following is a detailed example of the roll-up algorithm. The overall steps discussed below are shown in Figure 4, with details being added in Figure 5.
The following discussion is a simple example of a roll-up calculation as used in the algorithm of the invention. In this example, the application cluster has two booklets about its pivot hub. Within booklet 1, booklet item 1 (at level 1) contains booklet item 2 (at level 2), which further contains booklet items 3-5 (at level 3), and booklet item 6 (at level 2), which further contains booklet items 7-8 (level 3). Booklet item 9 is at level 1 within booklet 1. Within booklet 2, booklet item 10 (level 1) contains booklet items 11-12 (level 2). The structure of these two booklets may be diagrammed as follows:

Booklet 1
    Booklet item 1
        Booklet item 2
            Booklet item 3
            Booklet item 4
            Booklet item 5
        Booklet item 6
            Booklet item 7
            Booklet item 8
    Booklet item 9
Booklet 2
    Booklet item 10
        Booklet item 11
        Booklet item 12
In this example, a number of algorithm parameters have been set by the publisher to configure the application cluster for the user of the invention. Table 1a lists these parameters. These will be used at different stages in the roll-up calculation.
Table 1a
[Table shown as an image in the original document; values not reproduced.]
A number of algorithm parameters have also been set up by the user of the invention when configuring the booklets used in the application cluster. Table 1b lists these parameters.
Table 1b
[Table shown as an image in the original document; values not reproduced.]
To perform the analysis of the application cluster, all observations about booklet items that have been entered into the system by authorized persons (sources) are first retrieved from the database. For example, in a performance appraisal application cluster, all the observations (called performance notes in this cluster) are retrieved. As previously described, an observation could contain any number of atoms (data elements) that are defined by the booklet item to which they relate and the input value that was entered. In the following example, ten observations are retrieved from the database for the time period fixed by the user. For simplicity, each observation contains only one atom with its booklet item and input value, as shown in Table 2. Because there are no observations about booklet items 1, 2, 6, 7 and 10, there are no entries for these booklet items in this table.
Table 2
[Table shown as an image in the original document; values not reproduced.]
The "source" in the above table is the single individual who entered the observation. Thus A, B, and C could be names or employee number for example. It can be seen from Table 2 that observation numbers 1 and 9 are from two different sources (A and B), for example, from two different co-workers of an employee, but both relate to booklet item 4. An example of such a situation would be two peers entering an observation about the same job function such as "fires weapons accurately".
The atoms in the collection are further categorized by input source, which is essentially a grouping of the individual sources, a list of which was established when the application cluster was built. Examples of such input sources are "self", "direct supervisor", "assessor", "client" and the like. For each input source a coefficient is assigned, based on the weight to be assigned that source's input in the calculations. For example, in a performance appraisal cluster, the input from self might be given less weight than the input from supervisors. From that point on, the calculations are performed in parallel for each input source, until the results are merged together at the level at which the user of the invention chose to do so (display level, see definition above).
Table 3 sets forth the two input sources used in the example. In this table, the information is arranged in order of the input sources. Input source 1 is assigned a coefficient of 0.3 and input source 2 is assigned a coefficient of 0.7. In this example, sources A and B are both associated with input source 1 and source C is associated with input source 2.
Table 3:
[Table shown as an image in the original document; values not reproduced.]
If the input source cannot be established for a given observation, or is deemed to be invalid in some way or inactive, the atoms contained in that observation are removed from the calculation; the same occurs if the input source has a coefficient of zero. Similarly, if the booklet associated with an atom is not valid or active, or if the booklet has a coefficient of zero, the atom is excluded from the calculations. Information about booklet items that are associated with the atoms to be used in the calculations is retrieved from the database. The input values may be continuous or discrete numeric values, Boolean, multiple-choice, etc., depending on the type of booklet item. Any numeric responses may be used in the calculation process, for example a true/false type of booklet item with two possible responses (e.g., true = 1 or false = 0). For a response that is a numeric value, the information retrieved would include the range of acceptable values, the coefficient and the default value (if any).
The information about each booklet item is then analyzed. Table 4 shows the booklet item information that is retrieved for the booklet items in booklet 1 and booklet 2, including the level in the tree where the booklet item appears, the range of acceptable values, the coefficient assigned to that booklet item, and the assigned default value of that booklet item. In addition, Table 4 indicates the element associated with each booklet item (arbitrarily assigned a letter A-L). Note that in this example, all of the booklet items have ranges of 1-5, except booklet item 8, which has a range of 1-9. Also note that booklet item 9 was not assigned a default value.
Table 4
[Table shown as an image in the original document; values not reproduced.]
The software verifies that the element associated with each booklet item exists and is valid for the subscriber or client. The software also verifies that the booklet item coefficient is not zero and that the input value satisfies the conditions, such as range, specified for this booklet item; otherwise the associated atom is excluded from calculations. Normally, this process would exclude very few atoms.
Because the range of values specified is not necessarily the same for all booklet items, and because different booklet items may have different types of response, all input values are scaled on a range from 0 to 1, and all calculations are done on this scaled value. Scaled values for input sources 1 and 2 are shown in Tables 5a and 5b, respectively.
[Tables shown as images in the original document; values not reproduced.]
Not shown above are the booklet items for which there was no observation retrieved, for example, booklet item 9, input source 1, and booklet item 10, input source 2.
The default value initially assigned by the subscriber to each booklet item is entered for booklet items that are at the missing replacement level (see definition above) and for which no observations are retrieved at that level or below. The missing value replacement level is set when the application cluster is designed. A missing value replacement level of "1" replaces missing values (no observation retrieved for the booklet item or below) with the default value only at level 1 in the booklets. A missing value replacement level of "3" replaces missing values with the default value at level 3 in the booklets. If a default value has not been assigned to the booklet item information, the cluster default value (see definition above) is used (set by the user of the invention at the cluster level). The cluster default can be the mid-range if such a value is considered a typical score, or may be any other value as is appropriate for the application cluster (e.g., in biological research, if there is no observation an appropriate default value would be likely to be zero), or as is considered useful for the particular application. The example uses a missing replacement level of 1. Table 6 shows only the level 1 booklet items (booklet items 9 and 10) for which there is no observation (for booklet item 9, input source 1; and for booklet item 10, input source 2). Note that since booklet item 9 does not have a default value assigned to it, the cluster default value is used for replacement, which is 3 in this example.
Table 6
[Table shown as an image in the original document; values not reproduced.]
For booklet items that have more than one observation retrieved (in the example, booklet item 4 and booklet item 3), the average of all of the scaled input values for that item is calculated. The resultant averages for these booklet items are shown in Table 7.
Table 7
[Table shown as an image in the original document; values not reproduced.]
Next, the coefficients for all level 3 items that have an input value are scaled to a 0 to 1 scale so that the sum of these coefficients equals 1. This process redistributes the weights among the items that have an input value and ignores all items for which no observation was retrieved. Thus, if booklet items 3-5 are at level 3 and have original coefficients of 0.4, 0.3 and 0.2 respectively, and if booklet item 3 is ignored because no observation is retrieved for that item, the scaled coefficients for booklet items 4 and 5 will be 0.6 and 0.4, respectively. Similarly, if the coefficients for booklet items 7 and 8 are each 1.0, but booklet item 7 is ignored, the scaled coefficient of booklet item 8 will remain 1.0. The scaling of coefficients is done for each input source independently. The scaled coefficients for level 3 items in the example herein are thus shown in Tables 8a and 8b for input sources 1 and 2, respectively.
Table 8a - Input source 1
[Table shown as an image in the original document; values not reproduced.]
The roll-up of level three values to level two is shown in Table 9. The scaled averages or scaled value and the scaled coefficients for the level three items are used. If a booklet item does not have a scaled average or an input value, it is ignored in the calculations. For booklet items 4 and 5, the scaled average or scaled value, respectively, was multiplied by the scaled coefficient, and the products added together to form the roll-up level two value. Similar calculations are done for the remaining level 3 booklet items.
Table 9a - Input source 1
[Table shown as images in the original document; values not reproduced.]
Table 9b - Input source 2
[Table shown as an image in the original document; values not reproduced.]
After the roll-up to level two is accomplished, the coefficients for all level 2 items that have an input value or a weighted average are scaled as was done for the coefficients for the level 3 items. Results for the example above are shown in Tables 10a and 10b.
Table 10a - Input source 1
[Table shown as an image in the original document; values not reproduced.]
Table 10b-Input source 2
After this, the level two values are rolled up to level one using the weighted average or input value and the scaled coefficient for the level two items, as shown in Tables 11a and 11b. If a booklet item does not have a weighted average or an input value, it is ignored in the calculations.
Table 11a-Input source 1
Table 11b-Input source 2
At this point, the calculations have reached level 1, which is the selected display level in this example (the level at which the user of the invention wishes to consolidate the data from all input sources; see definition above). The consolidated average for each level one booklet item is then calculated. This calculation merges all the sources of input into a single result for each level one booklet item. The weighted averages for the level one items and the coefficients for each source of input are used. The consolidated average results for this example are shown in Table 12 for booklet items 1, 9 and 10, which are each at level 1.
Table 12
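A sketch of the consolidation step, with hypothetical source coefficients (the example's actual coefficients are in the table images and are not reproduced here):

def consolidate(values_by_source, source_coefficients):
    """values_by_source: {source_id: level-1 weighted average} for one
    booklet item; source_coefficients: {source_id: source weight}."""
    total = sum(source_coefficients[s] for s in values_by_source)
    return sum(v * source_coefficients[s] / total
               for s, v in values_by_source.items())

# Hypothetical: input source 1 weighted 2.0 and input source 2 weighted 1.0.
print(consolidate({1: 0.6, 2: 0.3}, {1: 2.0, 2: 1.0}))   # 0.5 (up to float rounding)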
After the roll-up to level one is accomplished, the coefficients for all level 1 items are scaled. Results for the example above are shown in Table 13.
Table 13
The level 1 values are rolled up to level 0 (the booklet level). The consolidated averages and the scaled coefficients for the level 1 items are used. Results for the example are shown in Table 14.
Table 14
Finally, the roll-up level (level "-1" in this example) value from all booklets is calculated, as shown in Table 15.
Table 15
In the case of a performance appraisal application cluster, this result would represent the score of the employee on the appraisal. This information can be used comparatively to determine pay increases among a pool of employees, to determine standards against which to compare year to year performance for the employee, to keep the employee apprised of current performance ratings, to provide a benchmark for like positions across a group of employers using the same booklets for the position, to track which supervisors or peers are monitoring performance, to compare rating patterns across departmental divisions or units, to identify outlier observations or outlier input sources, and so forth.
In a quality control application cluster, this result would represent for the manager the final quality score for a set of quality measures contained in quality booklets about a particular manufacturing process for a pre-defined product. In that case the data might be input by inspectors, customers, employees and other input sources. A subscriber or client to a quality control application cluster might set target results, compare results, or run the algorithm for different roll-up and display levels to identify which booklet item (and therefore which element) in a booklet is worth monitoring because it significantly impacts the results, which input source is worth training or requiring input from to improve the quality or quantity of the inputs, which elements proportionately impact the final quality score/result, what patterns of quality observations are being made, and many other questions related to the interest of a user in that application cluster.
Root element calculations
If an application cluster has enabled root element calculation, the algorithm functions in an ongoing manner, calculating each new atom as it is entered by the subject. This is particularly useful when people respond to questionnaires or surveys: the software "responds" to the subject answering the questions by scoring the answers and deciding which areas need additional questions, thus adapting the questionnaire to the subject's responses.
Because questionnaires and surveys are completed by a single person, there is only one input source (the subject) and therefore only one calculation path in the algorithm. The following example shows how the algorithm performs root element calculations.
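Before walking through the example, here is a minimal sketch of per-atom scoring. The names are assumptions, and the cumulative rule (each atom adds root coefficient times scaled value to the running result for its root element) is an inference from the worked numbers below (e.g., 0.475 + 0.28 x 0.25 = 0.545 for atom 5):

running_results = {}   # root element -> cumulative calculation result

def score_atom(root_element, root_coefficient, scaled_value, trigger_range):
    """Update the running result for a root element and report whether
    the trigger range has been reached (which appends correlated items)."""
    result = running_results.get(root_element, 0.0)
    result += root_coefficient * scaled_value
    running_results[root_element] = result
    low, high = trigger_range
    return "append correlated items" if low <= result <= high else "none"

# Atom 5 of the example: previous result 0.475 for root element Z,
# root coefficient 0.28, scaled value 0.25 -> 0.545, above the range.
running_results["Z"] = 0.475
print(score_atom("Z", 0.28, 0.25, (0.47, 0.54)))   # 'none' (0.545 > 0.54)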
Booklet Structure:
Booklet 1
Booklet item 1
Booklet item 2
Booklet item 3
Booklet item 4
Booklet weight = 1
In this example, additional algorithm parameters have been set by the publisher to configure the application cluster for the user of the invention. Table 16a lists these parameters. In addition, the user of the invention can instruct the algorithm to append additional items to the booklet, reset calculation upon reaching a trigger value, and adjust the presentation of booklet items to the subject.
Table 16a
A user of the invention sets the root coefficients and trigger ranges to be used by the algorithm. These may have been obtained by statistical analysis or may be empirical. Table 16b lists the booklet items with their associated element, and the root element(s) with the root coefficient(s).
Table 16b
Trigger ranges have been set as follows; they can be different for each research model or questionnaire. Trigger ranges are shown in Table 16c.
Table 16c
To perform the calculation, each atom about booklet items entered by the subject is scored upon entry. For example, in a satisfaction survey cluster, all the atoms (questionnaire responses in this cluster) are calculated upon entry.
In this example, the first entry made by the subject is:
Atom 1:
Booklet 1, Booklet item 1, Input value = 1
Information about the booklet item selected is retrieved from the database. The input values may be continuous or discrete numeric values, Boolean, multiple-choice, etc., depending on the type of booklet item. Any numeric responses may be used in the calculation process, for example a true/false type of booklet item with two possible responses (e.g., true = 1 or false = 0). For a response that is a numeric value, the information retrieved would include the range of acceptable values, the coefficient and the default value (if any). With questionnaire-based clusters, the default value and the coefficient of a booklet item do not apply, because the subject is forced to answer every question and because booklet items may be added that would modify the coefficients assigned to booklet items.
Because root element calculation has been enabled for the application cluster, the algorithm also retrieves the root elements that are correlated with the element associated with the booklet item, as well as their root coefficient. The information retrieved for booklet item 1 is as follows:
[Table not reproduced]
The input value is scaled on a 0 to 1 scale:
[Table not reproduced]
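The scaling itself is a simple range normalization. A sketch (the formula is an inference consistent with atom 4 below, where an input of 2 on a 1-5 range scales to 0.25):

def scale(value, low, high):
    """Map an input value on the item's acceptable range onto 0-1."""
    return (value - low) / (high - low)

print(scale(2, 1, 5))   # 0.25
print(scale(1, 1, 5))   # 0.0 - the range minimum maps to 0
print(scale(5, 1, 5))   # 1.0 - the range maximum maps to 1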
Because the results of atom 1 are below the trigger range, the questionnaire or survey will not be modified and the subject continues answering the existing questions.
Atom 2:
Booklet 1, Booklet item 2, Input value = 5
Information about the booklet item selected is retrieved from the database.
The algorithm also retrieves any previous root calculation results for root element Z in order to include them in the new calculation. In this example, no previous calculation exists for root element Z.
The input value is scaled:
[Table not reproduced]
Because the result of the root calculation is lower than the trigger range (0.45 < 0.47), no additional item needs to be added to the questionnaire and/or booklet.
Atom 3:
Booklet 1, Booklet item 3, Input value = 2
Information about the booklet item selected is retrieved from the database.
[Table not reproduced]
The algorithm also retrieves any previous root calculation results for the same root elements as above (Y and Z) in order to include them in the new calculation. In this example, the following previous calculation results are retrieved:
[Table not reproduced]
The input value is scaled:
[Table not reproduced]
The atom is then calculated:
[Table not reproduced]
Because the result of the root calculation for root element Z is within the trigger range (0.47 < 0.475 < 0.54), 3 elements (the item size sub-set indicator is 3) correlated with root element Z will be randomly selected. The user of the invention may insert the 3 selected items into the original booklet to create a new booklet version. The user of the invention may also instruct the algorithm to present the 3 selected items to the subject at this point, or to wait until the original booklet has been completed (as in this example). Selected elements are removed from the list of available elements. If there are not enough additional elements available for addition to the booklet, only the available ones are added and the missing ones are simply ignored. The user of the invention has the option to allow the same element to be added more than once in the same questionnaire. This option is enabled by the publisher. If this option is selected, the software will first use all the available elements before adding an element twice. Here, booklet items 5, 6 and 7 will be added at the end of the questionnaire booklet; a selection sketch follows.
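A minimal sketch of that random selection step (function names and list-based bookkeeping are assumptions; only the sub-set size of 3 and elements 5-7 come from the example):

import random

def select_items(available_elements, subset_size):
    """Randomly pick up to `subset_size` unused elements correlated with
    the triggering root element; missing ones are simply ignored."""
    k = min(subset_size, len(available_elements))
    chosen = random.sample(available_elements, k)
    for element in chosen:
        available_elements.remove(element)   # each element is used at most once
    return chosen

available = [5, 6, 7]               # elements correlated with root element Z
print(select_items(available, 3))   # e.g. [6, 5, 7]; `available` is now empty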
Atom 4:
Booklet 1, Booklet item 4, Input value = 2
Information about the booklet item selected is retrieved from the database.
[Table not reproduced]
The algorithm also retrieves any previous root calculation results for root element W in order to include them in the new calculation. In this example, no previous calculation exists for root element W.
The input value is scaled:
Atom No. | Input Value | Scaled Value | Booklet Item | Booklet
2 | 2 | 0.25 | 4 | 1
The atom is then calculated:
Root element | Calculation | Result | Trigger range | Action
W | 0.3 x 0.25 | 0.075 | 0.89-0.95 | none
Because the result of the root calculation is lower than the trigger range, no additional item needs to be added to the questionnaire/booklet.
The subject has now reached the end of the original booklet. All selected items for root elements that have reached the trigger value are displayed, and the subject can respond to the additional questions:
Atom 5:
Booklet 1, Booklet item 5, Input value = 2
Information about the booklet item selected is retrieved from the database.
Booklet | Booklet item | Level | Range | Element | Root elements | Root coefficients
1 | 5 | 1 | 1-5 | (not shown) | Z | 0.28
If the trigger value calculation is reset, the algorithm will ignore previous root calculation results. If the trigger value is not reset (as in this example), the algorithm retrieves previous root calculation results for the same root element.
[Table not reproduced]
The input value is scaled:
[Table not reproduced]
The atom is then calculated:
[Table not reproduced]
Because the result of the root calculation for root element Z is higher than the trigger range (0.545 > 0.54), no additional items will be added.
The algorithm then proceeds as above, calculating atoms 6 and 7 to determine whether additional booklet items need to be tested.
Higher Level Input
Following are cases showing the impact of different configurations of Higher Level Input on algorithm calculations:
Suppose there is a booklet item, "Evaluates Community Needs," that contains two lower-level booklet items, "Gets input from colleagues" and "Interacts with target population". Assume that both of the lower-level booklet items are weighted the same. Further assume that the "Higher-level input Weight" is 0.3000.
Case 1: No higher-level input
Input:
"Gets Input from Colleagues" = 3
"Gets Input from Colleagues" = 5
"Interacts with target population" = 1
Calculates:
Mean for "Gets Input from Colleagues" is 4.0
Mean for "Interacts with target population" is 1.0
Mean for "Evaluates Community Needs" from lower-level means is 2.5
Comments: The value of "Higher-level input Weight" is not used in the calculations.
Case 2: No lower-level input
Input:
"Evaluates Community Needs" = 3
"Evaluates Community Needs" = 4
Calculates:
Mean for "Evaluates Community Needs" from observations is 3.5
Comments: The value of "Higher-level input Weight" is not used in the calculations.
Case 3: Higher-level input and lower-level means
Input:
"Gets Input from Colleagues" = 3
"Gets Input from Colleagues" = 5
"Interacts with target population" = 1
"Evaluates Community Needs" = 3
"Evaluates Community Needs" = 4
Calculates:
Mean for "Gets Input from Colleagues" is 4.0
Mean for "Interacts with target population" is 1.0
Mean for "Evaluates Community Needs" from lower-level means is 2.5
Mean for "Evaluates Community Needs" from observations is 3.5
Combined mean for "Evaluates Community Needs" is 2.8 (2.5 @ 70% and 3.5 @ 30%).
Comments: In this case, the "Higher-level input Weight" determines the relative importance of the mean of the higher-level observations versus the mean of the lower-level means being rolled up.
There is one exception to the examples presented above. If higher-level input is present and all the lower-level means are only based on missing data, then the lower-level means are ignored and only the higher-level input is used. This case will only arise if the "Missing Value Replacement Level" is at least 2 for a cluster.
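The three cases above reduce to a small combination rule. A minimal sketch (the function name is an assumption; the numbers are those of the cases):

def combine(lower_mean, higher_mean, higher_weight):
    """Blend the mean rolled up from lower-level means with the mean of
    higher-level observations using the Higher-level input Weight."""
    if higher_mean is None:        # Case 1: no higher-level input
        return lower_mean
    if lower_mean is None:         # Case 2, and the missing-data exception
        return higher_mean
    return (1 - higher_weight) * lower_mean + higher_weight * higher_mean

print(combine(2.5, 3.5, 0.3))      # Case 3: 2.8 = 2.5 @ 70% + 3.5 @ 30%
print(combine(2.5, None, 0.3))     # Case 1: 2.5 - the weight is not used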
Flexibility of the Invention
While the invention has been described with reference to specific embodiments, it will be appreciated that the design of the invention makes numerous variations, modifications, and embodiments possible, and accordingly, all such variations, modifications, and embodiments are to be regarded as being within the spirit and scope of the invention.
Illustrations and Diagrams
The Research Process is illustrated by Figure 1.
Client Rights (See Figure 1.): The client has a number of individual hubs. The client has a pool of observations. The client has access to the generic rollup algorithm.
Cluster Role (See Figure 1.): The Application Cluster is designed to respond to a given research question. The Application Cluster controls which Hub(s) become Pivot Hub(s) for the application and how the Algorithm calculates the Observations about the Pivot Hub(s).
Accumulation of Observations using the Rollup Algorithm (See Figure 1.):
Cumulative Observations about the Pivot Hub(s) are processed in the rollup Algorithm. Calculation can include several parallel paths and can stop at any level within the booklets. The calculation result itself is an observation and can become part of the observation pool. The calculation result can be used to produce a number of reports about the Pivot Hub(s).
The Application Cluster Design is illustrated by Figure 2.
The Business Structure is illustrated by Figure 3.
Step 1: The Patent Holder houses all data and may distribute this data to licensed Subscribers. (See Figure 3.)
Step 2: Each Subscriber purchases licenses for any number of individual internal Clients. Subscribers purchase licenses for any number of application Clusters and grant use to individual Clients. Subscribers also subscribe to a list of Elements and a number of Booklets to be used for accumulation of observations. (See Figure 3.)
Step 3: Booklets list a number of individual Items in a tree and branch structure. Each Item refers to a single Element and its specific position in the tree and branch structure. It also includes a number of cluster-specific parameters, such as the measurement Scale, the Default value and the item Coefficient. (See Figure 3.)
Step 4: Booklets are attached to Hubs. (See Figure 3.)
Step 5: An Observation contains at least one Atom. (See Figure 3.)
Links between Structure and Research Process are illustrated in Figure 3.
Comment 1: Some Hubs have rights to enter observations about other Hubs. (See Figure 3.)
Comment 2: An Application Cluster has a unique structure based on the research topic covered. That topic defines the content of the software windows. (See Figure 3.)
Algorithm Roll-Up Overview is illustrated in Figure 4.
Step 1: Atoms are grouped by Input Sources. (See Figure 4.)
Comment 1 (Structure): Each observation contains at least one Atom: a reference to a single Booklet Item selected from a Booklet and the Input Value. (See Figure 4.)
Step 2: The rollup algorithm performs its calculations in parallel for each Input Source.
Comment 2 (Process): The number of Input Sources (and therefore of calculation paths) is defined by the Cluster. (See Figure 4.)
Step 3: After getting rid of invalid Atoms (zero coefficient, out of range, invalid Element, invalid Subscriber...), all Input Values are converted to the same scale. (See Figure 4.)
Comment 3 (Process): Item scales may vary. (See Figure 4.)
Step 4: For each level within the Booklet(s) tree, the Rollup Algorithm:
a) determines the average or the default value for the Item(s) at this level (only Items at the Missing Replacement Level for the given Cluster receive a Default Value if there are no values at or below this level);
b) re-scales the coefficients (weights) proportionally among the Items having a value (Items with no value are ignored);
c) calculates the Weighted Average or Value for the given level.
Several iterations of this calculation are performed. The Algorithm starts at the lowest level in the tree(s) and Rolls up to the next level until it reaches the Display Level. (See Figure 4.)
Comment 4a (Process): The Display Level is set by the user for the Cluster. (See Figure 4.)
Comment 4b (Process): Dynamic Element Inclusion may be initiated at this point. (See Figure 6.)
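As a compact illustration of steps (a) through (c) for a single level (an assumed composition, not the published implementation):

def level_value(children):
    """children: list of (value_or_None, coefficient) pairs for the
    items feeding one parent item at the next level up."""
    present = [(v, c) for v, c in children if v is not None]
    if not present:
        return None                    # defaults are handled upstream, step (a)
    total = sum(c for _, c in present)                 # step (b): rescale weights
    return sum(v * c / total for v, c in present)      # step (c): weighted value

# The item with no value (weight 0.4) is ignored; 0.3 and 0.2 rescale to
# 0.6 and 0.4, giving 0.5 * 0.6 + 0.75 * 0.4 = 0.6.
print(level_value([(None, 0.4), (0.5, 0.3), (0.75, 0.2)]))   # 0.6 (up to float rounding)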
Step 5: The Algorithm calculates the weighted average from all Input Sources for every Item at the Display Level. The results are displayed. (See Figure 4.)
Comment 5a (Process): The algorithm can stop here. (See Figure 4.)
Comment 5b (Process): The Roll-up Level is defined by the user for the Cluster. (See Figure 4.)
Step 6 (Optional): The algorithm may perform additional rollups as above until it reaches the Roll-up Level to display summary results from the Consolidated Averages. (See Figure 4.)
The Algorithm Roll-Up Detail is illustrated by Figure 5.
The Algorithm Dynamic Element Inclusion Overview is illustrated by Figure 6.

Claims

THE CLAIMS
What is Claimed Is:
1. An information handling process on which to base a research design comprised of hubs, relationships between hubs, booklets that include lists of booklet items that are linked to researchable elements and root elements, and a calculation algorithm, said research design being configurable to suit any research or business process through an application cluster.
2. The information handling process according to claim 1, wherein said research design enables continuous, discontinuous or sporadic accumulation of observations, each of which observations includes an atom or atoms, about booklet items from disparate input sources, said atom or atoms being correlated insofar as the booklet items refer to identical elements or root elements, enabling research to be performed across application clusters and across industries, business processes, and research protocols.
3. The information handling process according to claim 1, wherein each application cluster provides specifications for the behavior of the information handling system when operating on information related to that application cluster, and to each booklet used within the application cluster.
4. The information handling process according to claim 1, wherein atoms related to booklet items within one application cluster may be used by another application cluster if the application clusters each use booklet items related to identical elements or root elements.
5. The information handling process according to claim 1, wherein each application cluster is constructed by steps comprising: a) providing at least one booklet in the application cluster; b) providing booklet items related to said booklets, each of said booklet items at a defined position and level in a booklet; c) providing a hub that may serve as a source or target of observations within said application cluster; d) attaching said booklets to said hub; e) determining ranges, coefficients, default values, calculation specifications (higher level input value), and input specifications to be assigned to the booklet items; f) obtaining atoms of information from observations related to said application cluster, each of said atoms having a specified input value; g) relating each of the atoms to a particular booklet item; h) utilizing the relationship between the atoms, booklet items and booklets, the input values of the atoms, and the ranges, coefficients and default values assigned to the booklet items to determine scaled averages and scaled coefficients at the lowest defined booklet item level; i) utilizing the scaled averages and the scaled coefficients of the lowest defined booklet item level, rolling up to the next lowest booklet item level, if any, to yield scaled averages and scaled coefficients at the next to lowest booklet item level, or if there is no remaining next lowest booklet item level, rolling up to the booklet; and j) repeating steps h) and i) until the highest booklet item level is reached.
6. The information handling process according to claim 1, each research design comprising the following interrelated components: a) one or more application clusters; b) one or more booklets; c) one or more booklet items; and d) one or more atoms, each atom recording a single input value in an observation; wherein:
(a) each application cluster comprises at least one booklet; (b) each booklet comprises at least one booklet item;
(c) each atom is related to a booklet item; wherein the relationship between each atom and its related booklet item specifies the means for the information handling system to interpret the meaning of the value in the atom, so that the information handling system can correlate all atoms related to the same booklet item and can identify the booklet item associated with each atom; and (d) each booklet item is correlated with a root element.
7. The information handling process according to claim 1, wherein booklet items within a particular booklet are related to each other and to the particular booklet in a hierarchical structure, wherein information in the booklet items in the hierarchical structure can be rolled up to provide data about the booklet.
8. An information handling process for use with observations, comprising: a) defining one or more booklets in a first level, each of said booklets comprising one or more booklet items in a second level and having a defined weight; b) defining a plurality of input sources, each of said input sources having a defined weight; c) providing an opportunity for one or more atoms of each observation to be collected from said input sources, and assigning the collected atoms to related booklet items; d) analyzing the atoms for validity; e) determining an actual value for each collected atom; f) determining a scaled value for each atom based on a possible range of values for the atom and the actual value of the atom; g) determining at which level default values are to be entered for missing values; h) entering a missing replacement value at the determined level where there are no observations; i) for booklet items containing one or more atoms, enabling an option for one or both of: averaging the scaled values to result in a scaled average or averaging the scaled values to store the atom as a data point in a database; j) determining a roll-up value for the first level; and k) utilizing the roll-up value for the first level to determine a booklet value for each booklet.
9. The information handling process according to claim 8, further comprising using the booklet value for each booklet and a defined weight of the booklet to determine a roll-up value for the booklets; and utilizing the roll-up value for the booklets to determine a summary level roll-up value.
10. The information handling process according to claim 8, wherein one or more of said booklet items in said second level comprises one or more booklet items in a third level, each of said booklet items having a value, and further comprising determining a roll-up value for the second level prior to determining the roll-up value for the first level.
11. The information handling process according to claim 8, wherein one or more of the observations comprise a plurality of atoms of information.
12. The information handling process according to claim 8, wherein the observations are collected by more than one input source, and wherein the observations are grouped by input source.
13. The information handling process according to claim 8, wherein the observations are attached to a root element by: a) marking or attaching a correlation coefficient; b) setting a roll-up level for calculating whether a trigger value has been reached; c) defining a trigger value range for appending additional booklet items; and d) setting a number of booklet items to be appended if a trigger value has been reached.
14. The information handling process according to claim 8 wherein additional root elements are selected for research through the accumulation of observations that reach trigger value ranges.
PCT/US2001/001013 2000-01-13 2001-01-11 Process and system for gathering and handling information WO2001052109A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001227862A AU2001227862A1 (en) 2000-01-13 2001-01-11 Process and system for gathering and handling information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48346000A 2000-01-13 2000-01-13
US09/483,460 2000-01-13

Publications (1)

Publication Number Publication Date
WO2001052109A1 true WO2001052109A1 (en) 2001-07-19

Family

ID=23920117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/001013 WO2001052109A1 (en) 2000-01-13 2001-01-11 Process and system for gathering and handling information

Country Status (2)

Country Link
AU (1) AU2001227862A1 (en)
WO (1) WO2001052109A1 (en)


Citations (6)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5832496A (en) * 1995-10-12 1998-11-03 Ncr Corporation System and method for performing intelligent analysis of a computer database
US5870746A (en) * 1995-10-12 1999-02-09 Ncr Corporation System and method for segmenting a database based upon data attributes
US6195681B1 (en) * 1997-02-07 2001-02-27 About.Com, Inc. Guide-based internet directory system and method
US6044374A (en) * 1997-11-14 2000-03-28 Informatica Corporation Method and apparatus for sharing metadata between multiple data marts through object references
WO2000013112A1 (en) * 1998-08-31 2000-03-09 Cabletron Systems, Inc. Method and apparatus for managing data for use by data applications
WO2000026821A1 (en) * 1998-11-03 2000-05-11 Platinum Technology, Inc. Method and apparatus for populating sparse matrix entries from corresponding data

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DATABASE GENUINE ARTICLE [online] MCCLEAN ET AL.: "Incorporating domain knowledge into attribute-oriented data mining", XP002940033, Database accession no. 08683031 *
DATABASE GENUINE ARTICLE [online] MORZY ET AL.: "Pattern-oriented hierarchical clustering", XP002940031, Database accession no. 08897138 *
DATABASE INSPEC [online] JAGADISH ET AL.: "What can hierarchies do for data warehouses", XP002940032, Database accession no. 6596713 *
DATABASE INSPEC [online] KUDOH ET AL.: "Data mining by generalizing database based on the appropriate abstraction", XP002940034, Database accession no. 6719474 *
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, vol. 15, no. 6, June 2000 (2000-06-01), pages 535 - 547 *
JOURNAL OF JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, vol. 15, no. 4, July 2000 (2000-07-01), pages 638 - 648 *
POZNAN UNIV. TECHNOL. INST. COMP. SCI. UL PIOTROWO 3A/PL-60965, vol. 1691, 1999, pages 179 - 190 *
PROCEEDINGS OF THE TWENTY-FIFTH INTERNATIONAL CONFERENCE ON VERY LARGE DATA BASES, 7 September 1999 (1999-09-07) - 10 September 1999 (1999-09-10), pages 530 - 541 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130132881A1 (en) * 2006-02-14 2013-05-23 Tony Barr Satisfaction metrics and methods of implementation
CN116257623A (en) * 2022-09-07 2023-06-13 中债金科信息技术有限公司 Text emotion classification model training method, text emotion classification method and equipment
CN116257623B (en) * 2022-09-07 2023-11-28 中债金科信息技术有限公司 Text emotion classification model training method, text emotion classification method and equipment

Also Published As

Publication number Publication date
AU2001227862A1 (en) 2001-07-24

Similar Documents

Publication Publication Date Title
US20120173381A1 (en) Process and system for pricing and processing weighted data in a federated or subscription based data source
US7788120B2 (en) Method and system for interfacing clients with relationship management (RM) accounts and for permissioning marketing
Hill et al. Network-based marketing: Identifying likely adopters via consumer networks
Verma et al. Understanding customer choices: A key to successful management of hospitality services
Russell et al. People and information technology in the supply chain: Social and organizational influences on adoption
Roth Handbook of metrics for research in operations management: Multi-item measurement scales and objective items
US7200607B2 (en) Data analysis system for creating a comparative profile report
Mannino et al. Efficiency evaluation of data warehouse operations
Paradi et al. Knowledge worker performance analysis using DEA: an application to engineering design teams at Bell Canada
Martin et al. An information delivery model for banking business
Wei et al. Analytic network process-based model for selecting an optimal product design solution with zero–one goal programming
Smith A framework for analysing the measurement of outcome
US20130117037A1 (en) Goal Tracking and Segmented Marketing Systems and Methods with Network Analysis and Visualization
US20030208394A1 (en) Sales tracking and forecasting application tool
Nedjati et al. Evaluating the intellectual capital by ANP method in a dairy company
Wang Perception and reality in developing an outcome performance measurement system
JP2002269329A (en) System and method of supporting improvement of business
Sadoughi et al. Ranking evaluation factors in hospital information systems.
Zhao et al. The roles of aspirations, coefficients and utility functions in multiple objective decision making
WO2001052109A1 (en) Process and system for gathering and handling information
Zhang et al. Evaluation on collaborative satisfaction for project management team in integrated project delivery mode
US20200202279A1 (en) System and Method for Improving Sales Force Performance
Zaied et al. Critical success factors framework for implementing and adapting BIS on organisational performance
Jansen van Rensburg et al. Approaches taken by South African advertisers to select and appoint advertising agencies
Naser SadrAbadi et al. Process-oriented improvement: a modern look at drawing the organizational progress roadmap

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: COMMUNICATION PURSUANT TO RULE 69 EPC (EPO FORM 1205A OF 301002)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP