WO2013034917A1 - Analytics
- Publication number
- WO2013034917A1 (PCT/GB2012/052198)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- comparison
- group
- benchmark
- metadata
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06398—Performance of employee with respect to a job function
Definitions
- This invention relates to apparatus for and a method of providing access to comparison metrics data relating to the comparison of a test or target group with a reference group, such as a benchmark group.
- An analytics system is also described.
- the invention has particular relevance in the sphere of talent management.
- the invention allows a user or organisation to determine or identify a parameter such as "benchstrength" in talent acquisition (recruitment and selection), talent development and succession against a number of defined metrics, through which actions to improve their talent management processes can be identified.
- Comparison of the characteristics of an individual against those of a group or a population is commonplace. Traditionally, assessment testing has followed similar thinking, typically comparing an individual's scores on an assessment or personality test with the mean test scores of a group or a population. Such a comparison allows evaluation and ranking of the individual relative to the group or population and consequent conclusions are often drawn, for example regarding the individual's suitability for a particular role. Although such comparisons have proved useful, it has been appreciated pursuant to the present invention that further pertinent information may be extracted from assessment test data, and in particular from comparisons based on macro aggregation of assessment data through which organisations can be compared to industry sector benchmarks as well as by geography and business function.
- a related problem is how to provide interested parties with access to this further information given the inherent, not least commercial, value and sensitivity of what may be a large body of test data, which can be manipulated to provide an analytics view of a user's talent goals and issues, and which requires a balance to be struck between ease-of-access and data security.
- the present invention aims to address at least some of these problems.
- apparatus for providing access to comparison data relating to a comparison of properties of a target group with those of a reference group, the apparatus comprising any, some, or all of the following features: a database of reference metrics data determined from testing of members of a reference population; means for selecting target group metrics data, the metrics data determined from testing of the members of the target group and associated with metadata relating to the target group; means for selecting at least one item of metadata; means for selecting a reference group from the reference population in dependence on the selected metadata, the reference group being associated with reference group metrics data determined from testing of the members of the reference group and associated with metadata relating to the reference group; means for selecting a comparison aspect, the comparison aspect being associated with a subset of metrics data; means for generating comparison data relating to the comparison of the distribution of metrics data values for the target group with that of the reference group in accordance with the selected comparison aspect; and means for outputting the resulting comparison data.
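The selection-and-comparison flow described in the apparatus claim above can be sketched in code. This is an illustrative toy sketch, not the patented implementation: all function names, record shapes, and the use of per-aspect means as the "comparison data" are assumptions for demonstration only.

```python
# Illustrative sketch (NOT the patented implementation) of the claimed flow:
# select a reference group from a pool by metadata, then compare the
# distribution of one metrics aspect between target and reference groups.

def select_group(pool, **metadata):
    """Select records whose metadata matches every given key/value pair."""
    return [r for r in pool
            if all(r["metadata"].get(k) == v for k, v in metadata.items())]

def compare_distributions(target, reference, aspect):
    """Compare the distribution of one metrics aspect between two groups
    (here reduced to group means, purely for illustration)."""
    t_vals = [r["metrics"][aspect] for r in target]
    r_vals = [r["metrics"][aspect] for r in reference]
    return {
        "aspect": aspect,
        "target_mean": sum(t_vals) / len(t_vals),
        "reference_mean": sum(r_vals) / len(r_vals),
    }

# Toy reference population: each record couples metrics values with metadata.
pool = [
    {"metrics": {"ability": 62}, "metadata": {"sector": "retail", "region": "EU"}},
    {"metrics": {"ability": 48}, "metadata": {"sector": "retail", "region": "EU"}},
    {"metrics": {"ability": 71}, "metadata": {"sector": "finance", "region": "EU"}},
]
target = [{"metrics": {"ability": 66}, "metadata": {"sector": "retail", "region": "EU"}}]

reference = select_group(pool, sector="retail")   # benchmark chosen by metadata
result = compare_distributions(target, reference, "ability")
```

In practice the claim describes richer comparison data (full distributions rather than means) and a database-backed reference population, but the shape of the flow is the same: metadata drives group selection, and the comparison aspect selects the subset of metrics compared.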
- the apparatus further comprises means for preventing a user from gaining direct access to the database of reference metrics data.
- the apparatus further comprises means for selecting a particular reference group for comparison with the target group.
- the particular reference group is a standardised group.
- the particular reference group may be an idealised group.
- the testing of the members of the target group comprises applying a substantially identical test for each member.
- the target group may be an individual.
- the metrics data relates to at least one personal characteristic.
- the personal characteristic may comprise at least one of: aptitude, ability, competency, skill, personality, knowledge, motivation, or behaviour.
- the comparison aspect relates to a potential future property of the target group.
- the comparison aspect may be one of: Leadership potential, Competency, or Ability.
- the Ability is one of: verbal, numerical or inductive reasoning.
- the metadata relates to a property of the metrics data.
- the metadata may relate to a property of the testing, for example at least one of: type of test, type of parameter tested, date of test, location of test, language in which test was conducted, or reason for the testing.
- the metadata may relate to the outcome of the testing, for example at least one of: an offer of a position, acceptance of an offer, successful employment for a specific duration, or progression of the employee.
- the metadata relates to a property of the target group, for example: spoken language(s), place of birth, residence, nationality, age, gender, level of education, or field of education.
- the metadata relates to a relationship with an organisation.
- the metadata relates to: Geography, Industry sector, Business function, or Job-level.
- the metadata relates to an employment status or role.
- the employment status may comprise at least one of: full or part-time employment, consultancy, prospective employment, or retirement.
- the employment role may comprise at least one of: employment location, level, role, function, field, or type.
- the metadata relates to a property of the organisation.
- the property of the organisation comprises at least one of: company; industry; sector; location; or size.
- the metadata relates to performance of the target group or individual.
- the performance may comprise at least one of: sales volume, profit, or public ranking.
- the apparatus further comprises means for editing the metadata of the target group metrics data.
- the metadata relates to an assessment of a property of the target group.
- the value of the metadata is identical for target and reference groups.
- the output comparison data comprises an aggregate of resulting comparison data.
- the apparatus may further comprise means for separating the aggregated resulting comparison data into constituent parts.
- the apparatus may further comprise means for filtering the resulting comparison data.
- the means for filtering is adapted to filter in dependence on a selected further item of metadata; alternatively or in addition, the means for filtering may be adapted to filter in dependence on a selected comparison aspect.
- the apparatus further comprises means for presenting a series of prior comparison data outputs in the form of a carousel; or, alternatively or in addition, in the form of a slide deck.
- the apparatus further comprises means for periodically updating the database of reference metrics data.
- the apparatus further comprises means for periodically updating the comparison data.
- the apparatus further comprises means for generating a comparison parameter in dependence on the comparison data, comprising a value for the proportion of the target group having metrics data values in a pre-determined segment of the reference group metrics data value distribution.
- the comparison parameter comprises a percentage, fraction or segment. More preferably, the comparison parameter comprises at least one of: top decile, bottom decile, top quartile, bottom quartile, top percentile, or bottom percentile.
- the apparatus further comprises means for providing a commentary relating to at least one element of the comparison data, more preferably the commentary is adapted to provide information correlating the metrics data value or value range to an outcome.
- a method of providing access to comparison data relating to a comparison of properties of a target group with those of a reference group comprising: providing a database of reference metrics data determined from testing of members of a reference population; selecting target group metrics data, the metrics data determined from testing of the members of the target group and associated with metadata relating to the target group; selecting at least one item of metadata; selecting a reference group from the reference population in dependence on the selected metadata, the reference group being associated with reference group metrics data determined from testing of the members of the reference group and associated with metadata relating to the reference group; selecting a comparison aspect, the comparison aspect being associated with a subset of metrics data; generating comparison data relating to the comparison of the distribution of metrics data values for the target group with that of the reference group in accordance with the selected comparison aspect; and outputting the resulting comparison data.
- the method further comprises preventing a user from gaining direct access to the database of reference metrics data.
- the method further comprises selecting a particular reference group for comparison with the target group.
- the particular reference group may be a standardised group.
- the particular reference group may be an idealised group.
- testing of the members of the target group comprises applying a substantially identical test for each member.
- the target group may be an individual.
- the metrics data relates to at least one personal characteristic.
- the personal characteristic may comprise at least one of: aptitude, ability, competency, skill, personality, knowledge, motivation, or behaviour.
- the comparison aspect relates to a potential future property of the target group.
- the comparison aspect may be one of: Leadership potential, Competency, or Ability.
- the Ability may be one of: verbal, numerical or inductive reasoning.
- the metadata relates to a property of the metrics data.
- the metadata relates to a property of the testing. This may be at least one of: type of test, type of parameter tested, date of test, location of test, language in which test was conducted, or reason for the testing.
- the metadata may relate to the outcome of the testing. The outcome may be at least one of: an offer of a position, acceptance of an offer, successful employment for a specific duration, or progression of the employee.
- the metadata relates to a property of the target group.
- the metadata may relate to: spoken language(s), place of birth, residence, nationality, age, gender, level of education, or field of education.
- the metadata may relate to a relationship with an organisation.
- the metadata may relate to: Geography, Industry sector, Business function, or Job-level.
- the metadata relates to an employment status or role.
- This may be at least one of: full or part-time employment, consultancy, prospective employment, or retirement; alternatively, or in addition, it may be at least one of: employment location, level, role, function, field, or type.
- the metadata relates to a property of the organisation.
- the property of the organisation may comprise at least one of: company; industry; sector; location; or size.
- the metadata may relate to performance of the target group or individual.
- the performance may comprise at least one of: sales volume, profit, or public ranking.
- the method further comprises editing the metadata of the target group metrics data.
- the metadata may relate to an assessment of a property of the target group.
- the value of the metadata may be identical for target and reference groups.
- the method further comprises outputting comparison data comprising an aggregate of resulting comparison data.
- the method further comprises separating the aggregated resulting comparison data into constituent parts.
- the method further comprises filtering the resulting comparison data. This may be in dependence on a selected further item of metadata. Alternatively, this may be in dependence on a selected comparison aspect.
- the method further comprises presenting a series of prior comparison data outputs in the form of a carousel; or, alternatively (or in addition) in the form of a slide deck.
- the method further comprises periodically updating the database of reference metrics data.
- the method further comprises periodically updating the comparison data.
- the method further comprises generating a comparison parameter in dependence on the comparison data, comprising a value for the proportion of the target group having metrics data values in a pre-determined segment of the reference group metrics data value distribution.
- the comparison parameter may comprise a percentage, fraction or segment.
- the comparison parameter comprises at least one of: top decile, bottom decile, top quartile, bottom quartile, top percentile, or bottom percentile.
- the method further comprises providing a commentary relating to at least one element of the comparison data.
- the commentary is adapted to provide information correlating the metrics data value or value range to an outcome.
- providing test or target group metrics data comprising metrics data, preferably obtained from a particular measurement series, with each metric datum having (preferably a plurality of) metadata associated with it;
- providing reference group (such as benchmark group) metrics data, the reference (benchmark) group metrics data comprising metrics data from a plurality of (further) target groups, for which only metrics data having a predefined combination of metadata associated with them are included;
- by comparing a test or target group of individuals against a reference group, such as a benchmark group - the groups being defined by metadata associated with the respective metrics data, thereby allowing a specific reference benchmark group to be chosen by selecting a predefined combination of metadata - useful information may be extracted from a set of individuals' data.
- the comparison may allow evaluation of a group or groups as a whole, rather than an individual, and therefore may enable identification of features that may be systemic rather than individual.
- test or target group such as an organisation or part thereof
- reference group such as a benchmark group
- the metrics data in the reference group is preferably drawn from a larger group or pool that includes metrics data from a plurality of (further) target groups.
- the pool from which metrics data for the reference group is selected may include metrics data from a large range of sources.
- the data pool may include data from target groups that are for instance from different companies, from different nations, and/or taken at different times.
- a predefined combination of metadata is preferably used to select a reference group (or benchmark group).
- a user may specify metadata of interest.
- a selection may for example include metrics data that has a particular value in a particular type of metadata
- the predefined combination of metadata may include metrics data that has a particular value in a particular type of metadata, and any value in any other type of metadata. If only one type of metadata is in use, the predefined combination may just be a single value.
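The "particular value in a particular type of metadata, and any value in any other type" selection above can be made concrete with a small sketch. This is a hypothetical illustration only; the record layout and the convention that a criterion of `None` means "any value" are assumptions, not part of the patent text.

```python
# Hypothetical sketch of selecting a benchmark group by a predefined
# combination of metadata. A criterion value of None means "any value"
# for that metadata type (an assumed convention for this illustration).

def matches(record_metadata, combination):
    """True if the record satisfies every constrained metadata type."""
    return all(value is None or record_metadata.get(key) == value
               for key, value in combination.items())

records = [
    {"score": 51, "meta": {"sector": "finance", "region": "EU", "level": "senior"}},
    {"score": 44, "meta": {"sector": "finance", "region": "US", "level": "junior"}},
    {"score": 58, "meta": {"sector": "retail",  "region": "EU", "level": "senior"}},
]

# Fix one type of metadata; leave the others unconstrained.
combination = {"sector": "finance", "region": None, "level": None}
benchmark = [r for r in records if matches(r["meta"], combination)]
```

If only one type of metadata is in use, `combination` degenerates to a single key/value pair, matching the "single value" case noted above.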
- Means for defining the combination of metadata used for selection may include a user input, for example via a web interface, the selection being made with a mouse, keyboard, or other input device.
- a plurality of metadata may be combined as a single new instance of metadata.
- Metadata is preferably descriptive of the data contents. Metadata may include values, or tags or other descriptors.
- one type of metrics data is selected for comparison. If more than one type of metrics data is selected for comparison, then preferably metrics data of the same type is compared. In some embodiments, requests to combine metrics data of different and/or incompatible types are detected and optionally prevented.
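The detection-and-prevention step above can be sketched as a guard on a combining function. This is an assumed illustration: the type labels, the exception, and the simple set-based check are all hypothetical, standing in for whatever compatibility rules an embodiment would actually apply.

```python
# Sketch (assumed, not from the patent text) of detecting and refusing the
# combination of metrics data of different types.

class IncompatibleMetricsError(ValueError):
    """Raised when metrics series of different types are combined."""

def combine_metrics(*series):
    """Combine metrics series only if they all share the same type label."""
    types = {s["type"] for s in series}
    if len(types) > 1:
        raise IncompatibleMetricsError(f"cannot combine types: {sorted(types)}")
    return {"type": types.pop(),
            "values": [v for s in series for v in s["values"]]}

a = {"type": "numerical_reasoning", "values": [52, 61]}
b = {"type": "numerical_reasoning", "values": [47]}
c = {"type": "verbal_reasoning", "values": [55]}

merged = combine_metrics(a, b)          # same type: allowed
try:
    combine_metrics(a, c)               # mixed types: detected and refused
    mixed_allowed = True
except IncompatibleMetricsError:
    mixed_allowed = False
```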
- the method of providing access to comparison metrics data relating to the comparison of a test or target group with a reference group, such as a benchmark group, comprises: providing target group metrics data comprising metrics data, preferably obtained from a particular measurement series, with each metric datum having (preferably a plurality of) metadata associated with it; and providing reference group (such as benchmark group) metrics data, the reference (benchmark) group metrics data comprising metrics data from a plurality of (further) target groups, for which only metrics data having a predefined combination of metadata associated with them are included;
- aspects of the invention may be combined to produce an analytics system for comparison of metrics data - such as that obtained from assessment testing or assessment data - between a test or target group and one or more reference groups, such as benchmark groups.
- a reference or benchmark group that includes data from a plurality of target groups may be representative of a wider range of scenarios and possibilities than data from a single target group, and comparison against the former in preference to the latter may help identify features that are unusual. Comparison across multiple target groups may allow for a wider scope of reference, enabling more robust and meaningful comparisons. The information gained by the comparison may provide basis for decisions and may allow identification of conflicts.
- comparison of a target group against a benchmark group is made against a subset of metrics data.
- the subset may be user-selectable.
- the comparison of a target group against a benchmark group is aggregated and/or determined at a first level of detail or coarseness, optionally at a second level of detail or coarseness.
- At least one database for storing each of the target group or user's assessment or metrics data, the reference or benchmark group metrics data, and the comparison of the distribution of the metrics data values.
- the sets of metrics metadata and metrics data values, user data and benchmark data values may be stored in separate databases; alternatively, multiple of the metrics metadata and metrics data values, and/or of the user data and benchmark data values may be stored in a single database.
- At least one server for housing and/or controlling the at least one database.
- a plurality of servers may also be used, for example in a distributed or redundant arrangement.
- At least one server for processing the assessment and benchmark data, and adapted to access the data from the one or more databases.
- At least one server for providing access for a client or user either directly or via a computer, for example via a web interface, to the results of processing the assessment and benchmark data.
- the metrics data is obtained from assessments relating to at least one personal characteristic such as: aptitude, ability, competency, skill, personality, knowledge, motivation and behaviour.
- a tool which caters for a broader category of assessment data than psychometric or personality testing, and which can include all of the above-mentioned classes.
- the target group may be a group of individuals that all relate to an institution (such as a company, charity, industry body or other organisation) in a particular way.
- the individuals that form a target group are preferably subject to substantially the same series of measurements (such as a set of assessments or tests).
- Examples of the relationship between the individuals and an institution may include employment status or role, for example at least one of: full or part-time employment; consultancy; prospective employment; retirement; or any other appropriate relationship.
- benchmark data through which the user can get a sense of the "benchstrength" of, for example, their institution or group against a number of analytic indices; and providing a tool that looks at groups broken down using a number of filters related to demographics, business function and other categories.
- the reference group may be a "benchmark" group (for example, a "best-in-class" or "best-of-breed" group).
- the benchmark group is preferably a group of individuals that each relate to a respective one of a plurality of institutions (such as a company, other corporate body or organisation).
- the individuals contributing metrics data are a representative worldwide selection of individuals.
- Each individual may be categorised by one or more parameters such as: spoken language(s), place of birth, residence or nationality.
- the types of metadata include at least one of: characteristics of the metrics data; characteristics of the relationship between the individual and the institution; characteristics of the institution; and/or characteristics of the individual. For example:
- Characteristics of the metrics data may include at least one of: type of test; type of parameter tested; date of test; location of test; language in which test was conducted; or further information relating to the test or the metrics data,
- Characteristics of the relationship between the individual and the institution may include at least one of: reason for conducting the test; characteristics of the occupation to which the test relates (location, level, role, function, field, type); and further information relating to the relationship between the individual and the institution,
- Characteristics of the institution may include at least one of: company; industry; sector; location; size of institution; and further information relating to the institution,
- Characteristics of the individual may include at least one of: nationality; country of residence; age; gender; ethnic origin; level of education; field of education; language; culture; or further information relating to the individual.
- the types of metadata may further include information relating to the outcome of the test. For example, after testing an applicant, the following may be steps in progression of the test outcome: offer of a position; acceptance of an offer; successful employment for a specific duration; or progression of the employee.
- the types of metadata may further include information relating to outcomes, especially business outcomes, or measures of performance, for example at least one of: sales volume; profit; public ranking; or further information relating to business outcomes or measures of performance.
- the information may relate specifically to an individual, it may relate to a group of individuals, or it may relate to a group to which an individual is associated.
- the predefined combination of metadata may be chosen to select a very specific benchmark group. This may allow comparisons across organisations, across groups within organisations, across stages in the progression of the relationship between organisations and individuals, across time periods, across groups of success, or across many other groups.
- the wide range of choice in selection of a benchmark group may allow tailoring a comparison to a wide range of situations and investigations, and may therefore provide a very versatile tool.
- the ability to tailor a comparison to a very specific situation or investigation may provide highly meaningful comparisons, and therefore result in a powerful analysis tool.
- comparison of the distribution of the metrics data values between the target and benchmark group results in the generation of a graphical display, for example a plurality of histograms, to enable the user to extract insight from the "benchstrength" view presented.
- a graphical display for example a plurality of histograms
- suitable displays may include horizontal and vertical bar charts, line charts, pie charts, area charts, 3D charts, surface charts, or other charts.
- a measure may be extracted from the comparison of the distribution of the metrics data values between the target and reference group. For example, a value for the proportion or percentage of the target group that have metrics data values in a pre-determined segment of the reference group metrics data value distribution may be calculated.
- the pre-determined segment may be the top decile, the bottom decile, the top quartile, the bottom quartile, the top percentile, the bottom percentile, or any other percentage, fraction or segment.
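The comparison parameter described in the two bullets above can be illustrated with a short calculation. This is a sketch under assumed conventions (nearest-rank percentiles, strict "above the cutoff" membership, toy data); the patent does not prescribe these choices.

```python
import math

# Illustrative calculation (assumed conventions, toy data) of the proportion
# of the target group whose values fall in a pre-determined segment - here
# the top quartile - of the reference group's value distribution.

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted list (0 < p <= 100)."""
    k = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[k]

def proportion_in_top_segment(target_vals, reference_vals, fraction=0.25):
    """Share of target values strictly above the reference cutoff for the
    top `fraction` of the reference distribution (0.25 -> top quartile)."""
    cutoff = percentile(sorted(reference_vals), 100 * (1 - fraction))
    return sum(1 for v in target_vals if v > cutoff) / len(target_vals)

reference = list(range(1, 101))      # toy reference distribution: scores 1..100
target = [10, 80, 90, 99]            # toy target group scores

share = proportion_in_top_segment(target, reference, fraction=0.25)
```

Changing `fraction` (0.1 for the top decile, 0.01 for the top percentile) and inverting the comparison for bottom segments covers the other segments listed above.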
- a commentary or narrative is included in the display; more preferably, the commentary or narrative relates to an element of a chart, such as a bar in a bar chart or a segment in a pie chart.
- the commentary or narrative may also relate to a particular metrics data value or value range.
- the commentary or narrative provides information correlating the metrics data value or value range to an outcome, for example a business outcome.
- the commentary or narrative may be provided in at least one of: a mouse-over text field; a hover-over text field; a static or dynamic text panel; a linked document; a linked web page; and a linked application page.
- the data included in the benchmark group metrics data is updated periodically, preferably every other year, annually, every 6 months, every 4 months, every 3 months, every 2 months, monthly, or weekly.
- the data included in the benchmark group metrics data is recent, for example less than 20 years, 10 years, 7 years, 6 years, 5 years, 4 years, 3 years, 2 years, or 1 year old; more preferably, the benchmark group metrics data is less than 6 months, 4 months, 3 months, 2 months, or 1 month old, or at most one week old.
- an apparatus for generating a reference distribution of metrics data comprising: metrics data, e.g. from a plurality of measurement series; each metric datum having associated with it a plurality of metadata; wherein the metadata comprises at least one outcome, for example a business outcome.
- the characteristics of particularly successful or unsuccessful groups may be identified. This may allow optimisation of groups to reflect characteristics that have the potential to be successful. In particular, identification of an individual that would bring a group closer to an 'ideal profile' may be possible.
- the metrics data is obtained from assessment tests relating to at least one of: aptitude, ability, competency, skill, personality, knowledge, motivation and behaviour.
- the outcome for example a business outcome, is determined from at least one of: sales volume; profit; public ranking; or further information relating to business outcomes or measures of performance.
- the business outcome may relate specifically to an individual, it may relate to a group of individuals, or it may relate to a group to which an individual is associated.
- the metadata further includes one or more of: characteristics of the metrics data; characteristics of the relationship between the individual and the institution; characteristics of the institution; and characteristics of the individual.
- the metadata may further include information relating to the outcome of the test. For example after testing an applicant, the following may be steps in progression of the test outcome: offer of a position; acceptance of an offer; successful employment for a duration of, for example, at least twelve, six, four, three or two months or at least one month; progression of the employee into for example a management role; or further outcomes of the test throughout the duration of the relationship between the individual and the organisation.
- a combination of metadata may be chosen to select a very specific benchmark group. This may allow comparisons across organisations, across groups within organisations, across stages in the progression of the relationship between organisations and individuals, across time periods, or across many other groups.
- the ability to tailor a benchmark group to a very specific situation or investigation may provide highly specific benchmark groups, and therefore a stronger correlation between the characteristics and the outcome.
- the wide range of choice in selection of a benchmark group may allow tailoring to a wide range of situations and investigations, and therefore may provide a very versatile tool.
- a measure of personal potential comprising combining personality metrics
- apparatus for generating a measure of personal potential comprising combining personality metrics
- a measure for the individual's potential for success may be defined. Certain components of the metrics data from assessment testing may be combined into aggregate parameters that may be indicative of the potential of an individual. Conversely, a risk parameter may be defined based on a combination of metrics data. This might be especially useful for assessing individuals who have not been in full-time employment in the past, and for whom only limited confidence can therefore be placed in assessments relating to work experience, work skills, or work-related competencies.
- aggregated metrics data for an individual or a group may be used to define a 'fingerprint' for that individual or group. Comparison may therefore be made between the aggregated metrics for the individual or group and those for benchmark groups. The differences between these values may be identified, each difference independently or as an aggregate difference. Identifiers, system or software flags may be generated in dependence on the extent and/or nature of the identified differences. These may result in the generation of summary or interpretive commentary.
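The 'fingerprint' idea above can be sketched briefly. This is an assumed illustration: aggregating by per-aspect means, differencing the aggregates, and flagging where a difference exceeds a threshold are demonstration choices, not the patent's prescribed method.

```python
# Hypothetical sketch of the 'fingerprint' comparison described above:
# aggregate each group's metrics per aspect, difference the aggregates,
# and flag aspects where the difference exceeds a threshold (assumed value).

def fingerprint(group):
    """Aggregate a group's metrics into per-aspect means."""
    aspects = group[0].keys()
    return {a: sum(member[a] for member in group) / len(group) for a in aspects}

def flag_differences(target_fp, benchmark_fp, threshold=10.0):
    """Per-aspect differences, plus flags for differences beyond threshold."""
    diffs = {a: target_fp[a] - benchmark_fp[a] for a in target_fp}
    flags = [a for a, d in diffs.items() if abs(d) > threshold]
    return diffs, flags

target = [{"ability": 70, "motivation": 40},
          {"ability": 74, "motivation": 44}]
benchmark = [{"ability": 60, "motivation": 45},
             {"ability": 58, "motivation": 47}]

diffs, flags = flag_differences(fingerprint(target), fingerprint(benchmark))
```

The flags produced here correspond to the "identifiers, system or software flags" above, and would be the trigger points for any generated summary or interpretive commentary.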
- a range of alternative aggregations of metrics data is provided. These may be selected, submitted by upload or otherwise defined by a client or user of the analytics system, for example according to particular interest, requirements or according to access permissions, optionally set for example by a subscription level.
- User-selected, submitted or otherwise defined aggregations of metrics data may be stored for future retrieval, optionally by other parties.
- user test data may be incorporated into the main body of test or reference data. This may be a condition of use of the analytics system and may occur as part of the comparison process. Future user comparisons may be offered with or without including the user test data in the main body of test data.
- the output results of the comparison of the distribution of the metrics data values between the target group and reference (benchmark) group comprise at least one chart; preferably the chart is a histogram.
- generated charts form a series and are navigable via a carousel display, preferably comprising an active foreground chart and at least one inactive background chart, wherein the background chart is user-selectable and consequently made active and brought to the foreground.
- the apparatus comprises means for selecting a subset of metrics data and/or benchmark group by means of at least one filter process.
- the apparatus further comprises means for applying the same or equivalent filter process to the target group.
- 'test data': test results, metrics data, assessment data, assessment results
- the invention also provides a computer program and a computer program product for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
- the invention also provides a signal embodying a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
- Any apparatus feature as described herein may also be provided as a method feature, and vice versa.
- means plus function features may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
- any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination.
- method aspects may be applied to apparatus aspects, and vice versa.
- any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
- Figure 1 shows an overview of a process for comparing assessment test metrics of a test or target group with those of a reference group
- Figure 2 shows an example of the results of a comparison between a test or target group and a reference group
- Figure 3 shows a system that is designed to provide the comparison
- Figure 4 shows the steps in obtaining a display
- Figure 5 shows an overview of a process for identifying the characteristics of particularly successful or unsuccessful groups
- Figure 6 shows an overview of a process for generating aggregate parameters that are a measure of, for example, personal potential
- Figure 7 shows the user welcome screen
- Figure 8 shows an example of the main benchmark selection interface
- Figure 9 shows an example of the benchmark information screen
- Figures 10 to 13 show examples of benchmark categories selectable by the user
- Figures 14 and 15 show examples of the data selection interface
- Figures 16 and 17 show examples of the data search options interface in "basic" and "advanced" variants;
- Figure 18 shows an example of the update data function;
- Figure 19 shows the different available options for viewing (benchmarking) the selected data
- Figure 20 shows a further benchmark selection interface
- Figures 21 to 24 show examples of basic benchmarking output display screens
- Figure 21 shows a basic benchmarking output display screen
- Figure 22 shows a display screen with a pop-up commentary
- Figure 23 shows a display with reference groups and target groups
- Figure 24 shows a display for a plurality of metrics
- Figures 25 to 28 show examples of more sophisticated benchmarking output display screens
- Figure 25 shows an example of a benchmarking output screen
- Figure 26 shows a further example of a benchmarking output screen
- Figure 27 shows the carousel feature in use
- Figure 28 shows the slide deck feature in use
- Figure 29 shows an example of the stored views interface
- Figure 30 shows the "drill-down" facility in more detail
- Figure 31 shows a further example of a benchmarking output screen
- Figure 32 shows a further example of a benchmarking output screen
- Figure 33 shows the corresponding drill-down
- Figures 34 and 35 show further examples of benchmarking output screens
- Figure 36 shows an example of a Numerical reasoning benchmark
- Figure 37 shows an example of a design overview with a single platform
- Figure 38 shows an example of a design overview with multiple platforms
- Figure 39 shows an example of a design overview where the analytics application sits within a central system
- Figure 40 shows some possible interactions between different elements of the analytics system
- Figure 41 shows various examples of render charts
- Figures 42 and 43 show examples of charts available via drill-down
- Figure 44 shows functional requirements that relate to user registration for the analytics tool
- Figure 45 shows functional requirements that relate to analytics administration and services
- Figure 46 shows functional requirements that relate to different users viewing the analytics
- Figure 47 shows a 2D grid chart
- Figure 48 shows the elements in the entity model
- Figure 49 shows the elements broken down into sections
- Figure 50 shows the elements in the entity model elements that relate to the 'Saved Query' section
- Figure 51 shows the elements in the entity model elements that relate to different databases
- Figure 52 shows the elements in the entity model elements that relate to content and chart
- Figures 53 to 66 show a high-level view of the design considerations for the introduction of the Analytics application into the Central platform
- Figure 53 shows how Analytics sits within the Central system but sources its data primarily from external databases
- Figure 54 shows the interaction between the Analytics layers (Central, Central Business Layer, WCF Service Layer and Business Layer) with the Analytics Data;
- Figure 55 shows database tables for the Benchmark and index measures
- Figure 56 shows database tables for the Content Metadata
- Figure 57 shows an overview of the Feedback Updates process
- Figure 58 shows the ETL process in outline
- Figure 59 shows the Service Contract in overview
- Figures 60 and 61 show the Data Contracts in overview
- Figures 62 and 63 show some sequence diagrams
- Figure 64 shows the caching service in overview
- Figure 65 shows an example of a suitable class design for the caching implementation
- Figure 66 shows an example of ETL workflow
- Figure 67 shows the Universal Competency Framework Great 8.
- Figure 68 shows a talent profile
- Figure 69 shows the relationship between the SHL Leadership Potential Benchmark and the SHL Leadership Model
- Figure 70 shows an analysis of leadership potential
- Figure 71 shows an analysis of the Leadership potential by sector and geography
- Figure 72 shows an analysis of ability
- Figure 73 shows an analysis of ability by line of business;
- Figure 74 shows the relationship between appetite for risk and resilience to risk
- Figure 75 shows (a) the first and (b) the second perspective of resilience to risk
- Figure 76 shows an example of risk index by industry sector.
- Figure 77 shows an example of risk banding
- Figures 78 to 96 show various further features of the analytics system.
- Figures 97 to 100 show further aspects of the analytics system.
- Figure 1 shows an overview of a process for comparing assessment test metrics of a test or target group with those of a reference group.
- a plurality of individuals 10 participate in assessment testing, the results of which 20 are collected and processed by processor 30 and stored in database 40.
- the assessments may be standard ones, in which the individuals complete a questionnaire designed to draw out particular characteristics of interest. Such assessments may be computerised or paper-based questionnaires subsequently scanned or otherwise digitised for processing.
- an interested and authorised party may use client computer 50 to access services of the analytics system - such as a benchmark tool - provided by server 60 which allow the characteristics of the test or target group 70 (the individuals of which have also participated in assessment testing) to be compared against those of a reference or "benchmark" group 72, 73, which may be considered as reference groups.
- the characteristics of an individual 71 may also be compared against those of the population 75.
- the benchmark tool therefore allows a user of computer 50 to compare a particular test or target group 70 or an individual 71 against a benchmark group 72, 73.
- server 60 is configured to allow only very restricted access to the data of database 40.
- client computer 50 may only access database 40 indirectly, server 60 only providing aggregated summary information and the results of comparative calculations, for example via a web interface and/or with suitable firewalls and other network access restrictions.
- Some configurations make use of a secondary database - which may be a partially mirrored or replicated version of database 40 or only store aggregated data - to further isolate database 40 from client computer 50.
- Suitable computer servers may run common operating systems such as the Windows systems provided by Microsoft Corporation, OS X provided by Apple, various Linux or Unix systems or any other suitable operating system.
- Suitable databases include ones based on SQL, for example as provided by Microsoft Corporation or those from Oracle or others.
- Remote access to the analytics system may be provided via one or more web servers configured to provide a website or other remotely-accessible interface.
- Web interfaces and other code may be written in any suitable language including PHP and JavaScript.
- a Microsoft .Net based stack may be used.
- Figure 2 shows an example of the results of a comparison between a test or target group and a reference group.
- an employer may wish to compare the characteristics of job applicants the employer attracts (the particular test or target group) against those of the applicants the industry attracts overall (the reference group).
- Such a comparison could, for example, give an indication as to whether the job applicants the employer attracts compare unfavourably to the job applicants the industry attracts overall, and the employer might consequently wish to re-evaluate their recruitment strategy.
- the relative proportions 100 of members of the respective test and reference groups with a test metric T1 102 are plotted as a histogram or bar chart.
- the test metric scores (histogram bars) relating to bank A 104 are shown alongside those of the group relating to the banking sector overall 106.
- the values are grouped in range 'bins'.
- the proportion 108 in bank A and the proportion 110 in the banking sector overall which fall within the 'medium' bin of test metric T1 are the same, while a far greater proportion 112 in bank A fall within the 'high' bin of test metric T1 than in the banking sector overall 114.
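The binned proportions plotted in such a histogram can be sketched as follows (the bin edges and scores are illustrative assumptions, not values from the specification):

```python
# Sketch of binning metric values into 'low'/'medium'/'high' ranges and
# computing each group's proportion per bin, as plotted in Figure 2.
# Bin edges and the sample scores are illustrative assumptions.

def bin_proportions(scores, edges=(40, 70)):
    """Return the proportion of scores falling in each range bin."""
    bins = {"low": 0, "medium": 0, "high": 0}
    for s in scores:
        if s < edges[0]:
            bins["low"] += 1
        elif s < edges[1]:
            bins["medium"] += 1
        else:
            bins["high"] += 1
    n = len(scores)
    return {k: v / n for k, v in bins.items()}

bank_a = bin_proportions([35, 65, 72, 80, 55, 90, 75, 60])
sector = bin_proportions([35, 45, 50, 55, 60, 65, 75, 30])
```

Plotting the two dictionaries side by side per bin yields the paired bars of the comparison chart.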
- test metric T1 may comprise one or more metrics relating to aptitude, ability, competencies, skills, personality, knowledge, and/or behaviour, obtained by a suitable assessment test.
- the data under comparison may also be aggregate parameters based on these metrics. These are created according to equations which translate test taker assessment results into an interpretation model, for example relating to sales, teamwork, leadership or risk profiles. In some variants they can also be based on more than one result from more than one test or assessment instrument.
- the data for each individual of a group contributes to a distribution for the group.
- the user's target group distribution is compared to a reference distribution. By comparing a particular target group distribution against a reference group distribution more information can be extracted from already available data. The comparison may provide visibility and inform strategic decisions.
- the reference group may relate in a particular way to the target group.
- the reference group may relate to the same industry, or the same nationality, or the same career level.
- the reference and target group both relate to the same industry and to applicants undergoing testing.
- comparison of annual graduate job applicant test results may be drawn up to assess attractiveness of the employer. This may however depend upon external factors such as economic environment or media coverage of an industry. For example, during an industry-wide advertising or public relations slump fewer highly qualified graduates might apply to a particular industry overall. If an employer compares its graduate job applicants of one year to those of a second year, then it might appear that the employer has suddenly attracted fewer highly qualified applicants.
- Comparison of the distribution of characteristics of a test or target group to those of a particular reference group may provide further information of interest. For example, instead of comparing the distribution of a characteristic of the middle management of a company to the distribution of the same characteristic in the middle management of the overall industry, an analysis may rather compare the distribution of the same characteristic of the middle management of a company to the distribution of the same characteristic in the middle management associated with a particular role. If, for example, an aim of a company is to develop a culture that resembles a 'sales' mentality, then a strategy could be to assemble a middle management group that is similar in characteristics to a 'sales' reference group. In this manner comparisons across groups that would normally not necessarily relate to one another may be a useful tool.
- Comparisons of groups within an organisation may also provide meaningful information. For example, comparison of the characteristics of a present-day sales group to that of the sales group of one year ago and the sales group of two years ago may help identify changes that could be a cause for problems. Other time periods may also be compared. In some alternatives a series of characteristics over successive time periods may be compared to allow for the tracing of the evolution of group characteristics. This may allow for overall group characteristics to be compared even when the individual members of the group undergo changes in their own characteristics (for example, as a result of training) or when the constituent members of the group change due to individuals joining or leaving the group.
- the proportion (e.g. the percentage) of the target group that is in the top quartile of the reference group may be calculated. If more detailed information is required, then for example values for the top quartile and the top decile may be helpful; alternatively, values for the top and bottom quartile, for example, could be informative.
- the top quartile value may be considered as expressing the "breadth" of a metric; the top decile value may be considered as expressing the "depth" of a metric.
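The "breadth" and "depth" measures above can be sketched as follows (a simple nearest-rank percentile and illustrative scores are assumed for the example):

```python
# Sketch of the 'breadth'/'depth' measures: the proportion of a target
# group scoring above the reference group's top-quartile and top-decile
# cut-offs. Nearest-rank percentile is used purely for illustration.

def percentile(values, pct):
    """Nearest-rank percentile of a list of values."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def proportion_above(target, reference, pct):
    """Proportion of target scores above the reference percentile cut-off."""
    cutoff = percentile(reference, pct)
    return sum(1 for t in target if t > cutoff) / len(target)

reference = list(range(1, 101))        # illustrative reference scores 1..100
target = [70, 80, 85, 92, 96, 99]
breadth = proportion_above(target, reference, 75)  # above top-quartile cut-off
depth = proportion_above(target, reference, 90)    # above top-decile cut-off
```

Reporting both values together distinguishes a broadly strong target group from one driven by a few exceptional individuals.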
- the metric data contains test responses that reflect characteristics of individuals.
- 'metrics metadata': further information that relates to the respective set of metrics data
- the metrics metadata can provide information for associating the metrics data with a particular reference group. Such information can, for example, relate to the circumstance under which the individual is tested, e.g.:
- Test reason (applicant pre-screen, applicant selection, employee development, HR research)
- Job level e.g. graduate, lower management
- Job type e.g. sales, research and development, finance
- the metrics metadata can potentially relate to the individual being tested, e.g.:
- the metrics metadata may include further aspects, such as the outcome of the test. For example, after testing an applicant, the following may be steps in the progression of the test outcome:
- This type of information can provide reference groups characterised in terms such as 'candidates that entered into permanent employment'. Monitoring further along the career progression of the employee can provide further useful information, such as reference groups of 'graduate applicants who progressed to upper management roles'. Such reference groups may provide helpful information, not only regarding the characteristics of successful individuals, but also for comparing groups. For example, comparison of a group of unsuccessfully employed candidates (applicants that accept an offer but do not complete a probation period) to a reference group of unsuccessfully employed candidates may help identify systematic problems in the recruitment process.
- the metrics metadata may also include other information not supplied by the metric data, for example sales volume, profit, or other business outcomes or measures of performance.
- reference groups such as 'employees in teams of above-average profitability' or 'managers of groups with high sales volume' could be formed.
- Such reference groups may provide helpful information in identifying how especially successful groups are composed.
- the measures of performance may be obtained from an external source, such as a public ranking (e.g. FORTUNE, Forbes, or other rankings).
- the correlation of an outcome to metric data is a useful tool.
- the analysis may be at a group level, where for example a particular combination of individuals has the potential to perform especially well; or it may be at the individual level, where for example a particular test result in a graduate applicant indicates the individual has the potential to perform especially well.
- particular personality metrics may be combined to determine a measure of potential and extrapolated to make a prediction.
- certain components of the metrics data may be combined into aggregate parameters that may be indicative of the potential of an individual.
- a risk parameter may be defined based on a combination of metrics data.
- An example of where this might be especially useful is in the assessment of graduates. An individual who has not been in full time employment in the past may not have substantial work experience, work skills, or work-related competencies. Based, for example, on an individual's knowledge, personality, and motivation, a measure for the individual's potential for success (or risk for failure) may be defined.
- by correlating individuality and success, for example by analysing a reference group of highly successful people, it may be possible to determine an 'ideal individual'.
- the correlation could be particularly reliable if the reference group is narrowed down, for example to a particular job (occupation, task, role, or situation), in a particular industry, in a particular country.
- such an ideal individuality profile may not necessarily reflect high test results in all areas, and an individual with exceptional scores in an area might not be highly suitable for a particular job.
- By comparing an individual's test results to the 'ideal individual' it might be possible to predict which individuals have the potential to achieve well.
- FIG. 3 shows a metrics data database 200 that contains all the client metrics data 202.
- a benchmark database 204 contains all the benchmarks (benchmark groups).
- Aggregate parameters (such as risk) are calculated based on data from the metrics data database 200 and stored in an aggregate parameters database. In alternative arrangements, aggregate parameters may be stored in the metrics data database 200 alongside the metrics data, or elsewhere.
- Metrics metadata are stored in a metrics metadata database, or alternatively in the metrics data database 200 alongside the metrics data, or elsewhere.
- User data is stored in a separate user database 206.
- the storage location of the aggregate parameters and other data may be chosen based on the physical location of the applications that are to query the data, taking account of the need to minimise latency effects. Also, in some variants, the aggregate parameters and other data may be used by services other than the server 60. As the aggregate parameters or other data are based on a subset of the metrics data in database 200, the schema may be different, in which case they may be kept in a separate database.
- Metrics metadata may not always be stored directly with the metrics, which may, for example, be for historical system reasons.
- a shared metrics metadata database may be implemented to be shared by different testing systems, data being aggregated from multiple systems.
- the term 'benchmark' refers to a 'best-in-class' group (e.g. the ten most profitable companies in an industry), whereas a 'norm group' is representative of a specific group (e.g. an industry) but not necessarily a ranked selection.
- the term 'benchmark' is used in reference to norm groups as well as to best-in-class groups or any further types of reference groups.
- the benchmark database 204 may contain metrics data from the metrics database 200 as well as aggregate parameters.
- the metrics data that is included in the benchmark database is selected to be representative of the reference groups defined by the metrics metadata. In particular, not every data set in the metrics data database 200 is included in the benchmark database 204. For example, if the metrics data were to have an over-representative proportion of data from the US, then not all of the US data would be included. Further, data may be excluded if it does not satisfy data quality standards.
- the selection of data for the benchmark database 204 may occur automatically according to a pre-defined set of rules, or it may be done manually or semi-automatically or in any other manner.
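One possible automatic selection rule of the kind described above, capping the share any one region may contribute to the benchmark, might be sketched as follows (the field name, cap and records are illustrative assumptions):

```python
# Illustrative sketch of capping an over-represented region when
# selecting metrics data for the benchmark database: each region
# contributes at most a target share of the candidate records.
import random

def cap_region(records, region_key, max_share, seed=0):
    """Randomly drop excess records from any region exceeding max_share."""
    rng = random.Random(seed)
    by_region = {}
    for r in records:
        by_region.setdefault(r[region_key], []).append(r)
    cap = int(max_share * len(records))
    selected = []
    for region, group in sorted(by_region.items()):
        if len(group) > cap:
            group = rng.sample(group, cap)
        selected.extend(group)
    return selected

records = [{"region": "US"}] * 80 + [{"region": "UK"}] * 20
selected = cap_region(records, "region", 0.4)
```

A fixed seed makes the selection reproducible between benchmark rebuilds; a manual or semi-automatic process could instead review the proposed drops.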
- the metrics data sets for inclusion in the benchmark database 204 may be stored for the user to access and filter as required to produce reference groups.
- the selected metrics data sets may also be subjected to analysis, and the distribution determined for each metric in each group, and the distributions stored in the benchmark database 204. In this case the user would not access and filter the metrics data sets, only retrieve the required reference group metric distribution.
- the benchmark data may only include data for a pre-determined time period, such as the last five years.
- the update frequency of the benchmark data may, for example, be annual. A very high update frequency increases the maintenance effort and may not provide a significant advantage if the underlying test results change only very slowly. If data selection occurs automatically then a high update frequency is possible; however, automatic data selection may be more susceptible to errors and the benchmark data may not be as robust.
- the user database 206 contains the metrics data sets that belong to the user's particular target groups (for example: graduate applicants who participated in an assessment exercise).
- the data sets associated to a user 208 may be organised into groups or "projects".
- a project is a predefined group of candidates that undergo a predefined assessment or set of tests. Examples of projects could be:
- the user database 206 is refreshed more frequently than the benchmark database 204, for example daily. In this case new test results only appear in an existing project the next day.
- the user 208 can supplement the metrics data sets with metrics metadata. For example, a test result for an employee may be labelled with the employee's job level and job function. This metadata may be stored in the user database 206, and it may also be added to the metrics data database 200 alongside the metrics data.
- Different levels of access rights and available functionality may be defined for different users. For example:
- on-demand users can access the benchmark database 204, but not store, access or use data on the user database 206;
- Another example of different levels of access rights and available functionality for different users may be:
- Figure 4 shows the steps in obtaining a display (by building a comparison). From the start of a new query 920 to obtaining the desired display 922 the following steps may be included in the process:
- a desired display may be saved 932, printed 934 or sent 936 or otherwise submitted for further use. If saved queries are available, they may be loaded 938 to obtain the desired display.
- Figure 5 shows an overview of a process for identifying the characteristics of particularly successful or unsuccessful groups.
- a plurality of individuals 10 participate in assessment testing, the results of which 20 are collected and processed by processor 30 and stored in database 40. Further, data 940 that relates to a particular outcome, for example a business outcome, is collected and processed by processor 30 and stored in database 40 (or in alternative processor and/or database).
- a computer 50 may be used to access services (for example provided by server 60) which allow selection 942 of groups with particular outcomes (such as business success) and analysis 946 of characteristics of the group. This may allow optimisation of groups to reflect characteristics that have the potential to be successful 948. Further, the characteristics of individuals that would bring a group closer to an 'ideal profile' may be identified.
- Figure 6 shows an overview of a process for generating aggregate parameters that are a measure of, for example, personal potential.
- a plurality of individuals 10 participate in assessment testing, the results of which 20 are collected and processed by processor 30 and stored in database 40. Further, data 940 that relates to a particular outcome, for example a business outcome, is collected and processed by processor 30 and stored in database 40 (or in alternative processor and/or database).
- the processor 30 processes the test results 20 (and potentially the outcomes data 940) to generate a new measure, or a variety of new measures, that are particularly representative of the individual.
- as test results may include a large number of different measures, it is useful to distil the test results into representative values 950, 952 that are available for further analysis, for instance comparison with other individuals and/or groups.
- a subset of test results of an individual may be combined.
- An average may be calculated over all or some test results, and a difference may be calculated.
- the standard deviation of the test results of the individual may be used for calculation of an aggregate parameter.
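The mean and standard deviation aggregations described above can be sketched as follows (the sample scores are illustrative):

```python
# Sketch of two aggregate parameters for an individual: the mean of the
# test results and their population standard deviation (a measure of
# how even or 'spiky' the profile is). Scores are illustrative.
import math

def aggregate_parameters(results):
    """Return (mean, population standard deviation) of test results."""
    mean = sum(results) / len(results)
    variance = sum((r - mean) ** 2 for r in results) / len(results)
    return mean, math.sqrt(variance)

mean, spread = aggregate_parameters([60, 70, 50, 80])
```

The pair (mean, spread) is one example of a representative value 950, 952 that can then be compared across individuals or groups.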
- a user 208 logs on to a platform 210 where the user 208 has already been granted suitable access to the appropriate databases.
- Figure 7 shows the user welcome screen once the user has logged on and is beginning to use the talent analytics system. As will be described in more detail below, the user is presented with options for selecting benchmarks, for selecting data to be benchmarked and for accessing previously saved results.
- the user 208 starts the application and selects the desired query in an entry screen.
- Figure 8 shows an example of the main benchmark selection interface.
- Several benchmarks are available for selection by the user, including:
- the user benchmark selection for a desired query is guided by means of a directed menu with pre-formulated propositions.
- Figure 9 shows an example of the benchmark information screen displayed when the corresponding benchmark is selected by the user.
- Benchmarks such as Leadership Potential and Competency benchmarks, which are based on personality assessments such as OPQ32, allow for more detailed or nuanced benchmarking, accessible via a "drill down" facility, to permit investigation of benchmarking to specific detailed criteria.
- Benchmarks such as Verbal, Numeric and Inductive Reasoning benchmarks are based on simpler assessments such as "Verify". These assessments provide a coarser assessment, without a "drill down" option.
- Figures 10 to 13 show examples of benchmark categories selectable by the user, arranged by:
- the user may access a benchmark via a query tool as described above.
- the user may be offered a list of benchmarks on a home (or library) screen and/or the user may start by looking at their data/projects and then selecting the index / benchmark they want to compare against.
- Figures 14 and 15 show examples of the data selection interface.
- User test data may be searched for by name (optionally filtered by compatibility with the selected benchmark) and/or by other parameters such as date, location and test name and/or type; sets of data may be ordered by, for example, name, date, location, source (test name and/or type), number of test takers (candidates).
- a colour may be assigned to the selected data set; multiple selections may be made and assigned different colours (for identification in subsequent views), and/or variously combined by assigning the same colour.
- Figures 16 and 17 show examples of the data search options interface in "basic" and "advanced" variants, the former providing a simple keyword search, the latter further options.
- if a target group is smaller than a pre-defined minimum, for example ten individuals, then its display in the list may be suppressed or it may be marked as unavailable.
- the list of available test data groups may be filtered depending on the selection of the reference group. For example, if "Switzerland" is selected as the category under "geography", only Swiss test takers (or only Swiss test data groups) could be included. Display of projects older than a pre-defined age (for instance older than 5 years) may be suppressed. Selection of a project may allow display details to be determined, for instance the colour of the bar in a bar chart. Multiple groups may be combined for display as a single group. A plurality of groups may be selected and displayed as individual groups. Options may be provided to clear a selection, save a selection, update the chart display, navigate to a previous selection, or perform other operations.
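The filtering and suppression rules described above might be sketched as follows (the field names, minimum size and maximum age are illustrative assumptions):

```python
# Sketch of filtering the available test data groups by the selected
# reference-group geography, and suppressing groups that are too small
# or too old. Field names and thresholds are illustrative assumptions.

MIN_SIZE = 10       # minimum individuals for a group to be displayed
MAX_AGE_YEARS = 5   # suppress projects older than this

def selectable_groups(groups, geography, current_year):
    """Return names of groups eligible for display in the selection list."""
    out = []
    for g in groups:
        if g["geography"] != geography:
            continue
        if g["size"] < MIN_SIZE:
            continue
        if current_year - g["year"] > MAX_AGE_YEARS:
            continue
        out.append(g["name"])
    return out

groups = [
    {"name": "CH grads 2012", "geography": "CH", "size": 40, "year": 2012},
    {"name": "CH interns",    "geography": "CH", "size": 6,  "year": 2012},
    {"name": "UK grads 2012", "geography": "UK", "size": 50, "year": 2012},
    {"name": "CH grads 2005", "geography": "CH", "size": 30, "year": 2005},
]
visible = selectable_groups(groups, "CH", 2012)
```

Suppressed groups could instead be shown greyed-out and marked as unavailable, per the alternative described above.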
- the selection of the reference group may include one, two or more selection fields, such as: career level (applicant, employee, management); benchmark criteria (geography, industry).
- Benchmark queries can be grouped into pre-formulated propositions, such as: analysis for 'quality of hire'.
- a reference group is associated to the proposition, and may be narrowed down further by user selection.
- Figure 18 shows an example of the update data function.
- the test data may also be 'edited' or 'backfilled' to add further information.
- This feature (accessed by the user via the 'pencil' icon adjacent a data entry) may be used when the uploaded data is missing (known) information, for example location or category details, which once added to the user data may allow for improved benchmarking.
- Figure 19 shows the different available options for viewing (benchmarking) the selected data:
- Figure 20 shows a further benchmark selection interface, wherein the selected benchmark or sub-category thereof may be identified more precisely for comparison with the user test taker data.
- the "global" benchmark across all industry sectors has been selected and assigned the colour "green"; benchmark selection may be more granular by selecting subcategories either singly or in combination.
- Other means for selecting groups may be presented, for example a cascade or tree structure, which allows for drilling down into the data or filtering for a particular selection.
- Subgroups may be combined (e.g. combine data for banks and insurance into a group and display as a single group), and a plurality of groups may be selected for display (e.g. display data for banks and insurance as individual groups).
- Selection of a project may allow determining display details, for instance colour of the bar in a bar chart. Options may be provided to clear a selection, save a selection, update the chart display, navigate to a previous selection, or perform other operations.
- the benchmarking category currently being used is indicated by a pin icon.
- test taker data is used in the benchmarking; a "Filter my data” option is optionally provided which allows for a subset of the selected user test taker data to be used, for example matching the selected benchmark subcategory.
- a minimum number of test takers (typically 30) is required for benchmarking to be performed; otherwise the user is informed that insufficient test data exists. This is especially useful in those embodiments which allow multiple filters to be applied.
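As a minimal illustrative sketch (not part of the specification), the minimum-group-size rule above could be enforced before a benchmarking run; the threshold of 30 matches the example given, and the function name is hypothetical:

```python
# Hypothetical check: refuse to benchmark when the filtered selection
# contains fewer than the minimum number of test takers.
MIN_TEST_TAKERS = 30

def can_benchmark(selected_test_takers):
    """Return True if enough test data exists for benchmarking."""
    return len(selected_test_takers) >= MIN_TEST_TAKERS

takers = [{"id": i} for i in range(12)]  # only 12 records remain after filtering
if not can_benchmark(takers):
    print("Insufficient test data exists for benchmarking.")
```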
- the user initiates the benchmarking calculation of the selected user test taker data against the selected benchmark by selecting the "Use data” option.
- the default display screen may be a bar chart, with the variables being measured on the x-axis and the magnitude on the y-axis.
- Figures 21 to 24 show examples of basic benchmarking output display screens.
- Figure 21 shows a basic benchmarking output display screen, comprising histogram 500 (or bar chart) where the variable being measured is risk 502, and the magnitude is the proportion of the group (in percent) 504 that falls within one of the risk categories 506 (or bins). Two adjacent bars indicate different industry sectors (here: marketing 508 and finance 510). Other types of charts may be selected and displayed, including line charts 512, pie charts 514, horizontal bar charts 516, and area charts 518.
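A hedged sketch of the bar-chart data preparation just described: for each group, count how many test takers fall into each risk category (bin) and express it as a percentage of the group. The bin labels and data are illustrative assumptions:

```python
from collections import Counter

# Assumed risk bins; the patent's figure uses its own category labels.
RISK_BINS = ["low", "moderate", "high", "very high"]

def bin_proportions(risk_categories):
    """Percentage of the group falling in each risk bin."""
    counts = Counter(risk_categories)
    total = len(risk_categories)
    return {b: 100.0 * counts.get(b, 0) / total for b in RISK_BINS}

marketing = ["low", "low", "moderate", "high", "moderate", "low"]
print(bin_proportions(marketing))
```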
- if one of the selected reference groups is smaller than a pre-defined minimum, for example ten individuals, then display may be suppressed. Small groups may not be highly representative and may not be suitable for use as a reference group.
- Figure 22 shows a display screen with a pop-up commentary 600 that can be displayed when the user moves an indicator or cursor over areas of the chart.
- This hover-over narrative provides information for interpreting the chart.
- the hover-over may also provide information regarding how a distribution differs from an 'ideal' profile as determined from the correlation between performance and a metric. For example, a narrative may indicate "these individuals may provide a 10% increase in sales".
- Figure 23 shows a display with reference groups and target groups.
- the reference groups are (in this example) marketing 900 and finance 902, as already described in an earlier example; here, additionally, two project groups 904, 906 are also displayed.
- the display shows that the risk distribution in the group 'project set 2' 906 is different to both of the references 900, 902, whereas the group 'project set 1' 904 is roughly comparable to the marketing reference group 900.
- Figure 24 shows an example of a display for a plurality of metrics 910.
- a comparison against a corresponding reference group is undertaken, and the percentage of each of two target groups A and B that falls in the top decile of the reference group is calculated.
- group B 916 achieves particularly high test results in metrics 2 and 8.
- in metrics 1 and 8, group B achieves fewer high test results than the reference group 912 and than group A 914.
- groups A and B share a common reference group, but for a different analysis groups A and B may each have a respective reference group.
- An option for selecting this type of display may be included in the application, along with suitable selections of metrics, target groups and reference groups.
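The top-decile comparison of Figure 24 can be sketched as follows; this is a hypothetical pure-Python reconstruction (function names and sample scores are assumptions), computing the reference group's top-decile cutoff and then the percentage of a target group at or above it:

```python
def top_decile_cutoff(reference_scores):
    """Score at the 90th percentile of the reference group."""
    ordered = sorted(reference_scores)
    return ordered[int(0.9 * len(ordered))]

def pct_in_top_decile(target_scores, reference_scores):
    """Percentage of the target group in the reference group's top decile."""
    cutoff = top_decile_cutoff(reference_scores)
    hits = sum(1 for s in target_scores if s >= cutoff)
    return 100.0 * hits / len(target_scores)

reference = list(range(100))    # scores 0..99; top decile starts at 90
group_a = [95, 40, 91, 10, 88]  # 2 of 5 reach the top decile
print(pct_in_top_decile(group_a, reference))  # → 40.0
```

Repeating this per metric and per target group yields the values plotted for groups A and B.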
- Figures 25 to 28 show examples of more sophisticated benchmarking output display screens.
- Figure 25 shows an example of a benchmarking output screen, showing a chart or graph generated by the benchmarking calculation.
- the selected data set (a business sales group) has been benchmarked against Leadership Potential by industry sector.
- the result is displayed as a comparative histogram, with scoring for the selected data set shown alongside that for the benchmark group for a range of "potential" scores or values from "very low” - via “low”, “moderate” and "high” - to "very high”.
- the selected data set exhibits a distribution representative of the results determined from test scores deemed to be relevant to leadership potential; in the present case, the benchmark group is "global" and therefore exhibits an expected normal distribution of scores.
- the user can access the benchmark information screens via an "info" option. Further options are provided to
- Figure 26 shows a further example of a benchmarking output screen.
- the slide deck allows for multiple charts to be saved for later recall - initially in a preview mode, with a user option to make the previewed chart the active chart.
- chart "A" is shown in the background, displayed to one side and partially obscured by the present or active chart "B".
- Chart A can be selected by the user and brought to the fore of the display - thereby becoming the active chart and in turn relegating chart B to the background.
- a history or "carousel” of charts remains accessible to the user by selection of the chart displayed either side of the active chart.
- the carousel maintains the order of charts according to their time of creation; other orderings and/or filters may be available in alternative embodiments.
- Figure 27 shows the carousel feature in use, with the previous chart brought to the fore and made active, and the previously active chart relegated to the background.
- Figure 28 shows the slide deck feature in use, with a selected saved chart being previewed, and an option to revert and make the previewed chart the active chart.
- the user also has the option of displaying the filter(s) in use.
- Figure 29 shows an example of the stored views interface, which allows for previous benchmarking results to be recalled by the user.
- Figure 30 shows the "drill-down" facility in more detail. This is accessed by the user selecting any of the "potential" value ranges ("very low", "low", "moderate", "high" or "very high") of the benchmarking results chart, and results in display of a further chart showing the breakdown of the (aggregate) potential value scores into their constituent benchmark scores (termed "Great Eight" characteristics).
- the scores in a "very high" leadership potential (by industry sector) range are shown decomposed into the separate scores for those (eight) tested characteristics used to determine the aggregate scores, namely:
- Figure 31 shows a further example of a benchmarking output screen.
- the test data is benchmarked for leadership potential by geography, with comparisons with global, UK and US data.
- the slide deck is shown populated with several saved previous charts, and the carousel shows the user has the option to navigate to an earlier chart.
- Figure 32 shows a further example of a benchmarking output screen.
- the test data is benchmarked for competency by industry sector, with comparisons with global, banking and Public Sector & NGO data.
- Figure 33 shows the corresponding drill-down into the detail of the scoring for the 'Enterprising' characteristic - namely the "Achieving” and "Entrepreneurial Thinking” aspects.
- Figures 34 and 35 show further examples of benchmarking output screens.
- Figure 36 shows an example of a Numerical reasoning benchmark. As explained above, this is based on much coarser test data and no drill-down feature is available.
- the display may include information summarising the active settings, such as the selected reference group(s) and target group(s) along with their display settings. Further options may be provided to save display views, and/or print display views.
- the application may also provide graphic charts without numeric values. The application may provide options to save, retrieve, copy, edit, or delete queries.
- the goal is to create a single source web application that combines assessment data from assessment platforms. It should also provide an easy to use, modern looking interface where authorised users can access the benchmark data as well as relevant data from their organisation's assessment projects on the same platform, and combine the information to allow for detailed analytics and graphical viewing.
- Go to market:
- Analytics of the data may create newsworthy stories around indexes, industry findings and trends.
- Analytics can prompt clients to ask the right questions in their organisations (e.g. are my candidates of a lower calibre than those of my competition?); it can also lead on to talent audit services and other exercises.
- the benchmarking tool may also provide a unique capability linked to products and services clients have already purchased.
- Clients may be given access to the benchmarking tool as part of a product/platform license or subscription fee deal. Additional charges may apply within the subscription for data access. A charge may be added for transactional clients who would like access (annually, per project or one-off), via subscription charges or a pay-as-you-go cost.
- a graduate recruitment manager in a bank wants to see how the bank's candidates this year compare with last year or with the rest of the industry and competition when it comes to scores on a numeric reasoning test.
- the user logs on to a platform where access to the application has already been granted
- the user opens the application and selects the desired query (e.g. industry comparison)
- the user filters the data on their industry (e.g. financial services), country (e.g. UK) and the type of role (e.g. graduate)
- the user can view their data in the application, both compared to the same data of last year (2 projects), and compared to the general benchmarking data from the benchmark database (e.g. UK financial services organisations who use the numeric reasoning test).
- the user can view average numeric percentile scores, high / low scores and see their own data compared with the benchmark in a graphical format on the screen
- the user can change the view of data from e.g. monthly values to different score types
- the user can filter further to view assessment results from only a sub set of test takers, e.g. male applicants or people under 20 years of age.
- a VP of HR wants to look at trends in competency score values across management teams globally and see how their senior managers compare against the management teams in other organisations of a similar size and area of business.
- the user logs on to a platform where access to the application has already been granted
- the user opens the application and selects the desired query (e.g. competency comparison)
- the user filters the data on job level/type of role (e.g. senior managers), country (e.g. global/all) and period (e.g. 2010)
- the user can preview the benchmark at any time to make sure it displays what they are expecting and to look at general trends
- the user selects the project in an on-demand database where the data they want to benchmark against resides; in this example the user undertook a specific project to assess their management team last October (e.g. management Oct 2010)
- the user can view average competency scores on a 5 or 10 point scale, high / low scores and see their own data compared with the benchmark in a graphical format on the screen
- the user can change the view of data from e.g. monthly values to different score types
- the user can filter further to view assessment results from only a sub set of test takers, e.g. only applicants in companies with more than 500 employees.
- the user filters the data on job level/type of role (e.g. sales staff), country (e.g. US) and industry (e.g. retail)
- the user can preview the benchmark at any time to make sure it displays what they are expecting and to look at general trends
- the user selects the project in an on-demand database where the data they want to benchmark against resides; in this example they use assessment data from the last 3 years from both their recruitment and development assessment projects.
- the user can view both competency scores and ability scores and see their own data compared with the benchmark in a graphical format on the screen
- the user can change the view of data from e.g. monthly values to different score types
- the user can filter further to view assessment results from only a sub set of test takers, e.g. entry level sales roles or sales team leads.
- the database will require data from a large number of data sources such as on-demand assessment and score platforms, test taker demographic/bio data, project firmographic information, client information and industry codes.
- the database is stored and indexed to allow for high performance queries and data views.
- the database should allow for the assessment data to be categorised by multiple attributes. These attributes will be used to enable search, query and filter functionality in the analytics/user interface. (Initially we can use a number of pre-defined data sets (canned views) with parameters that can be varied rather than a fully scoped database.)
- An internal user is defined as a user within a pre-defined network
- An external user is defined as an approved user
- Provide a graphical interface for the user to select the data they want to use and the actions they want to take to do a comparison/benchmarking exercise using the data.
- It should be possible for the user to add classification/tags to this data where it is needed/missing (e.g. type of assessment) and for these to be added to the database for future use
- a user is able to save their selections and queries and re-use them when they return to the application
- Administration of the database includes, but is not limited to, adding new data, modifying existing data, deleting data, adding data tags to data, creating new benchmark sets, designing new views.
- Figure 37 shows an example of a design overview with a single platform.
- assessment data 1008 is passed to the master data warehouse 1012.
- the data editing application 1014 interfaces between the master data warehouse 1012 and the benchmark data warehouse 1016, and serves to clean and consolidate assessment data and industry benchmark information.
- the user can log into a platform 1006 for performing and controlling analytics.
- via the client query application 1004, client-specific live assessment data 1002 from the assessment platform 1000 may be accessed.
- Benchmark data 1010 from the benchmark data warehouse 1016 is accessed via the same client query application 1004.
- Figure 38 shows an example of a design overview with multiple platforms.
- a multitude of platforms (including assessment platform 1000, analytics platform 1006, external platforms 1018, and other systems 1020) pass data to and access data from the master data warehouse 1012.
- Benchmark data from the benchmark data warehouse 1016 is accessed via a client query application 1004.
- Figure 39 shows an example of a design overview where the analytics application sits within a central system 1022 but sources its data primarily from external databases (e.g. a content metadata database 1028 and a benchmark and index measures database 1030). These databases are managed and populated via extract, transform, load (ETL) processes using assessment (score) data 1034, demographics (candidate, project) data 1036, and other sources.
- Analytics service 1026 represents the service implementation responsible for data access and transformation of raw data into the business model
- Benchmark and index measures 1030 and content metadata 1028 are logically separate but may be physically together
- An include client marker 1038 may be passed between the central and the database(s).
- Demographic direct feedback 1040 may be passed between the different part of the central system.
- Benchmark measures and metadata 1042 from a data warehouse 1050 are subject to an irregular ETL process 1044 to populate a benchmark measures and metadata database 1048.
- This benchmark measures and metadata database 1048 resides on an internal domain 1046 and may be linked to benchmark measures and metadata database 1058 on a customer database domain 1062 by multiprotocol label switching (MPLS) 1056 or other log shipping procedures.
- On the customer database domain 1062 reside a plurality of databases 1068 with client data, for example from client assessments, demographics, or other data.
- the data from these databases 1068 is accessible for daily ETL 1052, for example with open database connectivity (ODBC).
- the daily ETL 1052 deposits data in a client measures database 1054 that resides on an internal domain 1046. Data from the client measures database 1054 may be log shipped daily to a client measures database 1060 that resides on the customer database domain 1062.
- the analytics 1064 operates from the customer database domain 1062 with data from the client measures database 1054 and the benchmark measures and metadata database 1058.
- the analytics application 1064 aggregates candidates and benchmarks from the benchmark measures and metadata database 1058.
- the analytics application 1064 obtains client registration information, as well as information relating to saved projects and candidate metadata from a central database 1066.
- the analytics application 1064 may operate from the central database.
- the analytics application output is deposited in the central database 1066, which is included for daily ETL 1052.
- the benchmark measures and metadata database 1058 and client measures database 1060 on the customer database domain 1062 may be read-only copies of the benchmark measures and metadata database 1048 and client measures database 1054 on the internal domain 1046.
- the analytics application 1064 uses the read-only copies 1058, 1060. This minimises the risk of any communication latency in querying the data for individual reports.
- Central 1066 may have knowledge of the schema (i.e., the interface is the schema).
- a service may be implemented internally to central 1066.
- the measures for candidates on projects belonging to clients that are registered users reside in the measures database 1068.
- the analytics application 1064 in central 1066 aggregates data (for example: calculate the average for a measure for the set of candidates or projects selected for comparison with a benchmark) but does not do any calculation of measures.
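The division of labour above (aggregate pre-computed measures, never calculate them) can be sketched as follows; the data layout and function name are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical aggregation step: average a pre-computed measure over the
# candidates selected for comparison with a benchmark. The measures
# themselves are assumed to arrive already calculated upstream.
def aggregate_measure(candidates, measure):
    values = [c[measure] for c in candidates if measure in c]
    return sum(values) / len(values) if values else None

selected = [
    {"candidate_id": 1, "numerical_reasoning": 62},
    {"candidate_id": 2, "numerical_reasoning": 74},
    {"candidate_id": 3, "numerical_reasoning": 66},
]
print(aggregate_measure(selected, "numerical_reasoning"))
```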
- a mechanism may be necessary to permit central 1066 to inform the daily ETL 1052 (warehouse ETL job) which clients have registered for the analytics application.
- the ETL needs to note, for the projects in the client measure data, which ones can be used for benchmark measures because the matching measure is available. This can also be used to reduce the volume of client data loaded into the client measures database, based on whether the project has measure data that can be used for any of the current benchmark measures. For example, the ETL may read the client list via ODBC, similarly to other source data.
- Augmented metadata on projects and candidates may be stored in central 1066 to avoid the application becoming coupled to the assessments.
- a service to allow this to be written back can be implemented separately.
- Central 1066 retrieves the project and candidate list in the client measures database 1060. This may need to be filtered to projects with data that can be used for the measures. Rules may be defined for hidden projects (such as projects that are not deleted). Data can be deleted from the assessment database, so ETL procedures and central need to cope with that.
- Benchmarks may be biodata and demographic data specific, so the client measures feed may need to take this data from the demographics database and other databases.
- Range-specific text for labelling benchmarks may be stored with the benchmark data. This means there is one master database for storing benchmark information (that may need to be reused outside the analytics application).
- the analytics system is based around the selection of three options:
- Benchmark Queries Users create Benchmark Queries by selecting a Benchmark Template and optionally adding filters and chart format preferences. Benchmark Queries are then saved to the analytics database. Users may have the option of saving Benchmark Queries as Global (also referred to as 'Universal') Benchmark Queries (available to all users). Other users may only have the option of saving User Benchmark Queries (for their own use).
- the analytics system will generate graphical representations of Benchmark Queries by linking them with their corresponding Benchmarks and Assessment Measures. These graphical representations can be displayed either externally or within the analytics application itself.
- An Admin User can update Measures and Chart Type, and can save Global and hidden Benchmark Queries. This may be done using SQL scripts initially.
- a chart is rendered to represent the selected Benchmark and Data Type Values.
- the assigned chart type is used for a saved benchmark query.
- Data is retrieved based on the selected data type values.
- When multiple filters (data type values from different data types) are selected, the OR operator is used to select data within the same data type and the AND operator between data types, e.g. ('uk' OR 'france') AND ('finance' OR 'marketing').
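The OR-within / AND-across rule can be sketched as a row predicate; this is an illustrative reconstruction (the row fields and function name are assumptions), and it also reflects the later convention that selecting no values for a data type corresponds to all data:

```python
# filters maps a data type to the set of selected values.
# OR within a data type (value in the set), AND across data types (all()).
def matches(row, filters):
    return all(
        not values or row.get(data_type) in values  # empty set => all data
        for data_type, values in filters.items()
    )

rows = [
    {"geography": "uk", "industry": "finance"},
    {"geography": "uk", "industry": "retail"},
    {"geography": "germany", "industry": "marketing"},
]
selected = {"geography": {"uk", "france"}, "industry": {"finance", "marketing"}}
print([r for r in rows if matches(r, selected)])
# → [{'geography': 'uk', 'industry': 'finance'}]
```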
- Figure 41 shows various examples of render charts:
- a) shows how for pie charts and simple bar charts, a one dimensional set of data values is provided, e.g. 6,5,2,4.
- b) shows how for grouped and stacked bar charts, a two dimensional set of data values will be provided, e.g. (6,5,2), (8,4,3), (3,7,3).
- measures are split accordingly. If one of the data types is set as primary then the data is split into corresponding groups. If no data type is set as primary, then only a one-dimensional data set is used (for a simple bar chart or pie chart). For the example data, the chart values are (6, 10), (8, 12) (the sum for Measure1 and the sum for Measure2, split by Finance and Marketing).
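A hedged sketch of this split: when a primary data type is set (here "industry", an assumed field name), measure sums are grouped by its values; otherwise a single one-dimensional series is returned. The sample data is chosen to reproduce the (6, 10), (8, 12) chart values mentioned above:

```python
from collections import defaultdict

def chart_series(rows, measures, primary=None):
    """Sum each measure, optionally split by the primary data type."""
    if primary is None:
        # one-dimensional data set (simple bar chart or pie chart)
        return tuple(sum(r[m] for r in rows) for m in measures)
    groups = defaultdict(list)
    for r in rows:
        groups[r[primary]].append(r)
    return {g: tuple(sum(r[m] for r in rs) for m in measures)
            for g, rs in groups.items()}

rows = [
    {"industry": "Finance",   "Measure1": 6, "Measure2": 10},
    {"industry": "Marketing", "Measure1": 8, "Measure2": 12},
]
print(chart_series(rows, ["Measure1", "Measure2"], primary="industry"))
# → {'Finance': (6, 10), 'Marketing': (8, 12)}
```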
- project data is filtered on all selected data type values except for the primary data type (if enabled).
- the user may choose for the project data not to be filtered with the benchmark data.
- the user may be presented with a choice to apply the filters/drilldown to the project data or not.
- Content (html) is retrieved from the benchmark database. Content may potentially be configured within properties.
- the title of the chart is derived from the selected Benchmark and Data Types Values.
- the title may be defined within properties, and it may be held with the benchmark data.
- the Filter Summary (as illustrated in Figure 105) is derived from the selected Benchmark and Data Types Values. Further logic may be added to this function.
- Figures 42 and 43 show examples of charts available via drill-down.
- the functionality to filter and drill down on charts may be available to all SHL Central users (not just Premium Users).
- When a drill-down option is selected (for example using a link available on hover over a data section), it is linked to the associated saved benchmark query and inherits the selection for the initial chart.
- a carousel and Side Bar (of associated benchmarks) may be provided. Saved queries may be assigned to Sections. Saved queries may be assigned to Propositions. Some benchmarks may be highlighted (or featured). For administration purposes, Benchmarks may have Draft or Live status. A link to "Latest 10" benchmarks accessed may be shown. A data type value may be defined as corresponding to null (to retrieve other data).
- the 'My Assessment Data' tab provides access to the user's assessment data.
- Figure 44 shows functional requirements that relate to user registration for the analytics tool.
- Figure 45 shows functional requirements that relate to analytics administration and services.
- Benchmark Model e.g. People Risk
- Benchmark Model Band e.g. Very high risk people
- Content may be hard coded.
- Chart Types available will be conditional on:
- Benchmark Model e.g. People Risk
- a process is required to identify users no longer authorised to view client data and deselect them, i.e. deactivate a user's account, so that access can be blocked for ex-employees (provider and client).
- Figure 46 shows functional requirements that relate to different users viewing the analytics.
- Charts never display information that could identify a single candidate or client (other than the owning client).
- the exact score may not be displayed. For example, it may be treated as 5 instead.
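A hypothetical sketch of this anonymity rule (threshold and masked value taken from the examples in this document, names assumed): a displayed value backed by fewer than 10 scores is never shown exactly, but is presented as 5 instead, so no single candidate can be identified:

```python
MIN_DISPLAY_COUNT = 10  # minimum scores behind a displayed bar
MASKED_VALUE = 5        # substitute shown for small counts

def display_value(score_count):
    """Mask counts that could identify individual candidates."""
    return score_count if score_count >= MIN_DISPLAY_COUNT else MASKED_VALUE

print([display_value(n) for n in (3, 9, 10, 42)])  # → [5, 5, 10, 42]
```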
- Chart Data can be displayed in a 2 dimensional grid.
- Figure 47 shows a 2D grid chart with the data in the above table.
- the Scales to be retrieved are determined by the Benchmark Model (which map to a single Assessment scale tag) and the Benchmark Bands (which map to specific scores).
- the application shows a list of available saved queries.
- Premium Client users will have the option to Edit, Delete, and Deactivate (hide from others) their own queries, and copy all available queries.
- creation of a Benchmark Chart corresponds to the creation of a draft Saved Query on the Analytics database.
- the set of available Benchmark Templates is filtered, and any options (in other selections) no longer available are inhibited.
- a clear option allows the user to clear selected data. If selection is cleared, then clear the corresponding fields on the draft template and clear the chart area.
- Inhibit the action command button (to display the chart) until all options are selected (or a single Benchmark Template is selected).
- Filter Data: select one or more filters (data types) and, for each, select one or more values to be added
- Options could include:
- Selecting no values for a data type corresponds to all data.
- the OR operator is used for all items of the same data type, and the AND operator is used between data types.
- if the filters belong to the same data type as that linked to the Benchmark Template, then additionally allow the user to assign the filter to a bar (1 to 3). This is used to assign the data to a data set for comparison on the chart, e.g. by assigning UK to bar 1 and France to bar 2, the chart will show a graph of UK compared with France.
- On a drill-down chart, the y-axis may be % of total, and the x-axis is the selected data type for the drill-down.
- Drill-down shows charts at a lower level of detail.
- Figure 48 shows the elements in the entity model.
- Figure 49 shows the elements broken down into sections. Theme
- Sequence (Integer): determines the sequence of bands on the chart.
- Name: e.g. Industry.
- a Fixed Filter: e.g. a specific instrument like OPQ32R.
- Name: e.g. France.
- Benchmark Model ID: link to Benchmark Model. May be null (corresponding to all scales).
- Theme ID: link to Theme. May be null (corresponding to all clients).
- Figure 50 shows the elements in the entity model that relate to the 'Saved Query' section: a query constructed using the Analytics system and saved.
- Token: random number (in the range 1 to 1,000,000,000).
- Benchmark Template ID: link to allowable Benchmark Template. Normally mandatory but can be null for a draft query.
- Draft Original ID: set only for draft queries when an existing query is being edited.
- Chart xml: cache of chart xml (data for chart control).
- This cache is cleared when related data is updated.
- Chart xml date: date the chart xml is populated.
- Content xml: cache of content xml (links, text, images, and pop-ups).
- This cache is cleared when related data is updated.
- Filters associated with a saved query e.g. Germany, France; Marketing and Finance.
- a filter with the same data type as the owning Project Template can be assigned to a bar on the top level chart (to show comparisons between different data sets). For example, to show a comparison between Marketing and Finance, assign Marketing to bar 0 and Finance to bar 1.
- Source System: source system (assessment measures source).
- Selection criteria to be defined, to include data from a variety of measures sources.
- Selection criteria may include Firmographics
- Selection criteria may include Demographics.
- Figure 52 shows the elements in the entity model elements that relate to content and chart.
- Type of content, e.g. pop-up on band.
- Information to be displayed for a band. May be limited to a specific Theme and/or Data Type, e.g. "Employees in this category prove to be 20% more effective".
- Queries are grouped into Propositions, and users have the option to search for queries (benchmarks) for a specified proposition.
- Queries are grouped into Sections, and universal Queries (benchmarks) displayed in Analytics are
- a) Users may be blocked from selecting data sets of less than 10 rows.
- the system may block benchmark template selection when there are fewer than 10 scores in the results. Further action may be defined for when data is changed and the number of scores in a data set (query) drops below 10. Whenever a bar (in a chart) relates to fewer than 10 scores, a value of 5 may be used.
- b) An option may be provided to clear all (start a new query).
- Drafts may be cleared, for example periodically, or when a user logs out. Alternatively a user may always return to current draft.
- each template may relate to a single measurement type (more meaningful and controlled).
- a process may be defined to clear data (Benchmark DB and Assessment Measures DB) when over e.g. five years old.
- Figures 53 to 66 show a high-level view of the design considerations for the introduction of the Analytics application into the Central platform, including the overall approach, designs and constraints envisaged at the outset of the project.
- the implementation needs to fit with the overall Central framework in order to enable integration and ongoing code management.
- An example of a suitable framework is based on the following components:
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20174107.1A EP3767557A1 (en) | 2011-09-06 | 2012-09-06 | Analytics |
US14/343,265 US9760601B2 (en) | 2011-09-06 | 2012-09-06 | Analytics |
ROA201400187A RO130136A2 (en) | 2011-09-06 | 2012-09-06 | Analysis system |
AU2012306084A AU2012306084A1 (en) | 2011-09-06 | 2012-09-06 | Analytics |
EP12783642.7A EP2754102A1 (en) | 2011-09-06 | 2012-09-06 | Analytics |
AU2017225128A AU2017225128A1 (en) | 2011-09-06 | 2017-09-08 | Analytics |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1115418.4A GB201115418D0 (en) | 2011-09-06 | 2011-09-06 | Analytics |
GB1115418.4 | 2011-09-06 | ||
GBGB1116863.0A GB201116863D0 (en) | 2011-09-06 | 2011-09-29 | Analytics |
GB1116863.0 | 2011-09-29 | ||
GB1200884.3 | 2012-01-18 | ||
GBGB1200884.3A GB201200884D0 (en) | 2011-09-06 | 2012-01-18 | Analytics |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013034917A1 true WO2013034917A1 (en) | 2013-03-14 |
Family
ID=44882314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2012/052198 WO2013034917A1 (en) | 2011-09-06 | 2012-09-06 | Analytics |
Country Status (6)
Country | Link |
---|---|
US (1) | US9760601B2 (en) |
EP (2) | EP2754102A1 (en) |
AU (2) | AU2012306084A1 (en) |
GB (3) | GB201115418D0 (en) |
RO (1) | RO130136A2 (en) |
WO (1) | WO2013034917A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USD772246S1 (en) | 2015-03-18 | 2016-11-22 | Adp, Llc | Display screen or portion thereof with animated graphical user interface |
USD798320S1 (en) | 2015-03-18 | 2017-09-26 | Adp, Llc | Display screen with graphical user interface |
USD805090S1 (en) | 2015-03-18 | 2017-12-12 | Adp, Llc | Display screen with graphical user interface |
EP3848871A1 (en) | 2013-10-16 | 2021-07-14 | SHL Group Ltd | Assessment system |
US20220215317A1 (en) * | 2020-04-07 | 2022-07-07 | Institute For Supply Management, Inc. | Methods and Apparatus for Talent Assessment |
Families Citing this family (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10185477B1 (en) | 2013-03-15 | 2019-01-22 | Narrative Science Inc. | Method and system for configuring automatic generation of narratives from data |
US9720899B1 (en) | 2011-01-07 | 2017-08-01 | Narrative Science, Inc. | Automatic generation of narratives from data using communication goals and narrative analytics |
US20140278737A1 (en) * | 2013-03-13 | 2014-09-18 | Sap Ag | Presenting characteristics of customer accounts |
US11397520B2 (en) | 2013-08-01 | 2022-07-26 | Yogesh Chunilal Rathod | Application program interface or page processing method and device |
WO2015015251A1 (en) * | 2013-08-01 | 2015-02-05 | Yogesh Chunilal Rathod | Presenting plurality types of interfaces and functions for conducting various activities |
US9773018B2 (en) * | 2013-08-13 | 2017-09-26 | Ebay Inc. | Mapping item categories to ambiguous queries by geo-location |
US9589024B2 (en) * | 2013-09-27 | 2017-03-07 | Intel Corporation | Mechanism for facilitating dynamic and proactive data management for computing devices |
US20150154527A1 (en) * | 2013-11-29 | 2015-06-04 | LaborVoices, Inc. | Workplace information systems and methods for confidentially collecting, validating, analyzing and displaying information |
WO2015134546A1 (en) * | 2014-03-03 | 2015-09-11 | Career Analytics Network, Inc. | Personal attribute valuation and matching with occupations and organizations |
US9779151B2 (en) * | 2014-09-25 | 2017-10-03 | Business Objects Software Ltd. | Visualizing relationships in data sets |
US11922344B2 (en) | 2014-10-22 | 2024-03-05 | Narrative Science Llc | Automatic generation of narratives from data using communication goals and narrative analytics |
US11288328B2 (en) | 2014-10-22 | 2022-03-29 | Narrative Science Inc. | Interactive and conversational data exploration |
US11341338B1 (en) | 2016-08-31 | 2022-05-24 | Narrative Science Inc. | Applied artificial intelligence technology for interactively using narrative analytics to focus and control visualizations of data |
US11238090B1 (en) | 2015-11-02 | 2022-02-01 | Narrative Science Inc. | Applied artificial intelligence technology for using narrative analytics to automatically generate narratives from visualization data |
US10116563B1 (en) | 2014-10-30 | 2018-10-30 | Pearson Education, Inc. | System and method for automatically updating data packet metadata |
US10110486B1 (en) | 2014-10-30 | 2018-10-23 | Pearson Education, Inc. | Automatic determination of initial content difficulty |
US10218630B2 (en) | 2014-10-30 | 2019-02-26 | Pearson Education, Inc. | System and method for increasing data transmission rates through a content distribution network |
US10318499B2 (en) | 2014-10-30 | 2019-06-11 | Pearson Education, Inc. | Content database generation |
US10027740B2 (en) * | 2014-10-31 | 2018-07-17 | Pearson Education, Inc. | System and method for increasing data transmission rates through a content distribution network with customized aggregations |
US10735402B1 (en) | 2014-10-30 | 2020-08-04 | Pearson Education, Inc. | Systems and method for automated data packet selection and delivery |
US10333857B1 (en) | 2014-10-30 | 2019-06-25 | Pearson Education, Inc. | Systems and methods for data packet metadata stabilization |
US10726376B2 (en) * | 2014-11-04 | 2020-07-28 | Energage, Llc | Manager-employee communication |
US10692027B2 (en) | 2014-11-04 | 2020-06-23 | Energage, Llc | Confidentiality protection for survey respondents |
US20160140322A1 (en) * | 2014-11-14 | 2016-05-19 | Ims Health Incorporated | System and Method for Conducting Cohort Trials |
US20160140609A1 (en) * | 2014-11-14 | 2016-05-19 | Facebook, Inc. | Visualizing Audience Metrics |
US10621535B1 (en) * | 2015-04-24 | 2020-04-14 | Mark Lawrence | Method and apparatus to onboard resources |
US9697105B2 (en) * | 2015-04-30 | 2017-07-04 | EMC IP Holding Company LLC | Composable test automation framework |
US10331899B2 (en) * | 2015-10-24 | 2019-06-25 | Oracle International Corporation | Display of dynamic contextual pivot grid analytics |
US11170038B1 (en) | 2015-11-02 | 2021-11-09 | Narrative Science Inc. | Applied artificial intelligence technology for using narrative analytics to automatically generate narratives from multiple visualizations |
US11222184B1 (en) | 2015-11-02 | 2022-01-11 | Narrative Science Inc. | Applied artificial intelligence technology for using narrative analytics to automatically generate narratives from bar charts |
US11232268B1 (en) | 2015-11-02 | 2022-01-25 | Narrative Science Inc. | Applied artificial intelligence technology for using narrative analytics to automatically generate narratives from line charts |
EP3283984A4 (en) * | 2015-11-03 | 2018-04-04 | Hewlett-Packard Enterprise Development LP | Relevance optimized representative content associated with a data storage system |
US10509396B2 (en) | 2016-06-09 | 2019-12-17 | Rockwell Automation Technologies, Inc. | Scalable analytics architecture for automation control systems |
US10613521B2 (en) | 2016-06-09 | 2020-04-07 | Rockwell Automation Technologies, Inc. | Scalable analytics architecture for automation control systems |
JP2019522279A (en) * | 2016-06-17 | 2019-08-08 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Filtering guided by data |
US11188864B2 (en) * | 2016-06-27 | 2021-11-30 | International Business Machines Corporation | Calculating an expertise score from aggregated employee data |
US10909130B1 (en) * | 2016-07-01 | 2021-02-02 | Palantir Technologies Inc. | Graphical user interface for a database system |
WO2018042547A1 (en) * | 2016-08-31 | 2018-03-08 | 株式会社オプティム | Response data selecting system, response data selecting method and program |
US10628738B2 (en) | 2017-01-31 | 2020-04-21 | Conduent Business Services, Llc | Stance classification of multi-perspective consumer health information |
US11568148B1 (en) | 2017-02-17 | 2023-01-31 | Narrative Science Inc. | Applied artificial intelligence technology for narrative generation based on explanation communication goals |
US11954445B2 (en) | 2017-02-17 | 2024-04-09 | Narrative Science Llc | Applied artificial intelligence technology for narrative generation based on explanation communication goals |
US11068661B1 (en) | 2017-02-17 | 2021-07-20 | Narrative Science Inc. | Applied artificial intelligence technology for narrative generation based on smart attributes |
US10943069B1 (en) | 2017-02-17 | 2021-03-09 | Narrative Science Inc. | Applied artificial intelligence technology for narrative generation based on a conditional outcome framework |
US10452998B2 (en) | 2017-03-19 | 2019-10-22 | International Business Machines Corporation | Cognitive blockchain automation and management |
US10515233B2 (en) * | 2017-03-19 | 2019-12-24 | International Business Machines Corporation | Automatic generating analytics from blockchain data |
US10528700B2 (en) | 2017-04-17 | 2020-01-07 | Rockwell Automation Technologies, Inc. | Industrial automation information contextualization method and system |
CN108304368B (en) * | 2017-04-20 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Text information type identification method and device, storage medium and processor |
US10877464B2 (en) | 2017-06-08 | 2020-12-29 | Rockwell Automation Technologies, Inc. | Discovery of relationships in a scalable industrial analytics platform |
US10785337B2 (en) | 2017-06-29 | 2020-09-22 | Microsoft Technology Licensing, Llc | Analytics and data visualization through file attachments |
US10803092B1 (en) | 2017-09-01 | 2020-10-13 | Workday, Inc. | Metadata driven catalog definition |
US10839025B1 (en) * | 2017-09-01 | 2020-11-17 | Workday, Inc. | Benchmark definition using client based tools |
US20190102710A1 (en) * | 2017-09-30 | 2019-04-04 | Microsoft Technology Licensing, Llc | Employer ranking for inter-company employee flow |
US20190147407A1 (en) * | 2017-11-16 | 2019-05-16 | International Business Machines Corporation | Automated hiring assessments |
US11042708B1 (en) | 2018-01-02 | 2021-06-22 | Narrative Science Inc. | Context saliency-based deictic parser for natural language generation |
US11003866B1 (en) | 2018-01-17 | 2021-05-11 | Narrative Science Inc. | Applied artificial intelligence technology for narrative generation using an invocable analysis service and data re-organization |
US11182556B1 (en) | 2018-02-19 | 2021-11-23 | Narrative Science Inc. | Applied artificial intelligence technology for building a knowledge base using natural language processing |
US11379481B2 (en) * | 2018-05-03 | 2022-07-05 | Sap Se | Query and metadata repositories to facilitate content management and lifecycles in remote analytical application integration |
US10706236B1 (en) | 2018-06-28 | 2020-07-07 | Narrative Science Inc. | Applied artificial intelligence technology for using natural language processing and concept expression templates to train a natural language generation system |
US11144042B2 (en) | 2018-07-09 | 2021-10-12 | Rockwell Automation Technologies, Inc. | Industrial automation information contextualization method and system |
US11410111B1 (en) * | 2018-08-08 | 2022-08-09 | Wells Fargo Bank, N.A. | Generating predicted values based on data analysis using machine learning |
KR102521408B1 (en) * | 2018-08-27 | 2023-04-14 | 삼성전자주식회사 | Electronic device for providing infographics and method thereof |
US11461726B2 (en) * | 2019-01-21 | 2022-10-04 | Adp, Inc. | Business insight generation system |
US11403541B2 (en) | 2019-02-14 | 2022-08-02 | Rockwell Automation Technologies, Inc. | AI extensions and intelligent model validation for an industrial digital twin |
US11301798B2 (en) | 2019-04-11 | 2022-04-12 | International Business Machines Corporation | Cognitive analytics using group data |
US11086298B2 (en) | 2019-04-15 | 2021-08-10 | Rockwell Automation Technologies, Inc. | Smart gateway platform for industrial internet of things |
US11029820B2 (en) * | 2019-06-26 | 2021-06-08 | Kyocera Document Solutions Inc. | Information processing apparatus, non-transitory computer readable recording medium that records a dashboard application program, and image forming apparatus management system |
US11841699B2 (en) | 2019-09-30 | 2023-12-12 | Rockwell Automation Technologies, Inc. | Artificial intelligence channel for industrial automation |
US11435726B2 (en) | 2019-09-30 | 2022-09-06 | Rockwell Automation Technologies, Inc. | Contextualization of industrial data at the device level |
US11914623B2 (en) * | 2019-10-24 | 2024-02-27 | Palantir Technologies Inc. | Approaches for managing access control permissions |
US20210134434A1 (en) * | 2019-11-05 | 2021-05-06 | American Heart Association, Inc. | System and Method for Improving Food Selections |
US11314796B2 (en) * | 2019-12-09 | 2022-04-26 | Sap Se | Dimension-specific dynamic text interface for data analytics |
US11249462B2 (en) | 2020-01-06 | 2022-02-15 | Rockwell Automation Technologies, Inc. | Industrial data services platform |
US11726459B2 (en) | 2020-06-18 | 2023-08-15 | Rockwell Automation Technologies, Inc. | Industrial automation control program generation from computer-aided design |
US11461292B2 (en) * | 2020-07-01 | 2022-10-04 | International Business Machines Corporation | Quick data exploration |
CN113420194A (en) * | 2021-05-07 | 2021-09-21 | 上海汇付数据服务有限公司 | Method and system for displaying data |
US11949641B2 (en) * | 2022-01-11 | 2024-04-02 | Cloudflare, Inc. | Verification of selected inbound electronic mail messages |
WO2023137425A1 (en) * | 2022-01-14 | 2023-07-20 | Institute For Supply Management, Inc. | Methods and apparatus for talent assessment |
WO2023209693A1 (en) * | 2022-04-29 | 2023-11-02 | Mtn Group Management Services (Proprietary) Limited | An advanced data and analytics management platform |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7729964B2 (en) * | 2004-08-06 | 2010-06-01 | General Electric Company | Methods and systems for anomaly detection in small datasets |
US20080208647A1 (en) * | 2007-02-28 | 2008-08-28 | Dale Hawley | Information Technologies Operations Performance Benchmarking |
- 2011
  - 2011-09-06 GB GBGB1115418.4A patent/GB201115418D0/en not_active Ceased
  - 2011-09-29 GB GBGB1116863.0A patent/GB201116863D0/en not_active Ceased
- 2012
  - 2012-01-18 GB GBGB1200884.3A patent/GB201200884D0/en not_active Ceased
  - 2012-09-06 WO PCT/GB2012/052198 patent/WO2013034917A1/en active Application Filing
  - 2012-09-06 US US14/343,265 patent/US9760601B2/en active Active
  - 2012-09-06 AU AU2012306084A patent/AU2012306084A1/en not_active Abandoned
  - 2012-09-06 EP EP12783642.7A patent/EP2754102A1/en not_active Withdrawn
  - 2012-09-06 EP EP20174107.1A patent/EP3767557A1/en active Pending
  - 2012-09-06 RO ROA201400187A patent/RO130136A2/en unknown
- 2017
  - 2017-09-08 AU AU2017225128A patent/AU2017225128A1/en not_active Abandoned
Non-Patent Citations (3)
Title |
---|
"Of all the management tasks in the period leading up to the global recession, none was bungled more than the management of risk", HARVARD BUSINESS REVIEW, October 2009 (2009-10-01) |
"Of all the management tasks that were bungled in the period leading up to the global recession, none was bungled more than the management of risk.", HARVARD BUSINESS REVIEW, October 2009 (2009-10-01) |
EPO: "Mitteilung des Europäischen Patentamts vom 1. Oktober 2007 über Geschäftsmethoden = Notice from the European Patent Office dated 1 October 2007 concerning business methods = Communiqué de l'Office européen des brevets,en date du 1er octobre 2007, concernant les méthodes dans le domaine des activités", JOURNAL OFFICIEL DE L'OFFICE EUROPEEN DES BREVETS.OFFICIAL JOURNAL OF THE EUROPEAN PATENT OFFICE.AMTSBLATTT DES EUROPAEISCHEN PATENTAMTS, OEB, MUNCHEN, DE, vol. 30, no. 11, 1 November 2007 (2007-11-01), pages 592 - 593, XP007905525, ISSN: 0170-9291 * |
Also Published As
Publication number | Publication date |
---|---|
AU2017225128A1 (en) | 2017-10-05 |
GB201200884D0 (en) | 2012-02-29 |
GB201115418D0 (en) | 2011-10-19 |
US20150134694A1 (en) | 2015-05-14 |
US9760601B2 (en) | 2017-09-12 |
EP2754102A1 (en) | 2014-07-16 |
GB201116863D0 (en) | 2011-11-09 |
AU2012306084A1 (en) | 2014-04-24 |
EP3767557A1 (en) | 2021-01-20 |
RO130136A2 (en) | 2015-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9760601B2 (en) | Analytics | |
Raisinghani | Business intelligence in the digital economy: opportunities, limitations and risks | |
Garvin | Manufacturing strategic planning | |
Sadiq et al. | Artificial intelligence maturity model: a systematic literature review | |
Verreynne et al. | Employment systems in small firms: A multilevel analysis | |
Glykas | Effort based performance measurement in business process management | |
US20110295656A1 (en) | System and method for providing balanced scorecard based on a business intelligence server | |
Schuff et al. | Enabling self-service BI: A methodology and a case study for a model management warehouse | |
JP2007520775A (en) | System for facilitating management and organizational development processes | |
Taylor et al. | Real-world decision modeling with DMN | |
Stocker et al. | Dismissal: Important criteria in managerial decision-making | |
US20060287909A1 (en) | Systems and methods for conducting due diligence | |
Steens et al. | Developing digital competencies of controllers: Evidence from the Netherlands | |
Pham et al. | Barriers in adopting IT and data analytics for internal auditing: findings from Vietnam's banking sector | |
Samsonowa et al. | Performance Management | |
Ngoc | Adopted robotics process automation and the role of data science in recruitment and selection process | |
Popara et al. | Application of Digital Tools, Data Analytics and Machine Learning in Internal Audit | |
Malik et al. | Recreating Efficient Framework for Resource-Constrained Environment: HR Analytics and Its Trends for Society 5.0 | |
Sorour | Holistic Framework for Monitoring Quality in Higher Education Institutions in the Kingdom of Saudi Arabia using Business Intelligence Dashboards | |
Ferreira | Implementation of a business intelligence solution: a case study of a workforce and staffing solutions company | |
Jalonen | Assessing robotic processing automation potential | |
ZERAY | OF PROJECT MANAGEMENT POST GRADUATE PROGRAM | |
Ganoo | Evaluation Model for Software Tools: Using Merinova’s TAS System as a Case Study and Outlining the Key Principles in respect to the Design, Development and the User | |
K Amiri et al. | Creating an Aligned (Big) Data Analytics Strategy: An Action Research | |
Altdorf | Operational Work Management with Data & Management Reporting: Utilizing Power BI Reporting and Visualization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12783642 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014 201400187 Country of ref document: RO Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012783642 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2012306084 Country of ref document: AU Date of ref document: 20120906 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2014 201400341 Country of ref document: RO Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14343265 Country of ref document: US |