WO2011060480A1 - Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups


Info

Publication number
WO2011060480A1
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation
groups
group
project
interdependent
Prior art date
Application number
PCT/AU2010/001249
Other languages
French (fr)
Inventor
Darren Woolley
Original Assignee
Trinityp3 Pty Ltd
Priority date
Filing date
Publication date
Priority claimed from AU2009905712A external-priority patent/AU2009905712A0/en
Application filed by Trinityp3 Pty Ltd filed Critical Trinityp3 Pty Ltd
Publication of WO2011060480A1 publication Critical patent/WO2011060480A1/en

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/10 - Office automation; Time management

Definitions

  • the present invention relates to evaluation tools, systems and methods for evaluating, managing and improving stakeholder interactions between multiple interdependent groups.
  • Virtually every business activity in an organisation may be described as a project, with interdependent stakeholders contributing to the success or otherwise of the project.
  • Stakeholders can be internal to the organisation, external to it, or a mixture of internal and external.
  • There are known evaluation systems, methods and tools for evaluating and monitoring performance. Typically, these are systems, methods and tools that identify and examine performance (based on empirical or quantitative data, or metrics obtained from sources within a group) and measure it against key performance indicators and/or success drivers.
  • US 6,604,084 describes a system and method for generating an evaluation in a performance evaluation system.
  • the system uses productivity and quality data to evaluate the performance of an individual, group, process or other suitable type of item or operation.
  • Performance evaluations are generated by defining a plurality of questions and a plurality of performance areas.
  • a performance area is a group of questions that relates to a particular area of job performance.
  • performance areas for a call centre may include on-call greeting and call closing. Evaluations can be automatically generated for disparate groups by selecting different performance areas. While US 6,604,084 can measure performance of disparate groups in an organisation, it does not evaluate how performance between disparate groups might affect organisational performance or how the outcome of tasks carried out by interdependent groups might be affected by the performance of those groups.
  • US 5,684,964 is a method and system for monitoring and controlling the performance of an organisation, including an algorithm for selecting variables that relate to an organisation's performance and for constructing an interaction table that relates the performance variables to one another and calculating an efficiency rating using the collected data.
  • US 5,684,964 determines the relative impact of each performance variable on the organisation's efficiency rating for a specified time interval. The function can be repeated for managerial assessment of interactions among performance variables as well as the accuracy of the calculated efficiency rating. While US 5,684,964 examines the interaction of performance variables, it does not examine the interaction/interdependence of stakeholders whose collective and relative performance affects organisational (or overall project) performance and the achievement of success drivers.
  • the success of a project typically involves a number of interdependent stakeholders and the ultimate outcome (success or otherwise) will depend on the status (e.g. level and quality) of interactions between multiple groups of interdependent stakeholders.
  • US 2005/0086189 describes systems and methods for evaluating the level of collaboration among members of a team in relation to knowledge-centred collaboration (that is, the collection, storage, sharing of knowledge across groups). However, the systems and methods described in US 2005/0086189 perform the evaluation by presenting to a user (or users), one or more pre-determined topics of concern (e.g. statements concerning team collaboration areas of concern and associated issues). The user(s) agree or disagree with the statements. The selections made by the user(s) are analysed using information in a knowledge database, to set an evaluation value for each area of concern.
  • US 2005/0086189 does not measure how well the groups collaborate and it assesses knowledge-centred collaboration on an individual rather than a group basis. US 2005/0086189 relies on a knowledge database to identify problems in sharing, storing or collection of knowledge across groups.
  • US 5,684,964 collects information from a plurality of sources and relates performance variables to one another, it does not examine interactions between different groups of stakeholders.
  • US 2005/0086189 does not examine relationships or the level of collaboration (how groups interact) in complex arrangements of multiple interdependent groups. Rather it provides an evaluation on an individual basis of how well information is shared across groups by reference to a knowledge database.
  • Neither US 6,604,084 nor US 5,684,964 provides a means for evaluating complex many-to-many relationships between multiple stakeholder groups.
  • In US 6,604,084 a one-to-one relationship exists between the information collected and the performance metrics (e.g. performance evaluation of an individual).
  • In US 5,684,964 a many-to-one relationship exists between the information collected and the performance metrics (e.g. efficiency rating).
  • the status of these relationships impacts on performance between groups in complex arrangements of multiple interdependent groups and the ability to achieve desired project or strategy outcomes.
  • an evaluation system for evaluating relationship status between multiple interdependent stakeholder groups in a project including an evaluation tool comprising:
  • evaluation collection means for collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation
  • response(s) collected from each participant includes one or more responses that evaluate other interdependent groups with which the participant's group interacts in the project in relation to a set of performance drivers for the project, and
  • evaluation processing means for processing responses collected from each participant, wherein the processing of responses includes utilising group trend data from at least two interdependent groups to determine an assessment of a relationship status between the interdependent groups; and (c) evaluation reporting means for generating a report on the processed responses, wherein the report includes a relationship status between at least two interdependent groups
  • the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
  • an evaluation method for evaluating relationship status between multiple, interdependent stakeholder groups including the steps of:
  • the report includes a relationship status between at least two interdependent groups such that the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
  • the invention thus provides an evaluation tool, method and system for evaluating, managing and improving stakeholder interactions in a group, which overcomes the disadvantages of earlier evaluation tools by providing a means for evaluating the inter-relationship between interdependent stakeholder groups and the impact of these relationships on performance across a complex arrangement of multiple interdependent stakeholder groups.
  • FIGURE 1 is a schematic illustration comparing a one-to-one relationship (A), a many-to-one relationship (B) and a many-to-many relationship (C) between a target (the entity, group or individual being evaluated) and an evaluating party (the entity, group or individual performing the evaluation).
  • FIGURE 2 is a schematic diagram showing an evaluation tool according to a preferred embodiment of the invention.
  • FIGURE 3 is a schematic diagram showing part of one embodiment of an evaluation report according to the invention.
  • the evaluation environment 100 is shown at the top of the diagram.
  • the arrows map the relationships between multiple interdependent groups 110.
  • the inset depicts the relationship between two of the interdependent groups A and B, by way of example only.
  • the evaluation report applies a code (bottom of inset) to the status of the relationship between A and B - the solid line depicting the status as evaluated by A; the dotted line as evaluated by B.
  • FIGURE 4 is a flowchart showing steps in two embodiments of an evaluation method for evaluating the status of relationships in a complex arrangement of multiple groups.
  • Figure 4A shows the steps involved in the preferred embodiment.
  • In an alternative embodiment of the evaluation method, there are two additional steps (shown below the dashed line - Figure 4B).
  • FIGURE 5 is a schematic illustration of an evaluation system for evaluating the status of relationships in a complex arrangement of multiple groups according to a preferred embodiment of the invention.
  • the invention provides a new or alternative evaluation tool, method and system for use in evaluating, managing and improving stakeholder interactions in a group.
  • a group may include stakeholders that are internal or external to an organisation or business unit, or a combination of internal and external stakeholders.
  • the preferred embodiments of the evaluation tool, method and system are useful as a means for measuring and managing cultural alignment, or the alignment of values between multiple groups in many contexts - for example:
  • the evaluation tool provides a means of evaluating the status of relationships (interactions) across a number of inter-relating groups, including in complex interrelationships as depicted in Figure 1C.
  • the number 100 depicts the evaluation "environment" in each of Figures 1A, 1B and 1C.
  • Figure 2 shows the evaluation tool 120 in a preferred embodiment.
  • the tool 120 includes:
  • evaluation collection means 130 including a storage means 140 (e.g. server, computer or other processing device), for collecting one or more responses (e.g. through poll, survey or questionnaire answers) from participants, each participant being a member of an inter-dependent group in a project involving multiple interdependent groups;
  • evaluation processing means 150 for processing responses collected from stakeholders
  • the reporting means includes a mapping means, including an algorithm enabled by software and run on any computer-implemented system, for mapping relationship status between multiple stakeholder groups to a visual format such as a graphic (e.g. a relationship map).
  • the reporting means can also include reporting on the status of relationships across complex arrangements of interdependent groups as scores and/or comments; and
  • communication means 180 for communicating between one or more of the evaluation collection means 130, the evaluation processing means 150, the evaluation reporting means 160, a storage means 140 and a display 190 (e.g. a computer screen or digital display, including a user interface).
  • Evaluation environment: complexity of evaluating environment
  • the evaluation environment 100 is made up of a series of interdependent groups (each group 110 depicted by a circle), each group evaluating all of the other groups (many-to-many evaluations) in a project.
  • the groups may be cross-functional groups within an organisation working on a project together, or different organisations collaborating on a mutual project.
  • the project can be any joint venture, alliance, a product or product line (including goods or service line), or any other collaboration involving multiple groups.
  • each group is able to evaluate the other groups that it interacts with and to compare the way it works with each group against other groups within the project. If insufficient responses are provided from, say, participants of one of the interdependent groups (let's call this group A), this does not affect the overall evaluation or the assessments of the remaining groups in the project (since each of the remaining groups will have been evaluated by groups other than A) or of the other groups' evaluations of group A.
  • the preferred embodiment contrasts with known evaluation tools, as depicted in Figures 1A and 1B.
  • the preferred embodiment (Figure 1C) enables complex inter-relationships to be evaluated, including the evaluation of multiple inter-dependent groups simultaneously (that is, many-to-many assessments in a single evaluation).
  • the preferred embodiment assesses groups, not individuals, using group trend data obtained by collecting and analysing responses from participants in each group. Thus the group trend data for each group reflects a collective response from the relevant group. Group trend data is obtained for each interdependent stakeholder group in a project.
  • many known performance evaluation tools focus on singular relationships between two entities - for example, a single party (e.g. an individual) evaluating a single target (e.g. an individual).
  • a typical example is a single customer evaluating a single supplier (one-to-one evaluation).
  • Other known evaluation tools, as shown schematically in Figure 1B, may involve feedback from multiple evaluating parties (e.g. multiple customers) of a single target (e.g. the same supplier). For example, a typical consumer survey or poll eliciting feedback from multiple customers about a specific supplier service (many-to-one evaluation). This is also the typical human resources (HR) model.
  • these kinds of tools are limited to evaluating a single target and, as shown in Figures 1A and 1B, are not able to take into account common complex relationships between multiple interacting groups, in which the outcome of a project depends on the interaction across multiple inter-dependent groups.
  • the evaluations provided by known systems such as depicted in Figures 1A and 1B are confined to artificially simple evaluation environments.
  • the evaluation tool has the advantage of enabling evaluation in complex environments, thereby more accurately reflecting true to life interactions and how they affect performance.
  • the evaluation tool has a user interface that is easy to use and customised to the participant and evaluation project details. Participants, administrators and survey managers can track completion.
  • an evaluation collection means 130 such as a web- or network-enabled application incorporating a questionnaire (whether presented as a series of questions, statements, a poll or a survey).
  • the questionnaire is provided to each member of a first stakeholder group and assesses the individual members' perception of interactions with the other groups with which the first group interacts on the project. Participants (members of each inter-dependent stakeholder group) providing responses have access to the evaluation collection means through the internet or networked computer.
  • the collection means 130 is a relational database containing a survey questionnaire, the database being housed on a server. Access to the survey questionnaire is provided to the participants through unique web pages generated off the server, so access only requires an internet connection and not a server connection.
  • each participant is provided a unique login, enabling a participant to exit the collection means 130 and return to continue at a later time, as well as enabling on-going evaluation on a periodic basis (e.g. weekly, monthly, quarterly).
  • the communication means includes a user interface that enables participants to provide their responses by moving a visual tool such as a slider on a display or screen to indicate a relative level of agreement with a statement about a target group's performance in relation to a specified performance driver.
  • the participant indicates whether they perceive that each of, say, seven target groups is performing well in relation to the question posed.
  • the participant registers a response to a question by performing an action in relation to the visual tool. For example, dragging or clicking a slider to register the participant's response.
  • the action could be turning a dial or entering text into a text entry box where the participant enters a response on a scale of, say, 1 to 10 (or other specified scale).
  • the participant can flag any question or relationship as not relevant. Participants can also provide individual comments in relation to each question or each target group (relationship) being evaluated. The individual comments collected from participants during an evaluation provide real indicators of inter-group alignment (e.g. cultural alignment or alignment of values between multiple groups) and issues to be addressed, since participants tend to provide comments when they evaluate another group's performance in relation to a performance driver as low. Individual comments can be exported to a spreadsheet or table, or as text.
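  • By way of illustration only, a single participant response of the kind described above (a 1-10 slider value, an optional "not relevant" flag and an optional comment) could be represented as in the following minimal Python sketch; the class and field names are assumptions for illustration and are not part of this specification.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ParticipantResponse:
        # Hypothetical record of one answer given by one participant about one target group.
        evaluator_group: str                # the participant's own group, e.g. "A"
        target_group: str                   # the group being evaluated, e.g. "B"
        question_id: str                    # identifies the performance driver being assessed
        slider_value: Optional[int] = None  # e.g. 1 (low agreement) to 10 (high agreement)
        not_relevant: bool = False          # a question or relationship may be flagged as not relevant
        comment: str = ""                   # optional free-text comment

        def __post_init__(self):
            if not self.not_relevant and (self.slider_value is None or not 1 <= self.slider_value <= 10):
                raise ValueError("slider_value must be 1-10 unless the question is flagged not relevant")

    # Example: a member of group A evaluates group B on one performance driver.
    r = ParticipantResponse("A", "B", "feedback_quality", slider_value=4,
                            comment="Feedback is prompt but rarely gives clear direction.")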
  • the evaluation tool 120 also includes a survey management dashboard accessible to the survey manager, to set the opening and closing dates of the survey, invite participants, track progress of each participant in each interdependent group within a project, provide reminders, process responses, generate reports or re-run a survey for on-going evaluation on a periodic basis.
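  • As a hedged sketch only (not the disclosed implementation), a survey management dashboard of this kind might track the state of an evaluation round along the following lines; the record and field names are assumptions.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SurveyRound:
        # Hypothetical survey-management record for one evaluation round within a project.
        project: str
        opens: date
        closes: date
        invited: set = field(default_factory=set)    # identifiers of invited participants
        completed: set = field(default_factory=set)  # participants who have finished the survey

        def progress(self) -> float:
            # Completion rate a survey manager could track on the dashboard.
            return len(self.completed) / len(self.invited) if self.invited else 0.0

        def needs_reminder(self) -> set:
            # Invited participants who have not yet completed the survey.
            return self.invited - self.completed

    round1 = SurveyRound("Project X", date(2011, 3, 1), date(2011, 3, 31),
                         invited={"a1", "a2", "b1"}, completed={"a1"})
    print(round1.progress())         # 0.333...
    print(round1.needs_reminder())   # {'a2', 'b1'} (order may vary)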
  • the evaluation responses from participants are stored in the storage means 140, such as a database (e.g. a relational database) housed on a server, and processed by the evaluation processing means 150 (e.g. software) for communication to the evaluation reporting means 160.
  • Assessments are provided by individual members in each stakeholder group, through the evaluation collection means. For example, in a preferred embodiment, up to ten individuals in each of, say, eight different interdependent groups are surveyed to obtain each individual's perceptions and assessments of his or her dealings with each of the other seven interdependent groups' performance in matters that affect the way each of those groups interacts with the participant's group.
  • the participants deliver their individual assessments of other groups in a project by entering their responses into the evaluation collection means, which communicates the evaluations to the evaluation processing means (e.g. an algorithm enabled through software), for processing, analysis and to enable a report to be generated.
  • the processing means includes programming instructions to perform the step of collating the individual responses from participants and processing them, including by tallying responses to determine a qualitative assessment of the status of a relationship between two or more interdependent groups - based on group trend data.
  • the processing means utilises group trend data from at least two interdependent groups to determine an assessment of relationship status between the interdependent groups (that is, any two of the interdependent groups). It also allows a project "average" score to be calculated, wherein each qualitative response provided by a participant is quantified against a scoring matrix to allow an "average" group response to each performance driver to be calculated.
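  • The processing step described above could, for example, be sketched as follows in Python: qualitative answers are quantified against a scoring matrix, averaged into group trend data for each ordered pair of groups, and combined into a project "average". The matrix values and function names are illustrative assumptions, not the actual algorithm of the tool.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical scoring matrix: qualitative responses quantified for averaging.
    SCORING_MATRIX = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                      "agree": 4, "strongly agree": 5}

    def group_trend_data(responses):
        """responses: iterable of (evaluator_group, target_group, answer) tuples.
        Returns the average score each evaluating group gives each target group."""
        buckets = defaultdict(list)
        for evaluator, target, answer in responses:
            buckets[(evaluator, target)].append(SCORING_MATRIX[answer])
        return {pair: mean(scores) for pair, scores in buckets.items()}

    def project_average(trend):
        # Overall "average" across all group-to-group evaluations in the project.
        return mean(trend.values())

    # Example: members of groups A and B evaluate one another on one driver.
    sample = [("A", "B", "agree"), ("A", "B", "strongly agree"),
              ("B", "A", "disagree"), ("B", "A", "neutral")]
    trend = group_trend_data(sample)   # {("A", "B"): 4.5, ("B", "A"): 2.5}
    avg = project_average(trend)       # 3.5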
  • the status is a relative, not empirical, measure since it evaluates multiple groups relative to other groups within the same project.
  • the "status" of a relationship refers to how well various stakeholder groups are working together at any point in time and is relative to the other groups in the same project.
  • the evaluations are relative measures within the project environment and referenced against an average status "score" calculated from the average responses obtained to a single questionnaire distributed as part of the relevant evaluation.
  • status can be categorised using a relevant quality descriptor such as “working well” (strong), “needs attention” (intermediary) or “needs urgent attention” (weak).
  • it can be expressed by reference to an "average” score that is calculated by collating and analysing group trend data for all groups involved in a project.
  • the status of each of the interdependent groups can thus be evaluated as “above average”, “average” or “below average” where "average” is the average performance of all groups within a project.
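  • As a sketch of how a status could be expressed relative to the project average (the tolerance band and labels below are assumptions for illustration; quality descriptors such as "working well", "needs attention" or "needs urgent attention" could be attached to the same bands):

    def relationship_status(score: float, project_avg: float, tolerance: float = 0.25) -> str:
        # The measure is relative, not empirical: a score is judged against the project average.
        if score >= project_avg + tolerance:
            return "above average"
        if score <= project_avg - tolerance:
            return "below average"
        return "average"

    # Continuing the earlier example: A's view of B versus B's view of A.
    print(relationship_status(4.5, 3.5))   # above average
    print(relationship_status(2.5, 3.5))   # below average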
  • a measure of status is calculated by compiling and processing group trend evaluation(s) of interactions between multiple interdependent groups in a specified project.
  • a report of status is provided using a relevant quality descriptor or relative score such as "below average”. The report also provides visual identification of a degree of alignment or misalignment across stakeholder groups in a project (see Example 1).
  • each of groups B, C and D evaluated group A as performing "below average”. Therefore, the perception within the project environment is that group A is not aligned with the rest of the project groups. This is despite group A perceiving all the other groups as performing well in relation to its interactions with each of the other groups.
  • the graphic evaluation report indicates that group A is the common element in relation to issues identified by the evaluation that need to be addressed.
  • the survey manager has decided to re-run the evaluation in six weeks, giving time to develop plans and implement solutions to address issues and encourage certain behaviours by group A. The same drivers will be evaluated again to determine if the plans and implementation have had the desired result. On-going evaluation also allows the average "score" (across all groups) to be recorded. Tracking of the project average score over time indicates whether the status of relationships within a project is improving or deteriorating.
  • the evaluation reporting means 160 receives and collates processed responses and generates an evaluation report 170.
  • the evaluation report 170 provides an overall result ("status" of interdependent relationships) and/or group-to-group evaluation, along with a breakdown of results by question.
  • the status is reported visually in a graphic format, as scores and/or as text, via a mapping means.
  • the mapping means maps the interrelationship between two or more interdependent groups to a visual format (e.g. a relationship map).
  • the evaluation reporting means 160 enables the "status" of relationships between inter-dependent stakeholder groups to be reported visually so that the overall "status" of the various interdependent relationships between multiple groups (say, up to eight defined groups) is provided, including a question-by-question, group-by-group breakdown for comparison between different stakeholder groups. This is achieved by taking responses from participants and processing the responses to determine the quality of the status of the relationship between any two interdependent groups in a complex arrangement of multiple groups (e.g. up to eight groups).
  • the status is visually coded (e.g. colour-coded or otherwise visually coded) to correspond with a status descriptor (e.g. "below average”) so that a graphical report can be generated that:
  • (b) applies a code (e.g. a colour code or other visual code) to the relationship map to indicate the status of the relationship between any two interdependent (e.g. inter-relating) groups.
  • the evaluation reporting means 160 is able to report relationship status from the perspective of each group. For example, referring to Figure 3, imagine that each of the circles 110 in Figure 3 is labelled A to H, representing eight different interdependent groups in a project. Every member of group A evaluates each of groups B to H. Similarly, every member of group B evaluates each of groups A and C to H, and so on throughout the eight groups. Thus, while the project is common to the evaluation, the target of the evaluation is how each group interacts with each of the other groups it interacts with. In other words, each group is itself the target of evaluation by (an)other group(s) in the project. Thus the evaluation tool enables plural groups to evaluate multiple targets in a single evaluation.
  • the evaluation reporting means 160 is able to report the "status" of the relationship between any two interdependent stakeholder groups visually and from the perspective of each group to provide an indication of the relationship between any two interdependent groups.
  • the status of the relationship between any two of the groups illustrated - let's refer to them as groups A and B - has been evaluated by the individual members of both groups A and B. This enables group trend data for each group to be obtained from the collective evaluations of the members of each stakeholder group.
  • the evaluation environment 100 is shown at the top of the schematic diagram in Figure 3.
  • Each of the circles 110 represents a different stakeholder group.
  • the arrows map the relationships between multiple interdependent groups 110.
  • the inset depicts the relationship between two of the interdependent groups A and B, by way of example only.
  • the evaluation reporting means applies a code (e.g. as shown at the bottom of the inset, Figure 3) to the status of the relationship between A and B - here, the solid line depicting the relationship status as evaluated by A, the dotted line showing the status as evaluated by B.
  • Figure 3 illustrates how a relationship status between two groups (A, B) is reported in a graphical format, the graphic demonstrating how the status can be perceived differently by each party - thus reported from the perspective of each of group A and group B. Therefore, the final relationship "status" between A and B may be recorded as "working well” by A (because A delivers output to B and does not require B to provide anything first). However, the members of group B might collectively assess their interaction with group A as “needs attention" because A tends, for example, not to adhere to task priorities or be accessible for progress meetings or phone calls.
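  • One way such a graphical report could be produced is sketched below: each ordered pair of groups gets its own colour-coded edge, so the A-to-B and B-to-A perspectives can differ, as with the solid and dotted lines of Figure 3. The Graphviz DOT output format and the colour choices are assumptions for illustration only and are not prescribed by the specification.

    # Hypothetical mapping from relationship status to a colour code for the relationship map.
    STATUS_COLOUR = {"above average": "green", "average": "orange", "below average": "red"}

    def relationship_map_dot(statuses):
        """statuses: dict mapping (evaluator_group, target_group) -> status label.
        Returns Graphviz DOT text for a colour-coded, directional relationship map."""
        lines = ["digraph relationships {"]
        for (evaluator, target), status in sorted(statuses.items()):
            colour = STATUS_COLOUR.get(status, "grey")
            lines.append(f'  "{evaluator}" -> "{target}" [color={colour}, label="{status}"];')
        lines.append("}")
        return "\n".join(lines)

    # Example: A rates its relationship with B as working well; B's collective view differs.
    print(relationship_map_dot({("A", "B"): "above average", ("B", "A"): "below average"}))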
  • the evaluation reporting means allows assessment of interactions between multiple interdependent groups in a project, as well as comparison of interactions across the groups.
  • stakeholder group A might ordinarily be assessed as meeting all of its KPIs and success drivers.
  • However, its interactions with stakeholder group B might be poor, and the client has specified that A and B work closely together towards a desired outcome. The success of that outcome therefore depends not only on A's ability to perform but also on A's ability to interact with B on the mutual project.
  • the measure of relationship status between stakeholders is reported visually (by visual coding to a relationship "map", such as the map depicted at the top of Figure 3) so that visual comparison of the status of relationships between various stakeholders can be readily made and relative weaknesses/strengths readily identified.
  • the evaluation reporting means can also export individual comments to a spreadsheet or table, or as text, for review and optional inclusion in the evaluation report.
  • the communication means 180 are capable of communicating between the collection means 130 and one or more of:
  • the communication means may be common between the above-listed components of the evaluation tool, or each component may have its own communication means. The only requirement is that each of the components is able to communicate with one or more of the other parts of the evaluation tool.
  • the invention also provides an evaluation method 190 for evaluating the status of relationships between multiple interdependent stakeholder groups in a project.
  • the method includes the steps of:
  • identifying a set of performance drivers (e.g. success drivers and/or barrier issues) for a project;
  • identifying the groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the project outcome);
  • the evaluation method includes two further steps of:
  • the evaluation method includes the step of identifying performance drivers for a project as the initial step of a project establishment phase. This involves the substeps of defining the scope of the project, including participants, key stakeholder groups, process and timelines. This step and its substeps may be performed or overseen by a survey manager or any other person who wants to have the evaluation performed and is interested in the results.
  • the performance drivers can be determined by workshop or interviews so that:
  • Examples include how well interdependent groups encourage and accept feedback, whether interdependent groups consider all aspects of a problem or issue, how actively groups invite participation from other interdependent groups.
  • a further substep in this project establishment phase is to prepare a customised question panel for use in evaluating and measuring the identified performance drivers, using the evaluation tool and system described.
  • a final substep in the project establishment phase is to identify the groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the desired project outcome(s)) who need to be included in the evaluation and the individual participants making up those groups.
  • the evaluation method involves performing an evaluation of the status of relationships between multiple groups of interdependent stakeholders.
  • the way the evaluation is performed is described in the description of the evaluation tool - namely, participants making up each stakeholder group assess the other stakeholder groups in a project.
  • the collective responses are used to identify trend data for each stakeholder group.
  • the evaluation method includes the step of collating and processing the individual responses provided by members of the stakeholder groups so as to identify group trends and obtain trend data. Collating and processing responses is performed by the evaluation tool (specifically, the evaluation processing means as described earlier in this document). For example, referring to Figure 3, there are, say, 10 members of group A who each evaluate their interactions with each of groups B to H. Similarly, the group members of group B evaluate their interactions with each of groups A, and C to H - and so on throughout the remaining groups.
  • an "average” score of the performance of each group can be quantified using the collated evaluations and a scoring matrix to convert a qualitative evaluation to a corresponding score.
  • an overall "average” of interactions across the multiple groups involved in a project can be calculated, for use as a reference value to determine whether each group (or a number of groups) is performing "above” or “below” average compared with the rest of the groups in that project.
  • an external reference can be used to determine whether all of the groups are performing "above” or “below” that external reference (e.g. an industry standard, a desired performance indicator or another pre-determined reference).
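  • The comparison against a project average or an external reference described above could be as simple as the following sketch (the scores and benchmark value are made-up examples):

    def compare_to_reference(group_scores, reference):
        """group_scores: dict of group -> average score; reference: the project average
        or an external benchmark (e.g. an industry standard)."""
        return {group: ("above" if score > reference else "below" if score < reference else "at")
                for group, score in group_scores.items()}

    scores = {"A": 2.9, "B": 3.8, "C": 3.5}
    project_avg = sum(scores.values()) / len(scores)   # internal reference (about 3.4)
    print(compare_to_reference(scores, project_avg))   # {'A': 'below', 'B': 'above', 'C': 'above'}
    print(compare_to_reference(scores, 3.5))           # external benchmark: {'A': 'below', 'B': 'above', 'C': 'at'}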
  • the collective group responses reveal the "status" of relationships (e.g. level of collaboration) between multiple interdependent stakeholder groups working together on a project and any misalignment between groups (including identifying with which group or groups the misalignment resides).
  • the evaluation method involves the step of generating an evaluation report on the "status" of relationships between multiple stakeholder groups based on processed evaluations and group trend data.
  • the report is generated via a reporting means (part of the evaluation tool and system).
  • the reporting means is as described in the description of the evaluation tool and includes a mapping means, including an algorithm enabled by software and run on any computer-implemented system, for mapping relationship status between multiple stakeholder groups to a visual format such as a graphic.
  • the reporting means can also include the status of relationships across multiple interdependent groups as scores and/or comments (text).
  • the evaluation reporting means 160 receives and collates processed evaluations and generates an evaluation report 170.
  • the evaluation reporting means 160 enables the "status" of relationships between interdependent stakeholder groups to be reported visually so that the overall "status" of the various interdependent relationships between multiple groups (say, up to eight defined groups) is provided, including a question-by-question, group-by-group breakdown for comparison between different stakeholder groups.
  • the status is visually coded (e.g. colour-coded or otherwise visually coded) to correspond with a status descriptor (e.g. "above average") so that a graphical report 170 can be generated that:
  • (b) applies a code (e.g. a colour code or other visual code) to the relationship map to indicate the status of the relationship between any two interrelating groups.
  • Prioritisation may be based, for example, on value, frequency and disruption to the mutual project being undertaken by the groups evaluated.
  • Performing one or more follow-up evaluations allows status to be recorded over time, including tracking of "average" status "scores" and whether the average is trending up or down, as well as whether each group's performance relative to the average is also trending up or down. This is useful in clearly identifying improvements against performance drivers addressed in the evaluation project implementation plan.
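  • Tracking whether the project average is trending up or down across follow-up evaluations could be sketched as follows (the round labels and values are illustrative only):

    def trend_direction(history):
        """history: list of (round_label, project_average) pairs in chronological order."""
        if len(history) < 2:
            return "insufficient data"
        change = history[-1][1] - history[0][1]
        return "improving" if change > 0 else "deteriorating" if change < 0 else "stable"

    rounds = [("initial", 3.2), ("6 weeks", 3.4), ("12 weeks", 3.7)]
    print(trend_direction(rounds))   # improving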
  • The evaluation can also provide target alignment scores, which can be used as a proactive means for driving change, say, in cultural alignment during an organisational restructure such as a merger or acquisition, or a major alliance or joint venture project.
  • the invention also provides an evaluation system 200 for evaluating the status of relationships between multiple interdependent groups of stakeholders in a project (see Figure 5).
  • the evaluation system 200 includes:
  • (a) an evaluation tool (as described earlier in this document), including:
  • i. an evaluation collection means 130 (e.g. a database or other information storage means), including a storage means 140 (e.g. server, computer or other processing device), for collecting one or more responses (e.g. through poll, survey or questionnaire answers);
  • ii. evaluation processing means 150 (e.g. software);
  • iii. evaluation reporting means 160 (e.g. software);
  • iv. communication means 180 for communicating between one or more of the evaluation collection means 130, the evaluation processing means 150, the evaluation reporting means 160, a storage means 140 and a display 190 (e.g. a computer screen or digital display, including a user interface);
  • identifying a set of performance drivers (e.g. success drivers and/or barrier issues);
  • groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the project outcome);
  • an administrator access means 210 (e.g. a computer with direct access to the evaluation collection means 130 through the storage means 140, and/or a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing multidirectional access to the evaluation system to an administrator of the evaluation system;
  • a manager access means 220 (e.g. a computer with direct access to the evaluation collection means 130 through the storage means 140, and/or a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing multidirectional access to the evaluation system to a manager, who can manage all aspects of the evaluation system, including generating reports;
  • a participant access means 230 (e.g. a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing access to the evaluation system to a participant (i.e. an individual member of a stakeholder group providing an evaluation of other stakeholder groups in a specific project).
  • EXAMPLE 3: means for measuring and managing cultural alignment during a merger
  • Company X and company Y decide to merge to implement a strategy of market domination in a particular sector of fast moving consumer goods. This is the strategy.
  • the structure for giving effect to the strategy is company Z, which will be formed by the merger of X and Y.
  • a root cause of failed mergers is a misalignment in the cultures (or values) of the merging entities.
  • the values of Z will not equal the sum of the values of X and Y.
  • the preferred embodiments will be piloted as a means to agree values for Z and to manage value alignment across all of the business units making up the merged entity.
  • Where values are aligned, collaboration follows. Therefore, an initial evaluation will be performed prior to the proposed merger, focusing on the eight business units that management consider to be key to the successful outcome of the merger. The initial evaluation will evaluate a set of pre-agreed performance drivers for the target business units.
  • the follow-up evaluations will assess whether the plans and implementation have had the desired effect, and identify ways to modify the plans and implementation, if necessary.
  • An advantage of the preferred embodiments of the evaluation tool, method and system is that they enable the evaluation of many-to-many relationships in complex arrangements of interdependent groups, all in a single evaluation. This includes many-to-many relationships within an organization or between organizations.
  • a further advantage is that the preferred embodiments can also provide means to manage multiple relationships by identifying issues that will enable fostering of collaboration and co-operation, encouraging alignment to objectives and values, and optimising communication and performance. Yet another advantage is that the preferred embodiments are a means for measuring and managing cultural alignment, or the alignment of values between multiple groups and therefore the preferred embodiments are useful in many contexts - for example:
  • the invention thus provides an evaluation tool, method and system for use in evaluating the status of relationships between multiple interdependent groups in a project and has broad application across a range of diverse business contexts.
  • the invention is not restricted to these particular fields of use and is not limited to the particular embodiments or applications described herein.
  • EXAMPLE 2: sample evaluation
  • the scores are set as:
  • Feedback is prompt, but usually not constructive, nor does it provide a clear direction. It tends to be subjective, from personal viewpoints, rather than what will appeal to or work for the target audience.
  • The feedback is prompt but it is not considered, clear and definitive.
  • Deadlines are met out of necessity rather than being considered. Re-prioritisation is constantly required to meet deadlines required by senior stakeholders without taking into account what is required to meet them.
  • Deadlines are often a moving target.
  • Timelines are set without consultation of the agency/agencies. Milestones are set to suit senior exec approvals rather than the
  • Timelines are set without enough involvement from the agency on what can be achieved in the timeframe.
  • Agency 1 feels that while Client provides prompt updates, like the feedback from Client, these can be inconsistent. Agency 1 delivers on deadlines; however, Agency 1 is scored below average, with a comment that while they deliver, the process is considered loose.

Abstract

The invention provides a new or alternative evaluation tool, method and system for use in evaluating, managing and improving stakeholder interactions in a group. The invention provides means for evaluating the inter-relationship between interdependent stakeholder groups and the impact of these relationships on performance across an arrangement of multiple interdependent stakeholder groups.

Description

Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups
TECHNICAL FIELD
The present invention relates to evaluation tools, systems and methods for evaluating, managing and improving stakeholder interactions between multiple interdependent groups.
BACKGROUND
Virtually every business activity in an organisation may be described as a project, with interdependent stakeholders contributing to the success or otherwise of the project. Stakeholders can be internal to the organisation, external to it, or a mixture of internal and external.
There are known evaluation systems, methods and tools for evaluating and monitoring performance. Typically, these are systems, methods and tools that identify and examine performance (based on empirical or quantitative data, or metrics obtained from sources within a group) and measure it against key performance indicators and/or success drivers.
For example, US 6,604,084 describes a system and method for generating an evaluation in a performance evaluation system. The system uses productivity and quality data to evaluate the performance of an individual, group, process or other suitable type of item or operation. Performance evaluations are generated by defining a plurality of questions and a plurality of performance areas. A performance area is a group of questions that relates to a particular area of job performance. For example, performance areas for a call centre may include on-call greeting and call closing. Evaluations can be automatically generated for disparate groups by selecting different performance areas. While US 6,604,084 can measure performance of disparate groups in an organisation, it does not evaluate how performance between disparate groups might affect organisational performance or how the outcome of tasks carried out by interdependent groups might be affected by the performance of those groups.
US 5,684,964 is a method and system for monitoring and controlling the performance of an organisation, including an algorithm for selecting variables that relate to an organisation's performance and for constructing an interaction table that relates the performance variables to one another and calculating an efficiency rating using the collected data. US 5,684,964 determines the relative impact of each performance variable on the organisation's efficiency rating for a specified time interval. The function can be repeated for managerial assessment of interactions among performance variables as well as the accuracy of the calculated efficiency rating. While US 5,684,964 examines the interaction of performance variables, it does not examine the interaction/interdependence of stakeholders whose collective and relative performance affects organisational (or overall project) performance and the achievement of success drivers.
However, neither US 6,604,084 nor US 5,684,964 evaluates the inter-relationship between interdependent stakeholders. Further, US 5,684,964 focuses on the performance of an organisation as a whole, not the groups that make up the organisation. In this way, US 5,684,964 suffers the disadvantage that it is unable to offer any granularity on performance across an organisation.
The success of a project (including a business unit, product line or discrete business project) typically involves a number of interdependent stakeholders, and the ultimate outcome (success or otherwise) will depend on the status (e.g. level and quality) of interactions between multiple groups of interdependent stakeholders.
There is a need for a means to evaluate the inter-relationships between multiple groups of interdependent stakeholders and the impact of these relationships on performance across multiple groups. US 2005/0086189 describes systems and methods for evaluating the level of collaboration among members of a team in relation to knowledge-centred collaboration (that is, the collection, storage, sharing of knowledge across groups). However, the systems and methods described in US 2005/0086189 perform the evaluation by presenting to a user (or users), one or more pre-determined topics of concern (e.g. statements concerning team collaboration areas of concern and associated issues). The user(s) agree or disagree with the statements. The selections made by the user(s) are analysed using information in a knowledge database, to set an evaluation value for each area of concern. Therefore, US 2005/0086189 does not measure how well the groups collaborate and it assesses knowledge-centred collaboration on an individual rather than a group basis. US 2005/0086189 relies on a knowledge database to identify problems in sharing, storing or collection of knowledge across groups.
Although US 5,684,964 collects information from a plurality of sources and relates performance variables to one another, it does not examine interactions between different groups of stakeholders. US 2005/0086189 does not examine relationships or the level of collaboration (how groups interact) in complex arrangements of multiple interdependent groups. Rather it provides an evaluation on an individual basis of how well information is shared across groups by reference to a knowledge database.
Neither US 6,604,084 nor US 5,684,964 provides a means for evaluating complex many-to-many relationships between multiple stakeholder groups. In US 6,604,084 a one-to-one relationship exists between the information collected and the performance metrics (e.g. performance evaluation of an individual). In US 5,684,964 a many-to-one relationship exists between the information collected and the performance metrics (e.g. efficiency rating).
There is currently no evaluation means (i.e. tool, system or method) that allows the status of complex many-to-many relationships to be evaluated simultaneously in a single evaluation. There is a need for such an evaluation means since the successful outcome of many processes, alliances and projects relies on alignment (strategic, structural and cultural alignment) across multiple groups.
It is an object of the present invention to provide an improved or alternative evaluation tool, system and method for evaluating the status of relationships between multiple groups of interdependent stakeholders. The status of these relationships impacts on performance between groups in complex arrangements of multiple interdependent groups and the ability to achieve desired project or strategy outcomes.
DETAILED DESCRIPTION
According to an aspect of the invention there is provided an evaluation system for evaluating relationship status between multiple interdependent stakeholder groups in a project, including an evaluation tool comprising:
(a) evaluation collection means for collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation,
wherein the response(s) collected from each participant includes one or more responses that evaluate other interdependent groups with which the participant's group interacts in the project in relation to a set of performance drivers for the project, and
wherein the responses collected from each participant in the evaluation contribute to group trend data for the participant's group
such that the group trend data for the participant's group reflects a collective response from the participant's group in evaluating other interdependent groups in the project;
(b) evaluation processing means for processing responses collected from each participant, wherein the processing of responses includes utilising group trend data from at least two interdependent groups to determine an assessment of a relationship status between the interdependent groups; and (c) evaluation reporting means for generating a report on the processed responses, wherein the report includes a relationship status between at least two interdependent groups
such that the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
According to another aspect of the invention there is provided an evaluation method for evaluating relationship status between multiple, interdependent stakeholder groups including the steps of:
(a) identifying a set of performance drivers for a project;
(b) performing an evaluation in relation to the identified performance drivers by collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation, wherein the response(s) collected from each participant includes one or more responses that evaluate other interdependent groups with which the participant's group interacts in the project in relation to a set of performance drivers for the project, wherein the responses collected from each participant in the evaluation contribute to group trend data for the participant's group such that the group trend data for the participant's group reflects a collective response from the participant's group in evaluating other interdependent groups in the project;
(c) processing responses collected from each participant in each stakeholder group so as to obtain group trend data for each participant's stakeholder group; wherein the processing of responses includes utilising group trend data from at least two interdependent groups to determine an assessment of a relationship status between the interdependent groups; and
(d) generating an evaluation report on the processed responses,
wherein the report includes a relationship status between at least two interdependent groups such that the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
The invention thus provides an evaluation tool, method and system for evaluating, managing and improving stakeholder interactions in a group, which overcomes the disadvantages of earlier evaluation tools by providing a means for evaluating the inter-relationship between interdependent stakeholder groups and the impact of these relationships on performance across a complex arrangement of multiple interdependent stakeholder groups.
For a better understanding of the invention and to show how it may be performed, a preferred embodiment will now be described, by way of non-limiting example only, with reference to the accompanying drawings and examples.
FIGURE 1 is a schematic illustration comparing a one-to-one relationship (A), a many-to-one relationship (B) and a many-to-many relationship (C) between a target (the entity, group or individual being evaluated) and an evaluating party (the entity, group or individual performing the evaluation).
FIGURE 2 is a schematic diagram showing an evaluation tool according to a preferred embodiment of the invention.
FIGURE 3 is a schematic diagram showing part of one embodiment of an evaluation report according to the invention. The evaluation environment 100 is shown at the top of the diagram. The arrows map the relationships between multiple interdependent groups 110. The inset depicts the relationship between two of the interdependent groups A and B, by way of example only. In one embodiment, the evaluation report applies a code (bottom of inset) to the status of the relationship between A and B - the solid line depicting the status as evaluated by A; the dotted line as evaluated by B.
FIGURE 4 is a flowchart showing steps in two embodiments of an evaluation method for evaluating the status of relationships in a complex arrangement of multiple groups. Figure 4A (the steps above the dashed line) shows the steps involved in the preferred embodiment. In an alternative embodiment of the evaluation method, there are two additional steps (shown below the dashed line - Figure 4B).
FIGURE 5 is a schematic illustration of an evaluation system for evaluating the status of relationships in a complex arrangement of multiple groups according to a preferred embodiment of the invention.
Detailed description of preferred embodiments
The invention provides a new or alternative evaluation tool, method and system for use in evaluating, managing and improving stakeholder interactions in a group. A group may include stakeholders that are internal or external to an organisation or business unit, or a combination of internal and external stakeholders.
The preferred embodiments of the evaluation tool, method and system are useful as a means for measuring and managing cultural alignment, or the alignment of values between multiple groups in many contexts - for example:
(a) during mergers and acquisitions;
(b) during organisational restructure;
(c) when commencing major alliances or projects;
(d) when engaging in new relationships;
(e) for refreshing long term relationships; or
(f) when introducing new suppliers.
Evaluation tool
In a preferred embodiment, the evaluation tool provides a means of evaluating the status of relationships (interactions) across a number of inter-relating groups, including in complex interrelationships as depicted in Figure 1C. The number 100 depicts the evaluation "environment" in each of Figures 1A, 1B and 1C.
Figure 2 shows the evaluation tool 120 in a preferred embodiment. The tool 120 includes:
(a) evaluation collection means 130, including a storage means 140 (e.g. server, computer or other processing device), for collecting one or more responses (e.g. through poll, survey or questionnaire answers) from participants, each participant being a member of an inter-dependent group in a project involving multiple interdependent groups;
(b) evaluation processing means 150 for processing responses collected from stakeholders;
(c) evaluation reporting means 160 for generating a report 170 on the processed responses. In one embodiment, the reporting means includes a mapping means, including an algorithm enabled by software and run on any computer-implemented system, for mapping relationship status between multiple stakeholder groups to a visual format such as a graphic (e.g. a relationship map). The reporting means can also include reporting on the status of relationships across complex arrangements of interdependent groups as scores and/or comments; and
(d) communication means 180 for communicating between one or more of the evaluation collection means 130, the evaluation processing means 150, the evaluation reporting means 160, a storage means 140 and a display 190 (e.g. a computer screen or digital display, including a user interface).
Evaluation environment: complexity of evaluating environment
In Figure 1C, the evaluation environment 100 is made up of a series of interdependent groups (each group 110 depicted by a circle), each group evaluating all of the other groups (many-to-many evaluations) in a project. The groups may be cross-functional groups within an organisation working on a project together, or different organisations collaborating on a mutual project. The project can be any joint venture, alliance, a product or product line (including goods or service line), or any other collaboration involving multiple groups.
In a preferred embodiment, as shown in Figure 1C, each group is able to evaluate the other groups that it interacts with and to compare the way it works with each group against other groups within the project. If insufficient responses are provided from, say, participants of one of the interdependent groups (let's call this group A), this does not affect the overall evaluation or the assessments of the remaining groups in the project (since each of the remaining groups will have been evaluated by groups other than A) or of the other groups' evaluations of group A.
In this way, the preferred embodiment contrasts with known evaluation tools, as depicted in Figures 1A and 1B. The preferred embodiment (Figure 1C) enables complex inter-relationships to be evaluated, including the evaluation of multiple inter-dependent groups simultaneously (that is, many-to-many assessments in a single evaluation). The preferred embodiment assesses groups, not individuals, using group trend data obtained by collecting and analysing responses from participants in each group. Thus the group trend data for each group reflects a collective response from the relevant group. Group trend data is obtained for each interdependent stakeholder group in a project.
By contrast, as shown schematically in Figure 1A, many known performance evaluation tools focus on singular relationships between two entities - for example, a single party (e.g. an individual) evaluating a single target (e.g. an individual). A typical example is a single customer evaluating a single supplier (one-to-one evaluation).
Other known evaluation tools, as shown schematically in Figure 1B, may involve feedback from multiple evaluating parties (e.g. multiple customers) of a single target (e.g. the same supplier) - for example, a typical consumer survey or poll eliciting feedback from multiple customers about a specific supplier service (many-to-one evaluation). This is also the typical human resources (HR) model. However, these kinds of tools are limited to evaluating a single target and, as shown in Figures 1A and 1B, are not able to take into account common complex relationships between multiple interacting groups, in which the outcome of a project depends on the interaction across multiple inter-dependent groups. Thus the evaluations provided by known systems such as those depicted in Figures 1A and 1B are confined to artificially simple evaluation environments.
The evaluation tool has the advantage of enabling evaluation in complex environments, thereby more accurately reflecting true to life interactions and how they affect performance.
The evaluation tool has a user interface that is easy to use and customised to the participant and evaluation project details. Participants, administrators and survey managers can track completion.
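By way of illustration only, the four means described above can be pictured as co-operating software components. The sketch below shows one possible composition in Python; the class and attribute names (EvaluationTool, collect, process, report, display) are hypothetical assumptions and not part of the described embodiment.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvaluationTool:
    collect: Callable[[], list]             # evaluation collection means (130)
    process: Callable[[list], dict]         # evaluation processing means (150)
    report: Callable[[dict], str]           # evaluation reporting means (160)
    display: Callable[[str], None] = print  # communication to a display (190)

    def run(self) -> None:
        responses = self.collect()              # gather participant responses
        processed = self.process(responses)     # derive group trend data
        self.display(self.report(processed))    # communicate the report

# Illustrative wiring with placeholder behaviour only.
tool = EvaluationTool(
    collect=lambda: [("B", "A", 40), ("A", "B", 80)],
    process=lambda responses: {"pairs": responses},
    report=lambda data: f"{len(data['pairs'])} group-to-group evaluations processed",
)
tool.run()
```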
Evaluation collection means
Members of stakeholder groups who are providing responses regarding other inter-dependent stakeholder groups in a project do so through use of an evaluation collection means 130 such as a web- or network-enabled application incorporating a questionnaire (whether presented as a series of questions, statements, a poll or a survey).
The questionnaire is provided to each member of a first stakeholder group and assesses the individual members' perception of interactions with the other groups with which the first group interacts on the project. Participants (members of each inter-dependent stakeholder group) providing responses have access to the evaluation collection means through the internet or networked computer.
In the preferred embodiment, the collection means 130 is a relational database containing a survey questionnaire, the database being housed on a server. Access to the survey questionnaire is provided to the participants through unique web pages generated off the server, so access only requires an internet connection and not a server connection. In one arrangement, each participant is provided a unique login, enabling a participant to exit the collection means 130 and return to continue at a later time, as well as enabling on-going evaluation on a periodic basis (e.g. weekly, monthly, quarterly).
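A minimal sketch of how such a collection means might be arranged is given below, assuming a simple relational schema and token-based unique logins; all table, column and function names are illustrative assumptions rather than the schema actually used.

```python
import sqlite3
import secrets

conn = sqlite3.connect(":memory:")  # stand-in for the server-hosted relational database
conn.executescript("""
CREATE TABLE stakeholder_group (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE participant (
    id INTEGER PRIMARY KEY,
    group_id INTEGER REFERENCES stakeholder_group(id),
    name TEXT,
    access_token TEXT UNIQUE        -- supports a unique login / unique survey URL
);
CREATE TABLE question (id INTEGER PRIMARY KEY, driver TEXT, text TEXT);
CREATE TABLE response (
    id INTEGER PRIMARY KEY,
    participant_id INTEGER REFERENCES participant(id),
    target_group_id INTEGER REFERENCES stakeholder_group(id),
    question_id INTEGER REFERENCES question(id),
    score INTEGER,                  -- quantified answer, e.g. on a 1-10 scale
    not_relevant INTEGER DEFAULT 0,
    comment TEXT
);
""")

def register_participant(group_id: int, name: str) -> str:
    """Create a participant record and return the token for their unique survey page."""
    token = secrets.token_urlsafe(16)
    conn.execute(
        "INSERT INTO participant (group_id, name, access_token) VALUES (?, ?, ?)",
        (group_id, name, token),
    )
    return token

conn.execute("INSERT INTO stakeholder_group (id, name) VALUES (1, 'Group A')")
print(register_participant(1, "Participant 1"))   # token embedded in the unique web page
```

In such an arrangement the database sits on the server described above, and each token identifies the participant when they return to continue or to take part in a later periodic evaluation.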
In the preferred embodiment, the communication means includes a user interface that enables participants to provide their responses by moving a visual tool such as a slider on a display or screen to indicate a relative level of agreement with a statement about a target group's performance in relation to a specified performance driver. In this way, participants are enabled to provide at least some responses without the need to enter text. For example, in response to a question framed as "Collaborative behaviour is recognised and rewarded", the participant indicates whether they perceive each of, say seven target groups is performing well in relation to the question posed. The participant registers a response to a question by performing an action in relation to the visual tool. For example, dragging or clicking a slider to register the participant's response. Alternatively, the action could be turning a dial or entering text into a text entry box where the participant enters a response on a scale of, say, 1 to 10 (or other specified scale).
The participant can flag any question or relationship as not relevant. Participants can also provide individual comments in relation to each question or each target group (relationship) being evaluated. The individual comments collected from participants during an evaluation provide real indicators of inter-group alignment (e.g. cultural alignment or alignment of values between multiple groups) and issues to be addressed, since participants tend to provide comments when they evaluate another group's performance in relation to a performance driver as low. Individual comments can be exported to a spreadsheet or table, or as text.
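The sketch below illustrates, under assumed names, how responses registered through different actions (a slider, a dial position or a typed 1-to-10 answer) could be normalised onto a common scale while preserving the "not relevant" flag and any free-text comment; none of these names comes from the actual tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    evaluating_group: str
    target_group: str
    question_id: int
    score: Optional[float]      # None when the question is flagged "not relevant"
    not_relevant: bool = False
    comment: str = ""

def from_slider(position: float, slider_max: float = 100.0) -> float:
    """Slider (or dial) position mapped onto a common 0-100 scale."""
    return max(0.0, min(100.0, 100.0 * position / slider_max))

def from_text_scale(value: int, scale_max: int = 10) -> float:
    """A typed answer on a 1-to-10 (or other specified) scale mapped onto 0-100."""
    if not 1 <= value <= scale_max:
        raise ValueError("answer outside the specified scale")
    return 100.0 * value / scale_max

r1 = Response("B", "A", 7, from_text_scale(4), comment="Deadlines shift too often")
r2 = Response("B", "A", 8, from_slider(72.0))
r3 = Response("B", "A", 9, None, not_relevant=True)
```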
The overall status of group-to-group relationships highlights areas that require attention. True indicators and specific issues can be identified by analysis of detailed participant comments.
Administrators and survey managers are provided direct access to the evaluation collection means 130 (e.g. relational database) to create relational records on the database and manage the survey. In one arrangement, the evaluation tool 120 also includes a survey management dashboard accessible to the survey manager, to set the opening and closing dates of the survey, invite participants, track progress of each participant in each interdependent group within a project, provide reminders, process responses, generate reports or re-run a survey for on-going evaluation on a periodic basis.
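As an illustration of the completion tracking a survey management dashboard might perform, the sketch below computes the percentage of expected answers received from each group; the invited-participant counts and response records are hypothetical.

```python
from collections import defaultdict

def completion_by_group(invited: dict, expected_questions: int, responses: list) -> dict:
    """Percentage of expected answers received from each interdependent group."""
    answered = defaultdict(int)
    for r in responses:
        answered[r["group"]] += 1
    return {
        group: round(100.0 * answered[group] / (count * expected_questions), 1)
        for group, count in invited.items()
    }

progress = completion_by_group(
    invited={"A": 10, "B": 8},
    expected_questions=20,        # e.g. 5 categories x 4 questions, as in Annexure 1
    responses=[{"group": "A", "participant": "p1", "question": 1}] * 30,
)
print(progress)   # e.g. {'A': 15.0, 'B': 0.0} - groups lagging behind can be reminded
```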
Evaluation processing means
The evaluation responses from participants are stored in the storage means 140, such as a database (e.g. a relational database) housed on a server, and processed by the evaluation processing means 150 (e.g. software) for communication to the evaluation reporting means 160.
Assessments are provided by individual members in each stakeholder group, through the evaluation collection means. For example, in a preferred embodiment, up to ten individuals in each of, say, eight different interdependent groups provide responses so as to capture each individual's perceptions and assessments of his or her dealings with each of, say, the other seven interdependent groups' performance in matters that affect the way each of those groups interacts with the participant's group. The participants deliver their individual assessments of other groups in a project by entering their responses into the evaluation collection means, which communicates the evaluations to the evaluation processing means (e.g. an algorithm enabled through software) for processing and analysis and to enable a report to be generated.
The processing means includes programming instructions to perform the step of collating the individual responses from participants and processing them, including by tallying responses to determine a qualitative assessment of the status of a relationship between two or more interdependent groups - based on group trend data. Thus the processing means utilises group trend data from at least two interdependent groups to determine an assessment of relationship status between the interdependent groups (that is, any two of the interdependent groups). It also allows a project "average" score to be calculated, wherein each qualitative response provided by a participant is quantified against a scoring matrix to allow an "average" group response to each performance driver to be calculated. The status is a relative, not empirical, measure since it evaluates multiple groups relative to other groups within the same project.
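A sketch of this collating and tallying step is given below, assuming a hypothetical scoring matrix that quantifies qualitative answers; the per-pair figures produced stand in for the group trend data referred to above, and the project "average" serves as the internal reference score.

```python
from collections import defaultdict
from statistics import mean

SCORING_MATRIX = {           # hypothetical mapping of qualitative answers to scores
    "strongly disagree": 0, "disagree": 25, "neutral": 50,
    "agree": 75, "strongly agree": 100,
}

def group_trend_data(responses):
    """responses: iterable of (evaluating_group, target_group, qualitative_answer)."""
    buckets = defaultdict(list)
    for evaluating, target, answer in responses:
        buckets[(evaluating, target)].append(SCORING_MATRIX[answer])
    # Collective (group-level) figure for each evaluating-group/target-group pair.
    trends = {pair: mean(scores) for pair, scores in buckets.items()}
    # Project "average" used as the reference for "above"/"below" average.
    project_average = mean(trends.values())
    return trends, project_average

trends, avg = group_trend_data([
    ("B", "A", "disagree"), ("B", "A", "neutral"),
    ("A", "B", "agree"),    ("C", "A", "disagree"),
])
print(trends, avg)
```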
According to a preferred embodiment, the "status" of a relationship refers to how well various stakeholder groups are working together at any point in time and is relative to the other groups in the same project. In other words, the status is a relative, not an empirical, measure. The evaluations are relative measures within the project environment and referenced against an average status "score" calculated from the average responses obtained to a single questionnaire distributed as part of the relevant evaluation.
For example, status can be categorised using a relevant quality descriptor such as "working well" (strong), "needs attention" (intermediary) or "needs urgent attention" (weak). Alternatively, it can be expressed by reference to an "average" score that is calculated by collating and analysing group trend data for all groups involved in a project. The status of each of the interdependent groups can thus be evaluated as "above average", "average" or "below average" where "average" is the average performance of all groups within a project.
A measure of status is calculated by compiling and processing group trend evaluation(s) of interactions between multiple interdependent groups in a specified project. A report of status is provided using a relevant quality descriptor or relative score such as "below average". The report also provides visual identification of a degree of alignment or misalignment across stakeholder groups in a project (see Example 1).
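Continuing in the same illustrative vein, a relative status can be derived by comparing a group-to-group trend score with the project average; the tolerance band and descriptor labels below are assumptions chosen to mirror the descriptors mentioned above, not values prescribed by the tool.

```python
def relationship_status(trend_score: float, project_average: float,
                        band: float = 5.0) -> str:
    """Relative (not empirical) status of a group-to-group relationship."""
    if trend_score >= project_average + band:
        return "working well"            # above the project average
    if trend_score <= project_average - band:
        return "needs urgent attention"  # below the project average
    return "needs attention"             # around the project average

print(relationship_status(58.0, 65.1))   # -> "needs urgent attention"
print(relationship_status(66.0, 65.1))   # -> "needs attention"
```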
An example of how the preferred embodiments can be used as a means to measure and manage inter-group alignment is provided below.
EXAMPLE 1: identification of misalignment among stakeholder groups
A pilot evaluation was performed on a project involving four interdependent groups:
(a) a manufacturer (group A);
(b) dealers appointed by the manufacturer to distribute A's products, say in Australia (collectively, group B);
(c) the manufacturer's main advertising agency (group C); and
(d) the manufacturer's public relations agency (group D).
A first evaluation indicated all groups perceived groups B, C and D as performing "above average" in relation to a pre-agreed set of 20 performance drivers. However, each of groups B, C and D evaluated group A as performing "below average". Therefore, the perception within the project environment is that group A is not aligned with the rest of the project groups. This is despite group A perceiving all the other groups as performing well in relation to its interactions with each of the other groups. The graphic evaluation report indicates that group A is the common element in relation to issues identified by the evaluation that need to be addressed.
The survey manager has decided to re-run the evaluation in six weeks, giving time to develop plans and implement solutions to address issues and encourage certain behaviours by group A. The same drivers will be evaluated again to determine if the plans and implementation have had the desired result. On-going evaluation also allows the average "score" (across all groups) to be recorded. Tracking the project average score over time indicates whether the status of relationships within a project is improving or deteriorating.
Evaluation reporting means
In a preferred embodiment, the evaluation reporting means 160 receives and collates processed responses and generates an evaluation report 170. The evaluation report 170 provides an overall result ("status" of interdependent relationships) and/or group-to-group evaluation, along with a breakdown of results by question.
In one embodiment, the status is reported visually in a graphic format, as scores and/or as text, via a mapping means. The mapping means maps the interrelationship between two or more interdependent groups to a visual format (e.g. a relationship map). In this embodiment, the evaluation reporting means 160 enables the "status" of relationships between inter-dependent stakeholder groups to be reported visually so that the overall "status" of the various interdependent relationships between multiple groups (say, up to eight defined groups) is provided, including a question-by-question, group-by-group breakdown for comparison between different stakeholder groups. This is achieved by taking responses from participants and processing the responses to determine the quality of the status of the relationship between any two interdependent groups in a complex arrangement of multiple groups (e.g. up to eight groups). The status is visually coded (e.g. colour-coded or otherwise visually coded) to correspond with a status descriptor (e.g. "below average") so that a graphical report can be generated that:
(a) maps the inter-relationships between groups (which group interacts with which other groups) to a visual format (e.g. a relationship map); and then
(b) applies a code (e.g. a colour code or other visual code) to the relationship map to indicate the status of the relationship between any two interdependent (e.g. inter-relating) groups.
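One possible realisation of mapping steps (a) and (b) is sketched below: each directed evaluation (A's view of B and B's view of A) becomes a coloured edge in a Graphviz DOT description. The colour scheme echoes the RED/YELLOW/GREEN coding used in the sample report in Annexure 1, but the function and its inputs are illustrative assumptions only.

```python
STATUS_COLOUR = {            # illustrative coding, mirroring the sample report
    "below average": "red",
    "above average": "yellow",
    "upper quartile": "green",
}

def relationship_map(statuses: dict) -> str:
    """statuses maps (evaluating_group, target_group) -> status descriptor."""
    lines = ["digraph evaluation {"]
    for (evaluating, target), status in statuses.items():
        colour = STATUS_COLOUR[status]
        lines.append(f'  "{evaluating}" -> "{target}" [color={colour}, label="{status}"];')
    lines.append("}")
    return "\n".join(lines)

print(relationship_map({
    ("A", "B"): "above average",
    ("B", "A"): "below average",   # the two directions can differ, as in Figure 3
}))
```

The resulting DOT text can be rendered with any Graphviz-compatible viewer to produce a map of the kind shown at the top of Figure 3.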
The evaluation reporting means 160 is able to report relationship status from the perspective of each group. For example, referring to Figure 3, imagine that each of the circles 110 in Figure 3 is labelled A to H, representing eight different interdependent groups in a project. Every member of group A evaluates each of groups B to H. Similarly, every member of group B evaluates each of groups A and C to H, and so on throughout the eight groups. Thus, while the project is common to the evaluation, the target of the evaluation is how each group interacts with each of the other groups it interacts with. In other words, each group is itself the target of evaluation by (an)other group(s) in the project. Thus the evaluation tool enables plural groups to evaluate multiple targets in a single evaluation.
The evaluation reporting means 160 is able to report the "status" of the relationship between any two interdependent stakeholder groups visually and from the perspective of each group to provide an indication of the relationship between any two interdependent groups. Referring again to Figure 3, the status of the relationship between any two of the groups illustrated - let's refer to them as groups A and B - has been evaluated by the individual members of both groups A and B. This enables group trend data for each group to be obtained from the collective evaluations of members of each stakeholder group. The evaluation environment 100 is shown at the top of the schematic diagram in Figure 3. Each of the circles 110 represents a different stakeholder group. The arrows map the relationships between multiple interdependent groups 110. The inset depicts the relationship between two of the interdependent groups A and B, by way of example only. In one embodiment, the evaluation reporting means applies a code (e.g. as shown at the bottom of the inset, Figure 3) to the status of the relationship between A and B - here, the solid line depicting the relationship status as evaluated by A, the dotted line showing the status as evaluated by B.
Thus Figure 3 illustrates how a relationship status between two groups (A, B) is reported in a graphical format, the graphic demonstrating how the status can be perceived differently by each party - thus reported from the perspective of each of group A and group B. Therefore, the final relationship "status" between A and B may be recorded as "working well" by A (because A delivers output to B and does not require B to provide anything first). However, the members of group B might collectively assess their interaction with group A as "needs attention" because A tends, for example, not to adhere to task priorities or be accessible for progress meetings or phone calls.
In this way, the evaluation reporting means allows assessment of interactions between multiple interdependent groups in a project, as well as comparison of interactions across the groups. For example, stakeholder group A might ordinarily be assessed as meeting all of its KPIs and success drivers. However, its interactions with stakeholder B might be poor and the client has specified that A and B work closely together towards a desired outcome. The success of that outcome therefore depends not only on A's ability to perform but also on A's ability to interact with B on the mutual project.
In one embodiment, the measure of relationship status between stakeholders is reported visually (by visual coding to a relationship "map", such as the map depicted at the top of Figure 3) so that visual comparison of the status of relationships between various stakeholders can be readily made and relative weaknesses/strengths readily identified. In other embodiments, the evaluation reporting means can also export individual comments to a spreadsheet or table, or as text, for review and optional inclusion in the evaluation report.
Communication means
The communication means 180 are capable of communicating between the collection means 130 and one or more of:
(a) the evaluation processing means 150;
(b) the storage means 140;
(c) the evaluation reporting means 160;
(d) a display 190.
The communication means may be common between the above-listed components of the evaluation tool, or each component may have its own communication means. The only requirement is that each of the components is able to communicate with one or more of the other parts of the evaluation tool.
Evaluation method
The invention also provides an evaluation method 190 for evaluating the status of relationships between multiple interdependent stakeholder groups in a project.
In a preferred embodiment, the method (see Figure 4) includes the steps of:
a. identifying a set of performance drivers (e.g. success drivers and/or barrier issues) for a project as well as defining groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the project outcome);
b. performing an evaluation in which participants evaluate the performance of inter-dependent groups in relation to the identified performance drivers, each participant being a member of an interdependent group in a project involving multiple interdependent groups;
c. processing responses provided by participants so as to identify group trends and obtain trend data; and
d. generating an evaluation report on the "status" of relationships between multiple stakeholder groups based on processed responses and group trend data.
In an alternative embodiment, the evaluation method includes two further steps of:
e. reviewing and prioritising the issues to be addressed based on specified criteria (e.g. value, frequency, disruption);
f. performing an on-going evaluation on a periodic basis (e.g. weekly, fortnightly, monthly, six-weekly, quarterly) to assess improvements or declines in performance.
Identifying performance drivers for a project
In a preferred embodiment, the evaluation method includes the step of identifying performance drivers for a project as the initial step of a project establishment phase. This involves the substeps of defining the scope of the project, including participants, key stakeholder groups, process and timelines. This step and its substeps may be performed or overseen by a survey manager or any other person who wants to have the evaluation performed and is interested in the results.
The performance drivers can be determined by workshop or interviews so that:
(a) issues to be evaluated can be identified, explored and/or agreed;
(b) a draft of customised questions for a questionnaire can be prepared and agreed;
(c) steps and timing can be agreed.
Examples include how well interdependent groups encourage and accept feedback, whether interdependent groups consider all aspects of a problem or issue, and how actively groups invite participation from other interdependent groups. A further substep in this project establishment phase is to prepare a customised question panel for use in evaluating and measuring the identified performance drivers, using the evaluation tool and system described.
A final substep in the project establishment phase is to identify the groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the desired project outcome(s)) who need to be included in the evaluation and the individual participants making up those groups.
Performing an evaluation
In a preferred embodiment, the evaluation method involves performing an evaluation of the status of relationships between multiple groups of interdependent stakeholders. The way the evaluation is performed is described in the description of the evaluation tool - namely, participants making up each stakeholder group assess the other stakeholder groups in a project. The collective responses are used to identify trend data for each stakeholder group.
Individual responses from participants are obtained using the evaluation tool (e.g. in the form of survey, questionnaire, or poll answers). This collective information reveals the "status" of a relationship between two or more inter-dependent stakeholder groups by indicating whether each group in a project is performing at, above or below the average "score" for performance (in relation to a specific performance driver - say, encouraging and accepting feedback).
Collating and processing participant evaluations
In a preferred embodiment, the evaluation method includes the step of collating and processing the individual responses provided by members of the stakeholder groups so as to identify group trends and obtain trend data. Collating and processing responses is performed by the evaluation tool (specifically, the evaluation processing means as described earlier in this document). For example, referring to Figure 3, there are, say, 10 members of group A who each evaluate their interactions with each of groups B to H. Similarly, the members of group B evaluate their interactions with each of groups A and C to H - and so on throughout the remaining groups.
Collating and analysing the collective responses from the members of each group reveals group trends and trend data. For example, an "average" score of the performance of each group can be quantified using the collated evaluations and a scoring matrix to convert a qualitative evaluation to a corresponding score. Similarly, an overall "average" of interactions across the multiple groups involved in a project can be calculated, for use as a reference value to determine whether each group (or a number of groups) is performing "above" or "below" average compared with the rest of the groups in that project. Alternatively, an external reference can be used to determine whether all of the groups are performing "above" or "below" that external reference (e.g. an industry standard, a desired performance indicator or another pre-determined reference).
In this way, the collective group responses reveal the "status" of relationships (e.g. level of collaboration) between multiple interdependent stakeholder groups working together on a project and any misalignment between groups (including identifying with which group or groups the misalignment resides).
Generating an evaluation report
In a preferred embodiment, the evaluation method involves the step of generating an evaluation report on the "status" of relationships between multiple stakeholder groups based on processed evaluations and group trend data.
The report is generated via a reporting means (part of the evaluation tool and system). The reporting means is as described in the description of the evaluation tool and includes a mapping means, including an algorithm enabled by software and run on any computer-implemented system, for mapping relationship status between multiple stakeholder groups to a visual format such as a graphic. The reporting means can also include the status of relationships across multiple interdependent groups as scores and/or comments (text).
Referring to Figure 2, the evaluation reporting means 160 receives and collates processed evaluations and generates an evaluation report 170. The evaluation reporting means 160 enables the "status" of relationships between interdependent stakeholder groups to be reported visually so that the overall "status" of the various interdependent relationships between multiple groups (say, up to eight defined groups) is provided, including a question-by-question, group-by-group breakdown for comparison between different stakeholder groups.
The status is visually coded (e.g. colour-coded or otherwise visually coded) to correspond with a status descriptor (e.g. "above average") so that a graphical report 170 can be generated that:
(a) maps the inter-relationships between groups (which group interacts with which other groups); and then
(b) applies a code (e.g. a colour code or other visual code) to the relationship map to indicate the status of the relationship between any two interrelating groups.
Additional steps
In another embodiment of the evaluation method, there are two additional steps of:
(a) reviewing and prioritising the issues to be addressed; and
(b) performing an on-going evaluation on a periodic basis (e.g. weekly, fortnightly, monthly, six-weekly, quarterly) to assess improvements or declines in the status of relationships.
After evaluating a current status of relationships (say, by performing a first evaluation of multiple groups of stakeholders), any issues identified are reviewed to establish whether they need to be addressed. Issues to be addressed (e.g. as reported or presented with independent analysis and interpretation of the evaluation) are then prioritised to focus resources. Prioritisation may be based, for example, on value, frequency and disruption to the mutual project being undertaken by the groups evaluated.
Performing one or more follow-up evaluations over time allows status to be recorded over time, including tracking of "average" status "scores" and whether the average is trending up or down over time, as well as whether each group's performance relative to the average is also trending up or down. This is useful in clearly identifying improvements against performance drivers addressed in the evaluation project implementation plan.
It is also very useful in setting target alignment scores, which can be used as a proactive means for driving change, say, in cultural alignment during organisational restructure such as a merger or acquisition, or a major alliance or joint venture project.
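A small sketch of this on-going tracking is given below, assuming nothing more than the sequence of project average "scores" recorded at each evaluation round and an optional target alignment score; the figures shown are invented for illustration.

```python
def trend(average_scores: list, target: float = None) -> str:
    """Direction of travel of the project average across successive evaluations."""
    if len(average_scores) < 2:
        return "insufficient history"
    latest, previous = average_scores[-1], average_scores[-2]
    if latest > previous:
        direction = "improving"
    elif latest < previous:
        direction = "deteriorating"
    else:
        direction = "steady"
    if target is not None and latest >= target:
        direction += " (target alignment score reached)"
    return direction

print(trend([61.2, 63.0, 65.1], target=70.0))   # -> "improving"
```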
Evaluation system
The invention also provides an evaluation system 200 for evaluating the status of relationships between multiple interdependent groups of stakeholders in a project (see Figure 5).
In a preferred embodiment (Figure 5), the evaluation system 200 includes:
(a) an evaluation tool (as described earlier in this document), including:
i. an evaluation collection means 130 (e.g. a database or other information storage means) including a storage means 140 (e.g. server, computer or other processing device), for collecting responses (e.g. through poll, survey or questionnaire answers) from participants, each participant being a member of an inter-dependent group in a project involving multiple interdependent groups of stakeholders;
ii. evaluation processing means (e.g. software) for processing responses collected from stakeholders;
iii. evaluation reporting means (e.g. software) for generating a report 170 on the processed responses, including reporting on the status of relationships across complex arrangements of inter-dependent groups; and
iv. communication means 180 for communicating between one or more of the evaluation collection means 130, the evaluation processing means 150, the evaluation reporting means 160, a storage means 140 and a display 190 (e.g. a computer screen or digital display, including a user interface);
(b) an evaluation method (as described earlier in this document) performed by the evaluation tool, including the steps of:
i. identifying a set of performance drivers (e.g. success drivers and/or barrier issues) for a project as well as defining groups of interrelating stakeholders for the project (that is, the groups that need to interact to deliver the project outcome);
ii. performing an evaluation in which participants evaluate the performance of inter-dependent stakeholder groups in relation to the identified performance drivers, each participant being a member of an interdependent group in a project involving multiple interdependent groups;
iii. processing responses provided by participants so as to identify group trends and obtain trend data; and
iv. generating an evaluation report on the "status" of relationships between multiple stakeholder groups based on processed evaluations and group trend data;
(c) an administrator access means 210 (e.g. a computer with direct access to the evaluation collection means 130 through the storage means 140, and/or a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing multidirectional access to the evaluation system to an administrator of the evaluation system;
(d) a manager access means 220 (e.g. a computer with direct access to the evaluation collection means 130 through the storage means 140, and/or a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing multidirectional access to the evaluation system to a manager, who can manage all aspects of the evaluation system, including generating reports; and
(e) a participant access means 230 (e.g. a computer with access to the evaluation collection means 130 through the internet or the cloud) for providing unidirectional access to the evaluation system to a participant (i.e. individual member of a stakeholder group providing evaluation of other stakeholder groups in a specific project).
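The different access means can be pictured as role-based permissions, with administrators and managers given multidirectional (read and write) access and participants restricted to submitting responses. The role names and permission sets below are assumptions for illustration only.

```python
PERMISSIONS = {
    "administrator": {"read", "write", "configure"},   # multidirectional access
    "manager":       {"read", "write", "report"},      # multidirectional access
    "participant":   {"submit"},                       # unidirectional access
}

def authorise(role: str, action: str) -> bool:
    """True if the given role is permitted to perform the action."""
    return action in PERMISSIONS.get(role, set())

assert authorise("manager", "report")
assert not authorise("participant", "read")   # participants only submit responses
```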
EXAMPLE 2: sample evaluation
A specific example of an evaluation as performed using the evaluation tool, method and system, and including an evaluation report in graphical format, is included as Annexure 1.
EXAMPLE 3: means for measuring and managing cultural alignment during a merger
A specific example of an application of the preferred embodiments to the context of a merger is provided below.
Company X and company Y decide to merge to implement a strategy of market domination in a particular sector of fast moving consumer goods. This is the strategy. The structure for giving effect to the strategy is company Z, which will be formed by the merger of X and Y.
A root cause of failed mergers is a misalignment in the cultures (or values) of the merging entities. The values of Z will not equal the sum of the values of X and Y.
The preferred embodiments will be piloted as a means to agree values for Z and to manage value alignment across all of the business units making up the merged entity. When values are aligned, collaboration follows. Therefore, an initial evaluation will be performed prior to the proposed merger, focusing on the eight business units that management consider to be key to the successful outcome of the merger. The initial evaluation will evaluate a set of pre-agreed performance drivers for the target business units.
Follow-up evaluations will be performed every four to six weeks for the first six months after the merger. This will allow tracking of the average over time as a form of sentiment index regarding the status of relationships across the target groups. Review of individual comments (which can be exported as a spreadsheet, table or text) will provide real indicators of performance issues to be addressed. These measures, in addition to the status scores for each group obtained in the first evaluation, will allow solutions to be developed and implemented to address the issues identified that require attention.
The follow-up evaluations will assess whether the plans and implementation have had the desired effect, and identify ways to modify the plans and implementation, if necessary.
An advantage of the preferred embodiments of the evaluation tool, method and system is that they enable the evaluation of many-to-many relationships in complex arrangements of interdependent groups, all in a single evaluation. This includes many-to-many relationships within an organisation or between organisations.
A further advantage is that the preferred embodiments can also provide means to manage multiple relationships by identifying issues that will enable fostering of collaboration and co-operation, encouraging alignment to objectives and values, and optimising communication and performance. Yet another advantage is that the preferred embodiments are a means for measuring and managing cultural alignment, or the alignment of values between multiple groups and therefore the preferred embodiments are useful in many contexts - for example:
(a) during mergers and acquisitions;
(b) during organisational restructure;
(c) when commencing major alliances or projects;
(d) when engaging in new relationships;
(e) for refreshing long term relationships; or
(f) when introducing new suppliers.
The invention thus provides an evaluation tool, method and system for use in evaluating the status of relationships between multiple interdependent groups in a project and has broad application across a range of diverse business contexts. However, it will be appreciated that the invention is not restricted to these particular fields of use and that it is not limited to the particular embodiments or applications described herein.
Annexure 1
Example 2: sample evaluation
Start: Sun-XX-Jul-XXXX
End: Sat-XX-Aug-XXXX
People: 24
Complete: 95.6%
Average Score: 65.1
Categories & Questions
5 categories:
Planning
Time Management
Cross Functional Collaboration
Budget Management
Production Management
4 questions per category = 20 questions
Client:
Technology services client with significant retail and direct response focus
Participants from the Marketing Communications Team
Agency 1:
Recently appointed independent creative agency to execute brand / communications strategy through primarily offer based marketing
Strong services industry experience with many similar retail clients
Agency 2:
Incumbent design / print agency with long history with the client
Responsible for designing brand / corporate identity
Develops and produces all print collateral including retail and media
Agency 3:
Digital agency appointed 12 months earlier on a project basis
Agency 2 & 3 owned by same holding company
Overall Results
The scores are set as:
RED = Below the survey average
YELLOW = Above the survey average
GREEN = Upper survey quartile
Agency 1 and Client scored the lowest overall score
Agency 3 score from Client is on the survey average
Agency 3 scored the highest overall score from Agency 2
Planning
Agency 1 said of Client
We are involved once marketing planning have decided on tactical execution. We should be more involved in the strategic development to further assist in achieving a stronger brand proposition.
Being involved earlier would allow for more creative options and solutions.
We should be involved earlier from a strategic point of view which allows for better planning and better execution.
Agency 2 said of Client
If we could be involved earlier - even just more of a heads up, we'd be able to deliver much better creative.
Planning (cont'd)
Client said of Agency 1
Will often take feedback and will only discuss clarification if next round is off.
Client said of Agency 2
Overall communication is good. Clarification sought very early on.
Agency 2 are great in coming over to the office or calling if they don't understand a brief or need more detail.
Agency 1 said of Client
Client encourage questions and discussions, however, the quality of feedback / solidity of feedback is too variable and subject to change.
There is little conviction in strategy / path to execution.
Feedback/debriefs needs to be clearer and consistent, involving all decision makers.
Yes they encourage discussion but the feedback is not consistent, unified or clear.
Agency 2 said of Agency 1
Relationship too new to tell
Planning (cont'd)
Client said
I feel across the board with the agencies that we sometimes lack the understanding or insights behind the creative. This is an area to be improved.
Agency 1 said of Client
The psychographic segmentation of the market - usage, shopping behavior, path to purchase analysis - could be better and more focused in briefings.
Information provided is not of a high quality, relevance and often not considered in terms of the deliverables. Too much information when we don't need it and too little when we do.
Relevant information is drip fed and not consistent and this affects timings and workflow.
Agency 2 said of Client
It would be incredibly helpful if more information could be provided with briefs for us to work with - particularly for things that are heavily copy based (e.g. DM, catalogues and brochures). Too often we are hunting through old pieces to try and find relevant content.
Planning (cont'd)
Client said of Agency 1
I feel the feedback is often negative as opposed to finding a positive solution
Client said of Agency 2
Agency 2 consistently gives knowledgeable feedback on all areas of comms (including TV) Using their experience of working with us for a number of years. They work well with the other agencies to share previous learnings.
Agency 1 said of Client
Feedback could be less creatively subjective and more consistent.
Feedback is prompt, usually non-constructive nor provides a clear direction. It tends to be subjective from personal viewpoints rather than what will appeal/work for the target audience.
The feedback is prompt but it is not considered, clear and definitive.
Agency 2 said of Client
Always prompt. Not always valuable.
This depends on where and from whom the feedback is coming from. Sometimes, teams aren't aligned in their feedback and one person says one thing and another says another.
Planning - Overview
2. Gets us involved early enough in the planning process
All parties scored Agency 1 below average and likewise Agency 1 scored all other parties below averages.
Agency 1 and Agency 2 scored Client below and above average respectively but both feel that being involved earlier would improve the creative product. It could be that Agency 2 with their longer association have lower expectations.

7. Encourages questions and discussion to clarify requirements
Generally all parties scored above average, except Client and Agency 3's score of Agency 1.
Client see Agency 2 as good at discussing and understanding needs up-front and would like this to happen more up-front with Agency 1 and not just when there are problems.
However, Agency 1 is finding feedback from Client as inconsistent.

12. Provides all relevant information and insights required
This is the area of poorest performance for Client, which is even acknowledged by someone at Client (should be addressed).

17. Provides prompt and valuable feedback
All agencies scored Client below average for feedback, finding them inconsistent, subjective and lacking direction.
Client has marked Agency 1 below average and comment their feedback is "negative". Unlike Agency 2, who they acknowledge "understand" their business.
The differences here could be driven again by the fact that Agency 2 has lower expectations through experience, while Agency 1, as the new agency, is driven by higher expectations.
Time Management
Client said of Agency 2
It does vary on each project. I feel sometimes that if its not a priority for Agency 2 then I need to chase constantly, but when they know that we have to get some done super urgent, they are fairly good. I think it would help to know the expected timelines, I always ask now when I will hear back from them.
We work very fast and it is important to be able to get hold of Agency 2 quickly when anything changes. Agency 2 responds quickly but it is often hard to get hold of them.
Agency 1 said of Client
Timelines are too short to allow for delayed responses. This does not harbour an environment to achieve the best result.
Agency 2 said of Client
Sometimes it is difficult to get hold of Client for availability of meetings or responses. This is only a reflection of how busy she is, but sometimes her team can't give the feedback and we do need to hear directly from her.
Time Management (cont'd)
Client said
Regular WIPs ensure this happens.
Client said of Agency 2
Sometimes it is difficult to get realistic timelines. I feel that we spend a lot of time chasing things after the due time as passed.
Agency 1 said of Client
Updates are timely, however, they are reactive, prone to change and do not allow time for agencies to respond in an appropriate or considered manner.
Updates are timely but they don't allow enough time to respond.
Yes updates are timely. But the updates are not concrete or there is no conviction in a decision.
Time Management (cont'd)
Client said of Agency 1
Usually meet the overall deadline, but management along the way is often too loose.
Client said of Agency 2
Quick turnaround and often delivered early.
Client said of Agency 3
Only worked on a few projects with them. But each were scoped out and delivered on time.
Agency 1 said of Client
Deadlines are met out of necessity rather than being considered. Re-prioritisation is constantly required to meet deadlines required by senior stakeholders without taking into account what is required to meet this.
Deadlines are often a moving target
Deadlines seem to shift as/when it is convenient.
Agency 2 said of Client
Sometimes, there is a lot of filtering of the concepts as they travel up the hierarchy which means when people at the top see the project, it has changed entirely and may also be off-brief or outside what they had in mind. Therefore it's back to the drawing board and precious time may have been wasted in between, which means deadlines are out.
Time Management (cont'd)
Client said of Agency 1
They will meet the end timeline, but it's very loose along the way and one is left feeling a little uncomfortable that all is in hand.
Agency 1 said of Client
Timelines are set without consultation of the agency/agencies. Milestones are set to suit senior exec approvals rather than the dependencies of the project. This renders them ineffective.
Timelines are set without enough involvement from the agency on what can be achieved in the timeframe.
Timelines are set but without agency consideration.
Time Management - Overview
3. Responds promptly to requests
All parties scored Agency 1 below average, yet there are no comments, especially from Client, to indicate the circumstances. There could be several reasons for this and it would be worth discussing expectations and reasons.

8. Provides timely updates to progress
Again all parties scored Agency 1 below average and Client scored Agency 2 below average.
Client generally rely on regular WIP meetings to keep up to date, however there is concern to get realistic timelines from Agency 2.
Agency 1 feel that while Client provides prompt updates, like the feedback from Client, it can be inconsistent.

13. Regularly meets deadlines
Client is generally happy with the way the agencies deliver on deadlines.
However all agencies scored Client below average.
Again, there is a feeling that deadlines are inconsistent and poorly considered and in some cases could be less tight with some improved planning and process, especially in the approval process.

18. Sets and manages timelines effectively
Client is generally happy with the way agencies deliver on deadlines, however Agency 1 is scored below average with a comment that while they deliver, the process is considered loose.
Again all agencies scored Client below average.
Budget Management
Client said of Agency 1
Often struggle to get quotes on time
Client said of Agency 2
I think that Agency 2 need to get in the habit of letting us know up front if there is going to be an additional cost for jobs outside the retainer. We seem to get presented something and pass that on to other areas of the business and then get a quote for images etc. However they are getting better with this, as we are also getting better at asking upfront.
Agency 1 said of Client
Very demanding of the value they receive. Budgets are too small to deliver on the expectations of the business.
The budgets expect a lot for their size. It can put a strain on suppliers and there is a risk of burnout.
Budgets are very demanding and can put a strain on suppliers.
Budget Management (cont'd)
Agency 2 said of Client
Budgets can oscillate significantly for primary campaigns and are agreed late in the process for secondary campaigns, this means we don't have time to explore solutions that fall out of the normal marketing mediums (press/outdoor/store/online).
There has never been a budget provided for any job. We always have to fight to buy anything. The image library is incredibly limited, thus limiting the creative possibilities. There is only so much that can be done with graphics devices.

Claims

1. An evaluation tool for evaluating relationship status between multiple, interdependent stakeholder groups in a project, the evaluation tool comprising:
(a) evaluation collection means for collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation;
(b) evaluation processing means for processing responses collected from each participant; and
(c) evaluation reporting means for generating a report on the processed responses,
wherein the evaluation tool enables each group in the project to evaluate one or more other groups with which it interacts in the project such that each group itself is a target of evaluation by one or more other groups, thereby enabling multiple targets to be evaluated in a single evaluation to allow assessment of relationship status between any two or more of the interdependent groups.
2. An evaluation system for evaluating relationship status between multiple, interdependent stakeholder groups in a project, including an evaluation tool comprising:
(a) evaluation collection means for collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation, wherein the response(s) collected from each participant includes one or more responses that evaluate other interdependent groups with which the participant's group interacts in the project in relation to a set of performance drivers for the project, and
wherein the responses collected from each participant in the evaluation contribute to group trend data for the participant's group
such that the group trend data for the participant's group reflects a collective response from the participant's group in evaluating other interdependent groups in the project;
(b) evaluation processing means for processing responses collected from each participant, wherein the processing of responses includes utilising group trend data from at least two interdependent groups to determine an assessment of a relationship status between the interdependent groups; and
(c) evaluation reporting means for generating a report on the processed responses, wherein the report includes a relationship status between at least two interdependent groups
such that the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
3. An evaluation system according to claim 1, wherein the reporting means includes mapping means for mapping a relationship status between any two or more interdependent groups to a visual format such that the reporting means is able to provide a visual comparison of the status of relationships between multiple interdependent groups.
4. An evaluation system according to claim 1 or claim 2 further including:
(a) administrator access means for providing multidirectional access to the evaluation system to an administrator of the evaluation system;
(b) manager access means for providing multidirectional access to the evaluation system to a manager, who can manage all aspects of the evaluation system, including generating reports; and
(c) participant access means for providing unidirectional access to the evaluation system to a participant.
5. An evaluation method for evaluating relationship status between multiple, interdependent stakeholder groups in a project, the method including the steps of:
(a) identifying a set of performance drivers for a project;
(b) performing an evaluation in relation to the identified performance drivers by collecting one or more responses from at least one member of each of at least two interdependent groups in a project, each member being a participant in an evaluation, wherein the response(s) collected from each participant includes one or more responses that evaluate other interdependent groups with which the participant's group interacts in the project in relation to a set of performance drivers for the project, wherein the responses collected from each participant in the evaluation contribute to group trend data for the participant's group such that the group trend data for the participant's group reflects a collective response from the participant's group in evaluating other interdependent groups in the project;
(c) processing responses collected from each participant in each stakeholder group so as to obtain group trend data for each participant's stakeholder group; wherein the processing of responses includes utilising group trend data from at least two interdependent groups to determine an assessment of a relationship status between the interdependent groups; and
(d) generating an evaluation report on the processed responses,
wherein the report includes a relationship status between at least two interdependent groups such that the evaluation tool enables multiple interdependent groups in a project to evaluate each other in relation to an identified set of performance drivers for the project, the respective evaluations of each group being provided by a collective response of each group based on group trend data and an analysis of group trend data across multiple interdependent groups thereby enabling an assessment of relationship status between said interdependent groups.
6. An evaluation method according to claim 4, wherein the step of generating an evaluation report includes mapping a relationship status between any two or more interdependent groups to a visual format such that the evaluation method is able to provide a visual comparison of the status of relationships between multiple interdependent groups.
7. An evaluation tool substantially as hereinbefore described by reference to accompanying Figure 2 to Figure 5.
8. An evaluation tool substantially as hereinbefore described by reference to the accompanying examples.
PCT/AU2010/001249 2009-11-23 2010-09-23 Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups WO2011060480A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2009905712A AU2009905712A0 (en) 2009-11-23 Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups
AU2009905712 2009-11-23
AU2009101300A AU2009101300B9 (en) 2009-11-23 2009-12-21 Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups
AU2009101300 2009-12-21

Publications (1)

Publication Number Publication Date
WO2011060480A1 true WO2011060480A1 (en) 2011-05-26

Family

ID=41664381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2010/001249 WO2011060480A1 (en) 2009-11-23 2010-09-23 Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups

Country Status (2)

Country Link
AU (1) AU2009101300B9 (en)
WO (1) WO2011060480A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001084723A2 (en) * 2000-04-28 2001-11-08 Ubs Ag Performance measurement and management
US20050080655A1 (en) * 2003-10-09 2005-04-14 Sengir Gulcin H. System and model for performance value based collaborative relationships
US7383155B2 (en) * 2005-03-11 2008-06-03 Ian Mark Rosam Performance analysis and assessment tool and method
US20090089154A1 (en) * 2007-09-29 2009-04-02 Dion Kenneth W System, method and computer product for implementing a 360 degree critical evaluator

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017145765A1 (en) * 2016-02-22 2017-08-31 株式会社Visits Works Online test method and online test server for evaluating creativity for ideas
JP6249466B1 (en) * 2016-02-22 2017-12-20 VISITS Technologies株式会社 Online test method and online test server for evaluating idea creativity
CN107735829A (en) * 2016-02-22 2018-02-23 Visits科技株式会社 For evaluating the online testing method and online testing server of conception creativity
US10943500B2 (en) 2016-02-22 2021-03-09 Visits Technologies Inc. Method of online test and online test server for evaluating idea creating skills
US11636774B2 (en) 2019-01-21 2023-04-25 Visits Technologies Inc. Problem collection/evaluation method, proposed solution collection/evaluation method, server for problem collection/evaluation, server for proposed solution collection/evaluation, and server for collection/evaluation of problem and proposed solution thereto

Also Published As

Publication number Publication date
AU2009101300A4 (en) 2010-02-11
AU2009101300B9 (en) 2010-07-29
AU2009101300B4 (en) 2010-06-17

Similar Documents

Publication Publication Date Title
Bukh et al. Constructing intellectual capital statements
Sugahara et al. Value creation in management accounting and strategic management: An integrated approach
Mohamad The structural relationships between corporate culture, ICT diffusion innovation, corporate leadership, corporate communication management (CCM) activities and organisational performance
WO2011060480A1 (en) Evaluation tool, system and method for evaluating stakeholder interactions between multiple groups
Krishnan et al. A field study on the acceptance and use of a new accounting system
Mohapatra et al. Business process reengineering: a consolidated approach to different models
Fapohunda Operational framework for optimal utilisation of construction resources during the production process
Backer et al. Towards Sustainable ERP Systems: Bridging the Gap Between Current Capabilities and Future Potential
Smith Development of a four stage continuous improvement framework to support business performance in manufacturing SMEs
Salleh Measuring organisational readiness prior to IT/IS investment
Ljunglöf et al. KPIs in a service organization-a case study of Axfood IT
Takkunen Scrum implementation in a virtual team environment
Brink Purchasing performance measurement through selecting and implementing key performance indicators
Jacobsson et al. The Implementation Process of IT-Systems and its Effect on User Acceptance
Palermo Adopting Performance Appraisal And Reward Systems A Qualitative Analysis Of Public Sector Organisational Change
Campoverde Multidimensional Analysis of the Interaction Between the Organization, Management Practices and Performance in Construction
Medina Quantitative descriptive correlational research study on customer service leadership skills and customer satisfaction
Locke Research into Optimization of Grant Proposal Processing at the Johns Hopkins University Applied Physics Laboratory
Ramadan Exploring the Impact Lean Performance Management (LPM) Towards Superior Sustainable Value Based (SSVB) Organization as a Competitive Intelligence
Young et al. Project Benefit Realisation and Project Management: The 6Q Governance Approach
Whiting Critical success factors in implementing projects on restituted land parcels in South Africa
Carilli The Perceived Effectiveness of the Scaled Agile Framework® in Software Development Organizations
Ramprasad Selection and Visualization of Key Performance Indicators in Product Development
Tangara Total Quality Management and Service Delivery at Kenya Power
Karlsen et al. Real time business intelligence and decision-making: how does a real time business intelligence system enable better and timelier decision-making? An exploratory case study.

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10830941

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10830941

Country of ref document: EP

Kind code of ref document: A1