WO2018203238A1 - System and method for assessing tax governance and managing tax risk - Google Patents

System and method for assessing tax governance and managing tax risk

Info

Publication number
WO2018203238A1
Authority
WO
WIPO (PCT)
Prior art keywords
tax
governance
scores
processor
survey
Prior art date
Application number
PCT/IB2018/053018
Other languages
French (fr)
Inventor
Stephen Callahan
James Gordon
Stacey BERKMAN
Troy Green
Alline DOS SANTOS
Original Assignee
KPMG Australia IP Holdings Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2017901600A external-priority patent/AU2017901600A0/en
Application filed by KPMG Australia IP Holdings Pty Ltd filed Critical KPMG Australia IP Holdings Pty Ltd
Priority to AU2018262902A priority Critical patent/AU2018262902A1/en
Publication of WO2018203238A1 publication Critical patent/WO2018203238A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/10Tax strategies

Abstract

A computer-implemented method comprising: receiving, at a server, survey results relating to tax governance and tax risk management from one or more participants; computing, via a processor, scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions; computing, via the processor, scores representative of gaps between assessed tax governance controls and benchmark controls, based on weighted values assigned to one or more survey questions; generating, via the processor, an action item if a score is below a threshold; displaying, on a user device, a graphical representation of the scores computed for each assessment category; and displaying, on a user device, a graphical representation of the computed gaps relative to each benchmark control.

Description

SYSTEM AND METHOD FOR ASSESSING TAX GOVERNANCE AND MANAGING
TAX RISK
Field
[0001] The present invention relates to computer-implemented systems and methods for assessing tax governance maturity and managing tax risk.
Background
[0002] Tax authorities around the world are increasingly placing the onus on companies to demonstrate best practice tax governance and tax risk management. In particular, the Australian Taxation Office (ATO) has been an early adopter of this global view and has published a Tax Risk Management and Governance Review Guide (Guide) which sets out its better practice guidance in relation to tax governance and risk management. In addition, as part of the OECD's Justified Trust initiative, the ATO has commenced reviewing the Top 1000 organisations, which includes a review of a gap analysis against the Guide and of remediation plans.
[0003] It is difficult for companies to assess how their internal tax controls compare to best practice or recommendations by tax authorities. Companies are also generally unable to benchmark their internal tax controls against other companies in similar industries or with similar revenues. Furthermore, when gaps in tax governance and tax risk management are identified, it is difficult for companies to understand and identify priorities for change and implement recommended actions to manage and mitigate tax risks.
[0004] A need therefore exists for software tools for assessing tax governance and managing tax risk.
Summary
[0005] According to the present invention, there is provided a computer-implemented method comprising:
receiving, at a server, survey results relating to tax governance and tax risk management from one or more participants; computing, via a processor, scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions;
computing, via the processor, scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions;
generating, via the processor, an action item if a score is below a threshold;
displaying, on a user device, a graphical representation of the scores computed for each assessment category;
displaying, on a user device, a graphical representation of the computed gaps relative to each benchmark control.
[0006] The method may further comprise:
generating, via the processor, a predetermined positive statement if a score is above a predetermined threshold or a predetermined negative statement if a score is below a predetermined threshold;
displaying, on a user device, a list of positive and negative statements relevant to a displayed category or gap.
[0007] The method may further comprise applying, via the processor, exclusion logic to the computation of a category score and/or a gap score if a predetermined survey response is received, such that a subset of survey questions is excluded from the computation. The method may further comprise adjusting, via the processor, weighted values assigned to the included set of survey questions.
[0008] A survey response may be mapped to multiple gap scores.
[0009] Each of the assessment categories may comprise two or more sub-categories, wherein a score is computed for each sub-category based on weighted values assigned to one or more survey questions.
[0010] A score for a category may be computed from an average of the scores of its sub-categories.
[0011] The graphical representation of the scores computed for each assessment category may further include data obtained from peer groups, stakeholders, or combinations thereof.
[0012] The method may further comprise displaying, on a user device, a detailed analysis of the results displaying answers from each of the participants.
[0013] The method may further comprise filtering the graphical representation by one or more of: revenue, industry, location and company type.
[0014] The scores computed for each assessment category may be displayed in a spider chart.
[0015] The scores against the controls may be displayed in a gap analysis.
[0016] The method may further comprise displaying, on a user device, a graphical representation of the action items.
[0017] Impact and effort of each action item may be graphically represented on a bubble chart.
[0018] The present invention also provides a computer-implemented method comprising:
presenting a survey user interface of survey questions and answer options relating to tax governance to users of an enterprise;
receiving selected answer options from the users via the survey user interface; automatically assigning answer scores to the selected answer options based on comparison to best practice relating to tax governance;
based on the answer scores, mapping one or more of:
overall scores relating to tax governance categories in a tax control framework user interface for the enterprise;
overall assessment of gaps to a predefined set of controls; statements relating to tax governance in a gap analysis user interface for the enterprise;
actions relating to tax governance in an actions list user interface for the enterprise.
[0019] The tax control framework user interface may comprise a spider chart of the overall scores relating to tax governance categories for the enterprise.
[0020] The spider chart may represent one or more benchmark scores relating to the tax governance categories for one or more other enterprises.
[0021] The tax control framework user interface may comprise a gap analysis of the overall gaps relating to tax governance controls.
[0022] The actions list user interface may comprise a bubble graph of the actions relating to tax governance for the enterprise.
[0023] The present invention also provides a system, comprising:
a processor; and
a non-transitory computer-readable medium coupled to the processor and having instructions stored thereon, which, when executed by the processor, cause the processor to perform operations comprising:
receiving, at a server, survey results relating to tax governance from one or more participants;
computing scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions; computing scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions;
generating an action item if a score is below a threshold;
displaying a graphical representation of the scores computed for each assessment category;
displaying a graphical representation of the computed gaps relative to each benchmark control.
[0024] The present invention also provides a non-transitory computer-readable medium having instructions stored thereon, which, when executed by a processor, cause the processor to perform operations comprising:
receiving, at a server, survey results relating to tax governance from one or more participants;
computing scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions;
computing scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions;
generating an action item if a score is below a threshold; displaying, on a user device, a graphical representation of the scores computed for each assessment category;
displaying, on a user device, a graphical representation of the computed gaps relative to each benchmark control.
Brief Description of Drawings
[0025] Embodiments of the invention will now be described by way of example only with reference to the accompanying drawings, in which:
Figure 1 is a functional block diagram of a system according to one embodiment of the present invention for implementing a method for evaluating tax governance;
Figure 2 is a flowchart of a computer-implemented method for evaluating tax governance according to embodiments of the present invention;
Figure 3 illustrates the components of an exemplary platform for implementing the method;
Figure 4 is an example user interface generated by the method for presenting an exemplary survey question to a participant;
Figures 5 to 7 illustrate the mapping of the survey question of Figure 4 to its relevant tax governance assessment category and relevant gap analysis;
Figure 8 illustrates automatically generated statements and action items mapped to the survey question of Figure 4;
Figure 9 illustrates the exclusion logic applied to the survey question of Figure 4;
Figures 10 to 14 are example user interfaces generated by the method to graphically represent the assessment category scores and gap analysis contributed by the response to the survey question of Figure 4;
Figures 15 and 16 are example user interfaces generated by the method to display automatically generated action items;
Figure 17 is an example user interface generated by the method for presenting another exemplary survey question to a participant;
Figures 18 to 21 illustrate the mapping of the survey question of Figure 17 to its relevant tax governance assessment category and relevant gap analysis;
Figure 22 illustrates automatically generated statements and action items mapped to the survey question of Figure 21 ;
Figure 23 is an example user interface generated by the method to graphically represent the gap analysis contributed by the response to the survey question of Figure 21 ;
Figures 24 and 25 are example user interfaces generated by the method to display automatically generated action items;
Figure 26 is an example user interface of the platform for implementing the method;
Figures 27 and 28 are example user interfaces generated by the method to graphically represent the assessment category scores from a survey of multiple participants; and
Figure 29 is an example user interface generated by the method to display automatically generated action items.
Description of Embodiments
[0026] Referring to Figure 1, a computer-implemented method for evaluating tax governance according to an embodiment of the present invention may be implemented in a system 100 as web and/or mobile software applications comprising one or more computer program modules executable by one or more computing devices 102 associated with users of the system, which communicate via a network with one or more servers 104 and associated databases 106. The computing devices 102 may comprise desktop computers, laptop computers, tablet computers, smartphones, and combinations thereof. The method 200 may be provided as SaaS (Software as a Service) to consumers who register as users of the system 100.
[0027] According to one embodiment, the method (200) begins by receiving, at the server 104, survey results from one or more participants (202). The survey results may be obtained via user input on a computing device 102 in response to a survey question, such as those illustrated in Figures 4 and 21. The survey questions may be presented in any suitable form including, for example, radio buttons, multi-selections, sliders, radio button matrix, multi-select matrix, etc.
[0028] Next, a processor associated with the server computes scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions (204). An example of the mapping of a survey question to its relevant tax governance assessment category, which may be programmed on the processor, is illustrated in Figures 5 and 6. The tag "CPTS" links this question to a specified category, in this case, the "Tax Strategy" sub-category under the "Common Purpose" category. The question illustrated in Figure 4 has a maximum score of 20 if the participant selects "Yes". Remaining survey questions that contribute to the "Tax Strategy" sub-category and/or "Common Purpose" category may be assigned different maximum scores, and thereby weighted differently when computing the total score for the sub-category or category. Other survey weighting methodologies may be applied to one or more categories, for example, equal weighting if all questions are equally important, weighting with an auxiliary variable to address sampling bias, weighting participants to adjust an individual's contribution to the survey results, etc. Computational step (204) may be repeated for all the survey questions until each tax governance assessment category is scored.
[0029] In another example, Figures 18 to 20 illustrate the programmed mapping of the matrix-type survey question of Figure 17 to its relevant tax governance assessment category "Enabling Technologies" and sub-category "Data Integrity", using the tag "ETDI". As illustrated in Figure 20, the maximum score for the eighth sub-question, relating to management account focus versus legal entity requirements, is 5, and there are five possible answers worth different scores. Each sub-question may have different maximum scores, and thereby contribute differently to the assessment category. The processor repeats the method step (204) until the total scores for "Data Integrity" and "Enabling Technologies" have been computed from weighted values assigned to each contributing question.
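By way of a non-limiting illustration added in editing (not part of the original specification), the weighted scoring of step (204) could be implemented along the following lines in Python; the data structures, tag names and normalisation to a percentage are editorial assumptions:

```python
# Editor's illustrative sketch only: data shapes, tag names and the
# percentage normalisation are assumptions, not part of the specification.

def score_sub_category(responses, question_bank, tag):
    """Sum the weighted scores of all answered questions carrying `tag`
    (e.g. "CPTS" or "ETDI") and normalise against the maximum achievable."""
    achieved = 0
    maximum = 0
    for question in question_bank:
        if tag not in question["tags"]:
            continue
        maximum += question["max_score"]
        answer = responses.get(question["id"])
        if answer is not None:
            achieved += question["answer_scores"][answer]
    return 100.0 * achieved / maximum if maximum else None

# A matrix-type question can be modelled as one entry per sub-question.
question_bank = [
    {"id": "Q1", "tags": {"CPTS"}, "max_score": 20,
     "answer_scores": {"Yes": 20, "No": 0}},
]
print(score_sub_category({"Q1": "Yes"}, question_bank, "CPTS"))  # 100.0
```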
[0030] In the embodiments described above, each tax governance assessment category comprises at least two sub-categories, and a score is computed for each sub-category based on weighted values assigned to one or more survey questions. The score for each category is then computed from an average of the scores of its sub-categories. Other scoring methods may be applied, for example, a category may be scored based on weighted values assigned to each contributing sub-category.
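Continuing the illustrative sketch above, a category score could then be derived from its sub-category scores, using the simple average as the default and an optional weighted combination as the alternative mentioned; the sub-category names and weights below are hypothetical:

```python
# Editor's illustrative sketch only: sub-category names and weights are hypothetical.

def score_category(sub_scores, weights=None):
    """Combine sub-category scores into a category score: a simple average
    by default, or a weighted combination if weights are supplied."""
    if weights is None:
        return sum(sub_scores.values()) / len(sub_scores)
    total = sum(weights[name] for name in sub_scores)
    return sum(sub_scores[name] * weights[name] for name in sub_scores) / total

common_purpose = {"Tax Strategy": 80.0, "Tax Risk Appetite": 60.0}
print(score_category(common_purpose))  # 70.0
print(score_category(common_purpose, {"Tax Strategy": 2, "Tax Risk Appetite": 1}))
```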
[0031] Next, a gap analysis is performed as the processor computes scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions (206). The benchmark controls may comprise controls prescribed or recommended by a tax authority, for example, the Board-Level Controls and Managerial-Level Controls presently recommended by the ATO. In other examples, the benchmark controls may comprise industry best practices. An example of the mapping of a survey question to its benchmark control, which may be programmed on the processor, is illustrated in Figures 5 and 8. The tag "BC1" links this question to a specified control, in this case, the ATO's Board-Level Control 1, which relates to assessing the formalised tax control framework. The question illustrated in Figure 4 has a maximum score of 40 if the participant selects "Yes". This score contributes to the assessed tax governance control, ie the total score representative of the aspect of the organisation's tax governance performance that may be compared to the ATO's Board-Level Control 1. The gap between this assessed tax governance control and the benchmark control may accordingly be quantified. Remaining survey questions that contribute to Board-Level Control 1 may be assigned different maximum scores, and thereby weighted differently when computing the total score for the assessed control. As described above, other alternative weighting methodologies may be applied. The computational step (206) may be repeated for all survey questions until each gap is quantified.
[0032] In another example, Figures 18, 19 and 21 illustrate the programmed mapping of a matrix-type survey question to multiple benchmark controls, in this case, Managerial-Level Controls 4, 7 and 8, using tags MC4, MC7, MC8. As illustrated in Figure 21, the maximum score for the eighth sub-question, relating to management account focus versus legal entity requirements, is 10, and there are five possible answers worth different scores. Each sub-question may have different maximum scores, and thereby contribute differently to its relevant assessed tax governance control. The processor repeats the method step (206) until the assessed tax governance controls and the gaps relative to each of Managerial-Level Controls 4, 7 and 8 have been computed from weighted values assigned to each contributing question.
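As a non-limiting sketch of the gap computation in step (206), the shortfall of each assessed control against its maximum achievable (benchmark) score could be calculated as follows; the control tags, answer labels and percentage-based gap definition are editorial assumptions:

```python
# Editor's illustrative sketch only: control tags, answer labels and the
# percentage gap definition are assumptions for demonstration.

def assess_controls(responses, question_bank, control_tags):
    """Score each benchmark control from its tagged questions and express
    the gap as the shortfall from the maximum achievable score."""
    gaps = {}
    for tag in control_tags:
        achieved, maximum = 0, 0
        for q in question_bank:
            if tag in q["tags"]:
                maximum += q["max_score"]
                answer = responses.get(q["id"])
                if answer is not None:
                    achieved += q["answer_scores"][answer]
        assessed = 100.0 * achieved / maximum if maximum else None
        gaps[tag] = None if assessed is None else 100.0 - assessed
    return gaps

# A single matrix sub-question may feed several controls (cf. MC4, MC7, MC8).
bank = [{"id": "Q17h", "tags": {"MC4", "MC7", "MC8"}, "max_score": 10,
         "answer_scores": {"Fully": 10, "Partially": 5, "Not at all": 0}}]
print(assess_controls({"Q17h": "Partially"}, bank, ["MC4", "MC7", "MC8"]))
# {'MC4': 50.0, 'MC7': 50.0, 'MC8': 50.0}
```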
[0033] In some embodiments, the method may further comprise generating, via the processor, a predetermined positive statement if a score is above a predetermined threshold or a predetermined negative statement if a score is below a predetermined threshold. Examples of mapping of a statement to a score are illustrated in Figures 8 and 22. The tags "high" and "low" and the relevant category or gap analysis, eg "BC1R", may be used to automatically populate a display, eg as illustrated in Figures 13 and 14, which show the statements automatically generated on a display screen of a user device in response to high and low assessed tax governance controls relative to the benchmark control Board-Level Control 1, respectively.
[0034] Next, the method may generate one or more action items if a score is below a predetermined threshold (208). For example, Figure 8 illustrates that an action item tagged "BC1R" is generated if the "BC1" response to the question illustrated in Figure 4 scores lower than 100%. The score may comprise the score of a single question, the score of a sub-category, the score of an entire category, an assessed tax governance control, the computed gap between an assessed tax governance control and a benchmark control, or combinations thereof. The tag may be passed to one or more subsequent method steps, for example, to automatically populate the displays illustrated in Figures 15, 16, 24 and 25 with the action items. The action items may comprise predetermined recommendations for addressing a specific low area of maturity within a control. Actions may be customised, edited and deleted. Where an action is rejected, the user may include text to generate an "if not, why not" statement to the gaps. The accepted actions may subsequently be published to a user interface or an automatically-generated report.
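Purely for illustration, the threshold-driven generation of statements and action items might be sketched as follows; the statement and action wording, the "BC1R" mapping and the 100% threshold are placeholders rather than the actual content library:

```python
# Editor's illustrative sketch only: statement/action wording, the "BC1R"
# mapping and the 100% threshold are placeholders, not the actual library.

STATEMENTS = {"BC1R": {
    "high": "A formalised tax control framework is in place.",
    "low": "The tax control framework has not been formalised.",
}}
ACTIONS = {"BC1R": "Document and seek board endorsement of a tax control framework."}

def generate_outputs(scores, threshold=100.0):
    """Emit a positive or negative statement per scored item, plus an action
    item (initially in a 'proposed' state) when the score is below threshold."""
    statements, actions = [], []
    for tag, score in scores.items():
        band = "high" if score >= threshold else "low"
        statements.append(STATEMENTS[tag][band])
        if score < threshold:
            actions.append({"tag": tag, "action": ACTIONS[tag], "status": "proposed"})
    return statements, actions

print(generate_outputs({"BC1R": 50.0}))
```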
[0035] In some embodiments, the automatically generated action items may initially only be visible to or accessible by a third-party consultant administering the method for the organisation. The third-party consultant may subsequently consult with the organisation, eg in a workshop, to discuss and prioritise the list of recommended actions. Once the actions have been agreed upon, the accepted actions may then be published (eg as described above) to be viewed by members of the organisation.
[0036] In some embodiments, the method may further comprise displaying, on a user device, a graphical representation of the action items. As illustrated in Figures 16 and 25, the impact and effort of each action item may be graphically represented on a bubble chart, to assist with prioritising tasks. In some embodiments, the perceived impact and effort of each action may be selected by a user via a user device, eg as illustrated in Figures 15 and 24. A fully populated bubble chart, which may be filtered by the user to focus on a category or control, is illustrated in Figure 29.
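As an editorial sketch only, an impact-versus-effort bubble chart of the kind shown in Figures 16, 25 and 29 could be rendered with matplotlib (one possible charting choice); the action names, ratings and bubble sizes below are invented:

```python
# Editor's illustrative sketch only: matplotlib is one possible library and
# the action names, impact/effort ratings and bubble sizes are invented.
import matplotlib.pyplot as plt

actions = [("Formalise tax strategy", 4, 2, 300),
           ("Automate data reconciliation", 5, 4, 200),
           ("Refresh escalation procedures", 2, 1, 100)]
labels, impact, effort, size = zip(*actions)

fig, ax = plt.subplots()
ax.scatter(effort, impact, s=size, alpha=0.5)  # bubble size as a third dimension
for label, x, y in zip(labels, effort, impact):
    ax.annotate(label, (x, y))
ax.set_xlabel("Effort")
ax.set_ylabel("Impact")
plt.show()
```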
[0037] In some embodiments, the method may further comprise applying, via the processor, exclusion logic to the computation of a category score and/or a gap score if a predetermined survey response is received, such that a subset of survey questions is excluded from the computation. For example, Figure 9 illustrates programmed exclusion logic comprising skipping to a specific question if the response to the question illustrated in Figure 4 is "no". The questions that have been skipped over are thereby excluded from the calculation of the category score. In some embodiments, the processor may be programmed to adjust the weighted values of the remaining/included set of survey questions, to compensate for the excluded questions. In other embodiments, the exclusion logic may be implemented by the processor so that excluded questions are not displayed to the survey participant. The exclusions may be implemented to increase the efficiency of the survey and/or reduce survey bias (for example, if a predetermined pattern of responses is detected).
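A minimal sketch of such exclusion logic, assuming a simple skip-rule table and proportional re-weighting of the remaining questions (both editorial assumptions), might look like this:

```python
# Editor's illustrative sketch only: the skip-rule table and proportional
# re-weighting scheme are assumptions used to demonstrate exclusion logic.

SKIP_RULES = {("Q1", "No"): {"Q2", "Q3"}}  # a "No" to Q1 skips Q2 and Q3

def apply_exclusions(responses, question_bank):
    """Drop skipped questions from scoring and rescale the remaining ones so
    that the category's maximum achievable score is preserved."""
    excluded = set()
    for (qid, answer), skipped in SKIP_RULES.items():
        if responses.get(qid) == answer:
            excluded |= skipped
    included = [q for q in question_bank if q["id"] not in excluded]
    original_max = sum(q["max_score"] for q in question_bank)
    included_max = sum(q["max_score"] for q in included)
    factor = original_max / included_max if included_max else 0.0
    return [dict(q, max_score=q["max_score"] * factor) for q in included]
```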
[0038] Next, the method may display, on a user device, a graphical representation of the scores computed for each assessment category (210). For example, Figure 10 illustrates a spider chart displaying the computed scores obtained from the survey. The graphical representation of the category scores may further include data obtained from peer groups, stakeholders, or combinations thereof. As illustrated in Figure 10, the spider chart may be filtered to compare the organisation's performance with peer groups in similar industries, with similar revenues, in similar locations, of similar company types, etc. Stakeholder data may be obtained via a stakeholder survey that may be similarly implemented via methods and systems of the present invention. In some embodiments, the user may be able to drill down into a single category to view details of the responses contributing to that category, eg as shown in Figures 11 and 27.
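For illustration only, a spider (radar) chart of category scores with a peer-group overlay could be produced as follows; the category names and values are invented, and matplotlib's polar axes are just one way to draw it:

```python
# Editor's illustrative sketch only: the category names and scores are
# invented and matplotlib's polar axes are just one way to draw the chart.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Common Purpose", "Enabling Technologies", "Controls",
              "People", "Process"]
organisation = [70, 55, 80, 60, 65]
peer_group = [65, 60, 70, 70, 60]

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in (("Organisation", organisation), ("Peer group", peer_group)):
    ax.plot(angles + angles[:1], values + values[:1], label=label)  # close the loop
ax.set_xticks(angles)
ax.set_xticklabels(categories)
ax.legend()
plt.show()
```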
[0039] In some embodiments, the method may further comprise storing survey results over a period of time, in order to compute and display trends. This may be particularly useful for monitoring the organisation's tax governance and risk management framework over time, and for visualising the effect of improvements, eg implemented action items.
[0040] Figures 3 and 26 illustrate a platform tool according to one embodiment for administering and implementing the present system and method. The platform may be accessed in order to create a survey, add and manage participants and/or allocate a participant's contribution to the survey results (as illustrated in Figure 26), review results and consult with the organisation to prioritise action items generated by the present method. The platform may further include workflow tools to assist with monitoring action items across the organisation.
[0041] Embodiments of the present invention provide computer-implemented systems and methods that are useful for assessing an organisation's tax governance and tax risk management. Embodiments of the present invention provide for methods of benchmarking the organisation's tax governance framework against multiple criteria such as against benchmark or best practice controls recommended by tax agencies, against other companies in similar industries or with similar revenues, and against shareholder perception. Embodiments of the present invention further provide methods that are useful for understanding priorities for change and implementing recommended actions once such gaps between the organisation's governance and benchmark targets are identified.
[0042] For the purpose of this specification, the word "comprising" means "including but not limited to", and the word "comprises" has a corresponding meaning.
[0043] The above embodiments have been described by way of example only and modifications are possible within the scope of the claims that follow.

Claims

Claims
1. A computer-implemented method comprising:
receiving, at a server, survey results relating to tax governance and tax risk management from one or more participants;
computing, via a processor, scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions;
computing, via the processor, scores representative of gaps between assessed tax governance controls and benchmark controls, based on weighted values assigned to one or more survey questions;
generating, via the processor, an action item if a score is below a threshold;
displaying, on a user device, a graphical representation of the scores computed for each assessment category;
displaying, on a user device, a graphical representation of the computed gaps relative to each benchmark control.
2. The method of claim 1, further comprising:
generating, via the processor, a predetermined positive statement if a score is above a predetermined threshold or a predetermined negative statement if a score is below a predetermined threshold;
displaying, on a user device, a list of positive and negative statements relevant to a displayed category or gap.
3. The method of claim 1 or 2, further comprising applying, via the processor, exclusion logic to the computation of a category score and/or a gap score if a predetermined survey response is received, such that a subset of survey questions is excluded from the computation.
4. The method of claim 3, further comprising adjusting, via the processor, weighted values assigned to the included set of survey questions.
5. The method of any one of the preceding claims, wherein a survey response is mapped to multiple gap scores.
6. The method of any one of the preceding claims, wherein each of the assessment categories comprises two or more sub-categories, and wherein a score is computed for each sub-category based on weighted values assigned to one or more survey questions.
7. The method of claim 6, wherein a score for a category is computed from an average of the scores of its sub-categories.
8. The method of any one of the preceding claims, wherein the graphical representation of the scores computed for each assessment category further includes data obtained from peer groups, stakeholders, or combinations thereof.
9. The method of claim 8, wherein the graphical representation of the scores computed for each assessment category includes data obtained from peer groups, further comprising filtering the graphical representation by one or more of: revenue, industry, location and company type.
10. The method of any one of the preceding claims, wherein the scores computed for each assessment category are displayed in a spider chart.
11. The method of any one of the preceding claims, wherein the scores against the benchmark controls may be displayed in a gap analysis.
12. The method of any one of the preceding claims, further comprising displaying, on a user device, a graphical representation of the action items.
13. The method of claim 12, wherein impact and effort of each action item are graphically represented on a bubble chart.
14. A computer-implemented method, comprising:
presenting a survey user interface of survey questions and answer options relating to tax governance to users of an enterprise;
receiving selected answer options from the users via the survey user interface; automatically assigning answer scores to the selected answer options based on comparison to best practice relating to tax governance;
based on the answer scores, mapping one or more of: overall scores relating to tax governance categories in a tax control framework user interface for the enterprise;
overall assessment of gaps to a predefined set of controls; statements relating to tax governance in a gap analysis user interface for the enterprise;
actions relating to tax governance in an actions list user interface for the enterprise.
15. The method of claim 14, wherein the tax control framework user interface comprises a spider chart of the overall scores relating to tax governance categories for the enterprise.
16. The method of claim 14 or 15, wherein the spider chart represents one or more benchmark scores relating to the tax governance categories for one or more other enterprises.
17. The method of any one of claims 14 to 16, wherein the tax control framework user interface comprises a gap analysis of the overall gaps relating to tax governance controls.
18. The method of any one of claims 14 to 17, wherein the actions list user interface comprises a bubble graph of the actions relating to tax governance for the enterprise.
19. A system, comprising:
a processor; and
a non-transitory computer-readable medium coupled to the processor and having instructions stored thereon, which, when executed by the processor, cause the processor to perform operations comprising:
receiving, at a server, survey results relating to tax governance from one or more participants;
computing scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions; computing scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions; generating an action item if a score is below a threshold;
displaying a graphical representation of the scores computed for each assessment category;
displaying a graphical representation of the computed gaps relative to each benchmark control.
20. A non-transitory computer-readable medium having instructions stored thereon, which, when executed by a processor, cause the processor to perform operations comprising:
receiving, at a server, survey results relating to tax governance from one or more participants;
computing scores for each of a plurality of tax governance assessment categories based on weighted values assigned to one or more survey questions;
computing scores representative of gaps between assessed tax governance controls and each of a plurality of benchmark controls, based on weighted values assigned to one or more survey questions;
generating an action item if a score is below a threshold;
displaying, on a user device, a graphical representation of the scores computed for each assessment category;
displaying, on a user device, a graphical representation of the computed gaps relative to each benchmark control.
PCT/IB2018/053018 2017-05-02 2018-05-02 System and method for assessing tax governance and managing tax risk WO2018203238A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2018262902A AU2018262902A1 (en) 2017-05-02 2018-05-02 System and method for assessing tax governance and managing tax risk

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2017901600A AU2017901600A0 (en) 2017-05-02 System and method for assessing tax governance and managing tax risk
AU2017901600 2017-05-02

Publications (1)

Publication Number Publication Date
WO2018203238A1 2018-11-08

Family

ID=64015996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/053018 WO2018203238A1 (en) 2017-05-02 2018-05-02 System and method for assessing tax governance and managing tax risk

Country Status (2)

Country Link
AU (1) AU2018262902A1 (en)
WO (1) WO2018203238A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050197952A1 (en) * 2003-08-15 2005-09-08 Providus Software Solutions, Inc. Risk mitigation management
US20100010849A1 (en) * 2008-07-11 2010-01-14 Hurd Geralyn R Method and system for tax risk assessment and control
US20100121746A1 (en) * 2008-11-13 2010-05-13 Ez Decisions Llc Financial statement risk assessment and management system and method
US20120053981A1 (en) * 2010-09-01 2012-03-01 Bank Of America Corporation Risk Governance Model for an Operation or an Information Technology System
US20130006701A1 (en) * 2011-07-01 2013-01-03 International Business Machines Corporation Assessing and managing risks of service related changes based on dynamic context information

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578007A (en) * 2022-10-12 2023-01-06 南京数聚科技有限公司 Method and system for integrating calculation of points and task in tax industry

Also Published As

Publication number Publication date
AU2018262902A1 (en) 2019-11-07

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18795070

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018262902

Country of ref document: AU

Date of ref document: 20180502

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 18795070

Country of ref document: EP

Kind code of ref document: A1