WO2004074763A2 - Method and apparatus for reserve measurement - Google Patents


Publication number
WO2004074763A2
Authority
WO
WIPO (PCT)
Application number
PCT/US2004/004762
Other languages
French (fr)
Other versions
WO2004074763A9 (en
WO2004074763A3 (en
Inventor
Costandy K. Khury
Original Assignee
Khury Costandy K
Application filed by Khury Costandy K filed Critical Khury Costandy K
Priority to US10/546,235 priority Critical patent/US20060293926A1/en
Publication of WO2004074763A2 publication Critical patent/WO2004074763A2/en
Publication of WO2004074763A9 publication Critical patent/WO2004074763A9/en
Publication of WO2004074763A3 publication Critical patent/WO2004074763A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02: Banking, e.g. interest calculation or account maintenance



Abstract

The present invention describes a method and apparatus for constructing a historically based frequency distribution of unknown ultimate outcomes in a data set, the method comprising the acts of: (A) collecting relevant data about a series of known cohorts, where a new group of the data emerges at regular time intervals, measuring a characteristic of each group of the data at regular time intervals, and entering each said characteristic into a data set having at least two dimensions; (B) determining a number of frequency intervals N to be used to construct said distribution of unknown ultimate outcomes; (C) for each period I, constructing an aggregate distribution by: (a) calculating period-to-period ratios of the data characteristics; (b) identifying a range of ratio outcomes for cohort I; (c) constructing subintervals for cohort I; and (d) calculating all possible ratio outcomes for cohort I; and (D) constructing a convolution distribution of outcomes for all said possible ratio cohorts combined, by: (a) selecting outcomes for any two cohorts A and B; (b) constructing a new range of outcomes for the convolution distribution of cohorts A and B; (c) constructing new subintervals for the convolution distribution of cohorts A and B; (d) calculating the combined outcomes for the two cohorts A and B to provide a resulting convolution distribution; and (e) combining the resulting convolution distribution with the distributions of outcomes for each remaining cohort by repeating each of the preceding acts D.(a) through D.(d) for each pair of cohorts.

Description

METHOD AND APPARATUS FOR RESERVE MEASUREMENT
TECHNICAL FIELD The invention relates generally to methods for the determination of historically based benchmarks against which estimates of future outcomes may be compared, thus developing a measure of the reasonableness of such estimates. More particularly, the invention develops historically based benchmarks against which estimates of property & casualty insurance loss reserves may be compared, thus developing a measure of the reasonableness of such loss reserve estimates.
BACKGROUND ART In the property & casualty insurance (hereinafter "insurance") industry, maintenance of proper loss and loss expense reserves (hereinafter "loss reserves") is
(a) Legally required,
(b) A vital element in the determination of the financial condition of an insurance company, and
(c) A major determinant of the current income and associated income statements. On one hand, over the years, a large variety of methodologies have been developed for the determination of estimates of loss reserves. On the other hand, there has been a virtual vacuum in the area of identification of historical benchmarks against which such loss reserve estimates may be compared, thereby providing a means for the determination of the reasonableness of such loss reserve estimates.
The process of estimating insurance company reserves involves four primary elements: raw data, assumptions, methods of estimation, and judgment of the loss reserve specialist (e.g., an actuary). Thus the various estimates that a loss reserve specialist makes necessarily rely on the judgment of the loss reserve specialist in the selection of assumptions and methods and ultimately in making the final reserve selection. While the application of judgment is an indispensable element in the process of arriving at loss reserve estimates, the manner of assessing the reasonableness of such estimates (via the identification of historically based benchmarks) remains a largely unexplored subject. It would be useful to have objective historically based benchmarks against which loss reserve estimates may be compared.
One direct method for developing such objective historically based benchmarks involves the use of historical ratios generated by comparing consecutive valuations of various cohorts of losses (e.g., losses incurred during a particular year or other time period) as they develop from one time period to another. To identify a historically based benchmark for loss reserve estimates, one can calculate period to period ratios for known consecutive valuations of cohorts of losses and use combinations of such ratios to project outcomes for all the cohorts for which future valuations have yet to emerge. The collection of all such outcomes forms an empirical frequency distribution of all the possible outcomes with all the statistical measures associated with a frequency distribution (such as mean, standard deviation, variance, and mode.) These statistical measures provide useful tools for assessing the reasonableness of loss reserve estimates.
Unfortunately, while this direct method can identify every possible outcome based on the application of historical valuation-to-valuation ratios (i.e., possible "actual" outcomes), in practice the number of possible outcomes becomes unwieldy for even fairly small data sets. For larger data sets (i.e., involving more than ten cohorts), the process of calculating all possible outcomes becomes impractical, because of the dramatic increase in the amount of computing power necessary to calculate all possible outcomes.
An indirect solution exists. Instead of using calculated outcomes, individual outcomes for any one cohort can be slotted as they are calculated for each cohort (such as all losses incurred in a specific time period) into a set of N intervals, with N sufficiently large such that the difference between any calculated outcome and its surrogate (the midpoint of the appropriate interval) is not more than any given degree of tolerance, ε. For our purposes ε is expressed as a percent tolerance. In other words, a calculated outcome is never more than ε% from its surrogate. Once the N intervals are set for each cohort for each line of business, there will be N distinct outcomes for each accident year for each line of business (Each outcome being represented by the midpoint of an interval), and each distinct outcome having an associated frequency (The frequency associated with a specific midpoint is equal to the number of times a true calculated possible outcome is slotted in that interval). These individual distributions (one for each cohort, and each consisting of N distinct outcomes, with each distinct outcome having an associated frequency) are then combined to produce yet another distribution that combines all cohorts (accident years) and all lines of business. This convolution distribution is the underlying distribution that is implied by the given data arrays. It may be used to calculate a wide assortment of probabilities for various reserving propositions; and thus enable the development of a substantial measure of the reasonableness of any given loss reserve estimate.
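The combining step just described can be sketched in code. The following is a minimal Python illustration, not the patent's own program (its appendices implement the process as spreadsheet macros): each cohort's distribution is held as a map from interval midpoints to frequencies, and two cohorts are combined by pairing every outcome of one with every outcome of the other, taking the sum of each pair as the combined outcome (cf. Figure 2, which concerns the sum of two subintervals), and re-slotting the sums into a fresh set of equally spaced midpoints. The function name and the bin count `n` are illustrative assumptions.

```python
from itertools import product

def convolve(dist_a, dist_b, n):
    """Combine two binned outcome distributions {midpoint: frequency}.

    Each pairing of an outcome from cohort A with one from cohort B
    yields a combined outcome (their sum) whose frequency is the
    product of the two component frequencies; the sums are re-slotted
    onto n equally spaced midpoints spanning the new range.
    Assumes n >= 2 and a non-degenerate range of sums.
    """
    lo = min(dist_a) + min(dist_b)   # smallest possible combined outcome
    hi = max(dist_a) + max(dist_b)   # largest possible combined outcome
    width = (hi - lo) / (n - 1)
    out = {}
    for (ma, fa), (mb, fb) in product(dist_a.items(), dist_b.items()):
        # slot the combined outcome at the midpoint of its subinterval
        mid = lo + round((ma + mb - lo) / width) * width
        out[mid] = out.get(mid, 0) + fa * fb
    return out
```

Repeating this pairwise combination over all cohorts, and then over all lines of business, yields the convolution distribution described above.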
DISCLOSURE OF INVENTION
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings illustrate a complete exemplary embodiment of the invention according to the best modes so far devised for the practical application of the principles thereof, and in which:
FIGURE 1 illustrates an exemplary manner in which a subinterval is constructed so as to observe the error tolerance.
FIGURE 2 illustrates an exemplary manner in which the sum of two subintervals, each of which meets the error criterion, also meets the error criterion.
FIGURE 3A shows a graph of an exemplary convolution distribution for two sample data sets (shown as Tables A and B).
FIGURE 3B shows the graph of an exemplary basic distribution produced for Table A.
FIGURE 3C shows the graph of an exemplary basic distribution produced for Table B.
FIGURE 4 shows a flow chart for an exemplary process according to the invention.
TABLE A. Sample Data Set A.
TABLE B. Sample Data Set B.
TABLE C. Shows tabular distribution of outcomes associated with Table A.
TABLE D. Shows tabular distribution of outcomes associated with Table B.
TABLE E. Shows tabular distribution of outcomes that represent the convolution of distributions shown in Tables C and D.
APPENDIX A. This is the basic program that produces Tables C and D for Data Sets A and B.
APPENDIX B. This is the convolution program that takes Tables C and D and combines them into Table E and Figure 3A.
BEST MODE(S) FOR CARRYING OUT THE INVENTION In a preferred embodiment, a process for calculating distribution outcomes is provided. This process can be implemented, for example, by a computer program, by electronic hardware specifically designed to execute the process or software implementing the process, by a microprocessor storing firmware instructions designed to cause computer hardware to carry out the process, or by any other combination or hybrid of hardware and software. The process can also be embodied in a computer readable medium that can be executed by computer hardware or software to implement the disclosed process.
ASSUMPTIONS
A. It is assumed that data will be provided for a number of lines of business K. Thus k = 1, 2, 3, ..., K-2, K-1, K.
B. It is assumed that each line of business has a historical database for I accident years. Thus i = 1, 2, 3, ..., I-2, I-1, I. The most mature (oldest) year is designated year 1.
C. It is assumed that each accident year is developed through J periods of development. Thus j = 1, 2, 3, ..., J-2, J-1, J.
D. It is assumed that I > J (i.e., that no accident year develops longer than the total number of years in the historical database). This assumption allows one to cut off the loss development after a number of years have passed, as is done in Schedule P filed by insurance companies with the state regulatory authorities. (Schedule P is a series of exhibits required to be included in the Statutory Financial Statements of insurance companies in which, for each line of business and for all lines combined, each accident year is valued at annual intervals for a maximum of ten years of development. In other words, the tracking of valuation of individual accident years is abandoned after ten years on the premise that the vast majority of loss values have emerged by that time.)
CALCULATION OF N
First, the user determines the number of intervals N needed such that each calculated outcome is no more than a given percent tolerance ε from its slotted value at the midpoint of an interval.
1. The degree of tolerance, ε, is determined by the user.
2. The user also makes use of the ultimate valuation for accident years 1 through J as of the end of J years of development. Such valuations after J years have passed are routinely provided by insurance companies, on an annual basis, to the regulatory authorities. In other words, the process makes use of the historical factors utilized by the insurance company for the purpose of making an ultimate estimate for a cohort of claims after the required ten years of tracking has expired.
3. Each valuation point is designated V(i,j,k), where i is the accident year, j is the year of development, and k is the line of business. Thus V(2,3,6) represents the value associated with accident year No. 2, at the end of development period No. 3, for line of business No. 6.
4. Finally, a loss development factor is defined as the ratio of the valuation at time j+1 to the valuation at time j, or:

L(i,j,k) = V(i,j+1,k) / V(i,j,k).
Thus, the data needed to drive the process would appear in an array similar to the following (this example shows only line of business No. 1 - and other arrays would be provided for the remaining lines of business):
[Array of valuations V(i,j,1): one row per accident year i = 1, ..., I and one column per development period j = 1, ..., J, with later development periods still unfilled for the more recent accident years.]
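As a concrete illustration of how such an array drives the process, the historical valuations can be held as a ragged list (one row per accident year, with older years having more entries), from which the loss development factors are computed row by row. This Python sketch is illustrative only; the patent's own implementation, per its appendices, is a spreadsheet macro program.

```python
def development_factors(triangle):
    """Loss development factors L(i,j) = V(i,j+1) / V(i,j).

    `triangle` is a list of rows; row i holds the consecutive
    valuations V(i,1), V(i,2), ... of one accident year.  Returns a
    parallel ragged list with one factor per adjacent pair of
    valuations.
    """
    return [[later / earlier for earlier, later in zip(row, row[1:])]
            for row in triangle]
```

For example, an accident year valued at 100, then 150, then 165 produces the factors 1.5 and 1.1.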
A. Constructing N for accident year I for line of business No. 1, or constructing N(I,1).
Constructing the maximum and minimum loss development factors for each development period. For each development period, all loss development factors are identified, and then the maximum (Max) and minimum (Min) loss factors are identified for each such set. For example, the set of loss development factors for the first development period consists of all loss development factors of the form L(i,1,1), or {L(1,1,1); L(2,1,1); ...; L(i,1,1); ...; L(I-2,1,1); L(I-1,1,1)}. The Max and Min of this set are denoted by Max{L(i,1,1)} and Min{L(i,1,1)}, both taken over the index i; i = 1, 2, 3, ..., I-1. This process is repeated for each development period. This results in a set of maximums and minimums of the form Max{L(i,j,1)} and Min{L(i,j,1)}, with each development period j yielding a max and a min loss development factor.
Constructing the maximum and minimum values for the cumulative loss development factors. Having identified the maximum and minimum loss development factor for each development period, now the max and min cumulative loss development factors for accident year I are constructed by multiplying together all the max and all the min loss development factors. For example:
Max cumulative loss development factor = Π(Max{L(i,j,1)}), with the Max function ranging over i and the product function Π ranging over j.
Min cumulative loss development factor = Π(Min{L(i,j,1)}), with the Min function ranging over i and the product function Π ranging over j. Thus, the difference between the maximum and minimum values of all outcomes for all products of loss development factors for year I is given by the quantity:
Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)}).
Any specific ultimate outcome for year I must fall somewhere along the closed interval defined by:
[Π(Min{L(i,j,1)}) , Π(Max{L(i,j,1)})].
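The endpoints of this interval follow directly from the per-period maxima and minima; the following Python sketch (names are illustrative) shows the calculation:

```python
from math import prod

def outcome_range(factors_by_period):
    """Closed interval containing every cumulative development outcome.

    `factors_by_period[j]` is the list of historical loss development
    factors observed for development period j.  The product of the
    per-period minima (resp. maxima) bounds any product formed by
    choosing one factor per period.
    """
    lo = prod(min(fs) for fs in factors_by_period)
    hi = prod(max(fs) for fs in factors_by_period)
    return lo, hi
```

Any product of one historical factor per development period necessarily lands inside the returned interval.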
Constructing the subintervals. The goal is to determine the number N(I,1), a number of subintervals for year I, such that (a) if the interval containing the full range of outcomes is divided into these subintervals, and (b) any calculated value that falls in a subinterval is replaced with the midpoint of that subinterval, then (c) the true (computed) value cannot be more than ε away from the midpoint of that subinterval.
The target number is denoted by N(I,1). The interval
[Π(Min{L(i,j,1)}) , Π(Max{L(i,j,1)})]
is divided into (N(I,1) - 1) equal subintervals. The width of any one of the new subintervals is given by:
[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / (N(I,1) - 1)
and the radius of each subinterval is defined as one-half that number, or:
[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1). In practice, the subinterval can be open or closed on either end, to suit the particular application. For convenience, the subinterval defined here is an open/closed subinterval, with the leftmost point being excluded from the subinterval and the rightmost point being included in the subinterval. The leftmost point of the full range [that is, Π(Min{L(i,j,1)})] is designated as the midpoint of the first subinterval. Then the full leftmost subinterval is given by:
[Π(Min{L(i,j,1)}) - [Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1) , Π(Min{L(i,j,1)}) + [Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)].
The rightmost subinterval is similarly defined and is given by:
[Π(Max{L(i,j,1)}) - [Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1) , Π(Max{L(i,j,1)}) + [Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)].
This particular construction restores the one subinterval that was subtracted from N(I,1) to arrive at the width of a subinterval.
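The construction above (N midpoints spaced one subinterval-width apart, with the endpoints of the full range serving as the first and last midpoints) can be sketched in Python. This is an illustrative sketch, with the interval count passed as a plain argument n:

```python
def midpoints(lo, hi, n):
    """The n evenly spaced midpoints over [lo, hi]: dividing the range
    into n - 1 subintervals and extending half a width past each end
    makes lo the first midpoint and hi the last."""
    width = (hi - lo) / (n - 1)
    return [lo + k * width for k in range(n)]

def slot(x, lo, hi, n):
    """Replace a computed outcome x with the midpoint of the
    subinterval it falls in; the substitution error is at most half
    a subinterval width (the radius)."""
    width = (hi - lo) / (n - 1)
    return lo + round((x - lo) / width) * width
```

For instance, over [1.0, 3.0] with n = 5 the midpoints are 1.0, 1.5, 2.0, 2.5, 3.0, and a computed outcome of 2.3 is slotted at 2.5, an error within the radius of 0.25.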
Meeting the tolerance criterion, solving for N(I,1). The number of subintervals, N(I,1), that will assure the tolerance criterion ε is met is now calculated.
Once a true value has been placed in its appropriate subinterval, it cannot be more than the radius of the subinterval away from its proposed surrogate (the midpoint of that subinterval). Thus the maximum error is the radius of the subinterval constructed above:
[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1).
Thus the true error (the distance from the true value to the midpoint of the associated subinterval) is always less than or equal to the maximum error (the radius of the subinterval, as given above). So instead of dealing with the true error, a more stringent requirement is imposed: that the ratio of the radius of the subinterval to the midpoint of the subinterval be less than ε. In other words:
{[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)} / midpoint of subinterval < ε.
Now note that:
{[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)} / midpoint of subinterval ≤
{[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)} / Π(Min{L(i,j,1)}),
since the midpoint of the subinterval is always equal to or greater than Π(Min{L(i,j,1)}).
Thus the tolerance condition is met if N(I,1) is selected such that:
{[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / 2(N(I,1) - 1)} / Π(Min{L(i,j,1)}) < ε.
Solving for N(I,1) one obtains:
N(I,1) > 1 + (1/2ε)[Π(Max{L(i,j,1)}) - Π(Min{L(i,j,1)})] / Π(Min{L(i,j,1)}).
The value N(I,1) is therefore sufficient so that when each true, computed value is replaced with the midpoint of the appropriate subinterval, the true value is never more than ε away from its surrogate, the midpoint of the subinterval.
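The inequality above translates directly into code. A brief Python sketch (the function name is an assumption) that returns the smallest integer satisfying the bound:

```python
import math

def intervals_needed(lo, hi, eps):
    """Smallest integer N with N >= 1 + (1/(2*eps)) * (hi - lo) / lo,
    so that the radius of each of the N - 1 equal subintervals over
    [lo, hi] is at most eps times the smallest midpoint, lo."""
    return math.ceil(1 + (hi - lo) / (2 * eps * lo))
```

For example, with an outcome range of [1.0, 2.0] and a 25% tolerance the bound gives N = 3: midpoints 1.0, 1.5, and 2.0 with radius 0.25, which is exactly 25% of the smallest midpoint.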
B. Constructing N for line of business 1, or N1.
Having constructed NI,1, the process is repeated as often as necessary to construct a corresponding N value for each accident year to be projected to ultimate, thus yielding an entire set of N values for line of business No. 1:
NI,1; NI-1,1; NI-2,1; NI-3,1; ... ; NJ+2,1; NJ+1,1.
For each of these N values, the true value is never more than ε away from the midpoint of the corresponding subinterval for each accident year, from accident year J+1 to accident year I. The maximum of all these Ni,1 values is selected to ensure that this condition (of the error being less than ε) is met for every single accident year individually. Thus, instead of a set of Ni,1 values, Max {Ni,1} is used, with i ranging from J+1 to I. This value is designated N1, meaning the N value associated with line of business No. 1.
C. Constructing N for all lines of business.
Once N1, N2, N3, ... , NK have been constructed, the maximum of these N values, Max {Nk}, k = 1, 2, 3, ... , K, is selected, so that the maximum N is sufficient to satisfy the ε criterion for every single line of business.
Although this exemplary embodiment employs the above method for the calculation of N, N may also be a number chosen arbitrarily by the user, or may be based upon other considerations, such as, for example, the maximum number of intervals that could be calculated within a given amount of time by the computer used by the user to execute the program, or some given number that is high enough that ε is sufficiently low for the user's purposes regardless of the particular characteristics of the dataset to be evaluated (for example, if the N that provides a given error level ε is virtually always between 500 and 600, a user could select N=1000 rather than calculate N for each dataset). In the event that N is determined to meet some other criteria, it is still necessary to provide the historical loss data for each accident year for each line of business. Note also that when N is determined by other criteria, there is no assurance that the error tolerance ε is met. The process described below requires that the original data array has been provided regardless of whether or not it is used to determine N.
CONSTRUCTION OF THE CONVOLUTION DISTRIBUTION
Once N is determined, and N and the valuations described above have been provided, for example, entered as a value in a computer program, the process proceeds as follows:
A. Constructing the aggregate loss distribution for one year, and for this illustration accident year I. The process consists of the following actions:
1. Identifying the range of outcomes for accident year I. Using the Max/Min functions described above, the Max/Min cumulative loss development factors are calculated, and those are multiplied by the latest valuation available for accident year I. Thus the Max/Min ultimate values for accident year I are determined.
2. Constructing the subintervals for accident year I. Given the Max/Min ultimate values for accident year I, the N subintervals described above are identified.
3. Calculating all the different outcomes for accident year I. As discussed above, the product of each combination of loss development factors and the latest valuation for accident year I is calculated. As each outcome is calculated, the interval in which it belongs is determined and the outcome is replaced with the midpoint of that interval, and the frequency of outcomes appearing in that interval is increased by 1. The process continues until all combinations are calculated and all possible outcomes have been determined for accident year I. All results are slotted and their frequency is calculated. This process creates an aggregate loss (frequency) distribution for accident year I.
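The slotting in action 3 can be sketched as follows (a simplified illustration in Python rather than the program's Visual Basic; the function name and the toy outcomes are invented for the example). Since the midpoints are evenly spaced from the Min to the Max outcome, the nearest midpoint is found by rounding:

```python
def build_distribution(outcomes, n):
    """Replace each outcome with the midpoint of its subinterval and tally frequencies.

    The n midpoints are evenly spaced from min(outcomes) to max(outcomes)."""
    lo, hi = min(outcomes), max(outcomes)
    spacing = (hi - lo) / (n - 1)          # distance between adjacent midpoints
    freq = {}
    for x in outcomes:
        k = round((x - lo) / spacing)      # index of the nearest midpoint
        mid = lo + k * spacing
        freq[mid] = freq.get(mid, 0) + 1   # frequency of this subinterval grows by 1
    return freq

# Toy outcomes for one accident year, slotted into 3 subintervals:
print(build_distribution([1.0, 1.2, 2.0], 3))  # {1.0: 2, 2.0: 1}
```

The returned dictionary is the aggregate loss (frequency) distribution for the year: each key is a subinterval midpoint, each value the count of outcomes slotted there.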
B. Constructing the aggregate loss distribution for each of the remaining accident years. The process described in Section A above for accident year I is then repeated for each of the remaining accident years. This results in a set of individual aggregate loss distributions, one for each accident year, and each consisting of N intervals, with each interval having an associated frequency.
C. Creating the convolution distribution for all accident years combined within one line of business. This process consists of the following actions:
1. Selecting two accident years from the set of all open accident years. Select any two accident years, preferably starting with the two most mature years.
2. Creating the new range of outcomes for the convolution distribution of the two accident years. This task is accomplished by calculating (a) the sum of the two greatest midpoints of the two component distributions and (b) the sum of the two smallest midpoints of the component distributions. These calculations result in a new Max/Min for the two accident years combined.
3. Creating the new subintervals for the convolution distribution of the two selected accident years. Once again, divide the new interval into N subintervals as described above.
4. Calculating the combined outcomes for the two accident years. Every outcome from the first component distribution is then added to every outcome of the second component distribution, and the results are slotted in the new N subintervals constructed in the prior step. The frequencies for each two subintervals thus added are multiplied and tagged as belonging with the combined subinterval. This process yields the first convolution distribution - the one belonging to the two selected accident years.
5. Creating the ultimate convolution distribution for all accident years for a line of business. Actions 1-4 are then repeated; combining the first convolution distribution derived in step 4 immediately above with the distribution of outcomes of another accident year. This process yields a second convolution distribution representing the combined distribution for the three selected accident years. The process is repeated until all accident year outcomes have been combined.
The result is a single aggregate (convolution) loss distribution for a line of business.
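Actions 1 through 4 above can be sketched as a discrete convolution (again a simplified Python illustration with invented names, not the patented program): outcome values add, frequencies multiply, and the sums are re-slotted into N new subintervals.

```python
def convolve(dist_a, dist_b, n):
    """Combine two midpoint->frequency distributions into one with n subintervals."""
    sums = {}
    for xa, fa in dist_a.items():
        for xb, fb in dist_b.items():
            s = xa + xb                         # outcomes add
            sums[s] = sums.get(s, 0) + fa * fb  # frequencies multiply
    lo, hi = min(sums), max(sums)               # new Max/Min for the combination
    spacing = (hi - lo) / (n - 1)
    combined = {}
    for s, f in sums.items():
        mid = lo + round((s - lo) / spacing) * spacing
        combined[mid] = combined.get(mid, 0) + f
    return combined

# Two toy accident-year distributions combined into 3 subintervals:
print(convolve({1.0: 1, 2.0: 1}, {10.0: 2, 20.0: 1}, 3))  # {11.0: 4, 22.0: 2}
```

Repeating the call with each remaining accident year's distribution, as in action 5, folds the entire line of business into a single convolution distribution.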
D. Creating the convolution distribution for all lines of business combined. This process consists of Steps 1-5 as described in the immediately preceding section except that the component distributions are those belonging to lines of business. The end result is an aggregate (convolution) loss distribution for all lines of business combined, for the given insurance company.
The above described method may be implemented by instructions stored on a "computer readable medium." The term "computer readable medium" as described herein refers to any medium that participates in providing instructions to a computer processor for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks. Volatile media include dynamic memory, such as the random access memory (RAM) found in personal computers. Transmission media may include coaxial cables, copper wire, and fiber optics. Transmission media may also take the form of acoustic or light (electromagnetic) waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer readable media include, for example, a floppy disk, a hard disk, magnetic tape, CD-ROM, DVD-ROM, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, other memory chips or cartridges, a carrier wave, or any other medium from which a computer can read instructions.
The present invention has been described in sufficient detail to teach its practice by one of ordinary skill in the art. However, the above description and drawings of exemplary embodiments are only illustrative of preferred embodiments that achieve the objects, features and advantages of the present invention, and it is not intended that the present invention be limited thereto. Any modification of the present invention that comes within the spirit and scope of the following claims is considered part of the present invention.
INDUSTRIAL APPLICABILITY
The present invention has utility, for example, in the property and casualty insurance industry, to assist in satisfying legal requirements in the field, and to efficiently determine estimates of loss reserves necessary to conduct business.
ANNEX 1
DEMONSTRATION OF VALIDITY OF N AS CALCULATED
A. Demonstrating that the ε condition remains satisfied when the cumulative loss development factors are applied to a base number (the given, and latest, value).
All work thus far has been performed for just the cumulative loss development factors. In reality, when one projects ultimate values, one takes the cumulative loss development factor and multiplies it by the latest reported value. When all the calculations carried out above are carried out with this last step included (i.e., multiplying the latest value by the cumulative loss development factor), it will be readily seen that the latest reported amount simply cancels out at all points of the calculation. For example, if we take the final formula for NLI developed above, we have:
NI,1 > 1 + (1/2ε)[Π(Max {Li,j,1}) - Π(Min {Li,j,1})] / Π(Min {Li,j,1}).
And if each cumulative loss development factor is multiplied by the relevant latest reported value, VI,1, we would have:
NI,1 > 1 + (1/2ε)[(VI,1) Π(Max {Li,j,1}) - (VI,1) Π(Min {Li,j,1})] / (VI,1) Π(Min {Li,j,1}).
And VI,1 cancels out from all parts of the major fraction. The same is true for all other accident years.
B. Demonstrating that the ε condition remains satisfied when accident years are combined (i.e., added) in order to arrive at the aggregate loss distribution for all accident years combined, all within line of business No. 1.
Observation. Given two sets of intervals, each set consisting of n subintervals of identical width: one set spanning the interval (a - Δ1, a + (2n-1)Δ1), where Δ1 is the radius of a subinterval, with the midpoints of the component intervals placed at a + 2iΔ1, i ranging from 0 to n-1; and the other set spanning (b - Δ2, b + (2n-1)Δ2), where Δ2 is the radius of a subinterval, with the midpoints of the respective intervals placed at b + 2iΔ2, i ranging from 0 to n-1. One can then construct a new set of subintervals consisting of the "sum" of the two original sets of intervals, spanning ((a + b) - (Δ1 + Δ2), (a + b) + (2n-1)(Δ1 + Δ2)), each having a radius of (Δ1 + Δ2).
The midpoints of the new set of subintervals would be located at (a + b), (a + b) + 2(Δ1 + Δ2), (a + b) + 4(Δ1 + Δ2), ..., (a + b) + 2(n-1)(Δ1 + Δ2). Thus the radius of the new subintervals (i.e., Δ1 + Δ2) is equal to the sum of the radii of the two component subintervals.
With this background, let us now consider two sets of subintervals, each set consisting of n subintervals, the subintervals having radii of Δ1 and Δ2 for the two sets, respectively, with the midpoints of the respective sets of subintervals given as follows:
Set A: a, a + 2Δ1, a + 4Δ1, a + 6Δ1, a + 8Δ1, a + 10Δ1, ... , a + 2(n-1)Δ1
Set B: b, b + 2Δ2, b + 4Δ2, b + 6Δ2, b + 8Δ2, b + 10Δ2, ... , b + 2(n-1)Δ2
Let us now assume that Set A is the set of subintervals produced for Cohort A, consisting of a group of losses (e.g., the losses incurred during a specific accident year), and that Set B is the set of subintervals produced for Cohort B, consisting of another group of losses (e.g., the losses incurred during another specific accident year). By our construction thus far, we know that any true calculated value of ultimate outcomes produced for Cohort A has been replaced by one of the midpoints associated with Set A. We constructed these subintervals such that the error generated by substituting a true calculated value with a midpoint of a subinterval is not greater than ε. Put yet differently, the difference between any true calculated value Va and the nearest midpoint of the subintervals in Set A is not more than Δ1. Therefore, the ratio of Δ1 to the leftmost point of all the subintervals in Set A, that is (a - Δ1), is less than ε. In formula form this is given by:
Δ1 / (a - Δ1) < ε
Similarly, for Set B, we can reach the conclusion that a true calculated value Vb meets the parallel condition:
Δ2 / (b - Δ2) < ε
Given that if one had infinite computing power, one would never resort to substituting midpoints of subintervals for true calculated values, it is appropriate at this point to inquire about the amount of error that one generates by adding two surrogates (midpoints) for Va and Vb, when both Va and Vb individually meet the error tolerance criterion ε. Thus the question becomes: what can be said about
(Δ1 + Δ2) / [(a - Δ1) + (b - Δ2)]
in relation to the original error tolerance ε?
The tolerance condition Δ1 / (a - Δ1) < ε implies that Δ1 < (a - Δ1)ε.
Similarly, the tolerance condition Δ2 / (b - Δ2) < ε implies that Δ2 < (b - Δ2)ε. Adding the two inequalities yields:
(Δ1 + Δ2) < [(a - Δ1)ε] + [(b - Δ2)ε], or:
(Δ1 + Δ2) < [(a - Δ1) + (b - Δ2)]ε.
Dividing both sides of the inequality by [(a - Δ1) + (b - Δ2)] yields:
(Δ1 + Δ2) / [(a - Δ1) + (b - Δ2)] < ε
Thus when adding one accident year's approximation to another's, when each approximation meets the ε condition, it is demonstrated that the sum of the two approximations also meets the ε condition. And, this kind of demonstration can continue to be extended, one cohort at a time, until all the cohorts in a data array have been accounted for.
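A quick numeric check of this demonstration (hypothetical values chosen only for illustration, sketched in Python): the combined ratio is the mediant of the two component ratios, so it lies between them and therefore stays below ε.

```python
eps = 0.05
a, delta1 = 100.0, 2.0    # cohort A: subinterval radius 2, leftmost point a - delta1 = 98
b, delta2 = 200.0, 3.0    # cohort B: subinterval radius 3, leftmost point b - delta2 = 197

r1 = delta1 / (a - delta1)   # cohort A's error ratio, about 0.0204
r2 = delta2 / (b - delta2)   # cohort B's error ratio, about 0.0152
combined = (delta1 + delta2) / ((a - delta1) + (b - delta2))

# each component meets the tolerance individually
assert r1 < eps and r2 < eps
# the combined ratio is the mediant of r1 and r2, hence between them, hence below eps
assert min(r1, r2) <= combined <= max(r1, r2)
assert combined < eps
print(round(combined, 4))  # 0.0169
```

The mediant property is what lets the demonstration be extended one cohort at a time without the bound deteriorating.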
C. Demonstrating that the ε condition remains satisfied when the aggregate distributions of two lines of business are added together.
Using logic identical to that used above in Section B, it is possible to demonstrate that when two distributions of outcomes, each of which meets the ε criterion, are combined, the convolution distribution constructed by adding the respective outcomes from each of the two distributions will continue to meet the ε criterion.
DRAWING NO. 1
ILLUSTRATION OF THE MANNER IN WHICH A SUBINTERVAL IS CONSTRUCTED SUCH THAT THE ERROR TOLERANCE IS MET
                x
A       a       m       b       B
        <-|x - m|->   <-Δ = |m - b|->
The line segment (a,b) represents a typical subinterval, having a midpoint at m (= 1/2(a + b)), such that a calculated point, such as x, may be slotted in this subinterval, and x is ultimately replaced by m.
The interval (A,B) is the segment bounded by A, the smallest midpoint of all subintervals, and B, the largest midpoint of all subintervals. Thus the midpoints of all subintervals are evenly spaced within this larger interval.
The point corresponding to x designates a typical calculated outcome. In this illustration it is selected to lie between the midpoint m and the endpoint b.
The true error that is generated by replacing x with m is given by the amount | x - m |.
The maximum error that is possible is denoted by Δ = | m - b |.
Requiring that the replacement of x by m does not generate an error greater than ε means requiring that the ratio |x - m| / m be less than ε.
In the construction advanced by this invention we assure this condition is met by going through the following transformation:
|x - m| / m ≤ |m - b| / m ≤ |m - b| / A     (1)
Thus dividing (A,B) into a sufficiently large number of subintervals such that the rightmost quantity in (1) is less than ε assures that the subinterval construction preserves the accuracy requirement.
DRAWING NO. 2
ILLUSTRATION OF THE MANNER
IN WHICH THE SUM OF TWO INTERVALS, EACH OF WHICH
MEETS THE ERROR CRITERION, ALSO MEETS
THE ERROR CRITERION
Given a subinterval from the set of subintervals produced for Cohort I such that the subinterval construction meets the error criterion ε:
                x
A       a       m       b       B
        <-|x - m|->   <-Δ = |m - b|->
And given a subinterval from the set of subintervals produced for Cohort II such that the subinterval construction meets the error criterion ε:
                x'
A'      a'      m'      b'      B'
        <-|x' - m'|->   <-Δ' = |m' - b'|->
Then the construction of the combination of these two subintervals into a new subinterval (thus forming a convolution subinterval) yields the following:
                x + x'
A+A'    a+a'    m+m'    b+b'    B+B'
        <-|(x+x') - (m+m')|->   <-(Δ+Δ') = |(m+m') - (b+b')|->
The error that would be generated if x+x' was replaced with m+m' is given by:
|(x+x')-(m+m')|
And we wish for this amount to be less than the specified tolerance ε. Thus we construct the following sequence of successively more stringent constraints:
|(x+x') - (m+m')| / (m+m') ≤ |(b+b') - (m+m')| / (m+m') ≤ |(b+b') - (m+m')| / (a+a')     (2)
We already have, by construction, the conditions that:
|b - m| / a < ε and |b' - m'| / a' < ε. Or, equivalently,
|b - m| < aε and |b' - m'| < a'ε. Adding both sides of the inequalities yields:
|b - m| + |b' - m'| < aε + a'ε = (a + a')ε.
Dividing both sides by (a + a') yields the desired condition as shown in (2) above.
* * *
SAMPLE DATASET A Calculation of N for Set A
SAMPLE DATASET B Calculation of N for Set B
Module1 - 1
' Variable Declarations
' Last Modified 4/7/01
' These are counters and temporary variables used during the program
Dim IntI, IntJ, IntK, myCheck, myCol, myCount, myCounter, myFirstCol, myFirstRawRow, myRow As Integer
Dim myTest As Single
Dim myInterval, myLastData, myMax, myMin, mySum, myFirstSum As Double
' This array holds the raw data entered by the operator
Public myRawData(1 To 40, 1 To 20) As Single
' This array holds the calculated loss factors for each year
Public myLossFactors(1 To 40, 1 To 20) As Double
' This array holds the starting and ending row values for each column of data in the LossFactors array
Public myLossFactorsData(1 To 20, 1 To 2) As Integer
' This array holds the interim Development data for each year
Public myTempDev(1 To 100) As Double
' This array holds the development data for each year, in the Development subroutine
Public myDev(1 To 1000, 1 To 2, 1 To 20) As Double
' This array holds the interim Loss Products for each year
Public myProducts(1 To 1000, 1 To 2, 1 To 20) As Double
' These arrays are used by the myStandardize() procedure
Public myStandard(1 To 100, 1 To 2) As Double
Public myInput(1 To 20000, 1 To 2) As Double
Public myOutput(1 To 1000, 1 To 2) As Double
' This array is used by the myDistStandardize() procedure
Public myDistInput(1 To 1000000, 1 To 2) As Double
' This temporary array is used in the Products subroutine
Public myTemp(1 To 20000, 1 To 2, 1 To 20) As Double
' These arrays are used in the Distribution subroutine
Public myTempDist(1 To 1000000, 1 To 2) As Double
Public myDist(1 To 1000, 1 To 2) As Double
Sub Begin()
' Macro modified 1/21/02 by Lloyd Lee Wilson
' Begins Reserve Measure Processing
' Opening screen IDs program and asks if operator wants to continue.
    myCheck = MsgBox("Reserve Measure Program Version 5.00 - 1,000 Intervals (c) MMI  Continue?", vbYesNo)
    If myCheck = vbNo Then
        Workbooks("R 5.0 - 1,000 Intervals.xls").Close SaveChanges:=False
    End If
    Main
End Sub

Sub Main()
' Macro modified 10/20/00 by Lloyd Lee Wilson
' Main subroutine for Reserve Measure Program
Call EnterYears
Call LoadData
Call CalcLossFactors
Call Products
Call Developments
Call Distribution
    Call Massage
' Results and Union are only used to combine analyses of two or more lines at one time
Call Results
    Call Union
End Sub

Sub EnterYears()
' EnterYears Macro
' Macro modified 3/10/01 by Lloyd Lee Wilson
' This macro labels the RawData worksheet which holds the raw data entered by the program operator.
' Label Axes
    Sheets("RawData").Select
    Range("A1").FormulaR1C1 = "Years"
    myDate = InputBox("Enter the First Year for which you have data in YYYY format")
' Label Column of Years on RawData Sheet
    Range("A2").Select
    For IntI = 1 To 40
        ActiveCell.Offset(IntI - 1, 0) = myDate + (IntI - 1)
    Next IntI
End Sub

Sub LoadData()
' Macro modified 3/21/00 by Lloyd Lee Wilson
' Load data from RawData worksheet into myRawData array
' The entire 20x40 table is loaded, including zero values
    Sheets("RawData").Select
    Range("B2").Select
    For IntI = 1 To 40
        For IntJ = 1 To 20
            myRawData(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
        Next IntJ
    Next IntI
End Sub
Sub CalcLossFactors()
' CalcLossFactors Macro
' Macro modified 9/21/00 by Lloyd Lee Wilson
' This macro calculates the Loss Factors from the Raw Data entered by the operator.
' Copy Column of Labels to Loss Data Sheet
    Sheets("RawData").Select
    ActiveWindow.Panes(1).Activate
    Columns("A:A").Select
    Selection.Copy
    Sheets("LossFactors").Select
    Columns("A:A").Select
    ActiveSheet.Paste
    Application.CutCopyMode = False
' Initialize myLossFactors array
    For IntI = 1 To 40
        For IntJ = 1 To 20
            myLossFactors(IntI, IntJ) = 0
        Next IntJ
    Next IntI
' Calculate Loss Factors, using entire myRawData array, including zero values
' (avoid dividing by zero)
' For each cell in the myRawData array, the corresponding Loss Factor is the value of that
' cell divided into the value of the cell in the same row but one column to the left.
    For IntI = 1 To 40
        For IntJ = 2 To 20
' If the numerator of the division is zero, set loss factor to zero
            If myRawData(IntI, IntJ) = 0 Then
                myLossFactors(IntI, IntJ - 1) = 0
            Else
' If the denominator of the division is zero, set loss factor to zero
                If myRawData(IntI, IntJ - 1) = 0 Then
                    myLossFactors(IntI, IntJ - 1) = 0
                Else
                    myLossFactors(IntI, IntJ - 1) = myRawData(IntI, IntJ) / myRawData(IntI, IntJ - 1)
                End If
            End If
        Next IntJ
    Next IntI
' Display Loss Factors
    Sheets("LossFactors").Select
    Range("A2").Select
    For IntI = 1 To 40
        For IntJ = 1 To 20
            ActiveCell.Offset(IntI - 1, IntJ) = myLossFactors(IntI, IntJ)
        Next IntJ
    Next IntI
End Sub

Sub Products()
' Macro modified by Lloyd Lee Wilson
' Each loss factor should be multiplied by all the loss factors in columns to the right of its column
' This is done by multiplying the first two columns, standardizing the result to a set of 1000 values,
' then storing those values in myProducts. Then the next column of loss factors is multiplied by the
' most recent set stored in myProducts
' initialize myTemp - it stores the results of multiplication before standardizing
' there are 20x1000 products for each column
    For IntI = 1 To 20000
' for each product, there is a value and an on-off flag
        For IntJ = 1 To 2
' there are up to 20 columns of products that could be stored
            For IntK = 1 To 20
                myTemp(IntI, IntJ, IntK) = 0
            Next IntK
        Next IntJ
    Next IntI
' initialize myProducts - it stores the results of multiplication after standardizing
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            For IntK = 1 To 20
                myProducts(IntI, IntJ, IntK) = 0
            Next IntK
        Next IntJ
    Next IntI
' Look for first non-zero data
' begin at column 20 and look at every cell of myLossFactors until a positive value is found
    myCol = 20
    Do
        For IntI = 1 To 40
            If myLossFactors(IntI, myCol) > 0 Then
                myFirstCol = myCol
                Exit Do
            End If
        Next IntI
        myCol = myCol - 1
    Loop
' myFirstCol is now the rightmost column with nonzero data
' There is no column to the right of myFirstCol, so there is nothing to multiply it by
' So put myLossFactors for first column w/data in same column of myTemp and myProducts
    For IntI = 1 To 20
' store the LossFactor value in the value part of the myTemp array
        myTemp(IntI, 2, myFirstCol) = myLossFactors(IntI, myFirstCol)
' if the LossFactor value is nonzero, set the flag to 1
        If myTemp(IntI, 2, myFirstCol) > 0 Then
            myTemp(IntI, 1, myFirstCol) = 1
        End If
' store the LossFactor value in the value part of the myProducts array
        myProducts(IntI, 2, myFirstCol) = myLossFactors(IntI, myFirstCol)
' if the LossFactor value is nonzero, set the flag to 1
        If myProducts(IntI, 2, myFirstCol) > 0 Then
            myProducts(IntI, 1, myFirstCol) = 1
        End If
    Next IntI
' calculate myProducts for remaining columns
' begin with the rightmost column of nonzero data
    myCol = myFirstCol
    Do
' stop when column 1 is reached
        If myCol = 1 Then
            Exit Do
        End If
' multiply values in current column by those in column to the left - store in myTemp
' current column is myCol; column to the left is myCol - 1
' IntI counts the entries in myTemp
        IntI = 1
' IntJ counts the products in myCol, after standardization
        For IntJ = 1 To 1000
' IntK counts the values in myCol - 1, which has not yet been multiplied
            For IntK = 1 To 20
' dimension 2 holds the products
                myTemp(IntI, 2, myCol - 1) = myProducts(IntJ, 2, myCol) * myLossFactors(IntK, myCol - 1)
' when dimension 2 is nonzero, dimension 1 holds myProducts for the previous column
                If myTemp(IntI, 2, myCol - 1) > 0 Then
                    myTemp(IntI, 1, myCol - 1) = myProducts(IntJ, 1, myCol)
                Else
                    myTemp(IntI, 1, myCol - 1) = 0
                End If
                IntI = IntI + 1
            Next IntK
        Next IntJ
' Load current column of myTemp into myInput
        For IntI = 1 To 20000
            For IntJ = 1 To 2
                myInput(IntI, IntJ) = myTemp(IntI, IntJ, myCol - 1)
            Next IntJ
        Next IntI
' call standardization subroutine
        Call myStandardize
' store standardized output in appropriate column of myProducts
        For IntI = 1 To 1000
            For IntJ = 1 To 2
                myProducts(IntI, IntJ, myCol - 1) = myOutput(IntI, IntJ)
            Next IntJ
        Next IntI
' Move to next column
        myCol = myCol - 1
    Loop
' display sample results
    Sheets("Products").Select
    Range("A1").Select
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            ActiveCell.Offset(IntI - 1, IntJ - 1) = myProducts(IntI, IntJ, 1)
            ActiveCell.Offset(IntI - 1, IntJ + 1) = myProducts(IntI, IntJ, 2)
            ActiveCell.Offset(IntI - 1, IntJ + 3) = myProducts(IntI, IntJ, 3)
            ActiveCell.Offset(IntI - 1, IntJ + 5) = myProducts(IntI, IntJ, 4)
            ActiveCell.Offset(IntI - 1, IntJ + 7) = myProducts(IntI, IntJ, 5)
            ActiveCell.Offset(IntI - 1, IntJ + 9) = myProducts(IntI, IntJ, 6)
            ActiveCell.Offset(IntI - 1, IntJ + 11) = myProducts(IntI, IntJ, 7)
            ActiveCell.Offset(IntI - 1, IntJ + 13) = myProducts(IntI, IntJ, 8)
            ActiveCell.Offset(IntI - 1, IntJ + 15) = myProducts(IntI, IntJ, 9)
            ActiveCell.Offset(IntI - 1, IntJ + 17) = myProducts(IntI, IntJ, 10)
        Next IntJ
    Next IntI
End Sub
Sub Developments()
' Macro modified 10/09/02 by Lloyd Lee Wilson
' multiplies each year's last data by future loss factors
' Because right-most column is ultimate values, first number to be developed is in myFirstCol
' initialize myDev
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            For IntK = 1 To 20
                myDev(IntI, IntJ, IntK) = 0
            Next IntK
        Next IntJ
    Next IntI
' Find last cell with non-zero data in myFirstCol of myRawData
    IntI = 20
    Do
        myTest = myRawData(IntI, myFirstCol)
        If myTest <> 0 Then
            myFirstRawRow = IntI
            Exit Do
        End If
        IntI = IntI - 1
    Loop
' multiply last raw data value for each year (row) by myProducts array for same column
    myCol = myFirstCol
    myRow = myFirstRawRow
    Do
        For IntJ = 1 To 1000
            myDev(IntJ, 2, myCol) = myRawData(myRow, myCol) * myProducts(IntJ, 2, myCol)
            myDev(IntJ, 1, myCol) = myProducts(IntJ, 1, myCol)
        Next IntJ
' Quit when column One has been calculated
        myCol = myCol - 1
        myRow = myRow + 1
        If myCol = 0 Then
            Exit Do
        End If
    Loop
' temporarily display myDev results
    Sheets("Development").Select
    Range("A1").Select
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            ActiveCell.Offset(IntI - 1, IntJ - 1) = myDev(IntI, IntJ, 1)
            ActiveCell.Offset(IntI - 1, IntJ + 1) = myDev(IntI, IntJ, 2)
            ActiveCell.Offset(IntI - 1, IntJ + 3) = myDev(IntI, IntJ, 3)
            ActiveCell.Offset(IntI - 1, IntJ + 5) = myDev(IntI, IntJ, 4)
            ActiveCell.Offset(IntI - 1, IntJ + 7) = myDev(IntI, IntJ, 5)
            ActiveCell.Offset(IntI - 1, IntJ + 9) = myDev(IntI, IntJ, 6)
            ActiveCell.Offset(IntI - 1, IntJ + 11) = myDev(IntI, IntJ, 7)
            ActiveCell.Offset(IntI - 1, IntJ + 13) = myDev(IntI, IntJ, 8)
            ActiveCell.Offset(IntI - 1, IntJ + 15) = myDev(IntI, IntJ, 9)
            ActiveCell.Offset(IntI - 1, IntJ + 17) = myDev(IntI, IntJ, 10)
        Next IntJ
    Next IntI
End Sub
Sub Distribution()
' Macro modified 4/07/01 by Lloyd Lee Wilson
' Calculates the distribution of multiplying all the development arrays together
' initialize myTempDist array
    For IntI = 1 To 1000000
        For IntJ = 1 To 2
            myTempDist(IntI, IntJ) = 0
        Next IntJ
    Next IntI
' initialize myDist array
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            myDist(IntI, IntJ) = 0
        Next IntJ
    Next IntI
' store column One of myDev in myDist
"or Intl = J.- ψo 0.0Q
For Int = 1 To 2 myDist (Intl , IntJ) = myDev ( Intl , IntJ, l)
Next IntJ Text Intl multiply myDist by next column of myDev and store in myTempDist lyCol = 2 to
Intl = 1
For IntJ = 1 To 1000
For IntK = 1 To 1000 myTempDist (Intl, 1) = myDev ntJ, 1, myCol) * myDist (IntK, 1) myTempDist (Intl, 2) = myDev(IntJ, 2, myCol) + myDis (IntK, 2) Intl = Intl + 1 Next IntK Next IntJ store myTempDist in myDistlnput for intl = 1 To 1000000 For IntJ = 1 To 2 myDistlnput (Intl, IntJ) = myTempDist (Intl, IntJ) Next IntJ lext intl
    ' temporarily display myDistInput on Distribution
    Sheets("Distribution").Select
    Range("E2").Select
    For IntI = 1 To 10000
        For IntJ = 1 To 2
            ActiveCell.Offset(IntI - 1, IntJ - 1) = myDistInput(IntI, IntJ)
        Next IntJ
    Next IntI
    ' standardize myTempDist and store in myDist
    Call myDistStandardize
    ' store myOutput in myDist
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            myDist(IntI, IntJ) = myOutput(IntI, IntJ)
        Next IntJ
    Next IntI
    ' temporarily display myDist on Distribution
    Range("O2").Select
    For IntI = 1 To 1000
        For IntJ = 1 To 2
            ActiveCell.Offset(IntI - 1, IntJ - 1) = myDist(IntI, IntJ)
        Next IntJ
    Next IntI
    ' check to see if done
    myCol = myCol + 1
    If myCol > myFirstCol Then
        Exit Do
    End If
' repeat if necessary
Loop
' Display final unadjusted distribution
Sheets("Distribution").Select
Range("A3").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        ActiveCell.Offset(IntI - 1, IntJ - 1) = myDist(IntI, IntJ)
    Next IntJ
Next IntI
' Shift to Chart
Sheets("Chart").Select
Range("A1").Select
End Sub
Sub Massage()
' Macro modified 4/4/01 by Lloyd Lee Wilson
' Scales the final distribution so frequency column is a percentage of total
' frequencies, for better charting
' Adjusts values by subtracting sum of last observed data values and adding constant
' First step is to scale the frequencies. This is done by the spreadsheet:
' it divides all frequencies by the sum of all frequencies and stores the result
' in Column C
' Second step is to adjust the values column
' find the sum of last observed values
mySum = 0
myCount = 0
myCol = 1
myRow = 1
For IntJ = 1 To 20
    IntI = 1
    Do
        If myRawData(IntI, IntJ) = 0 Then
            If myCount > 0 Then
                mySum = mySum + myRawData(IntI - 1, IntJ)
                myLastData = myRawData(IntI - 1, IntJ)
                Exit Do
            End If
        Else
            myCount = 1
        End If
        If IntI = 40 Then
            Exit Do
        Else
            IntI = IntI + 1
        End If
    Loop
    myCount = 0
Next IntJ
mySum = mySum - myLastData
' for each value point, subtract the observed value sum and add the constant
' read constant value and add to mySum
Sheets("RawData").Select
Range("B45").Select
mySum = mySum - ActiveCell.Offset(0, 0)
' subtract the new mySum from each of the values in myDist
For IntI = 1 To 1000
    myDist(IntI, 2) = myDist(IntI, 2) - mySum
Next IntI
' Display results on Distribution worksheet
Sheets("Distribution").Select
Range("C3").Select
For IntI = 1 To 1000
    ActiveCell.Offset(IntI - 1, 1) = myDist(IntI, 2)
Next IntI
' Final step is to calculate the weighted average of the adjusted outcome
' distribution. The spreadsheet does this.
End Sub
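The Massage routine can be sketched compactly in Python. This is illustrative only: `dist`, `last_observed_sum` and `constant` are hypothetical arguments standing in for the values the macro reads from the RawData worksheet (cell B45); note the macro's comment says "add" the constant while its code subtracts, and the sketch follows the code.

```python
def massage(dist, last_observed_sum, constant):
    """Sketch of the Massage step: rescale frequencies to fractions of the
    total (the spreadsheet's step in the listing), then shift every value
    down by the sum of last observed values net of the constant."""
    total = sum(f for f, _ in dist)
    shift = last_observed_sum - constant
    return [(f / total, v - shift) for f, v in dist]
```

For example, `massage([(1, 10.0), (3, 20.0)], 5.0, 2.0)` rescales the frequencies to 0.25 and 0.75 and shifts both values down by 3.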
Sub Results()
' This macro will store the current results in a separate array to be used
' if several lines of insurance are to be analyzed
End Sub

Sub Union()
' This macro will determine whether another line of insurance is to be analyzed.
' When all separate lines have been analyzed, a consolidated distribution will be
' generated and charted
End Sub

Sub myStandardize()
' Macro modified 1/21/02 by Lloyd Lee Wilson
' Standardizes an array of 20000 values into 1000 buckets
' Input data is myInput()
' Output data is myOutput()
' Initialize myOutput array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myOutput(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Determine largest Input value and smallest non-zero Input value
myMax = myInput(1, 2)
If myInput(1, 2) > 0 Then
    myMin = myInput(1, 2)
Else
    myMin = 10000000
End If
For IntI = 2 To 20000
    If myInput(IntI, 2) > myMax Then
        myMax = myInput(IntI, 2)
    Else
        If myInput(IntI, 2) < myMin Then
            If myInput(IntI, 2) > 0 Then
                myMin = myInput(IntI, 2)
            End If
        End If
    End If
Next IntI
' Determine standard intervals and mid-points
myInterval = (myMax - myMin) / 999
myOutput(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myOutput(IntI, 2) = myOutput(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 20000
    myCounter = 0
    IntJ = 1
    ' sort only non-zero values
    If myInput(IntI, 2) > 0 Then
        If myInput(IntI, 1) > 0 Then
            Do Until myCounter > 0
                If myInput(IntI, 2) <= myOutput(IntJ, 2) Then
                    myOutput(IntJ, 1) = myOutput(IntJ, 1) + myInput(IntI, 1)
                    myCounter = 1
                Else
                    IntJ = IntJ + 1
                End If
            Loop
        End If
    End If
Next IntI
' Reset Interval markers
myOutput(1, 2) = myMin
For IntI = 2 To 1000
    myOutput(IntI, 2) = myOutput(IntI - 1, 2) + myInterval
Next IntI
End Sub
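The bucketing logic of myStandardize (and its larger sibling myDistStandardize) is a single re-binning idea; a minimal Python sketch, with the 20,000-point input and 1,000-bucket output sizes made parameters (function and variable names here are illustrative, not from the listing):

```python
def standardize(pairs, n_buckets=1000):
    """Re-bin (frequency, value) pairs into n_buckets equal-width buckets.

    Mirrors the listing: only pairs with positive frequency and value
    participate; bucket boundaries step by one interval starting from
    myMin + interval/2; frequencies landing in the same bucket are summed;
    finally the interval markers are reset to start at the minimum value."""
    admitted = [(f, v) for f, v in pairs if f > 0 and v > 0]
    vmax = max(v for _, v in admitted)
    vmin = min(v for _, v in admitted)
    interval = (vmax - vmin) / (n_buckets - 1)
    # mid-point boundaries, as in the VBA ("myMin + myInterval / 2", then steps)
    bounds = [vmin + interval / 2 + i * interval for i in range(n_buckets)]
    freqs = [0.0] * n_buckets
    for f, v in admitted:
        j = 0
        while j < n_buckets - 1 and v > bounds[j]:
            j += 1
        freqs[j] += f
    # reset interval markers: bucket i is labeled myMin + i * interval
    return [(freqs[i], vmin + i * interval) for i in range(n_buckets)]
```

The linear scan over bucket boundaries matches the macro's `Do Until` search; total frequency is preserved because every admitted pair lands in exactly one bucket.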
Sub myDistStandardize()
' Macro modified 1/21/02 by Lloyd Lee Wilson
' Standardizes an array of 1000000 values into 1000 buckets
' Input data is myDistInput()
' Output data is myOutput()
' Initialize myOutput array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myOutput(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Determine largest Input value and smallest non-zero Input value
myMax = myDistInput(1, 2)
If myDistInput(1, 2) > 0 Then
    myMin = myDistInput(1, 2)
Else
    myMin = 1000000000
End If
For IntI = 2 To 1000000
    If myDistInput(IntI, 2) > myMax Then
        myMax = myDistInput(IntI, 2)
    Else
        If myDistInput(IntI, 2) < myMin Then
            If myDistInput(IntI, 2) > 0 Then
                ' set minimum at first value with frequency larger than zero
                If myDistInput(IntI, 1) > 0 Then
                    myMin = myDistInput(IntI, 2)
                End If
            End If
        End If
    End If
Next IntI
' Determine standard intervals and mid-points
myInterval = (myMax - myMin) / 999
myOutput(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myOutput(IntI, 2) = myOutput(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myCounter = 0
    IntJ = 1
    ' sort only non-zero values
    If myDistInput(IntI, 2) > 0 Then
        If myDistInput(IntI, 1) > 0 Then
            Do Until myCounter > 0
                If myDistInput(IntI, 2) <= myOutput(IntJ, 2) Then
                    myOutput(IntJ, 1) = myOutput(IntJ, 1) + myDistInput(IntI, 1)
                    myCounter = 1
                Else
                    IntJ = IntJ + 1
                End If
            Loop
        End If
    End If
Next IntI
' Reset Interval markers
myOutput(1, 2) = myMin
For IntI = 2 To 1000
    myOutput(IntI, 2) = myOutput(IntI - 1, 2) + myInterval
Next IntI
End Sub
' Reserve Measurement Program - 8-Way Convolutions
' (c) MMII Datarray L.L.C.
' Last modified 10/14/02 by Lloyd Lee Wilson
' These variables contain the interval value for each input array
Public myData1Interval, myData2Interval, myData3Interval, myData4Interval, myData5Interval As Double
Public myData6Interval, myIntervals, myMax, myMin As Double
' These arrays hold the raw data entered by the operator
Public myData1(1 To 1000, 1 To 2) As Double
Public myData2(1 To 1000, 1 To 2) As Double
Public myData3(1 To 1000, 1 To 2) As Double
Public myData4(1 To 1000, 1 To 2) As Double
Public myData5(1 To 1000, 1 To 2) As Double
Public myData6(1 To 1000, 1 To 2) As Double
Public myData7(1 To 1000, 1 To 2) As Double
' myInterim holds the results of combining two distributions
Public myInterim(1 To 1000000, 1 To 2) As Double
' myStandard holds the results of standardizing myInterim
Public myStandard(1 To 1000, 1 To 2) As Double

Sub Convolute()
' Macro modified 10/14/02 by Lloyd Lee Wilson
' Convolutes distributions recorded on Data worksheet
' Macro presently set for seven distribution datasets only
' Initialize data arrays
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData1(IntI, IntJ) = 0
        myData2(IntI, IntJ) = 0
        myData3(IntI, IntJ) = 0
        myData4(IntI, IntJ) = 0
        myData5(IntI, IntJ) = 0
        myData6(IntI, IntJ) = 0
        myData7(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Load data from worksheet into arrays
' First Dataset
Sheets("Data").Select
Range("A2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData1(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Second Dataset
Sheets("Data").Select
Range("C2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData2(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Third Dataset
Sheets("Data").Select
Range("E2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData3(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Fourth Dataset
Sheets("Data").Select
Range("G2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData4(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Fifth Dataset
Sheets("Data").Select
Range("I2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData5(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Sixth Dataset
Sheets("Data").Select
Range("K2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData6(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' Seventh Dataset
Sheets("Data").Select
Range("M2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myData7(IntI, IntJ) = ActiveCell.Offset(IntI - 1, IntJ - 1)
    Next IntJ
Next IntI
' display myData arrays
Sheets("test data array").Select
Range("A2").Select
For IntI = 1 To 1000
    For IntJ = 1 To 2
        ActiveCell.Offset(IntI - 1, IntJ - 1) = myData1(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 1) = myData2(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 3) = myData3(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 5) = myData4(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 7) = myData5(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 9) = myData6(IntI, IntJ)
        ActiveCell.Offset(IntI - 1, IntJ + 11) = myData7(IntI, IntJ)
    Next IntJ
Next IntI
' Combine data from first two arrays
' Multiply Frequency values; add Mean values
' Determine distribution intervals
myData1Interval = myData1(2, 2) - myData1(1, 2)
myData2Interval = myData2(2, 2) - myData2(1, 2)
myData3Interval = myData3(2, 2) - myData3(1, 2)
myData4Interval = myData4(2, 2) - myData4(1, 2)
myData5Interval = myData5(2, 2) - myData5(1, 2)
myData6Interval = myData6(2, 2) - myData6(1, 2)
myData7Interval = myData7(2, 2) - myData7(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myData1(IntI, 1) * myData2(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myData1(IntI, 2) + myData2(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("A2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Combine Third distribution into Running Total
' Multiply Frequency values; add Mean values
' Determine myStandard and myData3 intervals
myIntervals = myStandard(2, 2) - myStandard(1, 2)
myData3Interval = myData3(2, 2) - myData3(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myStandard(IntI, 1) * myData3(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myStandard(IntI, 2) + myData3(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("C2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Combine Fourth distribution into Running Total
' Multiply Frequency values; add Mean values
' Determine myStandard and myData4 intervals
myIntervals = myStandard(2, 2) - myStandard(1, 2)
myData4Interval = myData4(2, 2) - myData4(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myStandard(IntI, 1) * myData4(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myStandard(IntI, 2) + myData4(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("E2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Combine Fifth distribution into Running Total
' Multiply Frequency values; add Mean values
' Determine myStandard and myData5 intervals
myIntervals = myStandard(2, 2) - myStandard(1, 2)
myData5Interval = myData5(2, 2) - myData5(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myStandard(IntI, 1) * myData5(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myStandard(IntI, 2) + myData5(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("G2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Combine Sixth distribution into Running Total
' Multiply Frequency values; add Mean values
' Determine myStandard and myData6 intervals
myIntervals = myStandard(2, 2) - myStandard(1, 2)
myData6Interval = myData6(2, 2) - myData6(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myStandard(IntI, 1) * myData6(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myStandard(IntI, 2) + myData6(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("I2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Combine Seventh distribution into Running Total
' Multiply Frequency values; add Mean values
' Determine myStandard and myData7 intervals
myIntervals = myStandard(2, 2) - myStandard(1, 2)
myData7Interval = myData7(2, 2) - myData7(1, 2)
IntK = 1
For IntI = 1 To 1000
    For IntJ = 1 To 1000
        ' myInterim(1000000, 2) holds the results of combining two years' results
        ' myInterim(x, 1) holds frequency hits
        myInterim(IntK, 1) = myStandard(IntI, 1) * myData7(IntJ, 1)
        ' myInterim(x, 2) holds mean values
        myInterim(IntK, 2) = myStandard(IntI, 2) + myData7(IntJ, 2)
        IntK = IntK + 1
    Next IntJ
Next IntI
' Standardize myInterim Array
' Initialize myStandard array
For IntI = 1 To 1000
    For IntJ = 1 To 2
        myStandard(IntI, IntJ) = 0
    Next IntJ
Next IntI
' Identify maximum and minimum for transitional array
myMax = myInterim(1, 2)
myMin = myInterim(1, 2)
For IntI = 2 To 1000000
    If myInterim(IntI, 2) > myMax Then
        myMax = myInterim(IntI, 2)
    Else
        If myInterim(IntI, 2) < myMin Then
            myMin = myInterim(IntI, 2)
        End If
    End If
Next IntI
' Determine standard intervals and mid-points for myStandard array
myInterval = (myMax - myMin) / 999
myStandard(1, 2) = myMin + (myInterval / 2)
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Sort values into buckets
For IntI = 1 To 1000000
    myTempMin = myMin
    myCounter = 0
    IntJ = 1
    Do Until myCounter > 0
        If myInterim(IntI, 2) <= myStandard(IntJ, 2) Then
            myStandard(IntJ, 1) = myStandard(IntJ, 1) + myInterim(IntI, 1)
            myCounter = 1
        Else
            IntJ = IntJ + 1
        End If
    Loop
Next IntI
' Reset Interval Markers
myStandard(1, 2) = myMin
For IntI = 2 To 1000
    myStandard(IntI, 2) = myStandard(IntI - 1, 2) + myInterval
Next IntI
' Display myStandard Results
Sheets("Results").Select
Range("K2").Select
For IntJ = 1 To 1000
    ActiveCell.Offset(IntJ - 1, 0) = myStandard(IntJ, 1)
    ActiveCell.Offset(IntJ - 1, 1) = myStandard(IntJ, 2)
Next IntJ
' Display Chart
Sheets("Chart1").Select
End Sub

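Both listings reduce to one core operation: combine two discrete (frequency, value) distributions by multiplying frequencies and adding values over the full cross product, then re-bucket the interim result as in the standardization routines above. A Python sketch of that combination step (names are illustrative, not from the listing):

```python
from itertools import product

def convolve_pair(dist_a, dist_b):
    """Cross two (frequency, value) distributions as the Convolute macro
    does: frequencies multiply, values add, yielding len(a) * len(b)
    interim points that the caller then re-buckets into 1000 intervals."""
    return [(fa * fb, va + vb) for (fa, va), (fb, vb) in product(dist_a, dist_b)]
```

For example, convolving a fair-coin outcome distribution with itself yields four interim points whose frequencies still sum to one, with the familiar 1-2-1 weighting on the summed values.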
Claims

What is claimed is:
1. A method for constructing a historically based frequency distribution of unknown ultimate outcomes in a data set, the method comprising the following acts:
A. collecting relevant data about a series of known cohorts, where a new group of the data emerges at regular time intervals, measuring a characteristic of each group of the data at regular time intervals, and entering each said characteristic into a data set having at least two dimensions;
B. determining a number of frequency intervals N to be used to construct said distribution of unknown ultimate outcomes;
C. for each period I, constructing an aggregate distribution by:
(a) calculating period-to-period ratios of the data characteristics;
(b) identifying a range of ratio outcomes for cohort I;
(c) constructing subintervals for cohort I;
(d) calculating all possible ratio outcomes for cohort I; and
(e) inserting each outcome into the proper interval; and
D. constructing a convolution distribution of outcomes (said historically based frequency distribution of unknown ultimate outcomes) for all said possible ratio cohorts combined, by:
(a) selecting outcomes for any two cohorts A and B;
(b) constructing a new range of outcomes for the convolution distribution of cohorts A and B;
(c) constructing new subintervals for the convolution distribution of cohorts A and B;
(d) calculating the combined outcomes for the two cohorts A and B to provide a resulting convolution distribution; and
(e) combining the resulting convolution distribution with the distribution of outcomes for each remaining cohort by repeating each of the preceding acts D.(a) through D.(d) for each pair of cohorts.
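The acts of claim 1 can be illustrated end to end in a short sketch. This is illustrative only: the function names, the bucket count `n`, and the toy data are assumptions, not taken from the claim.

```python
def ratio_outcomes(history):
    """Act C(a): period-to-period ratios of one cohort's measurements."""
    return [later / earlier for earlier, later in zip(history, history[1:]) if earlier]

def bucket(weighted, n):
    """Acts C(b)-(e) and D(b)-(d): spread (weight, value) points over n
    equal-width subintervals and sum the weights landing in each."""
    lo = min(v for _, v in weighted)
    hi = max(v for _, v in weighted)
    width = (hi - lo) / n or 1.0  # degenerate range: one unit-wide bucket
    freqs = [0.0] * n
    for w, v in weighted:
        freqs[min(int((v - lo) / width), n - 1)] += w
    return [(freqs[i], lo + (i + 0.5) * width) for i in range(n)]

def convolve(dist_a, dist_b, n):
    """Act D: multiply frequencies, add values, then re-bucket."""
    interim = [(fa * fb, va + vb) for fa, va in dist_a for fb, vb in dist_b]
    return bucket(interim, n)
```

Act D(e)'s repetition over the remaining cohorts is then a fold, e.g. `functools.reduce(lambda a, b: convolve(a, b, n), cohort_dists)`.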
2. The method of claim 1 , in which N is a number of intervals required to meet a given level of error tolerance selected by a user.
3. The method of claim 1 , in which N is a maximum number of intervals that can be calculated by a computer provided by a user in a given period of time.
4. The method of claim 1 , further comprising the acts of
(a) constructing convolution distributions for at least two separate groups of data using the method described in claim 1; and
(b) constructing a convolution distribution of such separate groups together.
5. A computer software system having a set of instructions for controlling a general purpose digital computer in performing a reserve measure function comprising: a set of instructions for:
A. receiving a set of data,
B. receiving a number of intervals N,
C. for each period I, constructing the aggregate distribution by:
(a) calculating the period-to-period ratios of the data;
(b) identifying a range of ratio outcomes for cohort I;
(c) constructing subintervals for cohort I;
(d) calculating all possible ratio outcomes for cohort I;
(e) inserting each outcome into the proper interval; and
D. constructing a convolution distribution for all said possible ratio cohorts combined, by:
(a) selecting outcomes for any two cohorts A and B;
(b) constructing a new range of ratio outcomes for the convolution distribution of cohorts A and B;
(c) constructing new subintervals for the convolution distribution of cohorts A and B;
(d) calculating the combined possible ratio outcomes for the two cohorts A and B; and
(e) combining the resulting convolution distribution with the distribution of outcomes for each remaining cohort by repeating each of the preceding actions D.(a) through D.(d) for constructing a new convolution distribution.
6. The computer software system of claim 5, where N is a number of intervals required to meet a given level of error tolerance as determined by a user.
7. The computer software system of claim 5, further comprising a set of instructions for: receiving an error tolerance ε selected by a user; calculating the number of intervals N required to produce such level of error tolerance.
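Claims 2, 6, 7 and 13 tie the interval count N to an error tolerance ε without fixing the formula. One plausible rule — an assumption for illustration, not taken from the application — bounds the error introduced by snapping an outcome to its bucket mid-point at half the bucket width:

```python
import math

def intervals_for_tolerance(vmin, vmax, epsilon):
    """Hypothetical mapping from tolerance to interval count: choose N so
    the bucket half-width (vmax - vmin) / (2 * N) is at most epsilon,
    i.e. bucketing moves no outcome by more than epsilon."""
    return max(1, math.ceil((vmax - vmin) / (2.0 * epsilon)))
```

Under this rule, halving ε doubles N, which is the usual accuracy/computation trade-off the time-budget claims (3, 8, 15) address from the other direction.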
8. The computer software system of claim 5, in which N is a maximum number of intervals that can be calculated by the computer in a given period of time.
9. The computer software system of claim 5, in which a value for N is fixed in the instructions.
10. The computer software system of claim 5, in which N is a number selected by a user.
11. The computer software system of claim 5, in which the set of data is comprised of insured losses over a given period of years and for a given line of business.
12. A computer-readable medium storing instructions executable by a computer to cause the computer to perform a reserve measure process comprising:
A. receiving a set of data;
B. receiving a number of intervals N;
C. for each period I, constructing the aggregate distribution by:
(a) calculating the period-to-period ratios;
(b) identifying the range of outcomes for cohort I;
(c) constructing the subintervals for cohort I; and
(d) calculating all the different outcomes for cohort I;
(e) inserting each outcome into the proper interval; and
D. constructing a convolution distribution for all cohorts combined, by:
(a) selecting any two cohorts A and B
(b) constructing a new range of outcomes for the convolution distribution of cohorts A and B;
(c) constructing new subintervals for the convolution distribution of cohorts A and B;
(d) calculating the combined outcomes for the two cohorts A and B; and
(e) combining the resulting convolution distribution with the distribution of outcomes for each remaining cohort by repeating each of the preceding actions D.(a) through D.(d) for constructing a new convolution distribution.
13. The computer readable medium of instructions of claim 12, where N is the number of intervals required to meet a given level of error tolerance as determined by the user.
14. The computer readable medium of instructions of claim 12, further comprising a set of instructions for: receiving an error tolerance ε selected by the user; calculating the number of intervals N required to produce such level of error tolerance.
15. The computer readable medium of instructions of claim 12, in which N is the maximum number of intervals that can be calculated by the computer in a given period of time.
16. The computer readable medium of instructions of claim 12, in which a value for N is fixed in the instructions.
17. The computer readable medium of instructions of claim 12, in which N is a number selected by the user.
18. The computer readable medium of instructions of claim 12, in which the data set is comprised of insured losses over a given period of years and for a given line of business.
19. A method for constructing a historically based frequency distribution of insurance losses, the method comprising the following acts:
A. collection of relevant data about claims experience across a line of business, for a set of accident years;
B. determination of a number of intervals N to be used to construct said distribution of insurance losses;
C. for each accident year I in each line of business K, constructing the aggregate distribution by:
(a) calculating the period-to-period ratios;
(b) identifying the range of outcomes for accident year I;
(c) constructing the subintervals for accident year I;
(d) calculating all the different outcomes for accident year I;
(e) inserting each outcome into the proper interval; and
D. for each line of business K, constructing a convolution distribution for all accident years combined, by:
(a) selecting any two accident years A and B;
(b) constructing a new range of outcomes for the convolution distribution of accident years A and B;
(c) constructing new subintervals for the convolution distribution of accident years A and B;
(d) calculating the combined outcomes for the two accident years A and B;
(e) combining the resulting convolution distribution with the distribution of outcomes for each remaining accident year by repeating each of the preceding steps D.(a) through D.(d) for constructing a new convolution distribution; and
F. combining the resultant convolution distributions for all lines of business by
(a) selecting any two lines of business X and Y;
(b) constructing a new range of outcomes for the convolution distribution of lines of business X and Y;
(c) constructing new subintervals for the convolution distribution of lines of business X and Y;
(d) calculating the combined outcomes for the two lines of business X and Y; and
(e) combining the resulting convolution distribution with the distribution of outcomes for each remaining line of business by repeating each of the preceding steps F.(a) through F.(d) for constructing a new convolution distribution to produce a convolution distribution across all lines of business.
20. The method of claim 19, further comprising the following action: evaluating the actual insurance reserve based on the resulting convolution distribution.
21. The method of claim 20, further comprising the following action: adjusting the insurance reserve of the user based upon the comparison of the actual reserve to the convolution distribution.
22. The method of claim 19, further comprising the following action: selecting an insurance reserve based upon the resulting convolution distribution.
PCT/US2004/004762 2003-02-18 2004-02-18 Method and apparatus for reserve measurement WO2004074763A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/546,235 US20060293926A1 (en) 2003-02-18 2004-02-18 Method and apparatus for reserve measurement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US44805803P 2003-02-18 2003-02-18
US60/448,058 2003-02-18

Publications (3)

Publication Number Publication Date
WO2004074763A2 true WO2004074763A2 (en) 2004-09-02
WO2004074763A9 WO2004074763A9 (en) 2004-11-11
WO2004074763A3 WO2004074763A3 (en) 2005-10-06

Family

ID=32908531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/004762 WO2004074763A2 (en) 2003-02-18 2004-02-18 Method and apparatus for reserve measurement

Country Status (1)

Country Link
WO (1) WO2004074763A2 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061662A (en) * 1997-08-15 2000-05-09 Options Technology Company, Inc. Simulation method and system for the valuation of derivative financial instruments
US6317700B1 (en) * 1999-12-22 2001-11-13 Curtis A. Bagne Computational method and system to perform empirical induction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061662A (en) * 1997-08-15 2000-05-09 Options Technology Company, Inc. Simulation method and system for the valuation of derivative financial instruments
US6317700B1 (en) * 1999-12-22 2001-11-13 Curtis A. Bagne Computational method and system to perform empirical induction

Also Published As

Publication number Publication date
WO2004074763A9 (en) 2004-11-11
WO2004074763A3 (en) 2005-10-06

Similar Documents

Publication Publication Date Title
Wang et al. Multi-attribute decision making on reverse logistics based on DEA-TOPSIS: A study of the Shanghai End-of-life vehicles industry
US7945472B2 (en) Business management tool
Eraker et al. Bayesian mixed frequency VARs
EP1316906A1 (en) Business performance index processing system
Qonita et al. Prediction of rupiah against US dollar by using ARIMA
Cullinane et al. A comparison of models for forecasting the Baltic freight index: Box-Jenkins revisited
Cotter Varying the VaR for unconditional and conditional environments
Shi et al. Performance evaluation of Chinese commercial banks by an improved slacks-based DEA model
US20030120580A1 (en) Processing system for market efficiency value added
US20060293926A1 (en) Method and apparatus for reserve measurement
Ernst et al. Hiring uncertainty: a new labour market indicator
Mikosch et al. Real-time forecasting with a MIDAS VAR
Vahidi Monfared et al. A system dynamics model to forecast the population aging in Iran
WO2004074763A2 (en) Method and apparatus for reserve measurement
Tarsidin et al. Nowcasting Household Consumption and Investment in Indonesia
Shafie et al. Forecasting Of Malaysia Gold Price with Exponential Smoothing
WO2022176073A1 (en) Production planning support system
Timonina-Farkas COVID-19: data-driven dynamic asset allocation in times of pandemic
Vogelezang Early estimating using COSMIC-FFP
Rendon Job creation and investment in imperfect capital and labor markets
Markland et al. A conceptual model for analysing discrete alternative franchising portfolios: Design and validation
Mariano et al. High-mixed-frequency forecasting models for GDP and inflation
Bruzda Quantile forecasting in operational planning and inventory management–an initial empirical verification
Avlijaš Using Earned Value Management for More Sustainable Project Schedule Control
Kurt et al. A New Mathematical Model Based Approach to Multi-Criteria Stock Selection Problem

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
COP Corrected version of pamphlet

Free format text: PAGES 1/96-96/96, DRAWINGS, REPLACED BY NEW PAGES 1/96-96/96; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 EP: PCT application non-entry in European phase
WWE Wipo information: entry into national phase

Ref document number: 2006293926

Country of ref document: US

Ref document number: 10546235

Country of ref document: US

WWP Wipo information: published in national office

Ref document number: 10546235

Country of ref document: US