WO1999021116A1 - Method and system for creating index values supporting the settlement of risk transfer contracts - Google Patents

Publication number: WO1999021116A1
Application number: PCT/US1998/012739
Authority: WO
Grant status: Application
Other languages: French (fr)
Inventors: John A. Major; Bruce B. Thomas
Original assignee: Indexco, Llc

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06Q: DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08: Insurance, e.g. risk analysis or pensions

Abstract

A system and method for creating index values supporting the settlement of property risk transfer contracts. The system accepts as input exposure and claim transaction records obtained from a selected group of property insurance companies, demographic data, and geographic data describing the interrelationship between geographic units. Based upon the specifics of the desired risk transfer contracts, the computer selects the relevant subset of experience data and defines a set of geographic Reporting Units. The system extracts and summarizes the exposure and claim data for a set of index reporting units. Claim data may be further analyzed to define catastrophic events as sets of claims related by proximity in space and time. Index values are computed for each Reporting Unit. Loss-to-Value Ratios for larger units of geography are formed through a system of fixed weights. Index values for alternate units of geography may also be computed from the reporting unit index values through a system of weights based on the demographic data. The index values are then made available for the settlement of the risk transfer contracts.

Description

Method and System For Creating Index Values Supporting The Settlement of Risk Transfer Contracts

Field of the Invention

The present invention relates to a method and system for supporting financial contracts. More particularly, it relates to a method and system for the creation of index values which support the settlement of property risk transfer contracts.

Background of the Invention

An index, in the context of financial contracts, is the numerical representation of the magnitude of some underlying phenomenon. For example, the Dow Jones Industrial Average is an index representing the market value of a certain group of stocks.

Indices of insured loss experience have existed since 1949.

The first such index was the Property Claim Services (PCS) index of catastrophe experience in the U.S., described in "PCS Catastrophe Insurance Options: A User's Guide", Chicago Board of Trade publication EM61-7 (1995). A catastrophe was defined by PCS as an event causing in excess of $5mm ($25mm since 1997) of insured property damage and affecting a "significant number" of policyholders and insurance companies. "Significant number" was never defined precisely, but was described as "thousands of claims." (In insurance parlance, "claims" refer to the notices given by policyholders whereas "losses" refer to the dollar amounts specified in those notices.)

To produce the index value, PCS first identifies potential catastrophes from news accounts. Judgment is applied to the separation of events, e.g., weather fronts, fires, or flooding, which may occur in groups but are indexed separately. PCS assigns each event a serial number and issues a bulletin specifying the event in terms of the dates, states, and perils (e.g., hurricane, earthquake) involved. It then surveys insurance companies, agents, and adjusters within three to five days of the event, typically by telephone, to solicit their estimates of their own losses. Total estimated losses include personal and commercial property lines of insurance covering fixed property, personal property, living expense, business interruption, vehicles, boats, and related property items. PCS attempts to survey at least 70% of the insurance industry, as measured by premium volume, which typically means at least 100 companies. From the reported paid amounts and projected loss estimates, PCS extrapolates to a total, ultimate loss figure for the industry, which becomes the PCS index value.

If the event is large enough (over $250mm) or has unique characteristics (e.g., the World Trade Center bombing), PCS resurveys and reestimates every 60 days until a final estimate is published. In more severe events, PCS may send personnel to the scene to conduct visual surveys of damage.

A similar index, known as SIGMA, has been published by Swiss Re/North American Reinsurance Corporation since before 1986, as described in "Natural catastrophes and major losses in 1992: Insured damage reaches new record level," Economic studies number 2/93.

SIGMA also reports estimates of total insured losses, but with a worldwide scope. It covers natural and human-made (industrial/technical) disasters where there are at least 20 victims, or at least 50 injured people, or insured losses exceeding $10mm to $30mm, depending on insured category, e.g., aviation vs. waterborne traffic.

SIGMA data sources include original documents, press reports, technical journals, and reports by insurance and reinsurance companies. SIGMA makes no claim to be complete.

Both PCS and SIGMA were devised originally as information vehicles, hence the nature of their methodologies was not a source of concern. In 1992, the Chicago Board of Trade (CBOT) began listing property/casualty (P/C) catastrophe futures and options based on an index of insured catastrophe experience. The CBOT commissioned ISO DATA, a division of the Insurance Services Office (ISO), to create this index. ISO has served as a statistical agent for state insurance regulators since 1971 and as a provider of actuarial studies for the P/C insurance industry. It continually gathers loss payment and case reserve data quarterly from over one hundred property/casualty insurers. Case reserves are estimated sums of money designated by insurance companies in anticipation of possible further payments on existing insurance claims.

To create the index (as described in "Catastrophe Insurance Futures & Options: Background Reports," Chicago Board of Trade, January 1994), ISO DATA extracted ISO's data on a sample of twenty to twenty-five insurers, covering at least ten distinct lines of business, e.g., personal automobile property damage, homeowners, etc. Losses were categorized by cause of loss, e.g., wind storm, fire, etc., and the inclusion of losses in the index depended on the combination of line of business and cause of loss.

Premiums by state by line of business for both the industry as a whole and the sample companies were estimated from state Department of Insurance statutory filings. Premiums were adjusted by a multiplicative factor between zero and one to reflect ISO's judgment of the portion of premium attributable to property subject to catastrophe risk. The schedule of premium totals was published in a demographic report prior to the trading of CBOT contracts and fixed during the life of those contracts.

The ISO DATA index value was a loss ratio, i.e., a ratio of quarterly losses to premium. First, the sampled losses were inflated upward by the ratio of industry premiums to sampled premiums. Second, the inflated losses were summed across lines of business and certain state groupings to provide regional totals. Third, the loss totals were divided by the corresponding premium totals to obtain loss-to-premium ratios by region. Separate catastrophes were not identified; rather, the index reported the total loss-ratio experience of the sample in each quarter.
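The three-step ISO DATA calculation just described can be sketched as follows. All figures, region names, and field layouts here are hypothetical illustrations, not values from any published report.

```python
# Quarterly sampled losses by (region, line of business), and premiums
# by region for the sample and the whole industry (illustrative numbers).
sampled_losses = {("Northeast", "homeowners"): 2.0e6,
                  ("Northeast", "auto_pd"): 0.5e6,
                  ("Southeast", "homeowners"): 8.0e6}
sampled_premium = {"Northeast": 40.0e6, "Southeast": 50.0e6}
industry_premium = {"Northeast": 200.0e6, "Southeast": 250.0e6}

def iso_data_loss_ratios(losses, sample_prem, ind_prem):
    regional = {}
    for (region, _line), loss in losses.items():
        # Step 1: inflate sampled losses by the industry-to-sample
        # premium ratio; Step 2: sum across lines of business by region.
        inflated = loss * ind_prem[region] / sample_prem[region]
        regional[region] = regional.get(region, 0.0) + inflated
    # Step 3: divide regional loss totals by industry premium totals.
    return {r: total / ind_prem[r] for r, total in regional.items()}

ratios = iso_data_loss_ratios(sampled_losses, sampled_premium,
                              industry_premium)
```

Note that the result is a ratio of estimated industry losses to premium per region and quarter, with no identification of separate catastrophes, matching the prior-art behavior described above.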

In mid-1995, the CBOT rolled out a new set of contracts based on the PCS index, which are still in use today. The PCS-based contracts on the CBOT exchange refer to total losses over time in a region, but the PCS index is also available at the state level, event by event, for off-exchange (over-the-counter, or "OTC") transactions.

SIGMA, reporting at the national level, is also being used in some off-exchange contracts.

The aim of risk transfer contracts is to transfer risk from those who have an excessive exposure to it and desire to hedge it (e.g., property insurers) to those who wish to take on more of the risk in anticipation of the possibility of profit (e.g., investors). There are sound reasons why many investors should desire to invest in catastrophe risk; it has been demonstrated that catastrophic risk, as an asset class, bears little or no correlation to the performance of other investments and has generated significant positive rates of return over time. Accordingly, there is a need for a method and system for generating an index which allows investors to invest in catastrophe risk and also meets the needs of insurers.

Previous indices exhibit a number of inadequacies in this respect. First, previous indices do not correlate well enough with individual insurer loss experience to use them to hedge with less than 25% error in a majority of cases. When the correlation between an index-based instrument and the risk being hedged is less than perfect, this introduces what is known as basis risk: the possibility that the difference between the contract payout and the insurer's own losses will take on unexpected values. The primary reason for imperfect correlation is that the typical insurer's business is not evenly distributed within a state or by type of property (e.g., commercial versus residential), but previous indices do not disaggregate below the state level nor by type of property. Even in a large event, such as Hurricane Andrew, there can be as much as a 1/3 probability that the value of a state-level hedge contract is off by more than 50% from insurer expectations.

ISO DATA used line of business distinctions as components in computing its index value, but it did not report index values by lines of business or otherwise use the line of business calculations as an index value.

Second, the underlying methodologies of previous indices introduce numerous areas of concern with respect to transparency and manipulability. Calculations of index values under PCS and SIGMA methodologies are based, to some extent, on subjective judgment. Investors cannot be certain how the index values will be calculated nor the data upon which they will be based. After an index value has been published, the underlying data and methodology are not released to the public in detail. Even ISO DATA provided only partial documentation on its methodology. This lack of transparency prevents insurers from being able to measure the amount of basis risk that might result from a given contract and causes investors to be concerned about potential manipulation of the index values.

Finally, previous index values are based in part on selected insurers' opinions concerning the magnitude of their losses, rather than an objectively verifiable quantity such as the amount of claims they have actually paid. Investors are therefore concerned that these estimates can be manipulated. Moreover, since the indices attempt to estimate the total dollars of loss sustained by the industry, one would expect that the results of the few largest insurers would dominate the index values. This provokes fears about potential manipulability and information parity, since these large insurers would have some advance knowledge bearing on ultimate index values.

Objects and Advantages

To satisfy both insurer and investor needs, a method and system for providing an insured loss index is needed that is capable of combining (1) high correlation to individual insurers' loss experience with (2) sufficient articulation and coordination to support the development of a market and (3) a standardized, transparent, and non-manipulable methodology. Such an index is made possible by the present invention, which provides the following objects and advantages:

First, the method and system of the present invention enables contracts with much better correlation and lower basis risk. The method and system of the present invention can create an index articulated to, for example, 10,000 or more small-area geographic units. This enables hedge contracts to be written which better mirror the geographic distribution of property of a particular insurer, thereby reducing basis risk.

This level of geographic articulation has never been done before; it would not be practical under the prior art indices due to the methodologies previously used.

Second, index values created from the present invention are expressed as ratios of losses to insured property values (Loss-to-Value Ratios), rather than loss totals as in two previous indices. This makes contracts based on the index more reliable, since the exact relationship between an insurance company's volume of business and the total industry's, typically unknown at levels of geography finer than a state, is not needed. For example, an insurer desiring to hedge a $10 million portfolio of insured property in a certain geographic area will specify a contract paying $10 million times the index Loss-to-Value Ratio. In contrast, with an exemplary index based on loss totals, the insurer needs to specify a contract paying some percentage times the index total loss in that geographic area. The insurer therefore would have to express the size of its portfolio as a percentage of some total industry volume in that geographic area. Such total industry volumes at geographic levels below the state level were, however, generally unknown. The use of Loss-to-Value Ratios, therefore, further supports geographic articulation at levels not previously practicable.
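The $10 million example above can be sketched directly; the index value used here is a hypothetical number, not a published figure.

```python
def ltv_contract_payout(notional_insured_value, index_ltv_ratio):
    """Payout of a hedge contract specified as a notional insured value
    times the published Loss-to-Value Ratio for the covered area."""
    return notional_insured_value * index_ltv_ratio

# An insurer hedging a $10 million portfolio; suppose the published
# Loss-to-Value Ratio for the event is 0.004 (0.4% of insured value).
payout = ltv_contract_payout(10_000_000, 0.004)
```

Because the contract is written against a ratio, the insurer never needs to know what fraction of total industry volume its portfolio represents in that geographic area.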

Loss-to-Value is also preferable to the expression of a ratio of losses to premiums. Price fluctuations in the insurance industry can change the level of premiums, and hence the index value, in ways that bear little relation to catastrophe experience. Moreover, while insurance actuaries are familiar with loss-to-premium ratios, the concept is less than intuitive to investors.

Third, the present invention preferably utilizes a system of unweighted company averages. The unweighted averaging feature helps the hedger insofar as it reduces non-geographic sources of basis risk. Research shows that different insurers can exhibit persistent differences in their loss experience, ZIP code by ZIP code, within the same catastrophic event.

Fourth, the present invention preferably utilizes a system of predetermined weights to allow the index to be calculated at virtually any level of geography, e.g., city, county. Because of this, intermediate traders (variously known as basis risk managers, match book operators, or transformers) will be able to trade contracts at different levels of geography while measuring their basis risk. Intermediate traders are the link between investors dealing with standardized regional contracts and insurers dealing with customized small-area contracts. The very existence of intermediate traders has been hampered heretofore because the prior art was unable to supply sufficiently articulated and coordinated index values.
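The two averaging schemes described above (unweighted company averages within a reporting unit, predetermined weights across units) can be sketched as follows. The ZIP-code names, ratios, and housing-value weights are hypothetical.

```python
def unit_index_value(company_ltv_ratios):
    # Unweighted company average: equal weight per admitted company,
    # regardless of company size, reducing large-company influence.
    return sum(company_ltv_ratios) / len(company_ltv_ratios)

def rollup_index_value(unit_values, unit_weights):
    # Predetermined weights (e.g., each unit's share of aggregate
    # housing value) combine reporting-unit values into a
    # county/state/region index value.
    total_w = sum(unit_weights.values())
    return sum(unit_values[u] * w for u, w in unit_weights.items()) / total_w

zip_a = unit_index_value([0.010, 0.014, 0.012])
zip_b = unit_index_value([0.002, 0.004])
county = rollup_index_value({"A": zip_a, "B": zip_b},
                            {"A": 3.0e8, "B": 1.0e8})  # housing values
```

Because the weights are fixed in advance, a trader can decompose a county-level contract into its reporting-unit components and measure the resulting basis risk.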

Fifth, the method and system of the present invention enables an index which offers specific advantages in terms of transparency and freedom from manipulability concerns. The method and system of the present invention uses an objective, publishable methodology. Data and documentation backing the calculation of index values will be available to all parties, which has heretofore been lacking due to methodological uncertainties. The methodology is based on the verifiable statistics of historical paid losses, which further ameliorates concerns about manipulability. Moreover, the index is based on Loss-to-Value Ratios, and it weights the contributions of reporting companies equally, rather than in proportion to their insured-amount exposures or premium market shares as, in effect, all previous indices have. This means that large companies do not exercise disproportionate influence on the index values.

Finally, the present system provides a method of analyzing claim data to automatically define catastrophic events as sets of claims related by proximity in space and time.

In summary, the present invention combines several innovative features in a process which goes several steps beyond the prior art. It addresses significant, and currently unmet, investor and insurer needs for securitized risk-transfer contracts. Further objects and advantages will become apparent from a consideration of the ensuing description and drawings.

Summary of Invention

The system for creating index values accepts as input exposure (insurance coverage) and claim transaction records obtained from a selected group of property insurance companies, demographic data describing the housing stock of a geographic area (such as the U.S.), and geographic data describing the interrelationship between geographic units. After the operator specifies the desired characteristics of risk transfer contracts and other operating parameters, the system extracts and summarizes the exposure and claim data for a set of index reporting units. These reporting units are preferably based on individual ZIP codes or sets of related ZIP codes, and are defined by the system so as to satisfy actuarial credibility criteria as reflected in the operating parameters. Claim data may be further analyzed to define catastrophic events as sets of claims related by proximity in space and time. Index values are preferably computed as average loss-to-value ratios within each reporting unit for each event (and also for the entire specified range of time), where the average is taken over a set of companies within each reporting unit, such set also satisfying specified actuarial credibility criteria. Index values for alternate units of geography are computed from the reporting unit index values through a system of weights based on the demographic data.
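The space-time event definition mentioned above can be sketched as a connected-components grouping: (state, day) cells with claim activity are merged into one event when their states are identical or adjacent and their dates are within one day of each other. The adjacency table, cell data, and merge rule below are illustrative assumptions in the spirit of the state-day matching shown in Figures 21 to 33, not the patent's exact procedure.

```python
# Hypothetical state adjacency (the role of table 400 in Figure 4).
STATE_ADJACENCY = {"FL": {"GA", "AL"}, "GA": {"FL", "AL", "SC"},
                   "AL": {"FL", "GA"}, "SC": {"GA"}, "TX": set()}

def neighbors(cell_a, cell_b):
    (state_a, day_a), (state_b, day_b) = cell_a, cell_b
    close_in_space = state_a == state_b or state_b in STATE_ADJACENCY[state_a]
    close_in_time = abs(day_a - day_b) <= 1
    return close_in_space and close_in_time

def define_events(cells):
    """Group (state, day) cells into events: repeatedly grow a
    to-match set from an unmatched cell until no neighbor remains,
    then start the next event."""
    events, unmatched = [], set(cells)
    while unmatched:
        to_match = {unmatched.pop()}
        event = set()
        while to_match:
            cell = to_match.pop()
            event.add(cell)
            newly = {c for c in unmatched if neighbors(cell, c)}
            unmatched -= newly
            to_match |= newly
        events.append(event)
    return events

cells = [("FL", 10), ("GA", 10), ("FL", 11), ("TX", 25)]
events = define_events(cells)
```

Here the Florida/Georgia cells on days 10 and 11 chain together into a single three-cell event, while the Texas cell two weeks later forms its own event.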

Brief Description of the Drawings

These and other aspects of the present invention are more apparent in the following detailed description and claims, particularly when considered in conjunction with the accompanying drawings showing a system and method in accordance with the present invention, in which:

Figure 1 is a physical system diagram showing the physical components of an exemplary system;

Figure 2 shows a list of exposure input records data items;

Figure 3 shows a list of claim input records data items;

Figure 4 shows a state adjacency table;

Figure 5 shows a list of demographic database data items;

Figure 6 is an example of demographic data;

Figure 7 is a top-level flow chart of a system constructed in accordance with the present invention;

Figure 8 shows operating parameters for a system constructed in accordance with the present invention;

Figure 9 is a flow chart for selecting and summarizing exposure data;

Figure 10 is an example exposure database;

Figure 11 is a flow chart for determining index reporting units and admitted companies;

Figure 12 is a flow chart for determining ZIP codes that qualify as index reporting units;

Figure 13a is an exemplary temporary table of qualified ZIP codes;

Figure 13b is an exemplary temporary table of unqualified ZIP codes;

Figure 14 is an exemplary temporary table of credible companies;

Figure 15a shows an index reporting unit database ZIP-index cross-reference table;

Figure 15b shows an index reporting unit database admitted company table;

Figure 16 shows an exemplary temporary exposure table;

Figure 17 shows an exemplary report of index reporting units;

Figure 18 shows an exemplary higher-level geography table;

Figure 19 is a flow chart for selecting and summarizing loss data;

Figure 20 is a loss-by-date table of the loss database;

Figure 21 is a flow chart for determining events;

Figure 22 is a flow chart for initializing temporary tables of unmatched State-day combinations;

Figure 23 shows an exemplary temporary table of damage rates by State by date;

Figure 24 shows the initial status of an exemplary temporary table of unmatched state-day combinations;

Figure 25 shows the initial status of an exemplary temporary table of to-match state-day combinations;

Figure 26 is a flow chart for identifying records that share space-time boundaries;

Figure 27 shows an exemplary temporary table of unmatched state-day combinations upon first marking;

Figure 28 shows an exemplary temporary table of event definitions upon first entry;

Figure 29a shows an exemplary temporary table of to-match state-day combinations upon second iteration;

Figure 29b shows an exemplary temporary table of unmatched state-day combinations upon second iteration;

Figure 30a shows an exemplary temporary table of to-match state-day combinations upon fifth iteration;

Figure 30b shows an exemplary temporary table of unmatched state-day combinations upon fifth iteration;

Figure 31 shows an exemplary temporary table of event definitions;

Figure 32a shows an exemplary temporary table of to-match state-day combinations upon second cycle of the outer loop;

Figure 32b shows an exemplary temporary table of unmatched state-day combinations upon second cycle of the outer loop;

Figure 33 shows an exemplary temporary table of event definitions upon final status;

Figure 34 is a flow chart for selecting top events;

Figure 35a shows an exemplary table of event definitions;

Figure 35b shows an exemplary table of events;

Figure 36 shows an exemplary loss by event table of the loss database;

Figure 37 is a flow chart for computing index values for index reporting units;

Figure 38 shows examples of computing company LTV ratios;

Figure 39 shows an exemplary reporting unit table of the index value data base upon initial status;

Figure 40 is a flow chart for computing index values for higher-level geographies;

Figure 41a shows an exemplary reporting unit table upon final status;

Figure 41b shows an exemplary higher level geography table; and

Figure 42 shows an exemplary index report.

Detailed Description of Invention

Section: System Overview

Figure 1 depicts the physical components of an exemplary computing system that implements the inventive system for creating index values. A central processing unit 136 is connected to a random access memory bank 138 and a peripheral controller 118. The peripheral controller controls data flow in and out of a bank of disk drives 110, a bank of tape drives 112, a terminal consisting of a CRT monitor and keyboard 122, a device for reading from and writing to CD-ROMs 124, a network interface 130, and a laser printer 132. The tape drives are used to read the exposure input data tapes 114 and claim input data tapes 116. The disk drives are used to read from a demographic data file 104 and geographic files 106, and to read from and write to the operating parameters disk file 102 and numerous internal tables used during the processing 108. The tape and disk files will be described in detail in later sections. The terminal is used to send messages to and accept data and commands from the system operator 120. The laser printer and CD-ROM device are used to create output reports in printed form 134 and computer-readable form on CD-ROM 126, respectively. The network interface communicates with an Internet or World Wide Web server 128, which is an independent computer system making reports available to remote users via the Internet.

Section: Inputs

There are five principal sets of data used by the system for creating index values. These can be logically divided into three groups: experience data (exposure data and claim data), geodemographic data (demographic data and geographic data), and operating parameters. The following describes the preferred formats and definitions within the United States. Similar formats and definitions will be apparent for areas outside of the United States.

In insurance parlance, "experience" means the historical record of premiums and claims. The experience data consists of transaction records obtained from reporting companies (a selected group of property insurance companies). Exposure data 114 consists of records detailing the terms of insurance coverage under policies issued by an insurer and the related premium payments made by its policyholders. Each record of the exposure data 114 includes at least one data item corresponding to an exposure value, such as an amount of insured value 226. Other exemplary data items of the exposure input records are listed in Figure 2 and preferably include insurance company 202, accounting date 204, policy inception date 206, state 208, ZIP code 210, policy form 212, wind deductible 214, fire deductible 216, number of families 218, construction type 220, fire protection class 222, year of construction 224 and annual premium 228.

Claim data 116 consists of records detailing the notices of insurance claims made by policyholders and the related claim payments made by the insurer. Each record of the claim data 116 includes at least one data item corresponding to a loss experience value, such as paid loss amount 326. Other example loss experience values are described in the alternate embodiment section below. Some of the data items of the claim input records are listed in Figure 3 and preferably include insurance company 302, accounting date 304, policy inception date 306, state 308, ZIP code 310, policy form 312, construction type 314, number of families 316, year of construction 318, amount of insured value 320, date of loss 322, cause of loss 324 and reserved loss amount 328. In insurance parlance, "losses" are distinguished from "claims," the former referring to the events of property damage or their dollar values and the latter referring to the events of notification or their enumeration.
However, as experience data combines loss and claim information in the same records, "claim data" is considered synonymous with "loss data."

The system for creating index values preferably uses a standard 80-column format for experience data as promulgated by the Insurance Services Office (ISO) in its Personal Lines Statistical Plan (Other than Auto). The reporting companies are selected based on the reliability and timeliness of their data feeds and the comprehensiveness and quality of their data. In the United States, comprehensiveness is preferably defined as having at least 2.0% of the homeowners market as measured by premium volume in at least one of the 50 States or the District of Columbia, in comparison to the total industry premium as reported in statutory filings. The fields in these data sets are described in a later section.

Geodemographic data consists of two parts, demographic data and geographic data. For the United States market, demographic data 104 may be obtained from Claritas, Inc., a commercial vendor of such data. The demographic data describes the housing stock in each ZIP code and State of the United States and for the U.S. as a whole; it is derived by the vendor from U.S. Bureau of the Census reports and other data. Some of the data items of the demographic data are listed in table 500 in Figure 5. Many of the data items listed in table 500 may be used to calculate a "demographic value" associated with the geographic area. An exemplary table comprised of data items ZIP code 510, aggregate housing value 548 and number of housing units 601, calculated from the table of owner-occupied housing units 552 (Figure 5), is shown in table 600 in Figure 6. Geographic data 106 describes the geographical contiguity of the States (shown as table 400 in Figure 4) and the hierarchical relationship between various units of geography (shown as table 1800 in Figure 18). It is created by personnel connected to the operation of the system for creating index values. It is derived, ultimately, from paper and electronic maps of the U.S. The fields in these data sets are described in the next section.

Operating parameters 102 are preferably created by the system operator 120 in the first step of processing, which is detailed in the next section.

Section: Process

Figure 7 is a flow chart of a preferred system for creating index values. Six out of the eleven process blocks in Figure 7 are expanded in subsequent figures to show more detail.

Step 1: Defining Operating Parameters

The first major step (block 702) is the definition of the operating parameters 800 of the system. Figure 8 shows these parameters 800 in a table with sample values. There are three groups of parameters 800: the credibility parameters 851, the risk transfer characteristics 852 and the event parameters 853. The credibility parameters 851 are shown in blocks 802, 804, 806, and 808. These specify the criteria for statistically credible geographic reporting units and associated reporting companies. They are: the minimum number of owner-occupied housing units in a reporting unit 802, the minimum number of insured homes for a company within a reporting unit 804, the minimum amount of insurance (dollar insured value of homes) for a company within a reporting unit 806, and the minimum number of admitted companies (reporting companies meeting the insurance number and amount criteria) within a reporting unit 808. The meaning of these parameters will be made clear in subsequent discussion of determining index reporting units and admitted companies.

The second group of parameters, the characteristics of the risk transfer contracts 852, are shown in blocks 810, 812, 814, 816, 818, 820, 822, 824, and 826. These define the geographic and temporal scope of the experience covered by the index. The first three items are lists. Any one, two, or several items may be entered. The first item 810 is a list of States from among the States of the U.S. The second item 812 is a list of property types from among the following options: single family homes, multi-family homes, and mobile homes. The third item 814 is a list of causes of loss from among the following options: fire, lightning, windstorm, hail, water damage, freezing, and theft.

The next six parameters in the second group 852 specify date ranges for various aspects of the experience. Blocks 816 and 818 are the begin 816 and end 818 dates of the loss event. These limit the range of time in which losses may occur. Blocks 820 and 822 are the begin 820 and end 822 dates of the claim report. These limit the range of time in which claim notices may be filed and still be covered by the index. Blocks 824 and 826 are the begin 824 and end 826 dates of insurance in-force. These define the range of time in which insured values are measured.

The third group of parameters, the event parameters 853, are shown in blocks 828 and 830. Block 828 is the minimum standard deviation for an event and block 830 is the maximum number of events. The meaning of these parameters will be made clear in subsequent discussion of determining events. As noted above, operating parameters 102 are preferably defined by the system operator 120, but may alternatively be defined through other means, such as by being predefined or preprogrammed or by being calculated by the system from other parameters.

Step 2: Selection of Exposure Data
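The three parameter groups described in Step 1 can be sketched as a single configuration record. The field names and all sample values below are illustrative assumptions modeled on the groups in Figure 8, not the patent's exact identifiers.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OperatingParameters:
    # Credibility parameters (in the spirit of blocks 802-808).
    min_housing_units: int = 50
    min_policies_per_company: int = 20
    min_insured_value_per_company: float = 2.0e6
    min_admitted_companies: int = 3
    # Risk transfer characteristics (blocks 810-826): lists of States,
    # property types, and causes of loss, plus three date ranges.
    states: list = field(default_factory=lambda: ["FL", "GA"])
    property_types: list = field(default_factory=lambda: ["single_family"])
    causes_of_loss: list = field(default_factory=lambda: ["windstorm", "hail"])
    loss_event_begin: date = date(1998, 1, 1)
    loss_event_end: date = date(1998, 12, 31)
    claim_report_begin: date = date(1998, 1, 1)
    claim_report_end: date = date(1999, 6, 30)
    in_force_begin: date = date(1998, 1, 1)
    in_force_end: date = date(1998, 12, 31)
    # Event parameters (blocks 828-830).
    min_event_std_deviations: float = 3.0
    max_events: int = 10

params = OperatingParameters()
```

Collecting the parameters in one record mirrors the operating parameters disk file 102: the operator sets them once in Step 1 and every later step reads them from the same place.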

The second major step (block 704) is the selection and summarization of the exposure data. This process is shown in more detail in Figure 9. At block 902, exposure input records (shown as 200 in Figure 2) are read from their tapes into disk storage, with only certain records being retained. In the preferred system, a record is retained if it meets the following criteria: its number of families 218 and construction type 220 conform with the property type list parameter 812, its State 208 is in the State list parameter 810, and its policy inception date 206 is consistent with the policy still being in effect between the begin and end date of insurance in-force parameters 824 and 826, respectively. From the records in disk storage, block 904 counts the policies (each record represents one policy) by company 202 by ZIP code 210. Similarly, block 906 sums the amount of insured value 226 by company 202 by ZIP code 210. At block 908, the counts and sums are stored, along with company and ZIP code identifiers, in an exposure database. An example of a portion of the exposure database is shown as table 1000 in Figure 10. It consists of four fields: ZIP code 1041, company identifier 1042, policy count 1043, and in-force sum 1044. Together, ZIP code 1041 and company identifier 1042 values are unique to each record.

Step 3: Determining Index Reporting Units

With reference back to Figure 7, the third major step (block 706) is the determination of index reporting units and their associated admitted companies. Index reporting units preferably satisfy actuarial credibility criteria, e.g., they should have at least a minimum number of owner-occupied housing units and at least a minimum number of "admitted companies."
To be an admitted company for an index reporting unit, a reporting insurance company (also referred to herein as a "property-casualty insurer") preferably has at least a minimum number of policies and at least a minimum total in-force amount of insurance in the index reporting unit. Index reporting units may be individual ZIP codes or groups of ZIP codes sharing the same first three digits.
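The exposure summarization of blocks 904-908 and the admitted-company test just described can be sketched as follows. The company names and dollar figures are hypothetical:

```python
from collections import defaultdict

# Hypothetical exposure records (fields are illustrative, not the
# patent's actual record layout): (company, ZIP code, insured value).
records = [
    ("ALPHA", "10001", 250_000),
    ("ALPHA", "10001", 300_000),
    ("BRAVO", "10001", 150_000),
]

policy_counts = defaultdict(int)  # block 904: policies by (company, ZIP)
in_force_sums = defaultdict(int)  # block 906: insured value by (company, ZIP)
for company, zip_code, value in records:
    policy_counts[(company, zip_code)] += 1  # each record is one policy
    in_force_sums[(company, zip_code)] += value

def is_admitted(company, zip_code, min_policies=10, min_in_force=100_000):
    """Admitted-company test for one reporting unit (parameters 804/806)."""
    key = (company, zip_code)
    return (policy_counts[key] >= min_policies
            and in_force_sums[key] >= min_in_force)
```

With only two policies on file, company "ALPHA" here fails the minimum-policy criterion even though its in-force sum is sufficient.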

The preferred process of jointly determining index reporting units and their associated admitted companies is shown in Figure 11. Determining index reporting units proceeds in three phases. The first phase (block 1102) determines which ZIP codes individually qualify as index reporting units. The second phase (block 1106) determines which groups of ZIP codes (that did not qualify individually at the first phase) qualify collectively as index reporting units. The third phase, at blocks 1116 and 1120, "repairs" groups of ZIP codes (that did not qualify collectively at the second phase) so they finally may qualify. The first phase, at block 1102, is shown in more detail in Figure 12. At block 1202, each ZIP code record (e.g., 602-610 in Figure 6) in the demographic database (Figure 5) is tested to see whether the number of owner-occupied housing units (sum of counts in block 552) is at least as great as the minimum number of owner-occupied housing units operating parameter 802, which is preferably equal to 1,000 owner-occupied units. For records that qualify, the ZIP code is posted to a temporary table of qualified ZIP codes; otherwise the ZIP code is posted to a temporary table of unqualified ZIP codes. Examples of the temporary tables of qualified 1302, 1304, 1306 and unqualified 1308, 1310 ZIP codes are shown in Figures 13a and 13b. These figures show the results of processing the example data 602-610 in Figure 6. ZIP records 602, 608, and 610 represent ZIP codes whose count of owner-occupied housing units meets the minimum criterion specified in the minimum housing parameter block 802. The ZIP codes therefore appear at blocks 1302, 1304, and 1306. On the other hand, ZIP records 604 and 606 do not meet the criterion 802 and therefore appear at blocks 1308 and 1310.

Blocks 1204 through 1222 constitute a loop which is iterated over each ZIP code "Z" appearing in the temporary table of qualifying ZIP codes (Figure 13a). At block 1206, the exposure database (1000 in Figure 10) is searched for all records with ZIP codes corresponding to the ZIP code "Z" in the current iteration of the loop. For example, in the first iteration over the example data in Figure 13a, the records of Figure 10 corresponding to the ZIP code at block 1302 are examined. These are the records at blocks 1002, 1012, 1022, and 1030. Each exposure record is examined with regard to the credibility criteria for admitted companies. Specifically, the records are tested to see whether the policy count is at least as great as the minimum number of insured homes operating parameter 804, which is preferably at least 10, and whether the in-force-sum is at least as great as the minimum amount of insurance operating parameter 806, which is preferably at least $100,000. If both criteria are met, then the company identifier and ZIP code are posted to a temporary table of credible companies 1400 in Figure 14. If one or the other criterion is not met, no action is taken with respect to the record. Figure 14 shows an example of the temporary table of credible companies 1400, with the results of the first execution of block 1206 appearing at blocks 1402, 1404, 1406, and 1408. At block 1208, the number of such records associated with the current iteration ZIP code "Z" is counted. At block 1210, this number is tested to see whether it is at least as great as the minimum number of admitted companies operating parameter 808, which is preferably equal to three. If the number is not at least that large, block 1220 appends that ZIP code "Z" to the temporary table of unqualified ZIP codes (Figure 13b).
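The phase-one qualification test for a single ZIP code can be sketched as below. The exposure figures are hypothetical, mirroring the example's ZIP with four credible companies and one (ECHO) that fails the policy minimum:

```python
def qualify_zip(exposure, min_policies=10, min_in_force=100_000,
                min_companies=3):
    """Phase-one test (Figure 12): a ZIP code qualifies as an index
    reporting unit if at least `min_companies` companies write credible
    business there. `exposure` maps company -> (policy count, in-force sum).
    Returns the qualification flag and the list of credible companies."""
    credible = [c for c, (n, v) in exposure.items()
                if n >= min_policies and v >= min_in_force]
    return len(credible) >= min_companies, sorted(credible)

qualified, companies = qualify_zip({
    "ALPHA":   (120, 14_000_000),
    "BRAVO":   (45, 5_200_000),
    "CHARLIE": (30, 3_100_000),
    "DELTA":   (12, 900_000),
    "ECHO":    (4, 250_000),   # fails the 10-policy minimum
})
```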

If the number is large enough, first, at block 1212, the ZIP code "Z" is stored in a ZIP-index cross-reference table (1500 in Figure 15). The ZIP code is stored in the ZIP code field 1551, and the ZIP code is also stored in the Index Reporting Unit field 1552. This signifies that the ZIP code, by itself, constitutes an Index Reporting Unit. An example of this table is shown in Figure 15a. In this example, given that ZIP code "10001" (block 1302) has four credible companies (blocks 1402, 1404, 1406, and 1408), and the minimum requirement 808 is three companies, block 1212 posts a corresponding record (block 1502) for ZIP code "10001". After block 1212, blocks 1214, 1216, and 1218 proceed. They constitute a loop which is iterated over each record representing a credible company in the temporary table of credible companies (1400 in Figure 14) that is associated with the current iteration ZIP code "Z." In the example, the loop first proceeds over blocks 1402, 1404, 1406, and 1408. The data from these records is stored in an admitted company table (Figure 15b, blocks 1512, 1514, 1516, and 1518). The loop from block 1204 through block 1222 continues with the remaining records in the temporary table of qualified ZIP codes (Figure 13a). In the example, this results in blocks 1504 and 1506 being posted to Figure 15a and blocks 1520 through 1530 being posted to Figure 15b. This completes the detailed description of Figure 12 and hence block 1102, determining ZIP codes that individually qualify as index reporting units.

With reference back to Figure 11, the second phase of determining index reporting units requires exposure data to be expressed for groups of unqualified ZIP codes. This is performed at block 1104. For each record in the temporary table of unqualified ZIP codes (Figure 13b), the associated records in the exposure database (Figure 10) are identified, and the policy-count and in-force-sum fields are summarized by company by first three digits of the ZIP code. In this example, there is only one such three-digit group: all ZIP codes shown in Figure 13b share the first three digits "100". The summary records are posted to a temporary exposure table, shown in Figure 16. For this example data, observe that blocks 1004 and 1006 combine to produce the summary record at block 1602. The coding for a three-digit group is the three digits followed by the characters "XX". Thus, ZIP codes "10002" and "10003" combine to form the group "100XX".

Block 1106 determines which of these three-digit groups qualify as index reporting units. The process is analogous to the process at block 1102, with the following differences. First, owner-occupied housing counts from the demographic database (500 in Figure 5) are summed by group since there are no records for the groups per se in the database. For example, blocks 604 and 606 result in an owner-occupied housing unit count of 1300 for group "100XX". Second, iteration analogous to the loop at blocks 1204 through 1222 proceeds through a temporary table of three-digit groups (not shown) derived from the temporary table of unqualified ZIP codes (Figure 13b), rather than through the temporary table of qualified ZIP codes (Figure 13a). Third, the determination of credible companies analogous to the process at block 1206 operates on the temporary exposure table (Figure 16) rather than the exposure database (Figure 10). Other requisite changes will be obvious to the practitioner of ordinary skill in the art.
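The three-digit grouping and exposure pooling at block 1104 can be sketched as follows. The ZIP codes, company, and figures are illustrative:

```python
from collections import defaultdict

def three_digit_group(zip_code):
    """Coding for a three-digit group: the first three digits plus "XX"."""
    return zip_code[:3] + "XX"

# Pool exposure for unqualified ZIP codes by group and company:
# (ZIP code, company, policy count, in-force sum), figures hypothetical.
unqualified = [
    ("10002", "ALPHA", 6, 700_000),
    ("10003", "ALPHA", 7, 800_000),
]
pooled = defaultdict(lambda: [0, 0])
for zip_code, company, policies, in_force in unqualified:
    key = (three_digit_group(zip_code), company)
    pooled[key][0] += policies   # summed policy count
    pooled[key][1] += in_force   # summed in-force amount
```

Here two sub-threshold ZIP codes combine into one "100XX" summary record, just as blocks 1004 and 1006 combine into block 1602.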

Three-digit groups that satisfy the credibility criteria are posted by block 1106 to the ZIP-index cross-reference table (Figure 15a). Note that the ZIP codes making up a group are identified individually in the ZIP code field, whereas the identifier for the group is placed in the index reporting unit field. For example, blocks 1508 and 1510 together define the "100XX" index reporting unit. The admitted companies corresponding to a credible three-digit group are posted to the admitted company table (Figure 15b). For example, blocks 1532, 1534, 1536, and 1538 are the result of this process. Note that block 1610 (developed from blocks 1024 and 1026 by block 1104) represents a company (ECHO) that does not meet the minimum number of insured homes operating parameter (block 804). Therefore, there is no record corresponding to block 1610 in the admitted company table (Figure 15b). Three-digit groups that do not satisfy the credibility criteria are stored in a temporary table of unqualified three-digit groups (not shown). This table identifies the ZIP codes making up the unqualified three-digit groups.

A report of the current status of index reporting unit definition is produced at block 1108. This combines information from the ZIP-index cross-reference and admitted company tables (Figure 15) with the exposure database (Figure 10) . An example report is shown in Figure 17.

The third phase of determining index reporting units occurs in blocks 1110 through 1122. These constitute a loop which is iterated over each three-digit group "T" in the temporary table of unqualified three-digit groups (not shown). For each such three-digit group "T", block 1112 searches the ZIP-index cross-reference table (Figure 15a) for qualified ZIP codes that have the same first three digits as "T". For example, block 1502 represents a qualified ZIP code with the first three digits "100". Blocks 1504 and 1506 each represent a qualified ZIP code with the first three digits "101". Blocks 1508 and 1510 do not represent qualified ZIP codes; they represent a qualified three-digit group.

If there exist such records of qualified ZIP codes, block 1114 searches the demographic database (Figure 5) to identify which of those ZIP codes has the fewest owner-occupied housing units. Block 1116 pools the identified ZIP code "Z" with the three-digit group "T". This means that the record corresponding to ZIP code "Z" in the ZIP-index cross-reference table (Figure 15a) is modified so that the index reporting unit field shows the three-digit group "T" instead of "Z". Also, new records are posted to the ZIP-index cross-reference table representing the other ZIP codes in the three-digit group "T". This information comes from the temporary table of unqualified three-digit groups. Notice that since ZIP code "Z" met all the credibility criteria to be qualified on its own, the new three-digit group, comprised of three-digit group "T" together with ZIP code "Z", will therefore also meet all the credibility criteria.

The admitted company table (Figure 15b) is also updated to reflect this change in index reporting unit definitions. The index reporting unit field is changed, in those records that previously listed ZIP code "Z", to three-digit group "T". In a process similar to the processing in block 1106, new records are appended to the admitted company table to account for companies that now qualify in the new group. Duplicate records are removed.

If the test at block 1112 fails, that is, there are no qualified ZIP codes that have the same first three digits as "T", block 1118 notifies the operator via CRT monitor (block 122) to input a qualified ZIP code "X" for three-digit group "T". This ZIP code will not share the first three digits with "T", because no such qualified ZIP code exists. The operator will select ZIP code "X" from the report of index reporting units (Figure 17) based on judgment involving criteria such as number of owner-occupied housing units and geographic proximity to the three-digit group "T". ZIP code "X" is entered through the keyboard device (122 in Figure 1). Block 1120 performs the same processing as block 1116, operating on ZIP code "X" to update the index reporting unit database (Figure 15).
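The selection step of the "repair" phase can be sketched as below. The ZIP codes and housing counts are hypothetical:

```python
def repair_group(group, qualified_zips, housing_units):
    """Third-phase "repair" (blocks 1112-1116): among qualified ZIP codes
    sharing the group's first three digits, choose the one with the
    fewest owner-occupied housing units to pool into the group. Returns
    None when no such ZIP code exists, in which case the operator
    supplies one (block 1118)."""
    candidates = [z for z in qualified_zips if z[:3] == group[:3]]
    if not candidates:
        return None
    return min(candidates, key=lambda z: housing_units[z])

chosen = repair_group("101XX", ["10101", "10102"],
                      {"10101": 2_400, "10102": 1_100})
```

Because the chosen ZIP code already met the credibility criteria on its own, the pooled group necessarily meets them as well.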

At the end of the processing steps shown in Figure 11, all ZIP codes in selected States (block 810) appearing in the demographic database will appear in the ZIP-index cross-reference table (Figure 15a) associated with some index reporting unit. All index reporting units will appear in the admitted company table (Figure 15b) associated with at least the minimum number of admitted companies. All such index reporting units will meet the actuarial credibility criteria set out in the operating parameters (blocks 802, 804, 806, and 808). This completes the description of block 706.

Step 4: Aggregation of Geographic Units

Returning to the top level flow chart of Figure 7, the fourth major step (block 708) is the definition of higher-level geographic units. First, a final report of index reporting units, in a suitable format such as that shown in Figure 17, is produced. Second, a table defining units of geography larger than individual index reporting units is entered into computer storage by the operator. An example of such a table is shown in Figure 18. This table preferably defines which index reporting units lie within each State. For example, blocks 1802, 1804, 1806, and 1808 do this. It may also define groups such as ZIP Sectional Centers (blocks 1810, 1812, 1814, and 1816), counties (blocks 1818 and 1820), or other types of geographical units.

Step 5: Summarize Loss Data

The fifth major step (block 710) is to select and summarize loss data by company, date, and reporting unit. A preferable process for performing this is shown in Figure 19. At block 1902, claim input records (Figure 3) are read from their tapes into disk storage, with only certain records being retained. A record is retained if it meets the following criteria: its company identifier 302 and ZIP code 310 correspond to an admitted company in an index reporting unit (Figure 15), its number of families 316 and construction type 314 conform with the property type list operating parameter 812, its policy inception date 306 is consistent with the policy still being in effect between the begin date 824 and end date 826 of insurance in-force parameters, its cause of loss 324 is on the cause of loss list 814, its date of loss 322 is between the begin date 816 and end date 818 of loss event, and its accounting date 304 is between the begin date 820 and end date 822 of claim report. From the records retained in disk storage, block 1904 counts the claims (each record represents a claim) by insurance company 302 by reporting unit 310 (which is translated via the ZIP-index cross-reference table 1500 as shown in Figure 15a) by date of loss 322. Similarly, block 1906 sums the paid loss amount 326 by company by reporting unit by date. At block 1908, the counts and sums are stored, along with company and reporting unit identifiers and date, in a loss-by-date table of a loss database. An example of a portion of the loss-by-date table is shown in Figure 20.
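Part of the claim-retention test at block 1902 can be sketched as below. Only the cause-of-loss and date-range criteria are shown, and the field and parameter names are illustrative, not the patent's record layout:

```python
from datetime import date

def retain_claim(rec, params):
    """Sketch of the claim filter at block 1902: retain a record when its
    cause of loss is on the list (parameter 814), its date of loss falls
    in the loss-event window (816/818), and its accounting date falls in
    the claim-report window (820/822)."""
    return (rec["cause_of_loss"] in params["causes_of_loss"]
            and params["loss_begin"] <= rec["date_of_loss"] <= params["loss_end"]
            and params["report_begin"] <= rec["accounting_date"] <= params["report_end"])

params = {
    "causes_of_loss": {"windstorm", "hail"},
    "loss_begin": date(1996, 1, 1), "loss_end": date(1996, 12, 31),
    "report_begin": date(1996, 1, 1), "report_end": date(1997, 3, 31),
}
claim = {"cause_of_loss": "windstorm",
         "date_of_loss": date(1996, 1, 15),
         "accounting_date": date(1996, 2, 1)}
```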

Together, the Reporting Unit 2051, Company ID 2053, and Date 2052 values are unique to each record.

Step 6: Determination of "Events"

Returning to the top level process shown in Figure 7, the sixth major step (block 712) is the determination of events (catastrophes). Index values are preferably computed both for the totality of losses during the risk period (the period defined by the begin date 816 and end date 818 of loss event operating parameters), as well as for losses associated with particular events. Events are defined by identifying groups of losses related in time and geographic area. An event is represented as a set of records, each of which identifies a date and a geographic area, preferably at the level of a State. The preferred process of defining events is shown in Figure 21. The general flow of control consists of initialization steps (blocks 2102 and 2104), an outer loop which creates each event (blocks 2106 through 2118), an inner loop which processes State-date combinations (blocks 2108 through 2114), and a finalization step (block 2120).

The first initialization step (block 2102) sets a memory location value "R", representing the catastrophe ID number of the current event, to one. The second initialization step (block 2104) is shown in more detail in Figure 22. At block 2202, the higher-level geography table 1800 (Figure 18), index reporting unit database (Figures 15a and b), exposure database 1000 (Figure 10), and loss-by-date table (Figure 20) are summarized to create a temporary table of damage rates by State by date. An example of this table is shown in Figure 23. In-force-sums are obtained from the exposure database and do not vary by date. Only exposure database records corresponding to admitted companies are used. For example, blocks 1024 and 1026 are not included in the total in-force-sum shown in block 2312. Loss-sums are obtained from the loss-by-date table and do vary by date. All loss-by-date records are used because all records correspond to admitted companies. The damage rate is computed as the ratio of loss-sum to in-force-sum. At block 2204, the natural logarithm (base e) of the damage rate is computed. At block 2206, statistical summaries as a function of historical loss experience and exposure experience are calculated. In the preferred method, the mean and standard deviation of the logarithm of damage rate are computed for each group of records associated with a State. These means and standard deviations are stored in a temporary table (not shown).

Blocks 2208 through 2214 constitute a loop which is iterated over each record in the temporary table of damage rates (Figure 23). At block 2210, the log damage rate for the current record is compared to a threshold computed from the mean and standard deviation for the State identified in the record. The threshold is computed as the mean plus the product of the standard deviation times the minimum standard deviation operating parameter 828. If the log damage rate is less than or equal to the threshold, the record is bypassed and processing continues with the next record.
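The threshold test for one State can be sketched as below. The damage rates are hypothetical, and the patent does not specify whether a sample or population standard deviation is intended; the sample form (`statistics.stdev`) is assumed here:

```python
import math
from statistics import mean, stdev

def flag_catastrophe_days(damage_rates, k):
    """Blocks 2206-2214 for one State: the threshold is the mean of the
    log damage rates plus k standard deviations, where k is the minimum
    standard deviation operating parameter 828. Returns the rates whose
    logs exceed the threshold."""
    logs = [math.log(r) for r in damage_rates]
    threshold = mean(logs) + k * stdev(logs)
    return [r for r, lg in zip(damage_rates, logs) if lg > threshold]

# Five ordinary days and one catastrophe day (illustrative figures):
rates = [0.0005, 0.0008, 0.0006, 0.0007, 0.0004, 0.02]
flagged = flag_catastrophe_days(rates, k=1.5)
```

Working in logarithms keeps the test meaningful for damage rates that span several orders of magnitude between ordinary and catastrophe days.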

If the log is greater than the threshold, block 2212 adds a new record, consisting of the State and date of the current record, to a temporary table of unmatched State-day combinations; processing then continues with the next record. An example of the initial configuration of such a table is shown in Figure 24. This example assumes that the threshold for each State is -6.0. (In general, thresholds will differ from State to State.) Therefore, for example, records at blocks 2302 through 2314 pass the threshold and result in records at blocks 2402 through 2414. Block 2316, however, does not pass the threshold and therefore there is no associated record in Figure 24. The nomenclature "unmatched" refers to the fact that the next step is to group these records into matched sets. This completes the description of Figure 22 and therefore of block 2104.

Block 2106 begins the outer loop, which creates event definitions. At this block, the first record from the temporary table of unmatched State-day combinations is moved (i.e., its information is copied and the original record deleted) to a temporary table of to-match State-day combinations. An example of the to-match table is shown in Figure 25. The first record, at block 2402, has been moved to block 2502.

At block 2108, the temporary table of unmatched State-day combinations is searched and all records identifying State-day combinations that share a space-time boundary with any record in the temporary table of to-match State-day combinations are marked. This process is shown in more detail in Figure 26. An outer loop (blocks 2602 through 2616) is iterated over each record "U" in the temporary table of unmatched State-day combinations. An inner loop (blocks 2604 through 2614) is iterated over each record "T" in the temporary table of to-match State-day combinations. Within the inner loop, two tests are applied. At block 2606, the dates of the two records are tested to see whether they are the same day or one day apart. If not, processing continues with the next to-match record. If the dates are adjacent, the second test, at block 2608, determines whether the States are the same or share a boundary. The state adjacency table (Figure 4) is used to determine this. If the second test fails, processing continues with the next to-match record. If the second test succeeds, the current unmatched record "U" is marked and processing continues with the next unmatched record in the outer loop.
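The two tests at blocks 2606 and 2608 can be sketched as a single predicate. The adjacency pairs below are a hypothetical fragment of the State adjacency table, stored symmetrically:

```python
from datetime import date

# Hypothetical fragment of the State adjacency table (Figure 4):
ADJACENT = {("CA", "OR"), ("OR", "CA"), ("OR", "NV"), ("NV", "OR")}

def shares_space_time_boundary(u, t):
    """Blocks 2606-2608: two State-day combinations match when their
    dates are the same or one day apart AND their States are identical
    or adjacent."""
    (state_u, day_u), (state_t, day_t) = u, t
    close_in_time = abs((day_u - day_t).days) <= 1
    close_in_space = state_u == state_t or (state_u, state_t) in ADJACENT
    return close_in_time and close_in_space

# OR on 1/16/96 matches CA on 1/15/96, as in the example:
match = shares_space_time_boundary(("OR", date(1996, 1, 16)),
                                   ("CA", date(1996, 1, 15)))
```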

An example of the status of the unmatched table upon the first execution of block 2108 is shown in Figure 27. The first remaining record in the table, at block 2704, has been marked because it shares a space-time boundary with the record at block 2502: OR (Oregon) shares a boundary with CA (California) and 1/16/96 is the day after 1/15/96.

Block 2110 then moves all records in the temporary table of to-match State-day combinations to a temporary table of event definitions, with the current value of the running catastrophe ID number "R" being assigned to the catastrophe ID number field. An example of the temporary table of event definitions is shown in Figure 28. The record at block 2802 is the result of moving the record at block 2502.

Block 2112 then moves all marked records in the temporary table of unmatched State-day combinations to the temporary table of to-match State-day combinations. These are records that have previously been identified as sharing space-time boundaries with the just-removed records assigned to catastrophe "R". Figure 29a shows an example of the consequent status of the temporary table of to-match State-day combinations. Block 2114 tests whether there are new records in the to-match table. If so, control returns to block 2108 where these new to-match records become the matching criteria for marking records in the unmatched table.

Figure 29 shows an example of the status of the two temporary tables of State-day combinations after a second execution of block 2108. Block 2904 is derived from block 2704 by the process of block 2112. The record at block 2906 is marked by the process at block 2108 because of a match with the record at block 2904: NV (Nevada) and OR (Oregon) share a boundary, and 1/17/96 is the day after 1/16/96.

Following the example data in Figure 29, the processing between blocks 2108 and 2114 will continue in this fashion, moving, in succession, records at blocks 2906 and 2908 first to the temporary table of to-match State-day combinations and then to the temporary table of event definitions. Upon moving the record at block 2910 to the to-match table and executing the process at block 2108 for a fifth time, the status of the two temporary tables of State-day combinations is as shown in Figure 30. No records in the unmatched table have been marked by block 2108. Process 2110 then moves the record at block 3010 to the temporary table of event definitions. An example of the status of the temporary table of event definitions at that point is shown in Figure 31. Process 2112 then does nothing because there are no marked records in the unmatched table (Figure 30b), leaving the to-match table empty. The test at block 2114 then fails.

When the test at block 2114 fails because there are no more records in the temporary table of to-match State-day combinations (and therefore no records were marked in the temporary table of unmatched State-day combinations in the last execution of block 2108), all records related to the current catastrophe "R" have been found. Block 2116 then increments the running catastrophe ID number "R".

Block 2118 then tests whether there are more records in the temporary table of unmatched State-day combinations. If so, processing continues at block 2106, starting a new cycle of event definition for the new value of the running catastrophe ID number "R". An example of the status of the two temporary tables of State-day combinations after the first return to block 2106 and subsequent execution of block 2108 is shown in Figure 32. Processing in the outer loop from block 2106 through block 2118 continues until the test at block 2118 fails.

When the test at block 2118 fails, this means there are no more records in the temporary table of unmatched State-day combinations (because all records have been assigned to events). An example of the final status of the temporary table of event definitions is shown in Figure 33. Control passes to block 2120, which selects among these events and stores the results in an event database.

The process at block 2120 is shown in more detail in Figure 34. Blocks 3402 through 3408 constitute a loop which is iterated over each record "E" in the temporary table of event definitions (Figure 33). Block 3404 obtains, from the demographic database (Figure 5), the aggregate housing values of the Nation and the State associated with record "E". Block 3406 computes a national-level damage rate for record "E" by multiplying its damage rate (obtained from the temporary table of damage rates, Figure 23) by the ratio of the State housing value divided by the national housing value. The result, along with the other field values of record "E", is stored in a table of event definitions of the event database. An example of the table of event definitions is shown in Figure 35a.

Block 3410 computes the sum of national-level damage rates and the earliest date for all records in the event definition table sharing the same catastrophe ID number and places the results in a table of events of the event database. An example of the table of events is shown in Figure 35b. In this example, the records at blocks 3502 through 3510 combine to produce the record at block 3526. Blocks 3412 and 3414 delete records from the event database, retaining only the largest events up to the count specified by the maximum number of events operating parameter (block 830). This is accomplished by sorting the records of the table of events in descending order of national-level damage rate (block 3412) and then deleting records past the position corresponding to the maximum number of events (block 3414). Corresponding records in the table of event definitions are retained at block 3416 by deleting records with no matching catastrophe ID number in the table of events.

Blocks 3418 and 3420 renumber the events so that the catastrophe ID numbers proceed in date order. This is accomplished by sorting the records in the table of events in date order (block 3418) and then sequentially renumbering the catastrophe ID numbers of records in the table of events, simultaneously with renumbering the corresponding records in the table of event definitions (block 3420).
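The national scaling, largest-event selection, and date-order renumbering can be sketched together. The event rates and dates are hypothetical:

```python
from datetime import date

def national_damage_rate(state_rate, state_housing, national_housing):
    """Block 3406: scale a State-level damage rate to the national level
    by the State's share of aggregate housing value."""
    return state_rate * state_housing / national_housing

def select_and_renumber(events, max_events):
    """Blocks 3412-3420: keep the `max_events` events with the largest
    national-level damage rates, then renumber catastrophe IDs in date
    order. `events` maps catastrophe ID -> (rate, earliest date)."""
    kept = sorted(events.items(), key=lambda kv: kv[1][0],
                  reverse=True)[:max_events]        # block 3412/3414
    kept.sort(key=lambda kv: kv[1][1])              # block 3418: date order
    return {new_id: val for new_id, (_, val) in enumerate(kept, start=1)}

events = {1: (0.004, date(1996, 3, 1)),
          2: (0.010, date(1996, 1, 15)),
          3: (0.001, date(1996, 2, 2))}
largest_two = select_and_renumber(events, max_events=2)
```

Here the smallest event is dropped and the two survivors are renumbered so that ID 1 is the earlier one.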

At this point in processing, the event database is complete. This ends the description of Figure 34, thus the process at block 2120, and thus also the process at block 712.

Steps 7 and 8: Summarize Loss Data Per Reporting Unit By Event (Step 7) and Risk Period (Step 8)

Returning again to the top level process shown in Figure 7, the seventh and eighth major steps (blocks 714 and 716) convert the loss data from date-by-date to event-specific (block 714) and to risk period-wide (block 716) loss totals. Both steps use as input the loss-by-date table of the loss database (Figure 20). Block 714 also uses the event database (Figure 35) and higher-level geography table (Figure 18) to group the loss database records according to events. It is possible that not every record in the loss database will be found to match an event in the event database. Block 714 creates a loss-by-event table of the loss database. An example of the loss-by-event table is shown in Figure 36. The values of reporting unit, event, and company ID occur in unique combinations for each record. Block 716 appends records to this table, using a special numerical code of zero for the catastrophe ID number to signify that the record contains all losses in the loss database. (Much of subsequent processing treats catastrophe ID zero as if it were just another event.) Block 716 then sorts the loss-by-event table by reporting unit, catastrophe ID number, and company ID.

Step 9: Compilation of Index Value Per Reporting Unit

The ninth major step in the top level process (block 718) is the computation of index values for index reporting units. This process is shown in more detail in Figure 37. The process iterates through the loss-by-event table of the loss database (Figure 36). The outer loop, from block 3702 through block 3724, is iterated over each reporting unit "U". The intermediate loop, from block 3704 through block 3722, is iterated over each catastrophe ID "E", including the catastrophe ID zero event. The innermost loop, from block 3710 through 3718, is iterated over each company "A".

After blocks 3702 and 3704 establish values of the current index reporting unit "U" and event "E", block 3706 initializes to zero a memory location which will accumulate the sum of loss-to-value ratios. Block 3708 initializes to zero a memory location which will count companies.

Block 3710 then establishes a value of the current admitted company "A". Block 3712 computes a loss-to-value ratio associated with company "A", event "E", and reporting unit "U". To do this, block 3712 determines the in-force-sum associated with company "A" and reporting unit "U" from the exposure database (Figure 10) and index reporting unit database (Figure 15), and divides this into the loss-sum associated with "A", "E", and "U" in the loss-by-event table (Figure 36). A table of examples of this computation is shown in Figure 38. The computation shown at block 3802, for example, derives from the loss-sum at block 3602 and the in-force-sum at block 1002.

Block 3714 adds this loss-to-value ratio to the sum of loss-to-value ratios accumulator. Block 3716 adds one to the company count. Iteration proceeds until loss-to-value ratios for all companies associated with reporting unit "U" and event "E" have been accumulated.

Block 3720 computes the index value for reporting unit "U" and event "E" as the sum of loss-to-value ratios divided by the company count. The result of the division, and the company count, are posted to a record in a reporting unit table of an index value database. An example of a portion of the index value database is shown in Figure 39. The record at block 3904, for example, is derived from blocks 3802 through 3808. Iteration proceeds until all records in the loss-by-event table have been processed.
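The loops of Figure 37 reduce, for one reporting unit and one event, to averaging per-company loss-to-value ratios. A minimal sketch, with hypothetical company names and figures:

```python
def reporting_unit_index(loss_sums, in_force_sums):
    """Blocks 3712-3720: the index value for one reporting unit and one
    event is the average of per-company loss-to-value ratios. Both
    arguments are keyed by admitted-company identifier."""
    ratios = [loss_sums[c] / in_force_sums[c] for c in loss_sums]
    return sum(ratios) / len(ratios), len(ratios)

# Two admitted companies with illustrative loss and in-force figures:
value, company_count = reporting_unit_index(
    {"ALPHA": 50_000, "BRAVO": 20_000},
    {"ALPHA": 14_000_000, "BRAVO": 5_000_000},
)
```

Averaging the per-company ratios, rather than dividing total losses by total in-force, gives each admitted company equal weight in the index.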

Step 10: Computation of Index Values for Aggregated Units

The tenth major step of the top level flow (block 720) is the computation of index values for higher level geographies. Weights, derived from demographic data, are used to implement the computation. The process is shown in more detail in Figure 40.

Block 4002 augments the reporting unit table of the index value database with weight values derived from the demographic database (Figure 5) and ZIP-index cross-reference table (Figure 15a). Weights are preferably aggregate values of specified owner-occupied housing units (block 554). Example aggregate housing values are shown in Figure 6. Their translation to the index value database is shown in Figure 41a. For example, blocks 604 and 606 are treated as one unit because the coding at blocks 1508 and 1510 puts them in the same index reporting unit; the combined aggregate value is posted to block 4102 as the weight. For each record in the reporting unit table of the index value database, block 4004 multiplies the number of companies by the weight and stores the product as the company count product. It then multiplies the index value by the weight and stores the product as the index value product. These products are shown in Figure 41a.

Blocks 4006 through 4026 constitute a loop which is iterated over each geographic unit "H" in the higher-level geography table (Figure 18). Within this loop, blocks 4008 through 4024 constitute a loop which is iterated over each event (catastrophe id) "E" in the reporting unit table of the index value database (Figure 41a). After blocks 4006 and 4008 establish values of the current geographic unit "H" and event "E", block 4010 initializes to zero a memory location which will accumulate the sum of weights. Block 4012 initializes to zero two memory locations which will accumulate the sums of company count product and index value product, respectively.

Blocks 4014 through 4020 constitute a loop which is iterated over each record "U" in the reporting unit table of the index value database associated with both event "E" and an index reporting unit within geographic unit "H", the latter as determined by reference to the higher-level geography table (Figure 18) . Blocks 4016 and 4018 add the weights and products, respectively, to their respective accumulator memory locations.
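Under the same caveat that the table and field names are illustrative, the weighted roll-up of blocks 4010 through 4022 can be sketched as:

```python
def aggregate_index(records, weights, members):
    """Roll reporting-unit index values up to one higher-level
    geographic unit "H" as weighted averages (blocks 4010-4022).

    records: dict (unit, event) -> (index value, company count)
    weights: dict unit -> weight (e.g. aggregate housing value)
    members: reporting units belonging to "H"
    Returns dict event -> (index value, company count, weight sum).
    """
    results = {}
    events = {event for (_, event) in records}
    for event in events:
        w_sum = 0.0          # block 4010: weight accumulator
        cc_prod_sum = 0.0    # block 4012: product accumulators
        iv_prod_sum = 0.0
        for unit in members:             # blocks 4014-4020
            if (unit, event) not in records:
                continue
            iv, cc = records[(unit, event)]
            w = weights[unit]
            w_sum += w
            cc_prod_sum += cc * w        # company count product
            iv_prod_sum += iv * w        # index value product
        # Block 4022: ratios of product accumulators to weight sum.
        if w_sum:
            results[event] = (iv_prod_sum / w_sum,
                              cc_prod_sum / w_sum, w_sum)
    return results
```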

Block 4022 computes the ratios of the product accumulators divided by the weight accumulator and stores the results, along with the accumulated weight and event and geography identifiers, in a higher-level geography table of the index value database. An example of this table is shown in Figure 41b. In this example, block 4110 results from blocks 4102 and 4104 being processed in the inner loop (blocks 4014 through 4020) and the results processed at block 4022.

Step 11: Final Report

The eleventh major step (block 722) is to produce the final index report. Records from the reporting unit table and higher level geography table of the index value database (Figure 41) are combined and formatted for three output media. First, the records are sent to a storage device, such as a CD-ROM device (block 124), in a format compatible with personal-computer database management software. Second, the records are sent to a laser printer (block 132) in report format, where a printed report is produced. Third, the records are sent to a network interface (block 130) and from there to a World Wide Web server (block 128) where an electronic equivalent of the printed report is made available over the Internet. An example of such a report, for an event similar to Hurricane Andrew, is shown in Figure 42.

While the preferred embodiment for the system for creating index values, as described above in the specification, is intended to support risk transfer contracts, there are other potential uses as well. Index values at the ZIP code level represent a highly detailed pattern of loss-to-value ratios for a particular catastrophe. This information would be of value to meteorologists, actuaries, engineers, and others who seek to understand the nature and dynamics of catastrophic events. It would also be of value in disaster mitigation planning and post-disaster resource allocation.

The preferred embodiment contemplates specifically formatted data feeds from a set of insurance companies, reporting on insurance policies. Other data sources of property damage experience may be substituted for insurance companies. The format of these feeds also could easily be modified without substantial change to the rest of the system. Some insurers may prefer, for example, to report summary information closer in format to the exposure and loss databases than the transaction-level detail as described. The class of reporting entities itself is not necessarily limited to insurance companies. Instead, data might be obtained from government agencies, news media, or property owners themselves. Finally, there is no inherent limitation to insurance policies; what is needed is an expression of value at risk and subsequent losses arising from events. This could be satisfied by alternative forms of exposure and loss reports, for example, property tax assessment records (exposure), income tax deduction records (losses), or reports specifically created for the purpose (either).

The preferred embodiment is discussed in terms of a number of "major" steps. It will be apparent to one of skill in the art based on the disclosure herein that many of these steps are optional or may be combined or further broken apart and recombined. For example, while the preferred embodiment is described in terms of a set of eleven steps (also referred to herein as routines or programs) , many of these may be combined.

The preferred embodiment contemplates specific criteria for selecting reporting companies, requiring each to have at least 2.0% of the homeowners market. This figure could be changed up or down or could be replaced by alternative criteria.

The preferred embodiment focuses on risk transfer for a specific class of events. Given appropriate source data feeds, the following substitutions could easily be made to the operation of the system: Geographic areas outside the United States could be used. In particular, the postal code systems of other countries such as Canada, Japan, the United Kingdom, and other European nations could be substituted for the U.S. ZIP code system. Although it would forgo some of its advantages, the system could be based on larger units of geography such as telephone area code service areas, cities and towns, or other subnational government units such as states, counties, cantons, communes, congressional districts, provinces, etc., or statistical units such as metropolitan statistical areas or Donnelley marketing areas.

Smaller units of geography such as census blocks or enumeration districts, post office carrier routes, or postal ZIP+4 codes could be used.

Synthetic units of geography such as latitude-longitude grid cells, township-range cells, or units combining several non-nesting natural units (e.g., congressional district by three-digit ZIP code) could also be used.

Types of property other than owner-occupied homes could be used. In particular, condominiums and rental properties could be used, as well as various classes of commercial, government and industrial property, including retail establishments, factories, warehouses, offices, utilities, schools, hospitals, and municipal facilities. Furthermore, infrastructure assets such as electrical lines, gas and oil pipelines, roads, bridges, etc. could be covered.

Causes of loss other than atmospheric perils could be used. In particular, earthquake, volcano, tsunami, riot, insurrection, industrial accidents, transportation accidents, and political expropriation could be included.

The preferred embodiment uses a table describing the geographical contiguity of the States. The table is assumed to be based on the sharing of boundaries between States. Instead, the table might be based on the minimum distance between the States. In particular, such a substitution would tend to increase the "connectivity" of States in the northeast.

The preferred embodiment assumes a particular set of demographic data which serves several purposes. Alternative demographic concepts could substitute. For example, in the process of defining index reporting units, instead of number of owner-occupied housing units, ZIP codes could be screened on number of households, population, aggregate value, number of housing units within a certain range of values, number of properties in a wider sense, or any other volume-type measure. The same substitution opportunities exist for the role played by aggregate housing value in restating State-level damage rates to a national level, and in the definition of weight factors for computing index values at higher levels of geography.

The preferred embodiment assumes particular credibility criteria for the definition of index reporting units. The particular values of threshold minima (number of owner-occupied housing units, number of insured homes, amount of insurance, and number of admitted companies) could be made variable by State or other geographic grouping without affecting the essential function of the system. Alternate demographic criteria for screening ZIP codes, mentioned previously, could be substituted. Instead of both number of insured homes and amount of insurance, admitted companies could be screened on just one of those criteria, or on a formula combining the two. Instead of thresholds being fixed numbers, they could be computed dynamically; for example, the inclusion of a company could be decided by a computation which takes the relative sizes of other companies into account. Instead of the number of admitted companies, reporting units could be screened on total number of insured homes, total amount of insurance, or some formula combining several factors.

The preferred embodiment assumes index reporting units are individual ZIP codes or groups of ZIP codes sharing the same first three digits, without regard to physical location. An alternative implementation would use locational information (e.g., the latitude and longitude of the ZIP code centroids) and proximity criteria to determine candidate groupings, regardless (or in partial disregard) of the numerical coding of the ZIP codes. Another alternative would be to use ZIP code boundary-sharing to determine candidate groupings. If units other than ZIP codes were to be used as the fundamental unit, then locational or boundary criteria could still be used. Unit naming or numbering conventions may or may not be workable, depending on the type of geographic unit.
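The preferred grouping by shared first three digits can be illustrated with a short sketch (the function name is ours, not the specification's):

```python
from collections import defaultdict

def group_by_sectional_center(zip_codes):
    """Group ZIP codes by their first three digits, which identify
    the ZIP Sectional Center, without regard to physical location."""
    groups = defaultdict(list)
    for z in zip_codes:
        groups[z[:3]].append(z)
    return dict(groups)
```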

The preferred embodiment assumes a specific procedure for defining events. In particular: claims grouped by State by date are selected to be included in events based on a statistical assessment of the logarithms of their damage rates. Instead of restating damage rates at a national level by use of aggregate housing values, alternative demographic concepts, mentioned previously, could substitute. Instead of the logarithm, another mathematical transformation such as the square root or arctangent, or no transformation at all, could be used. Instead of the threshold being stated in terms of the mean plus a fixed number of standard deviations, formulas involving other statistical moments (e.g., skewness and kurtosis) could be used. Rather than using a computed threshold, a certain number or proportion of the State-days, in descending damage rate order, could be selected.
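The statistical selection just described (log-transformed damage rates screened against the mean plus a fixed number of standard deviations) can be sketched as follows, with illustrative names and a population standard deviation assumed:

```python
import math
import statistics

def select_state_days(damage_rates, k=2.0):
    """Select State-day groups whose log damage rate exceeds the
    mean plus k standard deviations of all log damage rates.

    damage_rates: dict mapping (state, day) -> damage rate (> 0)
    Returns the set of selected (state, day) keys.
    """
    logs = {key: math.log(r) for key, r in damage_rates.items() if r > 0}
    mean = statistics.mean(logs.values())
    sd = statistics.pstdev(logs.values())
    threshold = mean + k * sd
    return {key for key, lv in logs.items() if lv > threshold}
```

Swapping `math.log` for `math.sqrt`, or for the identity, yields the alternative transformations mentioned above.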

In the preferred embodiment, once State-day groups have been linked together to form national events, a certain maximum number of events are retained. Alternatively, the retention of events could be decided by a formula that takes the relative sizes of events into account. For example, an event might be retained if its damage rate is greater than a certain fraction of the damage rate of the largest or second-largest event, or, if its damage rate is greater than a certain fraction of the damage rate of the next largest retained event.

In the preferred embodiment, the linking of State-day groups into events proceeds by determining groups that share State and date boundaries. Alternatively, the spatial distances or time spans could be made dependent on the magnitude of the damage rates, or on the cause of loss. External meteorological data (e.g., weather reports, barometric pressure maps, etc.) could be used to discern linkages, for example, by identifying the motion of storm fronts.
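The boundary-based linking amounts to collecting the selected State-day groups into connected components. The sketch below assumes, for illustration, that States are connected when the supplied adjacency relation holds and that dates are connected when they are the same or consecutive days:

```python
def link_into_events(groups, adjacent):
    """Link selected (state, day) groups into events.

    groups:   set of (state, day) pairs, with day as an integer
    adjacent: function (state, state) -> bool, True when the two
              States share a boundary
    Returns a list of events, each a set of (state, day) pairs.
    """
    def connected(a, b):
        (sa, da), (sb, db) = a, b
        states_touch = sa == sb or adjacent(sa, sb)
        days_touch = abs(da - db) <= 1   # same or consecutive days
        return states_touch and days_touch

    events = []
    unassigned = set(groups)
    while unassigned:
        # Seed a new event, then grow it until no group can be added.
        event = {unassigned.pop()}
        grew = True
        while grew:
            grew = False
            for g in list(unassigned):
                if any(connected(g, e) for e in event):
                    event.add(g)
                    unassigned.discard(g)
                    grew = True
        events.append(event)
    return events
```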

Rather than using States as the fundamental geographic unit for events, sub-State geographical units such as counties or ZIP Sectional Centers could be used.

The preferred embodiment assumes a specific procedure for computing index values for index reporting units. In particular: index values are ratios whose numerators are paid losses. At the time claim payments are made, insurers routinely designate case reserves, which are sums of money set aside in anticipation of future payments on a claim. An alternative embodiment could use, instead of paid losses, the sum of paid losses and case reserves. Furthermore, insurers also designate incurred but not reported (IBNR) reserves, which are sums of money set aside in anticipation of claims stemming from an event but not yet filed. An alternative embodiment could allocate IBNR reserves to index reporting units as well. Numerators could be or include other expressions of property damage, or economic or other loss arising from events, such as total damage (insured plus uninsured), lost income, time to repair, deaths, injuries, medical costs, property devaluation, etc.

In the preferred embodiment, index values are expressed as ratios whose denominators are amounts of insurance. While this has certain advantages outlined above, the denominator might instead be insurance premiums, total property values, tax-assessed value of improvements, pure premium (long-run expected losses), premiums less expenses, square footage, square footage multiplied by adjustment factors varying with type or use of the property, income, income adjusted for expenses or other accounting entities, wages, number of employees, mathematical functions or combinations of the preceding, etc.

In the preferred embodiment, index values are ratios. While this has certain advantages outlined above, they could alternatively consist of the losses (or alternative numerator) with the denominator being expressed separately, or being implicit in the methodology.

In the preferred embodiment, index values for a reporting unit are computed by combining index values for admitted companies in the reporting unit through an unweighted average. While this is believed to be superior to averages weighted by amount of insurance, these two alternatives do not exhaust the possibilities for combination method. Other weight factors, such as those mentioned as alternative denominators, could be used. Other statistical summaries of the central tendency of company index values could be used, for example, the median (middle of sorted values), trimmed mean (unweighted or weighted average of a set of values excluding a certain number or proportion of high and low values), or conjugated mean. (A conjugated mean is the unweighted or weighted average of a one-to-one monotonic transformation of values, which average is then inversely transformed. Examples include the geometric mean, harmonic mean, and root mean square.)
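The conjugated mean defined above admits a compact expression, with the geometric and harmonic means as special cases (a sketch; the function names are ours):

```python
import math

def conjugated_mean(values, transform, inverse):
    """Unweighted conjugated mean: apply a one-to-one monotonic
    transformation, average the transformed values, then invert."""
    avg = sum(transform(v) for v in values) / len(values)
    return inverse(avg)

def geometric_mean(values):
    # Average in log space, then exponentiate.
    return conjugated_mean(values, math.log, math.exp)

def harmonic_mean(values):
    # Average the reciprocals, then take the reciprocal.
    return conjugated_mean(values, lambda v: 1.0 / v, lambda v: 1.0 / v)
```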

The preferred embodiment assumes a particular procedure for computing index values for higher level units of geography. Alternative demographic concepts, mentioned previously, could substitute for aggregate housing values as the weights. Index values could be computed by previously mentioned alternatives to weighted averages. If, instead of ratios, numerators and denominators were carried separately (or denominators were left implicit), then higher-level index values could be the sums of the lower-level values.

The preferred embodiment assumes a specific configuration of hardware and software. As would be obvious to a practitioner of average skill in the art, numerous substitutions could be made without affecting the functionality of the system. In particular:

Input from insurers could be by alternative forms of computer media or by direct network or telephone connection.

The central processor and peripheral devices could be any of a number of forms known to those of skill in the art. For example, the roles of magnetic tape, disk, CD-ROM and other forms of permanent or temporary storage could be freely interchanged.

The preferred embodiment uses both iterative loops over records in tables and database queries. Alternatively, these two modes of accessing data records may in large measure substitute for each other in equivalent implementations. Sets of operations of finding data and transferring data from one memory or storage location to another are embodied in particular sequences. Many such operations could be resequenced without affecting the functionality of the system. For example, an operation of copying a set of records from one table to another, followed by a computation on each record, could instead be implemented by interleaving the copying and computation steps, or even perhaps by iterating the computation without performing the copying. A practitioner of ordinary skill in the art is well aware of such substitution possibilities and the tactical issues concerning hardware, software, and human resources that go into choosing which implementation is preferred.

The system for creating index values provides a support for risk transfer contracts addressing the needs of both hedgers and investors. Hedgers can eliminate a substantial source of basis risk by using index-based contracts denominated at the ZIP code or three-digit ZIP code level. Investors can be assured of information symmetry and the avoidance of moral hazard by using index-based contracts. In addition, the system of the present invention for creating index values has additional advantages in that:

(1) Investors can achieve greater liquidity by dealing in standardized contracts at the State, region, or national level, while intermediaries are able to effect the translation of financing from those contracts to customized low-geographic-level contracts.

(2) Hedgers can express their needs directly in terms of loss-to-value impact on their own exposures, without the need to estimate their shares of industry-wide exposures.

(3) Even if a hedger is one of the index reporting companies, its influence on the index is limited.

(4) Non-geographic sources of basis risk are reduced.

Although the specification and illustrations of the invention contain many particulars, these should not be construed as limiting the scope of the invention but as merely providing an illustration of the preferred embodiments of the invention. Thus, the claims should be construed as encompassing all features of patentable novelty that reside in the present invention, including all features that would be treated as equivalents by those skilled in the art.

Claims

What is claimed is:
1. A method, with the aid of a computer system, of creating index values to support the settlement of risk transfer contracts, said computer system operably coupled to a database of property damage experience including claim data, exposure data and location fields, said database of property damage experience including a plurality of records, said method comprising the steps of: a) defining operating parameters controlling the selection of claim data and exposure data from said database of property damage experience; b) applying said operating parameters to said database of property damage experience to create a set of selected claim data and exposure data; c) defining at least one reporting unit, each reporting unit being associated with a geographical location, each reporting unit being associated with a subset of said set of selected claim data and exposure data based on said property damage geographical location fields, said reporting unit satisfying predetermined actuarial credibility requirements; and d) determining an index value for each said reporting unit, said index value based on a predetermined relationship between said claim data and said exposure data for said subset of claim data and exposure data associated with said reporting unit.
2. The method of claim 1, wherein said predetermined relationship of step (d) for calculating said index value comprises a loss-to-value ("LTV") ratio.
3. The method of claim 2, wherein said database of property damage experience comprises a plurality of insurance companies.
4. The method of claim 3, wherein each said exposure datum comprises an exposure value corresponding to an amount of insurance.
5. The method of claim 4, wherein each said claim datum comprises a loss experience value selected from the group consisting of claim paid loss amount, claim paid loss plus case reserve amount, and claim paid loss plus case reserve plus allocated Incurred But Not Reported amount.
6. The method of claim 5, wherein each said loss experience value includes a loss adjustment expense.
7. The method of claim 4, wherein said subset of selected claim data and exposure data includes a plurality of insurance company records, said loss-to-value ratio for each reporting unit being calculated using the steps of: totaling the loss experience values for each insurance company record contained within the subset of selected data associated with said reporting unit; totaling the exposure values for each insurance company record contained within the subset of selected data associated with said reporting unit; and calculating a loss-to-value ratio for each said insurance company having a record contained within said subset of data associated with said reporting unit.
8. The method of claim 7, wherein said reporting unit loss-to-value ratio is computed as an unweighted average of said insurance company loss-to-value ratios.
9. The method of claim 7, wherein said reporting unit loss-to-value ratio is computed as a weighted average of said insurance company loss-to-value ratios.
10. The method of claim 7, wherein said reporting unit loss-to-value ratio is computed as a conjugated mean of said insurance company loss-to-value ratios.
11. The method of claim 7, wherein said reporting unit loss-to-value ratio is computed as a trimmed mean of said insurance company loss-to-value ratios.
12. The method of claim 2, further comprising the step of: f) aggregating the reporting unit loss-to-value ratios for an aggregate group of said reporting units, whereby said aggregated loss-to-value ratios comprise index values to support the settlement of risk transfer contracts based on said aggregate group of reporting units.
13. The method of claim 12, wherein said aggregation of reporting unit loss-to-value ratios is computed as a weighted average, each reporting unit having a weight proportional to the aggregate housing value within said reporting unit.
14. The method of claim 1, wherein said reporting unit is defined on the ZIP code level.
15. The method of claim 1 wherein said defining of reporting units of step (c) comprises the steps of: i) acquiring demographic and insurance company data on ZIP codes in a given region; ii) applying by means of said computer a set of reporting unit criteria to each said ZIP code to determine whether said ZIP code qualifies as a reporting unit; iii) combining ZIP codes that do not meet said set of reporting unit criteria together to form potential reporting units; and iv) recursively applying steps (ii) and (iii) on said combined ZIP codes.
16. The method of claim 1, wherein said database of property damage experience comprises a plurality of property-casualty insurers, said application of said operating parameters in step (b) comprising the steps of: determining whether each said property-casualty insurer reports insuring at least 2.0% of the homeowners market in at least one of the 50 United States or the District of Columbia; and determining, in each reporting unit, whether each said property-casualty insurer reports policies on at least 10 homes and a minimum of $100,000 of insured housing value, in aggregate, within said reporting unit.
17. The method of claim 16, wherein said geographical location field of said property damage experience records is associated with individual ZIP codes, said defining of reporting units of step (c) comprising the steps of: i) testing whether said selected set of database records associated with said individual ZIP codes include at least a predetermined number of owner-occupied housing units; ii) testing whether the selected set of database records associated with said ZIP codes include at least a predetermined number of insurance companies; and iii) rejecting ZIP codes which do not satisfy tests (i) and (ii).
18. The method of claim 17 further comprising the steps of: iv) grouping rejected ZIP codes together to form candidates for a reporting unit; and v) repeating steps (i) and (ii) on said ZIP code group.
19. The method of claim 1, wherein said operating parameters comprise: credibility parameters defining criteria for statistically credible geographic reporting units and reporting entities; risk transfer parameters defining characteristics of the risk transfer contracts; and event parameters defining criteria for defining events.
20. A method, with the aid of a computer system, of defining catastrophic events of losses related in time and geographic area for supporting the settlement of risk transfer contracts, said computer system operably coupled to a database of property damage experience including claim data having a field indicating geographical location and at least one field indicating a loss experience value, said method comprising the steps of: a) creating loss summaries by summarizing said claim data by units of geographic location and time; b) selecting from among said loss summaries those meeting defined criteria associated with unusually high levels of loss experience; c) collecting said selected loss summaries into groups connected by proximity in geographic location and time; and d) defining catastrophic events as said collected groups of loss summaries.
21. The method of claim 20, wherein said units of geographic locations are States of the United States or the District of Columbia.
22. The method of claim 20, wherein said units of geographic locations are selected from the group consisting of aggregates of States, counties, aggregates of counties, ZIP code sectional centers, aggregates of ZIP code sectional centers, ZIP codes, and aggregates of ZIP codes.
23. The method of claim 20, wherein said units of time are individual days.
24. The method of claim 20, wherein said units of time are selected from the group consisting of aggregates of days, hours, and aggregates of hours.
25. The method of claim 20, wherein said database of property damage experience further includes exposure data comprising at least one field indicating an exposure value, said step (a) of creating loss summaries further including the step of summarizing exposure values by units of geographic location and time, said step (b) of selecting from among said loss and exposure summaries comprising the steps of: i) computing statistical summaries as a function of said loss experience and said exposure experience for each unit of geographic space; and ii) testing each said statistical summary to determine whether said function of loss experience values and exposure values falls above one or more defined statistical threshold parameter values.
26. The method of claim 25, wherein said function of loss experience and exposure experience is a logarithm of the ratio of the sum of said loss experience values to the sum of said exposure values.
27. The method of claim 25, wherein said function of loss experience and exposure experience is selected from the group consisting of a ratio of sum of losses to sum of exposures, and a sum of losses.
28. The method of claim 25, wherein each said statistical summary comprises a mean and a standard deviation, and said determination of relation to statistical threshold comprises a test of whether said functional value falls above said mean plus said threshold parameter value times said standard deviation.
29. The method of claim 25, wherein said statistical summaries comprise one or more percentiles and said determination of relation to statistical threshold comprises a test of whether said functional value falls above certain of said percentiles specified by said threshold parameter value.
30. The method of claim 20, wherein said step (c) of collecting said selected loss summaries into groups comprises the steps of: i) initially forming a collection by placing a previously uncollected one of said selected loss summaries in said collection; ii) identifying all said selected loss summaries which exhibit connectivity in space and connectivity in time with at least one selected loss summary in said collection and adding said identified loss summaries to said collection; iii) recursively applying step (ii) above until no new loss summaries are added to said collection; and iv) recursively applying steps (i) through (iii) above until there are no uncollected selected loss summaries.
31. The method of claim 30, wherein said identifying of connectivity in space comprises identifying the sharing of geographic boundaries.
32. The method of claim 31, wherein said identifying of connectivity in space comprises testing whether a function of geographic distances between two units is below a specified threshold.
33. The method of claim 32, wherein said function of geographic distances is selected from the group consisting of minimum distance, maximum distance, and average distance.
34. The method of claim 30 wherein said identifying of connectivity in time comprises identifying the sharing of temporal boundaries.
35. The method of claim 30, wherein said identifying of connectivity in time comprises testing whether a function of time spans between two units is below a specified threshold.
36. The method of claim 35, wherein said function of time spans is selected from the group consisting of minimum time span, maximum time span, and average time span.
37. A method, with the aid of a computer system, of computing insurance index values for geographic areas, said method comprising the steps of: a) identifying a plurality of reporting units, each reporting unit corresponding to a distinct geographic area; b) selecting at least one source of property damage experience, said property damage experience comprising a plurality of records, each said record comprising a total insurance value and a total losses paid value, each record being associated with one reporting unit geographic area; c) calculating loss experience associated with each said reporting unit; d) calculating value exposure associated with each said reporting unit; and e) computing a loss-to-value ratio for each reporting unit by dividing loss experience by value exposure.
38. The method of claim 37, wherein a loss-to-value ratio is calculated for each selected source of property damage experience associated with each said reporting unit, said reporting unit loss-to-value ratio being the weighted average of each said source loss-to-value ratio.
39. A data processing system including at least one central processor for creating index values supporting the settlement of risk transfer contracts, said data processing system comprising: a database of property damage experience including geographical location, a loss value and an exposure value operably coupled to said central processor, said database including a plurality of records; a first set of defining criteria applied by said central processor, said first set of defining criteria controlling the selection of loss values and exposure values from said database of property damage experience to create a set of selected database records; a database of geographical units, said database including a plurality of records, each record including a field indicating an associated geographical location; a second set of defining criteria executing on said central processor, said second set of defining criteria controlling the selection of candidate reporting units from said database of geographical units, each reporting unit being associated with a set of geographical locations; a first program calculating the loss experience associated with each said reporting unit; a second program calculating the exposure associated with each said reporting unit; and a third program computing a loss-to-value ratio for each reporting unit by dividing the loss experience by the exposure.
40. A computer program for execution on a data processing system including at least one central processing unit operably coupled to a database of property damage experience, said database including a plurality of records, each record including a field indicating an associated geographical location, a loss value and an exposure value, said computer program for creating index values supporting the settlement of insurance risk transfer contracts, said computer program comprising:
a means for creating a set of selected data from said database of property damage experience;
a means for defining a plurality of reporting units, each reporting unit being associated with a geographical location, each reporting unit further being associated with a subset of said set of selected data based on said property damage geographical location fields; and
a means for creating a loss-to-value index ratio for each said reporting unit.
41. The computer program of claim 40 further comprising: a means for defining catastrophic events of losses related in time and geographic area, said means for creating a loss-to-value index including a means for creating said index per catastrophic event.
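Claim 41 (echoing the abstract) defines catastrophic events as sets of claims related by proximity in space and time. One way to realize such a grouping is single-linkage clustering with a union-find structure; the distance and time thresholds below are illustrative assumptions, as the claim does not fix them:

```python
def group_catastrophic_events(claims, max_days=3, max_km=100.0):
    """Group claims into candidate catastrophic events.

    `claims` is a list of (day, x_km, y_km) tuples; two claims fall in
    the same event if a chain of links exists with each link within
    `max_days` and `max_km` (illustrative thresholds).
    """
    n = len(claims)
    parent = list(range(n))  # union-find forest over claim indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            t1, x1, y1 = claims[i]
            t2, x2, y2 = claims[j]
            close_in_time = abs(t1 - t2) <= max_days
            close_in_space = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_km
            if close_in_time and close_in_space:
                parent[find(i)] = find(j)  # link the two claims' events

    events = {}
    for i in range(n):
        events.setdefault(find(i), []).append(i)
    return list(events.values())

# Two nearby same-week claims form one event; a distant later claim does not:
claims = [(0, 0.0, 0.0), (1, 50.0, 0.0), (30, 500.0, 500.0)]
print(group_catastrophic_events(claims))
```

A per-event index, as claim 41 requires, would then apply the claim-37 loss-to-value computation to each group separately.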
PCT/US1998/012739 1997-10-20 1998-06-18 Method and system for creating index values supporting the settlement of risk transfer contracts WO1999021116A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US95544397 1997-10-20 1997-10-20
US08/955,443 1997-10-20

Publications (1)

Publication Number Publication Date
WO1999021116A1 1999-04-29

Family ID=25496833

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/012739 WO1999021116A1 (en) 1997-10-20 1998-06-18 Method and system for creating index values supporting the settlement of risk transfer contracts

Country Status (1)

Country Link
WO (1) WO1999021116A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975840A (en) * 1988-06-17 1990-12-04 Lincoln National Risk Management, Inc. Method and apparatus for evaluating a potentially insurable risk
US5361201A (en) * 1992-10-19 1994-11-01 Hnc, Inc. Real estate appraisal using predictive modeling

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BANHAM R: "Reinsurers seek relief in computer predictions", RISK MANAGEMENT, AUG. 1993, USA, vol. 40, no. 8, ISSN 0035-5593, pages 14 - 16, 18 - 19, XP002082269 *
COVALESKI J M: "Software helps carriers avoid perilous areas (insurance)", BEST'S REVIEW - PROPERTY/CASUALTY INSURANCE EDITION, USA, vol. 94, no. 10, February 1994 (1994-02-01), ISSN 0161-7745, pages 82 - 83, XP002082268 *
FORSYTHE J: "Unum: divide and conquer (insurance company)", INFORMATIONWEEK, USA, no. 233, 21 August 1989 (1989-08-21), ISSN 8750-6874, pages 30 - 32, XP002082267 *
LEMAIRE J ET AL: "Models for earthquake insurance and reinsurance evaluation", PROCEEDINGS SECOND INTERNATIONAL SYMPOSIUM ON UNCERTAINTY MODELING AND ANALYSIS, COLLEGE PARK, MD, USA, 25 April 1993 (1993-04-25) - 28 April 1993 (1993-04-28), ISBN 0-8186-3850-8, April 1993, Los Alamitos, CA, USA, IEEE Computers Society Press, USA, pages 271 - 274, XP002082266 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL1016309C2 (en) * 2000-06-28 2002-02-12 Whw Holding B V System and method for changing a value assigned to a value carrier.
WO2002001430A1 (en) * 2000-06-28 2002-01-03 Whw Holding B.V. System and method for changing a value assigned to a value bearer
US8676612B2 (en) 2003-09-04 2014-03-18 Hartford Fire Insurance Company System for adjusting insurance for a building structure through the incorporation of selected technologies
US9881342B2 (en) 2003-09-04 2018-01-30 Hartford Fire Insurance Company Remote sensor data systems
US9311676B2 (en) 2003-09-04 2016-04-12 Hartford Fire Insurance Company Systems and methods for analyzing sensor data
US10032224B2 (en) 2003-09-04 2018-07-24 Hartford Fire Insurance Company Systems and methods for analyzing sensor data
US7783505B2 (en) 2003-12-30 2010-08-24 Hartford Fire Insurance Company System and method for computerized insurance rating
US8090599B2 (en) 2003-12-30 2012-01-03 Hartford Fire Insurance Company Method and system for computerized insurance underwriting
US9665910B2 (en) 2008-02-20 2017-05-30 Hartford Fire Insurance Company System and method for providing customized safety feedback
US20130103457A1 (en) * 2008-10-18 2013-04-25 Kevin Leon Marshall Method and system for providing a home data index model
US9984415B2 (en) 2009-09-24 2018-05-29 Guidewire Software, Inc. Method and apparatus for pricing insurance policies
US9460471B2 (en) 2010-07-16 2016-10-04 Hartford Fire Insurance Company System and method for an automated validation system
US9824399B2 (en) 2010-07-16 2017-11-21 Hartford Fire Insurance Company Secure data validation system

Similar Documents

Publication Publication Date Title
Tuckman et al. A methodology for measuring the financial vulnerability of charitable nonprofit organizations
Dixon et al. The National Flood Insurance Program’s market penetration rate: estimates and policy implications
Gande et al. Bank underwriting of debt securities: Modern evidence
Blake et al. Longevity bonds: financial engineering, valuation, and hedging
Pielke et al. Flood damage in the United States, 1926-2000: a reanalysis of National Weather Service estimates
Buzby Company size, listed versus unlisted stocks, and the extent of financial disclosure
Allayannis et al. Capital structure and financial risk: Evidence from foreign debt use in East Asia
Hochrainer Assessing the macroeconomic impacts of natural disasters: are there any?
Boyle et al. A survey of house price hedonic studies of the impact of environmental externalities
Hoyt et al. On the demand for corporate property insurance
Chow et al. The economic exposure of US multinational firms
Hughes et al. Bank capitalization and cost: Evidence of scale economies in risk management and signaling
Norton et al. How hospital ownership affects access to care for the uninsured
Tinoco et al. Financial distress and bankruptcy prediction among listed companies using accounting, market and macroeconomic variables
D'arcy et al. Catastrophe futures: A better hedge for insurers
US7315842B1 (en) Computer system and method for pricing financial and insurance risks with historically-known or computer-generated probability distributions
Brainard et al. The financial valuation of the return to capital
Chabotar Financial ratio analysis comes to nonprofits
US20050216384A1 (en) System, method, and computer program for creating and valuing financial instruments linked to real estate indices
Below et al. Documenting drought-related disasters: A global reassessment
Mechler Cost-benefit analysis of natural disaster risk management in developing and emerging countries
Park et al. Are REITs inflation hedges?
Jones Current wealth and tenure choice
Hoag Towards indices of real estate value and return
Canabarro et al. Analyzing insurance-linked securities

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CN JP MX

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: CA