WO2018031697A1 - Integration of a medical data clinical trial system with the Oracle coding system - Google Patents

Integration of a medical data clinical trial system with the Oracle coding system Download PDF

Info

Publication number
WO2018031697A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
tms
tmsint
client
coding
Prior art date
Application number
PCT/US2017/046168
Other languages
English (en)
Inventor
Donna CARUSO
Denise Clark
Original Assignee
Dbms Consulting, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dbms Consulting, Inc. filed Critical Dbms Consulting, Inc.
Priority claimed from US15/673,174 external-priority patent/US20180060540A1/en
Publication of WO2018031697A1

Links

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires

Definitions

  • TMSINT is provided as a Commercial off the shelf (COTS) solution that enables a client (the System user) using the Medidata Rave electronic data capture (EDC) system to integrate the client's source study data requiring medical coding with the Oracle® Thesaurus Management System (TMS) Repository to perform medical coding and return the classifications back to the client's source system.
  • COTS Commercial off the shelf
  • Medidata Rave EDC system - While not limited in use to data collection and subsequent analysis and tabulation of data, the primary use of the system is to collect, enter and compare patient outcomes in clinical trials of experimental drugs, medical procedures and medical devices, particularly clinical trials where there are multiple variables.
  • MEDIDATA and MEDIDATA RAVE are registered trademarks of Medidata Solutions, Inc. of New York for downloadable software for enabling clinical researchers to collect data on individuals subject to clinical trials.
  • the TMS to Medidata Rave Integration application is available to run at a scheduled frequency or manually (ad-hoc) to extract clinical data from the client's Medidata Rave EDC source system, process the data in the Thesaurus Management System (TMS) Repository, and return the coding results to the source system.
  • TMS Oracle Thesaurus Management System
  • VTA Verbatim Term Assignment
  • The TMS to Medidata Rave Integration application's custom APIs create an omission in TMS and a record in the source terms table.
  • the TMS to Medidata Rave Integration application enables the user to proceed with Medical Coding in the TMS Repository to:
  • This database, as described by Oracle, is a collection of data treated as a unit.
  • the purpose of a database is to store and retrieve related information and provide a means for managing voluminous quantities of related information.
  • a server is programmed to reliably manage the large amount of data in a multiuser environment while providing multiple users the capability of concurrently accessing the same data.
  • the database server also prevents
  • the Oracle Database was one of the first databases designed for enterprise grid computing while providing flexible ways to manage information and applications.
  • Enterprise grid computing establishes large pools of industry-standard, modular storage and servers such that each new system can be rapidly provided from the pool of components and capacity can be readily added or reallocated from the resource pools as required.
  • the database has both logical and physical structures and because the physical and logical structures are separate, the physical storage of data can be managed without affecting the access to logical storage structures.
  • Oracle has several patents covering their database including, but not limited to, US5,850,547, US5,991,754, US6,023,895, US7,165,065, US7,171,427, US7,299,223, US7,366,730, US7,415,457, US8,001,112, US8,200,612, US8,209,280, US8,566,341, US8,612,421, US8,812,488, US8,868,544, US9,230,007, US9,317,529, US Application 2004/0254947, US Application 2004/0260690, US Application 2005/0004904, US Application 2005/0033741, US Application 2005/0076065 and US Application
  • US Patent 6,385,604 is directed to a method of and apparatus for joining and aggregating data elements integrated within a relational database management system
  • US Patent 6,741 ,983 is directed to a method of indexed storage and retrieval of multidimensional information.
  • Database Manipulation and Mapping System Having a Simple Global Interface and an Optional Multiple User Need Only Caching System With Disable and Notify Features (the '912 Patent) is directed to the problem of user manipulation of stored data and addresses the synchronization of various copies of data.
  • US Patent 7,272,612 is directed to method of partitioning data records.
  • US Patents 7,454,411 and 7,882,106 are directed to methods of indexed storage and retrieval of multidimensional information.
  • US Patent 8,099,733 is directed to parallel data processing architecture.
  • U.S. Patent Number Re. 40,526, which is a reissue of U.S. Patent Number 5,617,567, is directed to a data processing system and method for retrieving an entity specified in a search path record from a relational database.
  • U.S. Patent Number Re. 40,520 which is a reissue of U.S. Patent Number 5,826,259, is directed to an expandable data processing system and method.
  • TMS full integration requires the installation of TMS objects into the external system, for stable operation of the external system in the global Oracle environment. If fully integrated, TMS maintains external data in both the
  • the fully integrated system will then benefit from the full range of functionality provided by the TMS.
  • the fully integrated external system will then feed source terms to TMS with contextual data specified by the user (such as Document Number) so that if the data is reclassified or declassified TMS can then send information about each affected source term back to the external source data system.
  • In TMS, the user runs Autoclassification, manually classifies remaining terms (omissions), assigns Actions, reclassifies or declassifies as necessary, and specifies the information derived from TMS for each source term. TMS then sends that data to the source data system associated with each source term.
  • the X Area is the source term collection unit in both the tms_source_terms table and the tms_vt_omissions table
  • Oracle TMS does NOT provide users with any instructions or suggestions on how to fully integrate with non-Oracle source data systems.
  • the customers "must handle the data exchange" and "must devise ways to: 1) associate the source term collection unit for a study or case with TMS domains and dictionaries, 2) define objects to receive values derived from TMS, 3) integrate TMS with the external system's discrepancy management function, and 4) exchange data between the two systems".
  • Extensive customization of the external system is also required, such as installing Oracle TMS objects into other proprietary external data system environments, which may not be possible, or may in fact be prohibited, when those systems are hosted by a third party.
  • the DBMS TMSINT system described herein provides a technical solution that fully integrates a non-Oracle source data system, referred to as Medidata Rave, with TMS.
  • the DBMS TMSINT system handles the data exchange and associates the source term collection unit with TMS domains and dictionaries.
  • TMSINT defines objects to receive values derived from TMS and integrates TMS with the external system's query management functionality.
  • TMSINT exchanges data between the two systems in a scheduled or on-demand (immediate) manner and does NOT require the external system to install Oracle TMS objects into other proprietary external data system environments.
  • Figure 1 is a schematic diagram showing the data flow for the TMS to Medidata Rave Integration application.
  • Figure 2 is a schematic representation of the extraction of Client Source Data via Web-Services.
  • Figure 3 is an example of a Client TMS Integration Definition Worksheet.
  • Figure 4 is an example of an auto match display.
  • Figure 5 is an example of an auto match display showing locating equivalent terms.
  • Figure 6 is an example of an auto match display showing locating non- equivalent terms.
  • Figures 7 and 8 are example displays showing re-submit terms to be re-coded.
  • Figure 9 is an example of a screen display showing export coding data and queries from TMS.
  • Figures 10 and 11 are examples of a screen display showing associated source data and actions/queries created in TMS.
  • Figure 12 is an example of a screen display showing multiple synonym lists created to enable different synonym lists to be linked to different studies with the TMS Domain/Dictionary functionality.
  • Figure 13 is an example of multiple screen displays showing different TMS enabled approval workflows by domain illustrating that one study/program can automatically approve a VTA when it is classified while the other study can enforce the manual approval process after classification.
  • Figure 14 is an example of a screen display showing TMS accessed by multiple users at the same time allowing the coder to filter using the TMS Omission Filter.
  • Figure 15 is an example of a screen display showing, while coding, that the user can sort coded data within the TMS Classify VT Omissions.
  • Figure 16 is an example of multiple displays on a single screen showing TMS browsing capabilities that make ingredients visible when coding in WHO DDE and allow all multi-axial relationships to be seen.
  • Figure 17 illustrates that the TMSINT application then propagates the Ingredients and Dictionary Version to Medidata Rave.
  • FIG. 18 illustrates that the TMSINT application allows the user to "Drill Down" through the data to obtain auxiliary information.
  • Figure 19 presents three example screens showing the ability of TMS to attach and view status codes or notes on coded and uncoded terms.
  • Figure 20 is an example of a screen display showing approval of VTAs.
  • Figure 21 illustrates that the user has the option to use a two-step coding process with a second coder acknowledging approval of the code.
  • Figure 22 is a screen shot illustrating that TMS is able to perform ATC coding for medications in a single step.
  • Figure 23 is a screen shot illustrating that for Drug Dictionaries without Primary Links or Paths on the ATC Levels, the Indication Omissions window is available.
  • Figure 24 illustrates auto-coding to previously coded verbatim term instances (VTIs).
  • Figure 25 illustrates automatic ATC classification of a drug based on previous ATC classification of the same drug having the same indication.
  • Figure 26 illustrates the TMSINT application Custom auto-coding search object algorithms automatically coding verbatim terms to the Medical and Drug Dictionaries.
  • Figure 27 is a flow diagram illustrating the Oracle TMS to Medidata Rave Integration process.
  • Figure 28 is a screen image illustrating the Custom Autocoder Partial Word Substitution Design.
  • Figure 29 is a screen image illustrating the Custom Autocoder Full Word Substitution Design.
  • Figure 30 is a schematic representation of the Oracle TMS to Medidata Rave (TMSINT) Transaction Integration.
  • Figure 31 is a schematic representation of the OHS 5.1.x Logical Architecture.
  • Figure 32 is a screen image illustrating the user administration overview of the system, comprising creating, configuring, modifying, and inactivating portions of the stored records.
  • Figure 33 is a sample of a Client definition based on a new client worksheet.
  • TMSINT system described herein provides the approaches and methods for:
  • the XML data exchange can be scheduled to run at predetermined frequencies and run on-demand.
  • Integration Configuration - Allows the transactional data exchange interfaces to be configured at the clinical study level via a user interface.
  • Data Extract - Reads and stages data from the clinical EDC applications to the TMSINT system's custom staging tables in the Oracle RDBMS.
  • TMSINT system's custom Staging APIs process the data in the application's staging tables to the application's inbound tables in the Oracle RDBMS.
  • INBOUND Data Processing - The TMSINT system's custom INBOUND APIs process the data in the application's inbound tables with the clinical terminology and coding application. This process includes keeping client data "Synchronized" with the clinical terminology and coding application and interrogating the clinical coding conditions such as declassification, classification, reclassification, data updates, data promotions, data demotions and changes to the terminology hierarchy derivational path.
  • TMSINT system's custom APIs create an omission in coding application and a record in the source terms table.
  • the TMSINT system enables the user to proceed with Medical Coding in the coding application to:
  • OUTBOUND Data Processing and Database Manipulation - The TMSINT system's custom OUTBOUND APIs process the data in the coding application with the system's outbound tables. This process includes keeping client data "Synchronized" with the TMS Repository and interrogating the clinical coding conditions such as declassification, classification, reclassification, data updates, data promotions, data demotions and changes to the terminology hierarchy derivational path.
  • OUTBOUND Data Formatting - The TMSINT system's custom Staging APIs process the data in the application's outbound tables to the import staging tables.
  • Data Import - Writes the import staging data containing the following data to the clinical EDC application.
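  • As noted in the integration overview above, the data exchange can be scheduled to run at predetermined frequencies or on demand. A minimal sketch of scheduling such a run with Oracle's DBMS_SCHEDULER follows; the entry-point procedure name RUN_INTEGRATION is assumed for illustration and is not named in this document.

      BEGIN
        DBMS_SCHEDULER.CREATE_JOB(
          job_name        => 'TMSINT_NIGHTLY_RUN',
          job_type        => 'PLSQL_BLOCK',
          -- RUN_INTEGRATION is a hypothetical wrapper for the extract/process/import steps
          job_action      => 'BEGIN TMSINT_ADM_UTILS.RUN_INTEGRATION; END;',
          repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- the predetermined frequency
          enabled         => TRUE);
      END;
      /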
  • TMSINT TMS to Medidata Rave Integration
  • TMS Setup - Described below are the setup and definition within the TMS application. Before the client can be configured within the TMS to Medidata Rave Integration application, the TMS setup must be completed.
  • TMS Dictionaries - Any TMS dictionaries that are used by the TMS to Medidata Rave Integration application are defined and created in TMS before they can be referenced in the TMS to Medidata Rave Integration application. TMS dictionaries are not shared between clients. All dictionaries used by the TMS to Medidata Rave Integration application are specific to a given client and are determined based on the individual client's requirement.
  • each standard dictionary package includes the TMS dictionary definition, initial dictionary loading, dictionary versioning upgrade and canned impact reporting.
  • the client specifies at the onset what vendor dictionary format (if applicable) and dictionary version are to be initially loaded for TMS to Medidata Rave Integration use and specify a version upgrade schedule. Specific information on the TMS dictionary definition and "derivation path" are separately provided to the client.
  • MSSO meddra Dictionary includes the TMS dictionary definition of a standard "Primary Path" meddra dictionary.
  • the dictionary classification level is the LLT level.
  • the "derivation path" includes the LLT, PT and the HLT, HLGT and SOC of the primary path. As LLTs are made non-current by the vendor, they are logically deleted within the TMS repository. Due to the limited variation of the meddra dictionary implementation, this dictionary package may be used by most if not ail clients.
  • UMC WHO Drug Dictionary (B2 format)-The UMC WHO drug B2 dictionary package includes the TMS dictionary definition of a standard "Primary Link" drug dictionary.
  • the dictionary classification level includes both the preferred drug term (PT) and the trade drug term synonym (SYN) drug levels.
  • the dictionary includes the UMC ATC5 dictionary level.
  • the UMC ATC long text value is provided as an additional attribute (VALUE_1 ) for all ATC levels.
  • the UMC provided ingredient-list is
  • the defined dictionary levels are ATC1-ATC5, PT, SYN, ING, SRC and CCODE.
  • the "derivation path" includes the SYN, PT and the ATC hierarchy of the designated primary ATC. Any of the UMC provided B2 formats may be used with this dictionary definition such as the DD, DDE or DDE-HD formats.
  • UMC WHO Drug Dictionary (C format) - The UMC WHO drug C dictionary package includes an implementation as available from Oracle. Any of the UMC provided C formats may be used with this dictionary definition such as the DD, DDE or DDE-HD formats.
  • UMC WHO Drug Dictionary (B3 format) - The UMC WHO drug B3 format will be implemented as a standard dictionary package in future releases. This version of the drug dictionary is not currently compatible for use with the TMS application and will be implemented for later releases of TMS with expanded column lengths.
  • MSSO meddra & SMQ Filter Dictionary - The MSSO meddra & SMQ dictionary package includes the standard meddra dictionary as specified above as well as the additional SMQ (Standard meddra Query) filter dictionary. This dictionary is a vendor-only dictionary and does not support custom SMQs. When selecting this dictionary package, the meddra and SMQ filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.
  • the SMQ filter dictionary is implemented as a standard SMQ filter dictionary as specified by the vendor.
  • UMC WHO Drug & SDG Filter Dictionary - The UMC WHO Drug & SDG dictionary package includes the standard B2 dictionary as specified above as well as the additional SDG (Standard Drug Groups) filter dictionary. This dictionary is a vendor-only dictionary and does not support custom SDGs. When selecting this dictionary package, the WHO Drug and SDG filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.
  • the SDG filter dictionary is implemented as a standard SDG filter dictionary as specified by the vendor. Functionality will be provided to programmatically allow the creation of ATC/SDG and PT/SDG relationships that are overlapping or the removal of overlapping ATC/SDG and PT/SDG relationships.
  • Custom meddra SMQ Filter Dictionary The Custom meddra & SMQ dictionary package includes the standard meddra dictionary as specified above as well as the additional SMQ (Standard meddra Query) filter dictionary with additional custom SMQ dictionary levels (CMQs).
  • the meddra dictionary is a vendor-only dictionary. However, the SMQ dictionary supports the creation of custom CMQs. When selecting this dictionary package, the meddra and SMQ filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.
  • Custom WHO Drug SDG Filter Dictionary includes the standard B2 WHO Drug dictionary, as specified above, as well as the additional SDG (Standard Drug Groups) filter dictionary with additional custom SDG dictionary levels (CDGs).
  • the WHO Drug dictionary is a vendor-only dictionary. However, the SDG dictionary supports the creation of custom CDGs. When selecting this dictionary package, the WHO Drug and SDG filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.
  • TMS to Medidata Rave Integration requires that the TMS external system the client will use for the TMS to Rave integration is defined within the TMS repository prior to defining and configuring the TMS to Medidata Rave Integration application.
  • TMS allows for up to eight external column values for a given external system. Each external value may be up to 500 characters in length. These external values are "key" values within the client's source system used to identify unique patient records within the client's source system. These values typically include Study, Site, DCM, Patient, etc., and may be customized as needed. Since these values are "key" values, they are expected to remain constant.
  • the TMS to Medidata Rave Integration application has been designed to allow for a Site location to change provided one of the eight external column names is specified as "SITE”. This value is the designated “snowbird" column to accommodate those patients that live in a different location during different calendar months, for example the winter months. If no external column name is named “SITE”, this functionality will not be enabled and any update to a "key” value will cause the system to treat the patient record as if it were a brand new patient record and will disconnect any audit trail capability between previous versions of the patient record.
  • the TMS to Medidata Rave Integration application can store up to two values per individual column allowing the ability to store sixteen external values. If columns are "shared” in this manner, there may be no more than two values stored per external column and the data values will be delimited with the T character within TMS. Any custom omission status requirements will need to be specified as a requirement by the client at the onset of the project. If no requirement is specified, the default omission status "Site from Coder" will be used from the TMS predefined "DISCREPANCY REV STATUS CODE” codelist.
  • Custom TMS Drill-Down views An optional Custom TMS Drill-Down views solution for the medical coder to see auxiliary study data from the Rave source system while coding in TMS (e.g., the county, indication, route, dosage for medications or any other applicable study data the coder requires to make the right choices while coding) is provided.
  • These Custom TMS Drill-Down views are optionally available and may be defined for any client TMS external system.
  • the number of drill-down views that may be created in TMS is contingent on the number of external columns defined within the external system. Additional client requirements will need to be specified to indicate what data will be displayed in each drill-down view as well what external column the drill-down view will be associated with.
  • the corresponding ancillary data is defined within the TMS to Medidata Rave Integration application. See Ancillary Data Definition section below.
  • TMS Custom Search Objects - An optional Custom TMS search object solution within TMS to enable Verbatim terms to code to non-equivalent dictionary terms with the use of word swapping, word substitutions, special character/number/punctuation/stop word removal, and other types of matching algorithms during the classification process in TMS is provided.
  • the Custom TMS search object solution that is provided is designed for use with the meddra AE dictionary and/or WHO drug dictionary.
  • the defined word substitutions and stop word removal are configurable for each client using a TMS codelist.
  • TMS Custom VTA Load Utility - An optional Custom TMS VTA load utility to enable a client to populate the TMS meddra or whodrug dictionaries with their existing synonym or verbatim term assignment (VTA) lists is provided. The client provides a spreadsheet containing desired Synonym or VTA mappings that may be loaded into the client's TMS meddra or whodrug dictionaries.
  • TMS Dictionary Custom VTA Copy Utility An optional Custom TMS dictionary VTA copy utility is provided which will enable a client to copy VTAs from one TMS dictionary to another TMS dictionary within the same TMS database instance for the purpose of keeping the coding "synchronized" between a designated source and target dictionary.
  • the TMS to Medidata Rave Integration application requires the following Oracle client specific accounts to be defined with the Oracle database.
  • Client Data-Transfer Account Each client using the TMS to Medidata Rave Integration application has a designated Oracle data-transfer schema.
  • the schema name is a user name of TMSINT_XFER_<clientcode> where <clientcode> is the 3-6 character code used to identify a given client.
  • the data-transfer schema is not a privileged Oracle account and is responsible for the transfer of data from and to the client source system. Each client specific data-transfer schema may see only data that is applicable to the given client.
  • the data-transfer schema will be created with the following objects set forth in Table 1:
  • Each client using the TMS to Medidata Rave Integration application has a designated Oracle data-processing schema.
  • the schema name is a user name of TMSINT_PROC_<ClientCode> where <ClientCode> is the 3-6 character code used to identify a given client.
  • the data-processing schema is a privileged Oracle account and is responsible for reading data extracted from the client's source system by the data-transfer schema, processing the data in TMS and writing the data back to the data-transfer schema import table.
  • Each client specific data- processing schema may see only data that is applicable to the given client.
  • the data- processing schema will be created with the following objects set forth in Table 2:
  • TMS to Medidata Rave Integration application definition process The following is a description of the components of the TMS to Medidata Rave Integration application definition process that is specific to a given client. It should be noted that a client may not be defined within the TMS to Medidata Rave Integration application until the TMS definition has been completed.
  • the TMS dictionaries, external systems, etc. must exist within the TMS application prior to setting up the client within the TMS to Medidata Rave Integration application.
  • the administrator is the only user authorized to manage client definitions and setup.
  • the Oracle database package TMSINT_ADM_UTILS owned by the administrator contains the application APIs in which the administrator may create, update, delete, enable or disable client related data.
  • Client Definition - The first step for defining a client within the TMS to Medidata Rave Integration application is to create a client definition entry in a metadata table TMSINT_ADM_CLIENTS. Only the administrator is authorized to create a new client. Each client has a unique AliasCode and unique ClientID that will be used by the application. The ClientID and ClientAlias values may not be modified after the creation of the client record. The ClientID is used as a foreign key reference in other tables for linking attributes to a specific client. Oracle database objects and Oracle user accounts are subsequently created using the ClientAlias value. Described below are the various TMSINT_ADM_CLIENTS APIs.
  • the API CREATE_CLIENT is used to create a client definition record in the TMSINT_ADM_CLIENTS table.
  • the administrator selects a unique 3-6 character code that will identify the client.
  • the client code is used internally by the application only and will not be used by or written to the client's source system. Therefore, this value may be selected by the administrator. It is preferred that the client code visually identify the client to which it is assigned. For example, "NOV" may be used for Novartis, "BMS" may be used for Bristol-Myers Squibb and "INV" may be used for inVentiv Health.
  • the client description may be any value up to 200 characters and is recommended to be the full name of the client.
  • a ClientID will automatically be assigned during execution of the API.
  • a ClientID is required for any additional setup or definition for the client.
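  • A minimal sketch of creating a client via the CREATE_CLIENT API in the TMSINT_ADM_UTILS package follows; the parameter names and the OUT-parameter return of the ClientID are assumptions for illustration and are not specified in this document.

      DECLARE
        v_client_id  NUMBER;
      BEGIN
        TMSINT_ADM_UTILS.CREATE_CLIENT(
          pClientCode => 'NOV',                        -- unique 3-6 character client code
          pClientDesc => 'Novartis Pharmaceuticals',   -- client description, up to 200 characters
          pClientID   => v_client_id);                 -- ClientID assigned during execution
        DBMS_OUTPUT.PUT_LINE('New ClientID: ' || v_client_id);
      END;
      /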
  • Deleting a Client - The API DELETE_CLIENT is used to delete a client definition record from the TMSINT_ADM_CLIENTS table. When a client is deleted, all subordinate client metadata will likewise be deleted (datafiles, dictionary mappings, etc.).
  • the Oracle accounts associated with the client are manually deleted from the database by the DBA; however, the registration of the Oracle accounts will be deleted from the application.
  • the API QUERY_CLIENT is used to query any client definition record in the TMSINT_ADM_CLIENTS table.
  • the API is a pipelined function based on the TMSINT_ADM_CLIENTS table and may therefore be executed as any query from the SQL command line.
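  • Because QUERY_CLIENT is a pipelined function, it can be queried like a table from the SQL command line, as sketched below (calling it without arguments and the column name CLIENTALIAS are both assumptions for illustration).

      SELECT *
        FROM TABLE(TMSINT_ADM_UTILS.QUERY_CLIENT)
       WHERE clientalias = 'NOV';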
  • TMSINT_ADM_DATAFILES - After a client definition has been defined and a ClientID assigned, individual datafiles are then created. Datafile definitions for a client are stored in a metadata table entitled TMSINT_ADM_DATAFILES. A client may have one or more datafiles. At least one active datafile is required for integration processing. Normally there is a one to one correspondence between a clinical study and a datafile and each study will have its own record entry in the TMSINT_ADM_DATAFILES table. Metadata in the TMSINT_ADM_DATAFILES table will specify the URL and credentials (if any) required to connect to the client's source system for both the extraction of data as well as the import of data.
  • HTML tags for both the beginning of the HTML file and beginning of new patient record is specified.
  • the end of file and end of record file HTML tags may be derived from the beginning of file and beginning of record values.
  • the ActiveFlag value can be used to enable and disable the given file from integration processing.
  • Updating a Datafile - The API UPDATE_DATAFILE is used to update an existing datafile definition record in the TMSINT_ADM_DATAFILES table. This API is also used to update URL passwords if they are periodically updated by the client.
  • datafiles can be enabled or disabled from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL simply allows the user to enter only the values that they want to update.
  • the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.
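  • The NULL-means-unchanged convention described above can be illustrated with the following sketch; the table and column names follow the descriptions in this document, while the procedure body is assumed and is not the actual UPDATE_DATAFILE implementation.

      CREATE OR REPLACE PROCEDURE update_datafile_sketch(
        pDatafileID IN NUMBER,
        pActiveFlag IN VARCHAR2 DEFAULT NULL) IS
      BEGIN
        UPDATE tmsint_adm_datafiles
           SET active_flag = NVL(pActiveFlag, active_flag)   -- NULL keeps the current value
         WHERE datafile_id = pDatafileID;
      END update_datafile_sketch;
      /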
  • Deleting a Datafile - The API DELETE_DATAFILE is used to delete datafile definition records from the TMSINT_ADM_DATAFILES table. When a datafile is deleted, all subordinate client metadata will likewise be deleted (dictionary mappings, external system mappings, ancillary data, etc.).
  • Querying a Datafile The API QUERY_DATAFILE is used to query any datafile definition record in the TMSINT_ADM_DATAFILES table.
  • the API is a pipelined function based on the TMSINT_ADM_DATAFILES table and may therefore be executed as any query from the SQL command line.
  • each distinct DCM-VT combination within the datafile is then defined in a TMSINT_ADM_DICT_MAPPING metadata table to map the DCM-VT to a client specific TMS dictionary for coding. For example, assuming that a given datafile contained an AE (Adverse Event), MH
  • the TMSINT_ADM_DICT_MAPPING table will ultimately specify how the DCM and VT will be parsed from the client source system HTML extract data, what TMS dictionary to use for coding, and how to construct the HTML coding file and HTML action file data that will be returned to the client source system for update after TMS processing.
  • Each mapping record carries the parent DatafilelD of the client datafile in which it is applicable.
  • a DictMappingID will be assigned and returned.
  • TMSINT_ADM_DICT_MAPPING table - This API may be used to update any of the non-key values at any time. Additionally, dictionary mappings may be enabled or disabled from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL provides a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.
  • Querying a Dictionary Mapping The API QUERY_ DICT _MAPPING is used to query any datafile dictionary mapping records in the TMSINT_ADM_DICT_MAPPING table.
  • the API is a pipelined function based on the TMSINT_ADM_DICT_MAPPING table and may therefore be executed as any query from the SQL command line.
  • each datafile is then mapped to a client specific TMS External System within TMS in the TMSINT_ADM_EXT_MAPPING metadata table. TMS allows for the definition of up to eight custom columns (EXT_VALUE_1 - EXT_VALUE_8) to store source system "key" data values.
  • the TMSINT_ADM_EXT_MAPPING table will contain one record for each defined TMS external column.
  • the TMSINT_ADM_EXT_MAPPING table will ultimately specify how each of the source system "key" values will be parsed from the client source system HTML extract data, how they are mapped to the TMS external system, and finally how to construct the HTML coding file and HTML action file data that will be returned to the client source system for update after TMS processing.
  • Since TMS external column values are considered "key" values used in the client source system to uniquely identify a specific patient record, the data values are expected to remain static.
  • the exception to this is any external column defined with the name "SITE", which is designated as a "snowbird" column for patients that change locations (Site) during part of the year. Functionality has been included to allow updates to the "SITE" column only. Updates to any additional external value would simply treat the record as a new patient record.
  • Creating a New External System Mapping - The API CREATE_EXT_MAPPING is used to create a new external system mapping record in the TMSINT_ADM_EXT_MAPPING table. Each mapping record carries the parent DatafileID of the client datafile in which it is applicable.
  • ExtMappingID - When the external system mapping has been created, an ExtMappingID will be assigned and returned.
  • a mapping should be created for each column defined within the TMS external system. If the external system has only six of the eight possible external columns defined, then the datafile will have six mapping records.
  • TMSINT_ADM_EXT_MAPPING table - This API may be used to enable or disable the collection of a given external data value from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL provides a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.
  • TMSINT_ADM_DICT_MAPPING table - The API is a pipelined function based on the TMSINT_ADM_DICT_MAPPING table and may therefore be executed as any query from the SQL command line.
  • Ancillary Column Definition is an optional component of the TMS to Medidata Rave Integration application.
  • the definition of ancillary data is defined in a metadata table TMSINT_ADM_ADDON_COLS.
  • Ancillary data or add-on columns are individual data values in the client source system that are associated to a specific DCM within a datafile that are retrieved from the client source system for the purpose of displaying ancillary data during TMS omission management activities via TMS Drill-Down Views.
  • the TMSINT_ADM_ADDON_COLS table will ultimately specify what data column values for a given DCM and datafile should be retrieved from the client source system and how the values will be parsed from the client source system HTML extract data.
  • TMSINT_ADM_ADDON_COLS table - This API may be used to enable or disable the collection of a given ancillary column value during integration processing by updating the ActiveFlag. Any required value in the table for which the API has a default of NULL allows the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.
  • the API QUERY_DICT_MAPPING is used to query any datafile dictionary mapping records in the TMSINT_ADM_DICT_MAPPING table.
  • the API is a pipelined function based on the TMSINT_ADM_DICT_MAPPING table and may therefore be executed as any query from the SQL command line.
  • Each client is required to have a client specific Oracle data-transfer schema and an Oracle data-processing schema registered in the TMS to Medidata Rave Integration application table TMSINT_ADM_ORA_ACCOUNTS.
  • the actual Oracle accounts must exist in the Oracle database before they may be registered.
  • the data-transfer schema is responsible for the "transfer" of data from the client source system to the Oracle database and from the Oracle database back to the client source system.
  • the data-processing schema is responsible for picking up client data written to the database by the data-transfer schema, processing the data in TMS and writing the results back to the database to be written back to the source system by the data-transfer schema.
  • All data-transfer schemas must have the name TMSINT_XFER_<client>.
  • the API QUERY_ORA_ACCOUNT is used to query Oracle accounts in the TMSINT_ADM_ORA_ACCOUNTS table.
  • the API is a pipelined function based on the TMSINT_ADM_ORA_ACCOUNTS table and may therefore be executed as any query from the SQL command line.
  • The HTML prefix that shall precede the data value (DT) within the HTML import file will need to be specified when returning coding data.
  • The HTML prefix that shall precede the data value (action text) within the HTML import file will need to be specified when returning response/action data.
  • Custom TMS to Medidata Rave Integration application (TMSINT) - The TMS to Medidata Rave Integration process will automatically extract the configured clinical data in an HTML format; the HTML data will be transformed into patient records appropriate for processing in the TMS application and then the TMS processing results will be written in a client compatible HTML format and updated in the client's source system.
  • Each client utilizing the TMS to Medidata Rave Integration application will have designated TMS dictionaries within the TMS application as required and a designated custom TMS External System definition. Additionally, the client may have enhanced coding capabilities using DBMS Consulting, Inc. custom TMS Search Objects and DBMS Consulting, Inc. custom VTA loading. Use of the TMS to Medidata Rave Integration application allows the client to perform all TMS omission management activities from within TMS and the ability to customize pre-defined omission actions and statuses as well as the option to utilize DBMS custom TMS Drill-Down views for the display of ancillary data.
  • the objective of this document is to specify the overall design of the DBMS TMS to Medidata Rave Integration (TMSINT) application and its range of functionality. Topics specific to the TMS application, such as defining, loading and upgrading TMS dictionaries, as well as TMS external systems, TMS Search Objects, and TMS Drill-Down views, are considered custom TMS application components.
  • the DBMS TMS to Medidata Rave Integration (TMSINT) application interacts directly with client source systems and the TMS application to provide a hosted clinical data coding solution.
  • TMS to Medidata Rave Integration Schemas -
  • the TMS to Medidata Rave Integration (TMSINT) application requires one Oracle schema to be defined as the application owner and administrator. All clients configured to use the TMSINT application will be defined and managed by the administrator. Each client setup to use the TMSINT application will require two client specific Oracle schemas. One schema will be the designated data-transfer account and the other account will be the designated data-processing account.
  • The TMSINT_ADM Oracle schema is the owner and administrator of the TMSINT application. It is recommended that the schema name be TMSINT_ADM; however, another Oracle schema name may be used if desired.
  • the administrator account will be referred to as TMSINT_ADM. The application administrator will own all of the application metadata tables and will be required to setup and configure a given client before the client may use the TMSINT application. All stored procedures for application use are created within the administrative account. As clients are configured for use, the appropriate privileges, synonyms and objects will be created in the client specific Oracle accounts. The data-transfer and data-processing functionality used by each client will be accessed from stored procedures from within the administrative account. The application functionality has been written so that the same APIs will be appropriate for all clients based on the client's individual configuration in the application metadata tables. This allows for future modifications and bug fixes from within a single Oracle account as opposed to having to modify multiple client accounts. Table 3 summarizes the objects created for the application administrator schema:
  • TMSINT_XFER_<client> - Each client configured to use the TMSINT application will be required to have one designated Oracle data-transfer schema.
  • the Oracle schema is named TMSINT_XFER_<client> where the <client> value is the alias code defined for the client during the configuration process. For example, if the client "ABC Company" has a defined client alias code of "ABC", the data-transfer schema for "ABC Company" will be TMSINT_XFER_ABC.
  • the purpose of the data-transfer schema is to "transfer" data to and from the client's clinical data source system.
  • Installation scripts have been written to automate the creation of the client specific data-transfer schemas which will include the creation of all Oracle objects to be created directly within the data-transfer schema. As the data-transfer schema is created, the appropriate grants will be issued and the appropriate private synonyms will be created to ensure access to all administrator owned objects.
  • the data-transfer schema is not a privileged Oracle account and is granted only the Oracle CONNECT and RESOURCE system roles. The data-transfer schema is a "staging" schema and as such, the data will not persist in the database. Table 4 summarizes the objects created for each client specific data-transfer schema:
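  • A sketch of provisioning a client data-transfer schema consistent with the conventions above (client alias "ABC" assumed; the actual installation scripts also create the schema objects and private synonyms summarized in Table 4):

      -- Default password equals the user name, as described for the installation scripts
      CREATE USER tmsint_xfer_abc IDENTIFIED BY tmsint_xfer_abc;
      -- Not a privileged account: only CONNECT and RESOURCE are granted
      GRANT CONNECT, RESOURCE TO tmsint_xfer_abc;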
  • TMSINT_PROC_<client> - Each client configured to use the TMSINT application will be required to have one designated Oracle data-processing schema.
  • the Oracle schema is named TMSINT_PROC_<client> where the <client> value is the alias code defined for the client during the configuration process. For example, if the client "ABC Company" has a defined client alias code of "ABC", the data-processing schema for "ABC Company" will be TMSINT_PROC_ABC.
  • the purpose of the data-processing schema is to process all client data in TMS as well as return the TMS processing results to the data-transfer schema to be updated in the client's clinical source system. Installation scripts have been written to automate the creation of the client specific data-processing schemas, which include the creation of all Oracle objects to be created directly within the data-processing schema. As the data-processing schema is created, the appropriate grants will be issued and the appropriate private synonyms will be created to ensure access to all administrator owned objects.
  • the data-processing schema is a privileged Oracle account and is defined as a TMS super-user. Table 5 summarizes the objects created for each client specific data-processing schema.
  • TMSINT_ADM_PROPERTIES - The property table allows defining various properties within the TMSINT application.
  • the only required entry within the properties table during the installation process is the name of the Oracle account that is the TMSINT application owner.
  • the PropertyCategory is "APPLICATION”
  • the PropName is "OWNER"
  • the PropValue is the Oracle schema owning all of the application objects, which is expected to be "TMSINT_ADM".
  • the properties table can optionally be used to specify other miscellaneous properties as needed.
  • a SQL script is executed during the installation process as the application owner and will automatically create the required entry in the TMSINT_ADM_PROPERTIES table. Additional SQL scripts are provided to insert, update, and delete properties as needed.
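  • The one required property row described above could be created with an insert such as the following sketch (the exact column names are assumed from the PropertyCategory/PropName/PropValue descriptions):

      INSERT INTO tmsint_adm_properties (propertycategory, propname, propvalue)
      VALUES ('APPLICATION', 'OWNER', 'TMSINT_ADM');   -- names the application owner schema
      COMMIT;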
  • TMSINT_ADM_CLIENTS - The clients table allows for the definition of a client within the TMSINT application. Each client is assigned a unique numeric identifier and alias code. The numeric identifier, ClientID, will be used as a foreign key in other metadata tables for referential integrity. The value of the client alias code selected dictates the names of the Data-Transfer and Data-Processing schemas.
  • SQL scripts are provided to easily allow for the insert, update, and delete of clients data.
  • TMSINT_ADM_DATAFILES - The datafiles table allows for the definition of datafiles within the TMSINT application for a specific client in TMSINT_ADM_CLIENTS.
  • a client may have one or more datafiles but a datafile may be assigned to only one client.
  • Each datafile is assigned a unique numeric identifier (DatafileID) that will be used as a foreign key in other metadata tables that have a child relationship to the datafile at hand.
  • Defined for a datafile is the URL (e.g. https://pharmanet.mdsol.com/RaveWebServices) used to access the client's datafile, which will be an HTML document.
  • the user name and password credentials needed to access the client datafile URL are required.
  • Each datafile will correspond to one clinical study name.
  • the TMS domain name will be the "Study Name” concatenated with the string "_DOMAIN" for all coding records within the given datafile.
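  • For illustration, the domain naming convention above amounts to a simple concatenation; the column name STUDY_NAME is assumed.

      SELECT study_name || '_DOMAIN' AS tms_domain_name
        FROM tmsint_adm_datafiles
       WHERE datafile_id = :id;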
  • New datafiles may be added to an existing client at any time.
  • the Active Flag can simply be updated to "N" and, likewise, to reinstate processing the Active Flag may be reset back to "Y". All datafiles are initially created as active. SQL scripts are provided to easily allow for the insert, update, and delete of client datafiles.
  • the dictionary mapping table is a child table to the datafiles table TMSINT_ADM_DATAFILES and contains one record for every DCM/VT within a datafile to be coded in TMS.
  • a single datafile may contain a DCM/VT (AE/AETERM) that is applicable to adverse event MedDRA coding and a DCM/VT (CONMEDS/CMTERM) that is applicable to Drug coding, or multiples of each.
  • the dictionary mapping table associates the specific DCM/VT to be coded with a specific dictionary in TMS. This information is used during TMS processing by the Data-Processing schema.
  • a new DCM/VT dictionary mapping may be added to an existing datafile at any time by simply adding the new record to the TMSINT_ADM_DICT_MAPPING table.
  • the Active Flag can be set to "N" and likewise to reinstate processing of the DCM/VT within the datafile, the Active Flag can be reset to ⁇ ". All datafile dictionary mappings will initially be created as active.
  • the TMS Dictionary MUST already exist in the TMS repository prior to adding a datafile dictionary mapping. SQL scripts are provided to easily allow for the insert, update, and delete of datafile to dictionary mappings.
  • TMSINT_ADM_EXT_MAPPING - The external system mapping table is a child table to the datafiles table. This table creates a relationship between a given client datafile and a defined TMS External System. The table contains one record for every column defined in the TMS definition of the External System, which may be at most eight values (ExtValue1 - ExtValue8). If the TMS External System has 6 defined external column values (Study, Site, Patient, Event, etc.) there may be only one record for each defined external system column. This table is used by the TMSINT application to parse the datafile values that will ultimately be stored in TMS during processing.
  • Additional external system mappings can be added to an existing datafile at any time as long as it corresponds to the TMS definition of the external system.
  • the Active Flag can be set to "N” and likewise reinstated by setting the Active Flag back to "Y", Ali datafile external mappings will initially be created as active.
  • the TMS External System MUST already exist in the TMS repository prior to adding a datafile external system mapping. SQL scripts are provided to easily allow for the insert, update, and delete of datafile to external system mapping.
  • TMSINT_ADM_ADDON_COLS - The ancillary data table stores additional data associated to a specific DCM within a given datafile. These column values are not stored within the TMS repository during the coding process; however, they can be used for later omission management via TMS Drill-Down views in which ancillary data may be queried and viewed as additional data to aid in the manual classification process.
  • the ancillary data is stored in the TMSINT application tables but not directly accessible from TMS.
  • Ancillary data is optional and as many columns as needed can be stored which may differ between DCMs.
  • AE DCM Adverse Event
  • CM DCM Concomitant Medication
  • CCDOSE dose and frequency
  • the actual Oracle schema MUST be pre-existing in the Oracle database and will be validated using the Oracle view ALL_USERS based on user name.
  • Creating or deleting entries does NOT actually create or drop the Oracle schema in the database but is more of a "registration" process which creates a relationship between the Oracle schema and a client as well as controls the selection of client data between the Oracle schemas.
  • Additional SQL scripts are provided to create the Oracle schemas in the database which will default the Oracle password for accounts to the user name.
  • the SQL script to create a data transfer schema for client "INV" will create an Oracle user of TMSINT_XFER_INV and the account has an Oracle password of TMSINT_XFER_INV.
  • the Oracle database password may optionally be changed at any time using normal Oracle means (ALTER USER <username> IDENTIFIED BY <psw>;). When passwords are modified within the database, it is strongly recommended that the corresponding password in the Oracle accounts table be updated as well.
  • the Oracle passwords within the Oracle accounts table are used for query only and not used to connect to the database at the time of this document.
  • SQL scripts are provided to easily allow for the registration and un-registration of Oracle accounts as well as suspending and unsuspending of the accounts by modifying the Active flag to "N" and "Y", respectively. Suspending the Oracle Data-Transfer schema by setting the Active Flag to "N" will "suspend” or prevent the data transfer activity.
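  • A sketch of changing an integration account password and suspending its registration, following the conventions above (the registration column names are assumed for illustration):

      -- Normal Oracle password change for the data-transfer account
      ALTER USER tmsint_xfer_inv IDENTIFIED BY "NewStrongPassword#1";

      -- Suspend the account registration, which prevents data-transfer activity
      UPDATE tmsint_adm_ora_accounts
         SET active_flag = 'N'
       WHERE ora_account_name = 'TMSINT_XFER_INV';
      COMMIT;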
  • TMSINT_ADM_DICT_VERSIONS - The TMSINT_ADM_DICT_VERSIONS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator and contains a list of all TMS base and/or filter dictionaries and their corresponding dictionary version based on the TMS DICTVER informative note.
  • the dictionary type value will be derived based on the dictionary level names and will indicate if the dictionary is an "AE" (Adverse Event) dictionary such as MedDRA, a "DRUG" dictionary such as WHODrug or a "FILTER" dictionary such as the SMQ or SDG dictionary.
  • AE Adverse Event
  • TMSINT_ADM_DICT_DERV_PATH_LEVELS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator containing the TMS "Derivation Path" dictionary levels per TMS dictionary. This view is used by the TMS to Medidata Rave Integration application process when calling the TMS API TMS_USER_FULLAUTOCODE.
  • TMSINT_ADM_DICT_DERV_PATH_COLUMNS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator containing the TMS dictionary columns that will be derived for each dictionary level of the TMS dictionary derivation path.
  • TMSINT_ADM_DICT_ING_LISTS is an Oracle view owned by the TMS to Medidata Rave Integration application owner and contains the informative note values for any TMS dictionary having a "PT" dictionary level that has an associated informative note named "INGLIST".
  • the ingredients list is associated to UMC drug dictionaries in which the Ingredients_LongText.txt file has been loaded into the TMS dictionary.
  • TMS does not allow informative notes to be derived as part of the TMS derivation path; therefore, this view was created for ease of access and viewing of the "INGLIST" outside of the TMS application.
  • the UMC ingredient list provided by the vendor is up to 4000 characters in length (ING_LIST_LONG). Due to restrictions within most client source systems, the column ING_LIST_SHORT is provided, which consists of the first 500 characters of the ingredient list.
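  • The relationship between the two ingredient-list columns described above can be illustrated as follows; the view already exposes ING_LIST_SHORT, so this query is only an illustration of how it relates to ING_LIST_LONG.

      SELECT SUBSTR(ing_list_long, 1, 500) AS ing_list_short_equivalent
        FROM tmsint_adm_dict_ing_lists;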
  • TMSINT_XFER_HTML_EXTRACT - The TMSINT_XFER_HTML_EXTRACT table is used to stage data extracted directly from the client source system that is to be processed in TMS. Data written to the EXTRACT table is written in an HTML format as directly extracted from the source system. EXTRACT data will be initially created with a PROCESS_FLAG value of "N" indicating the data has not been processed.
  • the data-processing schema will select pending data and mark the data as being processed with a PROCESS_FLAG of "P".
  • once processing is complete, the data will be marked as having been processed with a PROCESS_FLAG of "Y".
  • Successfully processed data will subsequently be deleted.
  • Data is written to the EXTRACT table by FILE_NAME as defined in TMSINT_ADM_DATAFILES. Each datafile is applicable to one client clinical study. Once study data has been written to the EXTRACT table, it is deleted before data for the same study/file may be extracted and written to the table.
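  • The EXTRACT table PROCESS_FLAG lifecycle described above ("N" pending, "P" in process, "Y" processed, then deleted) can be sketched as:

      -- Data-processing schema picks up pending rows
      UPDATE tmsint_xfer_html_extract SET process_flag = 'P' WHERE process_flag = 'N';
      -- ... TMS processing of the staged HTML data ...
      UPDATE tmsint_xfer_html_extract SET process_flag = 'Y' WHERE process_flag = 'P';
      -- Successfully processed data is subsequently deleted
      DELETE FROM tmsint_xfer_html_extract WHERE process_flag = 'Y';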
  • TMSINT_XFER_HTML_IMPORT - The TMSINT_XFER_HTML_IMPORT table is used to stage TMS processing results for patient records that have been either classified in TMS or records that have been assigned an omission action in TMS that requires a response from the source system.
  • the FILE_TYPE column will contain the values of either "CODING” or "ACTION” indicating the type of update in the client system that is needed.
  • Data written to the IMPORT table is written in an HTML format.
  • the IMPORT data is created by the data-processing schema based on integration processing. Records are initially written with a PROCESS_FLAG of "N" indicating that the record has not yet been processed in the client source system.
  • once processed in the client source system, the PROCESS_FLAG will be updated to "Y".
  • the next execution of the integration process deletes any records that have a PROCESS_FLAG of "Y" before writing any new updates needed.
  • the IMPORT table will contain HTML files per patient update as required by the source system (i.e. Medidata Rave).
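  • For illustration, pending IMPORT updates awaiting the client source system can be counted by FILE_TYPE as sketched below.

      SELECT file_type, COUNT(*) AS pending_files    -- 'CODING' or 'ACTION'
        FROM tmsint_xfer_html_import
       WHERE process_flag = 'N'
       GROUP BY file_type;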
  • TMS to Medidata Rave Integration application Data-Processing Tables.
  • the following Oracle tables are created within each client specific data-processing schema.
  • the INBOUND tables are the staging tables of client data subject to TMS integration processing.
  • the CODING tables are the "production" tables that contain all of the client's data that has been processed in TMS.
  • the JN tables are the audit tables.
  • Each series of tables contains the "core” table as well as a “DTLS” table and a “DERV” table.
  • the "DTLS” table is applicable only to any ancillary data defined for the given DCM.
  • the "DERV” table will contain the TMS “Derivation Path” data and will only contain data for TMS patient records that are VTA classifications.
  • TMSINT_PROC_INBOUND - The TMSINT_PROC_INBOUND table is used to stage client data by study/datafile that has been obtained from the data-transfer owned EXTRACT table.
  • the INBOUND table is populated using the application metadata tables and applicable data parsed from the HTML within the EXTRACT table.
  • the EXTRACT HTML data is transformed into a record format applicable to TMS processing. All data processed in TMS via the integration application will be done based on data in the INBOUND tables. As data is processed in TMS, it will be created or updated in the CODING tables and can then be deleted from the INBOUND tables.
  • the INBOUND tables are cleared of all data at the onset of integration Processing.
  • TMSINT_PROC_INBOUND_DTLS The TMSINT_PROC_INBOUND_DTLS table is used to stage client ancillary data for the given patient record in the TMSINT_PROC_INBOUND staging table where ancillary data has been defined in the application metadata table TMSINT_ADM_ADDON_COLS for the given study and DCM. One record is created for each applicable ancillary column defined.
  • TMSINT_PROC_INBOUND_DERV The TMSINT_PROC_INBOUND_DERV table is used to stage the TMS derivation path records for the given patient record in the TMSINT_PROC_INBOUND staging table where the patient record is a VTA coding record.
  • the TMSINT_PROC_INBOUND_DERV table is populated directly by the integration process during TMS processing and is not populated via EXTRACT data.
  • TMSINT_PROC_INBOUND_DERV When records are classified in TMS, the application views TMSINT_ADM_DICT_DERV_PATH_LEVELS and TMSINT_ADM_DICT_DERV_PATH_COLUMNS are used to generate a record in TMSINT_PROC_INBOUND_DERV for every derivation path level and column for the given patient record. Since the TMS derivation path is defined in TMS, the TMSINT_PROC_INBOUND_DERV table may contain records with NULL derivation path values. For example, if the full ATC path is applicable (ATC1..ATC5) but the coding term has a primary ATC at the ATC3 level, the derivation path levels for ATC4 and ATC5 will be NULL.
  • TMSINT_PROC_CODING The TMSINT_PROC_CODING table is used to maintain all TMS related data for a given client. As data from INBOUND is processed in TMS, it will be either created in the CODING table or updated in the CODING table. The CODING table should always be a representation of the client's source data since the last execution of the integration process. All data within the TMSINT_PROC_CODING table should be either a patient record in TMS_SOURCE_TERMS or an omission record in TMS_VT_OMISSIONS. The TMSINT_PROC_CODING can be populated ONLY with data from TMSINT_PROC_INBOUND.
  • TMSINT_PROC_CODING_DTLS The TMSINT_PROC_CODING_DTLS table will store all ancillary data for the given patient record in the TMSINT_PROC_CODING table. Data in the TMSINT_PROC_CODING_DTLS will be created/updated directly from TMSINT_PROC_INBOUND_DTLS during integration processing. Data in the TMSINT_PROC_CODING_DTLS table is only used by the TMS application when applicable TMS drill-down views have been defined for the given TMS external system and DCM.
  • TMSINT_PROC_CODING_DERV The TMSINT_PROC_CODING_DERV table stores the complete TMS derivation path for any given patient record in the TMSINT_PROC_CODING that exists as a coding record.
  • Data in the TMSINT_PROC_CODING_DERV is created/updated directly from TMSINT_PROC_INBOUND_DERV during integration processing.
  • Data in the TMSINT_PROC_CODING_DERV table is only used for the purpose of returning one or more components of the TMS derivation path to the client source system.
  • TMSINT_PROC_CODING_JN The TMSINT_PROC_CODING_JN table is an audit table that is used to maintain a history of patient coding records that have been updated or deleted.
  • the ENTRY_TS and ENTRY_BY of the JN record created will automatically become the UPDATED_TS and UPDATED_BY values in the TMSINT_PROC_CODING record that supersedes the JN record, except in the case of deleted patient records.
  • TMSINT_PROC_CODING_DTLS_JN - the TMSINT_PROC_CODING_DTLS_JN table is an audit table that is used to maintain a history of patient ancillary data. Data from TMSINT_PROC_CODING_DTLS is written to the TMSINT_PROC_CODING_DTLS_JN table when a patient record is being deleted from the client source system and TMS or when any component of the patient record is being updated including ancillary data and TMS derivation path data.
  • for the TMSINT_PROC_CODING_DTLS_JN table, the JN_ENTRY_TS, JN_ENTRY_BY and JN_REASON will be the same as the parent JN record in TMSINT_PROC_CODING_JN.
  • TMSINT_PROC_CODING_DERV_JN - the TMSINT_PROC_CODING_DERV_JN table is an audit table that is used to maintain a history of patient derivation path data (if applicable). Data from TMSINT_PROC_CODING_DERV will be written to the TMSINT_PROC_CODING_DERV_JN table when a patient record is being deleted from the client source system and TMS or when any component of the patient record is being updated including ancillary data and TMS derivation path data.
  • the JN_ENTRY_TS, JN_ENTRY_BY and JN_REASON will be the same as the parent JN record in TMSINT_PROC_CODING_JN.
  • TMSINT_PROC_LOGGING The TMSINT_PROC_LOGGING table is a logging table that will be populated with TMS integration processing runtime results.
  • the logging table is truncated and repopulated with each execution of the integration. Data from the logging table can be queried or emailed for integration processing results and execution times.
  • the logging table is currently being used to specify how many records were processed, how many were new patient records, how many were patient update records, as well as how many resulted in classifications or omissions, etc.
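  • For example, runtime results might be summarized with a query such as the following; the column names shown are hypothetical, since this description only lists the kinds of counts that are logged:

      -- Hypothetical summary query; the actual column names in TMSINT_PROC_LOGGING
      -- are not specified in this description
      SELECT log_ts,
             records_processed,
             new_patient_records,
             updated_patient_records,
             classifications,
             omissions
        FROM tmsint_proc_logging
       ORDER BY log_ts;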
  • TMSINT Application Administrator Database Packages All TMS integration processing is executed via APIs within Oracle database packages. For ease of maintenance, all database packages are owned by the application administrator. All integration activity performed by the data-transfer schema and the data-processing schema is accomplished via APIs owned by the administrator. The individual data-transfer and data-processing schemas are granted execute privileges on the applicable APIs and have the appropriate Oracle synonyms.
  • the TMSINT_ADM_UTILS is designed exclusively for use by the application administrator.
  • the TMSINT_XFER_UTILS is designed for use by the data-transfer schemas and the TMSINT_PROC_UTILS is designed for use by the data-processing schemas.
  • TMSINT_ADM_UTILS Package The TMSINT_ADM_UTILS database package is owned by the application administrator TMSINT_ADM. The administrator is the overseer of the application metadata and configures new clients for application use, makes modifications, and/or makes additions to existing clients.
  • the APIs available to the application administrator for client setup and management are listed below from the TMSINT_ADM_UTILS database package. Additionally, a SQL "wrapper" script has been written to call each of the specified APIs below in which the user will be prompted for the API inputs and the modification can be captured in a LOG file with a before and after image where applicable.
  • the "wrapper" is an alternative means to execute some of the client configuration APIs with scripting allowing the user to be "prompted” for required input.
  • TMSINT_XFER_UTILS Package The TMSINT_XFER_UTILS database package is owned by the application administrator TMSINT_ADM but is designed for execution exclusively by the client specific data-transfer schemas (TMSINT_XFER_<client>). All query related functions are "wrappers" around the corresponding TMSINT_ADM_UTILS query functions but are additionally constrained for the <client> at hand based on Oracle USER and the corresponding ClientID as indicated in the TMSINT_ORA_ACCOUNTS table for the Oracle USER.
  • a private synonym of TMSINT_XFER_UTILS that "points" to the application owner (TMSINT_ADM) package TMSINT_XFER_UTILS is created in the data-transfer schema automatically when the data-transfer schema is created.
  • TMSINT_PROC_UTILS Package The TMSINT_PROC_UTILS database package is owned by the application administrator TMSINT_ADM but has been designed for execution exclusively by the client specific data-processing schemas (TMSINT_PROC_<client>). All query related functions are "wrappers" around the corresponding TMSINT_ADM_UTILS query functions but are additionally constrained for the <client> at hand based on Oracle USER and the corresponding ClientID as indicated in the TMSINT_ORA_ACCOUNTS table for the Oracle USER.
  • likewise, a private synonym of TMSINT_PROC_UTILS that "points" to the application owner (TMSINT_ADM) package TMSINT_PROC_UTILS will be created in the data-processing schema automatically when the data-processing schema is created.
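  • A minimal sketch of the grant/synonym pattern described above, assuming a client alias of "INV"; the package, schema and client names come from this description, while the exact statements issued by the installation scripts are assumed:

      -- Executed as the application administrator TMSINT_ADM
      GRANT EXECUTE ON tmsint_xfer_utils TO tmsint_xfer_inv;
      GRANT EXECUTE ON tmsint_proc_utils TO tmsint_proc_inv;

      -- Private synonyms in the client schemas "point" to the administrator-owned packages
      CREATE SYNONYM tmsint_xfer_inv.tmsint_xfer_utils FOR tmsint_adm.tmsint_xfer_utils;
      CREATE SYNONYM tmsint_proc_inv.tmsint_proc_utils FOR tmsint_adm.tmsint_proc_utils;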
  • TMSINT TMS to Medidata Rave Integration
  • the TMS to Medidata Rave Integration (TMSINT) application process design contains a data-transfer schema and a data-processing schema.
  • the data-transfer schema is responsible for obtaining clinical data from within the client's source system and writing the data to an Oracle staging table for additional processing.
  • the data- transfer schema is also is responsible for updating the client's source system with the TMS processing results and for all direct interfacing with the client's source system using web-services.
  • TMSINT The synchronous TMS to Medidata Rave Integration (TMSINT) application performs direct data transfer from RAVE via Rave Web Services (RWS) technology.
  • RWS Rave Web Services
  • the TMSINT processes are exposed as web services and are also scheduled via an Oracle DBMS_SCHEDULER at a frequency specified by the client.
  • the TMSINT also exchanges standard clinical data and auxiliary data synchronously and with immediate confirmation.
  • RWS uses the Representational State Transfer (RESTful) architecture. Data is posted to or retrieved from Rave using HTTP protocol messages posted to specific URLs. Each message receives an immediate success or failure response. In the event of a failure, any pending changes are rolled back. Since RWS uses "RESTful" web services, it does not mandate the use of either Simple Object Access Protocol (SOAP) or Web Services Description Language (WSDL). The Medical Coding Solution supplied by DBMS allows the loading of HTML and text files as data sources.
  • SOAP Simple Object Access Protocol
  • WSDL Web Services Description Language
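  • A minimal PL/SQL sketch, assuming Oracle's UTL_HTTP package, of posting a message to an RWS URL as described above; the host, study name, payload and authentication handling are placeholders and are not taken from this description:

      DECLARE
        l_host VARCHAR2(100)  := 'your-rave-host.example.com';   -- placeholder host
        l_body VARCHAR2(4000) := '<ODM>...</ODM>';               -- placeholder ODM payload
        l_url  VARCHAR2(400);
        l_req  UTL_HTTP.req;
        l_resp UTL_HTTP.resp;
      BEGIN
        l_url := 'https://' || l_host || '/RaveWebServices/studies/MYSTUDY/datasets/regular';
        l_req := UTL_HTTP.begin_request(url => l_url, method => 'POST');
        UTL_HTTP.set_header(l_req, 'Content-Type', 'text/xml');
        UTL_HTTP.set_header(l_req, 'Content-Length', TO_CHAR(LENGTH(l_body)));
        UTL_HTTP.write_text(l_req, l_body);
        l_resp := UTL_HTTP.get_response(l_req);
        -- each message receives an immediate success or failure response
        DBMS_OUTPUT.put_line('HTTP status: ' || l_resp.status_code);
        UTL_HTTP.end_response(l_resp);
      END;
      /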
  • TMS to Medidata Rave Integration Application Data-Transfer Process - Data is entered into the Medidata Rave system.
  • the TMSINT TMS Integration processes all verbatim terms that are marked for processing, sends them to TMS, and inserts into the TMS_SOURCE_TERMS table all their associated key data.
  • TMS searches for an exact match to a dictionary term or an existing VTA in the Domain and links the TMS verbatim term to a dictionary term.
  • TMS first searches all terms in the Dictionary at the Classification Level, including dictionary terms plus Verbatim Terms that have been mapped to Dictionary Terms in the Domain.
  • TMS uses a built-in matching algorithm which makes all mixed case terms match and reduces multiple spaces to one space.
  • TMS can use any custom-defined Autocoder Search Objects to find a match.
  • the TMS to Medidata Rave Integration (TMSINT) application is then able to resubmit terms to be re-coded if a verbatim, route or indication field is modified.
  • the TMS to Medidata Rave Integration (TMSINT) application is written so that extracting data subsequent times will process source data/metadata extracted where the verbatim term, route or indication field is modified.
  • the data in the INBOUND table is reprocessed and synchronized with the TMS data to ensure the modified data is updated and omissions are either coded or a TMS Action (Query) is raised in TMS.
  • when the TMS to Medidata Rave Integration (TMSINT) application is run for TMS Integration and Import, the application imports into the Medidata Rave EDC system all coded data from each individual study/job/domain including dictionary version information and WHODrug ingredients.
  • the TMS to Medidata Rave Integration (TMSINT) application also creates open queries in the Medidata Rave EDC system for the Sites to respond to the Coders' coding queries.
  • the TMS to Medidata Rave Integration application Bi-Directional Detailed Design provides for extracting Client Source Data via Web-Services by extracting the study dataset from the source Medidata Rave EDC system using the Rave Web Services (RWS) defined as https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}.
  • Client Source Data can also be updated via Web-Services by Updating/Creating the coding assignment and related dictionary hierarchy for each such identified item on the source Medidata Rave EDC system using the Rave Web Services (RWS).
  • the TMS to Medidata Rave Integration (TMSINT) application process design contains a data-transfer schema and a data-processing schema.
  • the data-processing schema is responsible for obtaining clinical data that has been staged by the data-transfer schema, processing the data in TMS, and staging the TMS processing results in the data-transfer schema to be written back to the client's source system by the data-transfer schema.
  • Each of the processing steps performed by the data-processing schema is described below.
  • the data-processing schema determines if there is pending client data that has been staged by the data-transfer schema. Client data is staged by the data-transfer schema in a raw HTML format within the staging table TMSINT_XFER_HTML_EXTRACT. Data that is pending processing in TMS will have a PROCESS_FLAG of "N".
  • each distinct datafile in the EXTRACT staging table is validated to ensure that the datafile has been written in its entirety. This will be done by verifying that the datafile contains both a beginning of file (BOF) marker and an end of file (EOF) marker.
  • BOF beginning of file
  • EOF end of file
  • the BOF/EOF marker for all client datafiles is stored in the NEW_FILE_HTML_TAG column of the application metadata table TMSINT_ADM_DATAFILES.
  • the NEW_FILE_HTML_TAG (i.e. <ODM>) will be used to derive the BOF and EOF file markers.
  • the BOF marker will consist of the NEW_FILE_HTML_TAG less the trailing ">" (i.e. <ODM).
  • the EOF marker will consist of the NEW_FILE_HTML_TAG with the "<" replaced with "</" (i.e. </ODM>).
  • Each EXTRACT datafile will be checked to ensure the BOF/EOF markers are present.
  • if either marker is missing, the file is incomplete. This may occur if the file is actively being written to the EXTRACT table by the data-transfer schema, in which case the datafile will simply be processed with the next execution of the integration process. If the datafile does not have both the BOF marker and the EOF marker, the file will be omitted from current processing. If the datafile contains both the BOF and EOF markers, processing will continue with the step below.
  • the BOR/EOR marker for all client datafiles is stored in the NEW_REC_HTML_TAG column of the application metadata table TMSINT_ADM_DATAFILES.
  • NEW_REC_HTML_TAG (i.e. <ClinicalData>) will be used to derive the BOR and EOR file markers.
  • the BOR marker will consist of the NEW_REC_HTML_TAG less the trailing ">" (i.e. <ClinicalData).
  • the EOR marker will consist of the NEW_REC_HTML_TAG with the "<" replaced with "</" (i.e. </ClinicalData>).
  • Each EXTRACT datafile record will be checked to ensure the BOR/EOR markers are present.
  • Each HTML record written to the EXTRACT table is assigned a unique identifier (RECORD_SEQNO).
  • the sequential order of the RECORD_SEQNO is the order in which the HTML record was inserted. While the RECORD_SEQNO identifies individual records, the PROCESS_RECNO column will be used during this process to assign logical record numbers based on the BOR/EOR record markers. Datafile records will be read in the order of insert.
  • the logical record number (PROCESS_RECNO) will be updated to a 1-up number each time the BOR marker is read. All records prior to the first occurrence of the BOR marker will be assigned a PROCESS_RECNO of 0 since these records are HTML header records.
  • the PROCESS_RECNO will be incremented and it will be confirmed that the previous record read contained the EOR marker. If the datafile is found to have any unmatched BOR/EOR markers, the file will be deemed invalid and will be omitted from further processing. If the datafile was found to be valid, the PROCESS_FLAG will be updated from "N" to "P", the processing timestamp will be updated to the current SYSDATE and the PROCESS_RECNO will be updated to reflect the logical patient numbers.
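  • A condensed PL/SQL sketch of the marker validation and logical record numbering described above; the HTML_DATA column name is an assumption, and the real logic lives in the administrator-owned packages:

      DECLARE
        l_bof     VARCHAR2(100);
        l_eof     VARCHAR2(100);
        l_has_bof NUMBER;
        l_has_eof NUMBER;
        l_recno   NUMBER := 0;
      BEGIN
        -- derive the BOF/EOF markers from NEW_FILE_HTML_TAG (e.g. <ODM> -> <ODM and </ODM>)
        SELECT RTRIM(new_file_html_tag, '>'), REPLACE(new_file_html_tag, '<', '</')
          INTO l_bof, l_eof
          FROM tmsint_adm_datafiles
         WHERE file_name = :file_name;

        -- the datafile is complete only when both markers are present
        SELECT COUNT(*) INTO l_has_bof FROM tmsint_xfer_html_extract
         WHERE file_name = :file_name AND INSTR(html_data, l_bof) > 0;   -- HTML_DATA is an assumed column name
        SELECT COUNT(*) INTO l_has_eof FROM tmsint_xfer_html_extract
         WHERE file_name = :file_name AND INSTR(html_data, l_eof) > 0;
        IF l_has_bof = 0 OR l_has_eof = 0 THEN
          RETURN;   -- incomplete file: leave it for the next execution of the integration process
        END IF;

        -- assign logical patient numbers: bump PROCESS_RECNO at each BOR marker
        -- (<ClinicalData), leaving HTML header rows before the first BOR at 0
        FOR r IN (SELECT record_seqno, html_data
                    FROM tmsint_xfer_html_extract
                   WHERE file_name = :file_name
                   ORDER BY record_seqno) LOOP
          IF INSTR(r.html_data, '<ClinicalData') > 0 THEN
            l_recno := l_recno + 1;
          END IF;
          UPDATE tmsint_xfer_html_extract
             SET process_recno = l_recno
           WHERE record_seqno = r.record_seqno;
        END LOOP;
      END;
      /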
  • the INBOUND tables exist within the data-processing schema and are the staging tables for ALL data to be processed in TMS.
  • the INBOUND tables include the tables TMSINT_PROC_INBOUND, TMSINT_PROC_INBOUND_DTLS and TMSINT_PROC_INBOUND_DERV.
  • the TMSINT_PROC_INBOUND table will contain all information about a patient record.
  • the TMSINT_PROC_INBOUND_DTLS table will contain all of the ancillary data for the corresponding TMSINT_PROC_INBOUND patient record.
  • the TMSINT_PROC_INBOUND_DERV table will contain the TMS derivation path for the corresponding TMSINT_PROC_INBOUND patient record, where applicable.
  • the INBOUND tables, which are staging tables, will be cleared of any data at the onset of each process. As data is processed successfully in TMS from the INBOUND tables, the data will be moved to the corresponding CODING tables. The only time data will be left in the INBOUND tables from previous processing is if an error occurred. Since all client data obtained from EXTRACT is cumulative, the previous data in INBOUND may safely be removed at the onset of each process.
  • Stage HTML EXTRACT Data in INBOUND Staging Tables - Data from the EXTRACT staging table having a PROCESS_FLAG of "P" are copied from EXTRACT staging table in the data-transfer schema to the INBOUND staging tables in the data- processing schema.
  • the application metadata tables are used in conjunction with the EXTRACT data to successfully populate the INBOUND tables.
  • the metadata tables specify not only core TMS information about a given datafile, such as the TMS dictionary and TMS domain to use, but also how to parse the applicable data values from the HTML.
  • the TMSINT_ADM_DATAFILES metadata table will be used to obtain the DatafileID, ClientID, TMS dictionary information, TMS domain information, and the TMS External System name for the given client datafile staged in EXTRACT.
  • the TMSINT_ADM_DICT_MAPPING metadata table will be used to determine what DCMs and VTs are contained within the HTML data and HOW to parse the data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given client datafile staged in EXTRACT.
  • the TMSINT_ADM_DICT_MAPPING metadata table will also be used to determine HOW to parse the RESPONSE value from the HTML using the RESPONSE_HTML_TAG and RESPONSE_HTML_PREFIX values for the given client datafile staged in EXTRACT. Responses are applicable only to previously processed data in which the specified VT is an omission in TMS, the VT has an open action and the site (client source system) is responding to the action.
  • the TMSINT_ADM_EXT_MAPPING metadata table will be used to determine what patient related "key" values are contained within the HTML data based on the TMS External System and HOW to parse each of the data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given client datafile staged in EXTRACT.
  • data is written to the TMSINT_PROC_INBOUND table based on the mappings below. Certain data items will be obtained from the application metadata tables based on the given client datafile while other data will be parsed directly from the HTML data in the EXTRACT staging table for the given patient record.
  • the TMSINT_PROC_INBOUND_DTLS table is repopulated with any required ancillary data for the datafile DCM based on defined metadata.
  • Ancillary data is defined for a given datafile at the DCM level and is contained in the metadata table TMSINT_ADM_ADDON_COLS.
  • the TMSINT_ADM_ADDON_COLS metadata table is used to determine what patient related ancillary values are contained within the HTML data and HOW to parse each of the ancillary data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given EXTRACT datafile and DCM.
  • using the TMSINT_PROC_INBOUND table in conjunction with the application metadata table referenced above and the EXTRACT table, the TMSINT_PROC_INBOUND_DTLS table is populated based on the mappings below. Note that the logical record number in the EXTRACT table is temporarily stored in the TMS_ERROR_MSG column of the TMSINT_PROC_INBOUND table to enable a direct link between the INBOUND record and the HTML EXTRACT logical patient record.
  • HTML file header is obtained and written to the LAST_FILE_HEADER column value in the application metadata table TMSINT_ADM_DATAFILES.
  • the HTML header in the EXTRACT table will be updated to reflect a PROCESS_RECNO of 0 during the formatting process.
  • This HTML file header contains key information that is specified in the HTML file data generated that will be written back to the data-transfer IMPORT staging tables. This key information is required by the client source system for the subsequent updates based on TMS processing results.
  • HTML file header containing key information generated by the client source system:
  • TMSINT_PROC_INBOUND table will be checked for exact duplicate records. Any record that is determined to be a duplicate is removed from the INBOUND tables. It is not likely this situation will occur. However, a check has been put in place to ensure patient records are not duplicated.
  • Duplicated TMSINT_PROC_INBOUND records are denoted as records in which the following column values have more than one occurrence:
  • the TMSINT_PROC_INBOUND table is checked for patient records having more than one occurrence.
  • the application does not currently support processing multiple versions of the same datafile within the EXTRACT staging table. However, this functionality has been pre-added to handle multiple versions of a patient record if added in future releases.
  • multiple occurrences are records in which the same TMS_SOURCE_TERM_ID and TMS_OCCURRENCE_ID occur multiple times. For each of these records, either the VT, the external value representing the "SITE" or the action/response values differ. Each record with the same TMS_SOURCE_TERM_ID will be selected and ordered based on the order of insert. The second and subsequent records are updated to reflect a 1-up TMS_OCCURRENCE_ID value.
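  • A hedged SQL sketch of the occurrence numbering described above; ROW_NUMBER() ordered by INBOUND_ID is used here to approximate "order of insert", and the column list is abbreviated:

      -- give the second and subsequent records sharing a TMS_SOURCE_TERM_ID a 1-up occurrence value
      MERGE INTO tmsint_proc_inbound t
      USING (SELECT inbound_id,
                    ROW_NUMBER() OVER (PARTITION BY tms_source_term_id
                                       ORDER BY inbound_id) AS occ
               FROM tmsint_proc_inbound) s
         ON (t.inbound_id = s.inbound_id)
       WHEN MATCHED THEN UPDATE
            SET t.tms_occurrence_id = s.occ
            WHERE s.occ > 1;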
  • TMS Integration Processing The following describes how the data-processing schema processes client data in TMS. At the time of TMS processing, any pending client data from the data-transfer EXTRACT staging table will have been written to the data-processing schema INBOUND staging tables. The following processes are then executed during TMS processing:
  • TMS_USER_SYNCHRONIZATION.SYNCHRONIZE This ensures that any activity that has occurred via the TMS front-end or any dictionary versioning activity is propagated to the TMS patient data in the TMS_SOURCE_TERMS and TMS_VT_OMISSIONS tables.
  • each record in the TMSINT_PROC_INBOUND table is processed in TMS via the TMS API TMS_USER_FULLAUTOCODE.CLASSIFYTERM. All TMS processing results are captured in the TMSINT_PROC_INBOUND table within the TMS related columns depending on whether the patient record resulted in being a classification record or an omission record in TMS:
  • once the TMSINT_PROC_INBOUND data has been processed in TMS, the TMSINT_PROC_INBOUND_DERV table is populated with the TMS "Derivation Path" for each patient record in TMSINT_PROC_INBOUND where the TMS_VTAID is non-null.
  • the TMS API TMS_USER_AUTOCODE.TryClassifying will be called in conjunction with custom code to retrieve all TMS columns as defined in the views TMSINT_ADM_DERV_PATH_LEVELS and TMSINT_ADM_DERV_PATH_COLUMNS. Once the entire TMS derivation path has been obtained, the data is inserted into the TMSINT_PROC_INBOUND_DERV table.
  • one record is created in the TMSINT_PROC_INBOUND_DERV table for every dictionary level/column combination for the given patient record identified by the assigned INBOUND_ID. Since ALL level/column combinations are created, it is possible that some derivation path data values will be NULL.
  • An example is the Drug dictionary which can have a full ATC hierarchical derivation path consisting of ATC level 1 through 5. If the VTA classification term has a "Primary Link" ATC that is an ATC3, then the corresponding ATC4 and ATC5 derivation path values will be NULL.
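  • A simplified SQL sketch of creating one derivation-path row per level/column combination for a classified INBOUND record; the column names in the views and in the DERV table are assumed for illustration, and the actual values are filled in from the TMS derivation path (and may remain NULL, as in the ATC example above):

      -- one row per (level, column) combination for the given INBOUND record
      INSERT INTO tmsint_proc_inbound_derv (inbound_id, derv_level, derv_column, derv_value)
      SELECT :inbound_id,
             l.derv_level,          -- assumed column name in TMSINT_ADM_DERV_PATH_LEVELS
             c.derv_column,         -- assumed column name in TMSINT_ADM_DERV_PATH_COLUMNS
             NULL                   -- populated later from the TMS derivation path; may stay NULL
        FROM tmsint_adm_derv_path_levels l
       CROSS JOIN tmsint_adm_derv_path_columns c;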
  • data from the TMSINT_PROC_INBOUND, TMSINT_PROC_INBOUND_DTLS and TMSINT_PROC_INBOUND_DERV tables for the given patient will be inserted into the TMSINT_PROC_CODING, TMSINT_PROC_CODING_DTLS and TMSINT_PROC_CODING_DERV tables, respectively.
  • After the new patient records have been successfully created in the CODING tables, the corresponding patient records will be deleted from the INBOUND tables.
  • the current CODING records for the patient are copied as-is to the JN tables to preserve a pre-update copy of the patient record.
  • the TMSINT_PROC_CODING record, all corresponding TMSINT_PROC_CODING_DTLS records and all corresponding TMSINT_PROC_CODING_DERV records will be copied to the JN tables TMSINT_PROC_CODING_JN, TMSINT_PROC_CODING_DTLS_JN and TMSINT_PROC_CODING_DERV_JN, respectively.
  • the reason for the change is annotated in all JN tables in the JN_REASON column, as well as the date of the change in JN_ENTRY_TS and the user making the change in JN_ENTRY_BY.
  • the JN_REASON, JN_ENTRY_TS and JN_ENTRY_BY will be the same value on all of the JN tables. Additionally, the JN_ENTRY_TS will become the UPDATED_TS for all related CODING tables applicable to the new updated values.
  • the UPDATED_TS and UPDATED_BY values of the most recent patient CODING record will always point to the JN record which it supersedes. After the current CODING records have been inserted in the JN tables, the CODING tables are then updated to reflect the values of the INBOUND tables. After the CODING tables have been updated, the INBOUND record can be deleted.
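  • A hedged SQL sketch of the pre-update audit copy for the core CODING table; the CODING_ID key and the abbreviated column lists are assumptions, and the DTLS and DERV tables are handled the same way by the actual packages:

      -- preserve the current CODING record before it is superseded
      INSERT INTO tmsint_proc_coding_jn (coding_id, jn_reason, jn_entry_ts, jn_entry_by)   -- remaining CODING columns omitted
      SELECT c.coding_id, :jn_reason, SYSTIMESTAMP, USER
        FROM tmsint_proc_coding c
       WHERE c.coding_id = :coding_id;

      -- the JN_ENTRY_TS of the audit row becomes the UPDATED_TS of the superseding CODING row
      UPDATE tmsint_proc_coding
         SET updated_ts = (SELECT MAX(jn_entry_ts)
                             FROM tmsint_proc_coding_jn
                            WHERE coding_id = :coding_id)
       WHERE coding_id = :coding_id;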
  • patient data is selected from TMSINT_PROC_CODING where either the CREATION_TS or the UPDATE_TS is greater than or equal to the execution timestamp. This then selects not only new patient records that were classified but also patient records that were updated by the current process.
  • data written to the HTML IMPORT table is also in an HTML format. Unlike the extract data, however, which is one single HTML file per study with multiple embedded records, the import data will be constructed as one HTML file per patient record to be updated.
  • the import HTML file header is constructed from the metadata table TMSINT_ADM_DATAFILES columns LAST_FILE_HEADER and NEW_REC_HTML_TAG; the NEW_REC_HTML_TAG is used to derive the BOR/EOR markers.
  • the patient "key" values as specified in the TMS external system are constructed in an HTML format using the TMSINT_PROC_EXT_MAPPING values of HTML_ TAG, HTML_PREFIX and CODING_HTML_SUFFIX in conjunction with the EXT_VALUE_# in TMSINTJPROC_CODING for the given patient in the order specified by POST_COLUM-ID.
  • TMS related values can be returned to include the full TMS derivation path, by default the application will return the VTA classification term, the dictionary version and in the case of the drug dictionary the list of ingredients.
  • These values are constructed in an HTML format using the TMSINT_PROC_ DICT_MAPPING values of HTML_TAG, HTML_PREFIX and CODING_HTML _ SUFFIX in conjunction with the corresponding data values from TMSINT _ PROC_CODING and TMSINT_ADM_DICT_ING_LIST for the given patient.
  • the execution timestamp is captured. This processing timestamp is then used to select all patient omission data from the TMSINT_PROC_CODING where UPDATE_T S is greater than or equal to the execution timestamp, the TMS_ACTION_TEXT is not null and the TMSINT_JROC_ CODING_JN table indicates the creation or update of an action.
  • NEW_REC_HTML_TAG is used to derive the BOR/EOR markers.
  • the patient "key" values as specified in the TMS external system are constructed in an HTML format using the TMSINT_PROC_EXT_MAPPING values of HTML_TAG, HTML_PREFIX and ACTION_HTML_SUFFIX in conjunction with the EXT_VALUE_# in TMSINT__PROC_CODING for the given patient in the order specified by POST_COLUM_ID.
  • the VT is returned constructed in an HTML format using the TMSINT_PROC_DICT_MAPPING values of HTML_TAG, HTML_PREFIX and ACTION_HTML_SUFFIX.
  • TMS related values can be returned; by default the application returns the TMS omission status and the TMS omission action.
  • the DBMS TMS to Medidata Rave Integration (TMSINT) application has the ability to reuse study field configurations (e.g. copying the configuration for an AE Verbatim between studies).
  • a template SQL script is provided that can be cloned and modified for the given client being defined similar to that of the SQL script below. The SQL script has been designed to accommodate subsequent executions and will initially delete the client and all related data prior to executing the SQL script.
  • dsNavigator is a means to facilitate the creation of approved VTA's in TMS for a given dictionary by which the Verbatim Term (VT), Dictionary Coding Term and optional TMS Domain Name are specified in ASCII TAB delimited file(s).
  • the contents of the ASCII VTA data file(s) is loaded into a staging table called STG_VTA_LOAD.
  • a SQL process creates the distinct TMS Dictionary Domain VTA's using the TMS API
  • the ASCII VTA data file used by the TMS VTA Batch Load Utility is a TAB delimited file less any headings or embedded special characters except for the TAB character used to separate the column values. All efforts should be made to ensure that there are no embedded horizontal tab characters or new line characters within the text values.
  • the DBMS TMS to Medidata Rave Integration (TMSINT) application allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology.
  • the application also allows the loading of HTML and text files as data sources, for example to provide an Extract from Rave, Integrate data into TMS, and Import Classifications and Coding Queries (actions) to Rave.
  • this is provided as a Webservice that can run on a scheduler or manually; this example illustrates running manually.
  • TMS searches for an exact match to a dictionary term or an existing VTA in the Domain linking the TMS verbatim term to a dictionary term.
  • TMS first searches all terms in the Dictionary at the Classification Level, including dictionary terms plus Verbatim Terms that have been mapped to Dictionary Terms in the Domain.
  • TMS uses a built-in matching algorithm which makes all mixed case terms match and reduces multiple spaces to one space. ( See Fig 4)
  • the verbatim term is associated with a VTA linking it to a dictionary term in TMS.
  • all the derived data is written to the Outbound Views for processing to the source EDC system. This provides a propagation feature enabling coding of identical previously coded verbatim terms to their VTAs. (See Fig. 5)
  • TMS can use any custom-defined Autocoder Search Objects to find a match which automatically matches non-equivalent terms. (See Figure 6) If a match is still not found, manual intervention is required.
  • the TMSINT application creates an omission in TMS and a record in the source terms table. An omission is then handled in TMS by either coding the term or by applying an Action. Re-submit terms to be re-coded
  • the TMSINT application is able to re-submit terms to be re-coded if a verbatim, route or indication field is modified.
  • the Oracle TMS to Medidata Rave Integration has been written so that extracting data subsequent times will process source data/ metadata extracted where the verbatim term, route or indication field is modified.
  • the data in the INBOUND table is reprocessed and synchronized with the TMS data to ensure the modified data is updated. (See Fig. 7 and Fig 8)
  • TMS Domain/Dictionary functionality
  • TMS also enables different approval workflow by domain; one study/program can automatically approve a VTA when it is classified while the other study can enforce the manual approval process after classification (See Fig 13).
  • TMS can be accessed by multiple users at the same time and the coder is able to filter using the TMS Omission Filter (See Figure 14) and while coding, the user can sort coded data within the TMS Classify VT Omissions (See Figure 15).
  • TMSINT application maintains a unique record ID from the source data.
  • TMS browsing capabilities make ingredients visible when coding in WHO DDE and all multi-axial relationships are able to be seen including the ability to see all ATC levels. While coding in the Oracle TMS coding system, the coder is able to view ingredients, all ATC levels, and all multi-axial relationships for the dictionary
  • the TMSINT application then propagates the Ingredients and Dictionary Version to Medidata Rave. Additional hierarchy, including the ATC, can be propagated if the Rave study is setup to receive the data (See Figure 17).
  • the TMSINT application allows the user to "Drill Down" for auxiliary information (e.g., CM Country, Indication, Route, Route Other) and displays differentiators while coding, allows the ability to use these differentiators for code assignment without automatically adding terms to a synonym list (e.g. allow the ability to use country specific information in the coding of non-distinct trade medications).
  • TMS provides the ability to attach/view status codes or notes to coded and uncoded terms (e.g. to indicate terms with outstanding queries or those which have had a change request sent to the MSSO) using the TMS Status/Notes Pop-up Window from the Classify VT Omissions window (under Omission Management). Status/Notes can also be propagated to Medidata Rave once an Item is created in the study to receive the Note (See Figure 19). VTAs can also be approved under Omission Management (See Figure 20). Coding Approval
  • TMS provides configurable workflows within the coding/approval process (e.g. provide the ability to allow a secondary coding approval work-flow but not mandate it).
  • the workflow is configured in the Domain/Dictionary definition See Figure 21.
  • TMS is able to perform ATC coding for medications in a single step and ATC Classification: Terms are automatically classified based on the primary link. For High Level Classification with Primary Link, the Primary link can be reclassified at the
  • the ATC can be exported to the Medidata Rave System when the study is setup to receive the ATC Values. (See Figure 22)
  • the Indication Omissions window is available.
  • the coder is able to code/classify to the specific ATC in a single step. This is available when the WHODrug C version is installed.
  • TMS out-of-the-box search object autocoding capabilities include Auto-coding to previously coded verbatim term assignments (VTAs) within the same domain (this is available when the WHODrug C version is installed), Auto-coding to previously coded verbatim code individuals (VTIs) (e.g., the verbatim term plus the auxiliary information of the source term), and Auto-coding of a source term to a non-unique dictionary term using additional drug information of the source term.
  • VTAs previously coded verbatim term assignments
  • VTIs previously coded verbatim code individual
  • Automatic ATC classification of a drug based on previous ATC classification of the same drug having the same indication
  • See Figure 24.
  • the objective is to use the Autocoder during the TMS Integration to increase auto classification efficiency gains, ensure consistency, and avoid delays in processing clinical data in TMS.
  • the TMSINT application Custom Autocoder searches the appropriate dictionaries in TMS and attempts to find automatic matches for a VT based on the algorithm logic.
  • the logic includes word swapping, substitution logic for punctuation, stop words, insignificant word removal, and abbreviation expansion/contraction. Where matches are found, TMS posts the approved VT assignment to TMS and will flag the assignment as being Autocoded. This will enable the user to develop metrics reports for Autocoding performance.
  • the auto coded terms can then be automatically approved by the system.
  • the TMSINT application Custom Candidate list auto-coding solution can also be available to propose matches (candidate terms), rate and prioritize the proposed matches, and list the proposed matches for the coder to make the decision on coding to the best match. (See Figure 26).
  • Custom Autocoder Design [0273] The TMSINT application custom auto-coding search object algorithms are executed during the TMS_USER_FULLAUTOCODE.CLASSIFYTERM API based on the order identified in the Autocode Module Codelist. The logic is as follows:
  • substitutions are taken from the PART_SUBS and WORD_SUBS codelists where the description value is null. For example: when the following characters or words are included in the PART_SUBS and WORD_SUBS codelists, the system removes punctuation & symbols ("; *" to blank), removes numeric values ("0 - 9" to blank), removes stop words, and removes frequent or insignificant terms ("an, and, is, of" to blank).
  • the VT Term is reconstructed and the dictionary is searched at the classification levels for a direct match for each substitution. If no match is found, substitution of the same word continues until there are no more available substitutions. If no match is found, individual words are reordered with all possible permutations of a multi-word term (with limits). The dictionary is then searched at the classification levels for a direct match.
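  • A minimal PL/SQL sketch of the normalization steps named above (punctuation and symbol removal, numeric value removal, stop-word removal and space reduction); the function name and the inline stop-word list are illustrative only, since the real substitutions are driven by the PART_SUBS and WORD_SUBS codelists:

      CREATE OR REPLACE FUNCTION normalize_vt (p_term IN VARCHAR2) RETURN VARCHAR2 IS
        l_term VARCHAR2(4000) := UPPER(p_term);
      BEGIN
        -- remove punctuation & symbols and numeric values ("0 - 9" to blank)
        l_term := REGEXP_REPLACE(l_term, '[[:punct:]]', ' ');
        l_term := REGEXP_REPLACE(l_term, '[0-9]', ' ');

        -- remove frequent or insignificant words ("an, and, is, of" to blank);
        -- the real word list comes from the WORD_SUBS codelist
        l_term := REGEXP_REPLACE(l_term, '(^|\s)(AN|AND|IS|OF)(\s|$)', ' ');

        -- reduce multiple spaces to one space, as in the built-in matching algorithm
        RETURN TRIM(REGEXP_REPLACE(l_term, '\s{2,}', ' '));
      END normalize_vt;
      /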
  • the Custom Autocoder Partial Word Substitution Design is illustrated in Figure 28.
  • the Oracle TMS to Medidata Rave Integration (TMSINT): Architecture incorporating features of the invention is schematically shown in Figure 30.
  • the OPA Domain includes the WebLogic Server Administration Console, WebLogic Server Java, and OpaServer1, which are required to run RDC Onsite and, if the user has the Oracle Thesaurus Management System (TMS) installed, it includes the TMS Browser and TMS reports and the database connections.
  • TMS Thesaurus Management System
  • the Oracle Enterprise Manager is installed in the FR Domain; it also functions to monitor the OPA Domain.
  • o Development and Validation can be scaled at 50% of the production capacity for Memory and CPU, and left as UNRESERVED. This will allow a fail-over capability from the Production ESXi to the Development/Validation ESXi servers.
  • Applicant then brought dedicated servers on-line at both the user's and applicant's facilities to provide the development, validation and production target environments running VMWare ESXi, with a documented IQ for the hardware installation and ESXi installation.
  • Network connectivity is established between the provider and the user's data center, in cooperation with the user's networking team.
  • Dedicated hardware is preferred because it provides a secure data isolation level from any other user's environments and allows the user to acquire the most current hardware for performance and scalability purposes.
  • the Medical Coding Solution supplied by the provider allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology.
  • the processes are provided according to a fixed schedule and also as required as web services.
  • This synchronous approach uses Rave Web Services.
  • Rave Web Services integrates Medidata Rave with third-party systems to exchange CDISC ODM 1.3 standard clinical data or metadata synchronously and with immediate confirmation.
  • RWS uses the Representational State Transfer (RESTful) architecture. Data is posted to or retrieved from Rave using HTTP protocol messages posted to specific URLs. Each message received is programed to provide an immediate success or failure response. In the event of a failure, any pending changes are rolled back.
  • RWS uses "RESTful" web services, it does not mandate the use of either Simple Object Access Protocol (SOAP) or Web Services Description Language (WSDL). Synchronous integration high-level architecture
  • the study dataset is extracted from the source Medidata system using the RWS: https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}.
  • the extracted information is pushed to staging tables in the provider's Custom TMS integration system.
  • the DBMS Custom APIs process the data and identify the items that require coding, a coding classification or opening a query.
  • Each Open query of each such identified item on the source system is updated using the RWS and validated against TMS and other related identified updates/queries. Updated/open queries are imported to Medidata and updated/current datasets are extracted from Medidata.
  • the system described herein saves the study datasets information in a file in standard ODM format and processes the information to validate against TMS using a DBMS Custom integration system.
  • This asynchronous integration high-level architecture extracts the study dataset from a text file placed on an accessible location and pushes the extracted information to staging tables of the DBMS Custom TMS integration system.
  • the DBMS Custom APIs processes this data and identifies the items that require coding, a coding classification or opening a query.
  • a text file is then generated for each item that requires an Update/Open query, and this text file contains the xml request body that a RWS can accept.
  • Demonstration of User administration in TMS includes validation against TMS and identifying the updates/queries, generating text file for each identified update/open query action and reading datasets from a text file.
  • User administration overview comprises creating, configuring, modifying, and inactivating portions of the stored records.
  • the TMS Security module is used to create users, assign roles and deactivate users and to define access rights to different users.
  • TMS allows workflow assignments based on role (e.g. those responsible for coding and those responsible for approval).
  • the predefined roles are: opa_admin, rxclin_lead, tms_access, tms_allocate_priv, tms_approve_priv, tms_classify_priv, tms_define_priv, tms_dictupg_priv, tms_dsi_priv, tms_integrate_priv, tms_maintain_priv, tms_reclassify_priv, and tms_research_priv. (See Figure 32)
  • the Medical Coding Solution described herein allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology.
  • the processes are exposed as web services and are scheduled as required.
  • RWS Rave Web Services
  • RESTful Representational State Transfer
  • SOAP Simple Object Access Protocol
  • WSDL Web Services Description Language
  • the study dataset is extracted from the source Medidata system using the RWS (https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}) and the extracted information is moved to staging tables of the TMS integration system.
  • the DBMS Custom APIs process this data and identify the items that require coding, a coding classification or opening a query.
  • Each identified Update/Open query item on the source system using the RWS is validated against TMS, the updates/queries are imported to Medidata and data sets are extracted from Medidata.
  • the study datasets information is saved in a file in standard ODM format and this information is processed to validate against TMS using the Custom Asynchronous integration high-level architecture:
  • the study dataset is extracted from a text file placed on an accessible location, the extracted information is pushed to staging tables of the Custom TMS integration system.
  • the Custom APIs process this data and identify the items that require coding, a coding classification or opening a query, and a text file is generated for each item that requires an Update/Open, this text file containing the xml request body acceptable to a RWS.
  • User administration in TMS is validated against TMS, the updates/queries are identified, a text file is generated for each identified update/open query action and datasets are read from a text file.
  • the TMSINT application Definition Worksheet is used by the Administrator to configure the verbatim, route and indication fields and all other definitions and configurations for the client for coding the TMS Reference Codelists and the TMS External Systems.
  • Figure 33 is a sample of a Client definition based on a new client worksheet.
  • the TMSINT application has the ability to reuse study field configurations (e.g. copying the configuration for an AE Verbatim between studies).
  • a template SQL script is provided that may be cloned and modified for the given client being defined. The SQL script has been designed to accommodate subsequent executions and will initially delete the client and all related data prior to executing the SQL script.
  • Client Clinical Trial System Source System Configuration Setup A new client is set up for use with the TMSINT application. This includes determining the client's TMS dictionary requirements, external system requirements and detailed information on the format of the client's source data subject to processing.
  • a client specific Oracle schema is created that is responsible for extracting data from the client's Clinical Trial System source system for TMS processing as well as re-importing data back into the client's source CDR system.
  • the Oracle schema is referred to as the "data-transfer” schema and is only responsible for the transfer of data from the client system and back into the client system.
  • the APIs within the package TMSINT_XFER_UTILS are written exclusively for the data-transfer functionality.
  • Data extracted from the client's source system is written to a staging table (TMSINT_XFER_HTML_EXTRACT) in a raw HTML format by the data-transfer schema.
  • Data to be re-imported into the clients' source system is written to staging tables by the data-processing schema
  • the API TMSINT_XFER_UTILS.CLEAR_ALL_EXTRACT_DATA can be used to hard-delete all data in the staging table regardless of the PROCESS_FLAG. This API is intended for testing and development or to prevent all subsequent data processing.
  • a client specific Oracle schema, referred to as the data-processing schema, is also created.
  • the data-processing schema is responsible for obtaining the client data pending processing from the data-transfer owned staging tables, processing the data in TMS, and subsequently writing the applicable TMS processed data back to data-transfer staging tables to be re-imported back into the client's source system.
  • the APIs within the package TMSINT_PROC_UTILS are written exclusively for the data-processing functionality.
  • the data in the TMSINT_XFER_HTML_EXTRACT table is marked as processed by updating the PROCESS_FLAG from "N" to "Y” which will prevent the data from being re-processed.
  • using the API TMSINT_PROC_UTILS.PROCESS_INBOUND_FROM_XFER, the data can then be processed in TMS.
  • the API to process the INBOUND data in TMS is TMSINT_PROC_UTILS.PROCESS_INBOUND_IN_TMS. This process creates VTAs in the TMS repository, creates patient coding records in TMS_SOURCE_TERMS and creates patient omission records in TMS_VT_OMISSIONS.
  • each record in TMSINT_PROC_INBOUND in which the TMS_PROCESS_FLAG is "N" will be set to a TMS_PROCESS_FLAG value of "P" ("in progress").
  • the TMS related columns are populated based on attempts of coding the given record in TMS.
  • the CODING table data is persistent and will always represent the current client data in TMS at the time of the last execution of TMSINT_PROC_UTILS.PROCESS_INBOUND_IN_TMS. If the INBOUND VT term and the External Values 1..8 do not exist in the CODING table, then this is a new patient record being processed. If the INBOUND VT and the External Values 1..8 already exist in the CODING table, and all TMS values and all client data values are the same, then the record has been previously processed and nothing has changed since the time the record was last processed.
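  • A hedged SQL sketch of the new-versus-existing comparison described above; only EXT_VALUE_1 is shown and the verbatim-term column name is assumed, while the real check spans External Values 1..8 plus all TMS and client data values:

      SELECT CASE
               WHEN NOT EXISTS (SELECT 1
                                  FROM tmsint_proc_coding c
                                 WHERE c.vt_text     = i.vt_text        -- assumed verbatim-term column
                                   AND c.ext_value_1 = i.ext_value_1)   -- repeated for EXT_VALUE_2..8
               THEN 'NEW PATIENT RECORD'
               ELSE 'EXISTING PATIENT RECORD'
             END AS record_status
        FROM tmsint_proc_inbound i
       WHERE i.inbound_id = :inbound_id;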
  • the TMSINT_ADM administrator schema owns all of the application metadata tables.
  • the metadata tables allow for the definition and configuration of a client for use with the TMSINT application.
  • the metadata tables facilitate "generic" applications and stored procedures that are usable by all client configurations, eliminating the need for client specific custom source code.
  • the application owner can view all data within metadata tables while the client specific data-transfer and data-processing Oracle schemas can see only data applicable to the client in which the Oracle schema is associated.
  • the application metadata tables require that accurate requirements be collected for a given client and properly defined within the metadata tables at the onset of processing.
  • the TMSINT_ADM_PROPERTIES table allows for the definition of various properties within the TMSINT application.
  • the only required entry within the properties table during the installation process is the name of the Oracle account that is the TMSINT application owner.
  • the PropertyCategory is "APPLICATION”
  • the PropName is "OWNER”
  • the PropValue is the Oracle schema owning all of the application objects, which is "TMSINT_ADM".
  • the properties table may optionally be used to specify other miscellaneous properties as needed.
  • a SQL script is executed during the installation process as the application owner that will automatically create the required entry in the TMSINT_ADM_PROPERTIES table. Additional SQL scripts are provided to insert, update, and delete properties as needed.
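  • For example, the required installation entry could be created with an insert along the following lines; the column names are assumed from the property names quoted above:

      -- required property identifying the application owner schema
      INSERT INTO tmsint_adm_properties (property_category, prop_name, prop_value)
      VALUES ('APPLICATION', 'OWNER', 'TMSINT_ADM');
      COMMIT;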
  • the TMSINT_ADM_CLIENTS table allows for the definition of a client within the TMSINT application.
  • Each client is assigned a unique numeric identifier and alias code.
  • the numeric identifier, Client-ID is used as a foreign key in other metadata tables for referential integrity.
  • the value of the client alias code selected dictates the names of the Data-Transfer and Data-Processing schemas TMSINT_XFER_<client-alias> and TMSINT_PROC_<client-alias>, respectively. SQL scripts are provided to easily allow for the insert, update, and delete of client data.
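  • As an illustration, registering a client with the alias "INV" (the alias used elsewhere in this description) might look like the following; the column names are assumed from the identifiers quoted above:

      -- ClientID is the numeric key referenced by the other metadata tables;
      -- the alias dictates the schema names TMSINT_XFER_INV and TMSINT_PROC_INV
      INSERT INTO tmsint_adm_clients (client_id, client_alias)
      VALUES (1, 'INV');
      COMMIT;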
  • the TMSINT_ADM_DATAFILES table allows for the definition of datafiles within the TMSINT application for a specific client in TMSINT_ADM_CLIENTS.
  • a client may have one or more datafiles, but a datafile can be assigned to only one client.
  • Each datafile is assigned a unique numeric identifier (DatafileID) that is used as a foreign key in other metadata tables that have a child relationship to the datafile at hand.
  • associated with a datafile is the URL (e.g. https://pharmanet.mdsol.com/RaveWebServices) at which to access the client's datafile, which is an HTML document.
  • the user name and password credentials needed to access the client datafile URL are required.
  • Each datafile corresponds to one clinical study name. When data is processed in TMS, the TMS domain name will be the "Study Name" concatenated with the string
  • TMSINT_ADM_DICT_MAPPING (dictionary mapping) table is a child table to the datafiles table TMSINT_ADM_DATAFILES and will contain one record for every DCM/VT within a datafile to be coded in TMS.
  • a single datafile can contain a DCM/VT (AE/AETERM) that is applicable to adverse event MedDRA coding and a DCM/VT (CONMEDS/CMTERM) that is applicable to Drug coding or multiples of each.
  • the dictionary mapping table associates the specific DCM/VT to be coded with a specific dictionary in TMS. This information is used during TMS processing by the Data-Processing schema.
  • a new DCM/VT dictionary mapping can be added to an existing datafile at any time by adding the new record to the TMSINT_ADM_DICT_MAPPING table.
  • to temporarily or permanently stop processing, the Active Flag is set to "N"; to reinstate processing of the DCM/VT within the datafile, the Active Flag is reset to "Y". All datafile dictionary mappings are initially created as active.
  • the TMS Dictionary must already exist in the TMS repository prior to adding a datafile dictionary mapping. SQL scripts have been provided to easily allow for the insert, update, and delete of datafile to dictionary mapping.
  • TMSINT_ADM_EXT_MAPPING (external system mapping) table is a child table to the datafiles table. This table creates a relationship between a given client datafile and a defined TMS External System. The table contains one record for every column defined in the TMS definition of the External System which can be at most eight values (ExtValue1 - ExtValue8). If the TMS External System has 6 defined external column values (Study, Site, Patient, Event, etc.) there can be only one record for each defined external system column. This table is used by the TMSINT application to parse the datafile values that will ultimately be stored in TMS during processing. New external system mappings may be added to an existing datafile at any time as long as it corresponds to the TMS definition of the external system.
  • To temporarily or permanently stop processing, the Active Flag is set to "N"; to reinstate, the Active Flag is set back to "Y". All datafile external mappings are initially created as active.
  • the TMS External System must already exist in the TMS repository prior to adding a datafile external system mapping. SQL scripts are provided to allow for the insert, update, and delete of datafile to external system mapping.
  • TMSINT_ADM_ADDON_COLS (add-on column mapping) This table provides an optional means to collect and store ancillary data associated to a specific DCM within a given datafile.
  • the column values will not be stored within the TMS repository during the coding process. However, they may optionally be used for later omission management via TMS Drill-Down views in which ancillary data can be queried and viewed as additional data to aid in the manual classification process.
  • the ancillary data is stored in the TMSINT application tables but not directly accessible from TMS.
  • Ancillary data is optional and as many columns as needed may be stored which may differ between DCMs.
  • AE DCM Adverse Event
  • CM DCM Concomitant Medication
  • CMDOSE dose
  • CMDOSFRQ frequency
  • the Active Flag can be set to "N" and to reinstate processing the Active Flag is set to "Y". All datafile add-on column mappings are initially created as active. SQL scripts have been provided to easily allow for the insert, update, and delete of datafile ancillary data.
  • TMSINT_ADM_ORA_ACCOUNTS Oracle accounts table
  • This table is the mechanism that allows a given Oracle schema to only see the data associated to the given client in which the Oracle account is associated.
  • the actual Oracle schema must be pre-existing in the Oracle database and will be validated using the Oracle view ALL_USERS based on user name.
  • Creating or deleting entries does not actually create or drop the Oracle schema in the database but is more of a "registration" process which creates a relationship between the Oracle schema and a client as well as controls the selection of client data between the Oracle schemas.
  • Additional SQL scripts are provided to create the Oracle schemas in the database, which will default the Oracle password for the accounts to the user name. For example, the SQL script to create a data transfer schema for client "INV" will create an Oracle user of TMSINT_XFER_INV and the account will have an Oracle password of TMSINT_XFER_INV.
  • the Oracle database password may optionally be changed at any time using normal Oracle means (ALTER USER <username> IDENTIFIED BY <psw>;). When passwords are modified within the database, it is preferred that the corresponding password in the Oracle accounts table be updated as well.
  • the Oracle passwords within the Oracle accounts table are used for query only and are not used to connect to the database as of the time of this document. SQL scripts are provided to allow for the registration and un-registration of Oracle accounts, as well as the suspending and unsuspending of accounts by modifying the Active Flag to "N" and "Y", respectively. Suspending the Oracle Data-Transfer schema by setting the Active Flag to "N" prevents the data transfer activity; unsuspending the Oracle Data-Transfer schema by setting the Active Flag to "Y" allows the data transfer activity to resume. The same actions can occur for the Data-Processing schema. These administrative actions are sketched below.
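  • A minimal sketch of these administrative actions follows; the table name TMSINT_ADM_ORA_ACCOUNTS comes from the description above, but its columns (USER_NAME, ORA_PASSWORD, ACTIVE_FLAG), the account TMSINT_XFER_INV, and the example password are assumptions for this illustration.

       -- Change the database password using standard Oracle DDL.
       ALTER USER tmsint_xfer_inv IDENTIFIED BY new_psw_123;

       -- Keep the registered password in step (column names are assumed).
       UPDATE tmsint_adm_ora_accounts
          SET ora_password = 'new_psw_123'
        WHERE user_name = 'TMSINT_XFER_INV';

       -- Suspend the data-transfer account; set back to 'Y' to resume.
       UPDATE tmsint_adm_ora_accounts
          SET active_flag = 'N'
        WHERE user_name = 'TMSINT_XFER_INV';

       COMMIT;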
  • Each client defined within the TMSINT is required to have a data-transfer schema. Data stored within this schema is applicable only to the client with which it is associated and is not intended to persist. The data-transfer schema serves only as a staging area in which data is extracted from the client's source CDR system for processing and subsequently re-incorporated back into the client's source CDR system where required.
  • TMSINT_XFER_HTML_EXTRACT table is the staging table that is used to store data extracted from the client source system.
  • Client data is extracted from an HTML document (i.e. client datafile) and written to the TMSINT_XFER_HTML_EXTRACT staging table in the original HTML format.
  • APIs are written to insert and delete data from the extract staging table.
  • the APIs exist within the TMSINT_XFER_UTILS package owned by the TMSINT application administrator account (TMSINT_ADM); an illustrative call is sketched below.
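  • The package and procedure interfaces are not reproduced in this document, so the following is a non-authoritative sketch of an insert call issued from the data-transfer schema; the procedure name INSERT_EXTRACT, its parameters, and the example values are assumptions.

       -- Hypothetical call; TMSINT_XFER_UTILS is named above, but the
       -- procedure INSERT_EXTRACT and its parameters are assumed here.
       BEGIN
         tmsint_adm.tmsint_xfer_utils.insert_extract(
           p_datafile_name => 'AE_20170809.html',
           p_html_payload  => '<table>...extracted rows...</table>');
         COMMIT;
       END;
       /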
  • TMSINT_PROC_<client> Data-Processing Owned Tables
  • TMSINT_PROC_INBOUND (inbound processing) table is the TMS staging table that stores data directly obtained from the data-transfer HTML extract table TMSINT_XFER_HTML_EXTRACT after being transformed into a TMS-usable record format.
  • Generic APIs are written to programmatically obtain the HTML extract data, transform the data, write the data to the inbound processing table, and subsequently mark the data-transfer extract data as having been successfully processed (i.e. copied to INBOUND).
  • the APIs exist within the TMSINT_PROC_UTILS package owned by the TMSINT application administrator account (TMSINT_ADM); an illustrative invocation is sketched below.
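  • As an assumed sketch rather than the shipped interface, the data-processing step might be invoked as follows; the procedure name PROCESS_HTML_EXTRACT and its parameter are hypothetical.

       -- Hypothetical call into the data-processing package named above.
       BEGIN
         tmsint_adm.tmsint_proc_utils.process_html_extract(
           p_client => 'INV');  -- parse HTML, write INBOUND rows, mark extract rows as processed
         COMMIT;
       END;
       /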
  • TMSINT_PROC_INBOUND_DTLS (inbound details processing) table is the staging table that stores any ancillary data values obtained from the data-transfer HTML extract table, transformed from HTML into a usable and meaningful format. If the client has no defined ancillary add-on columns in TMSINT_ADM_ADDON_COLS, then no data will be stored within the inbound details table; otherwise, data is parsed from the HTML data based on the corresponding HTML_PREFIX value in TMSINT_ADM_ADDON_COLS. The records within the inbound details table are associated with a specific TMS coding record within the inbound table via the foreign key value INBOUND_ID.
  • the data values within the inbound details table may optionally be exposed through TMS Drill-Down views so that they are viewable from the TMS application (an illustrative query is sketched below).
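  • As a hedged illustration of how such ancillary data could be surfaced, a drill-down style query might join the two staging tables on the foreign key; apart from the table names, the column names (INBOUND_ID, VERBATIM_TERM, COLUMN_NAME, COLUMN_VALUE) are assumptions.

       -- Illustrative join; column names are assumed for this sketch.
       SELECT i.inbound_id,
              i.verbatim_term,
              d.column_name,
              d.column_value
         FROM tmsint_proc_inbound      i
         JOIN tmsint_proc_inbound_dtls d
           ON d.inbound_id = i.inbound_id
        ORDER BY i.inbound_id, d.column_name;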
  • TMSINT_PROC_INBOUND_DERV (inbound derivation processing) table (i.e. "Derivation Path") is the staging table that stores the TMS "Derivation Path" for those records processed in TMS that successfully code to a TMS dictionary term.
  • the column TMS_DT_CONTENT_ID within the TMSINT_PROC_INBOUND table contains the TMS DictContentID of the dictionary term (DT) to which the VTA has been assigned in the TMS repository.
  • From the TMS DictContentID, the full TMS "Derivation Path" is derived, and each column value is stored as one record in the TMSINT_PROC_INBOUND_DERV table.
  • the records within the inbound derivation table must be associated with a specific TMS coding record within the inbound table via the foreign key value INBOUND_ID.
  • The Oracle views TMSINT_ADM_DERV_PATH_LEVELS and TMSINT_ADM_DERV_PATH_COLUMNS are owned by the TMSINT application administrator and are used to determine which TMS dictionary levels are applicable to each dictionary "Derivation Path" and which column values are retrieved from TMS for each applicable dictionary level to be stored in the TMSINT_PROC_INBOUND_DERV table. Unlike the TMSINT_PROC_INBOUND and TMSINT_PROC_INBOUND_DTLS tables, which are populated via client data, the TMSINT_PROC_INBOUND_DERV table is populated based on "derived" TMS repository data. Retrieval of a record's derivation path is sketched below.
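  • A non-authoritative sketch of retrieving a record's derivation path from this staging table follows; apart from the table name, the column names (INBOUND_ID, DERV_PATH_LEVEL, COLUMN_NAME, COLUMN_VALUE) and the example key value are assumptions.

       -- Illustrative query; column names are assumed for this sketch.
       SELECT d.inbound_id,
              d.derv_path_level,
              d.column_name,
              d.column_value
         FROM tmsint_proc_inbound_derv d
        WHERE d.inbound_id = 5001
        ORDER BY d.derv_path_level, d.column_name;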
  • TMSINT_PROC_CODING (coding) table: once records within the INBOUND staging table have been processed in TMS, the coding data is written to the PROC_CODING table and subsequently removed from the INBOUND staging table. Unlike the TMSINT_PROC_INBOUND table, which is a temporary staging table, the TMSINT_PROC_CODING table data will persist and will represent a client's current TMS coding data.
  • Generic APIs are written to programmatically transfer data from the staging table TMSINT_PROC_INBOUND to the TMSINT_PROC_CODING data table; the general transfer pattern is sketched below.
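  • Purely as a sketch of the kind of transfer those APIs perform, and not the shipped implementation, an insert-then-delete pattern could look like the following; the column lists and the PROCESSED_FLAG condition are assumptions for this illustration.

       -- Illustrative transfer pattern only; the real APIs live in
       -- TMSINT_PROC_UTILS and their logic is not reproduced here.
       INSERT INTO tmsint_proc_coding (coding_id, verbatim_term, tms_dt_content_id)
       SELECT i.inbound_id, i.verbatim_term, i.tms_dt_content_id
         FROM tmsint_proc_inbound i
        WHERE i.processed_flag = 'Y';    -- assumed "ready to transfer" condition

       DELETE FROM tmsint_proc_inbound i
        WHERE i.processed_flag = 'Y';

       COMMIT;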
  • TMSINT_PROC_CODING_DTLS (coding details) table is the table that will store any ancillary data values for the parent record in the TMSINT_PROC_CODING table identified by the foreign key CODING_ID. If the client has no defined ancillary add-on columns in TMSINT_ADM_ADDON_COLS, then no data will be stored within the coding details table; otherwise, data is copied directly from the TMSINT_PROC_INBOUND_DTLS table. The data values within the coding details table can be used for TMS Drill-Down views to be viewable from the TMS application.
  • TMSINT_PROC_CODING_DERV (coding derivation) table (i.e. "Derivation Path") stores the TMS "Derivation Path" for those records in TMSINT_PROC_CODING in which the TMS_DT_CONTENT_ID contains a non-null value indicating there is a corresponding patient coding record in TMS_SOURCE_TERMS.
  • the TMSINT_PROC_CODING_DERV table contains one record per derived column and derivation path level. Data is created in the TMSINT_PROC_CODING_DERV table only from the "derived" TMS repository data.
  • TMSINT_PROC_CODING_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING table.
  • the TMSINT_PROC_CODING_DTLS_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING_DTLS table.
  • the TMSINT_PROC_CODING_DERV_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING_DERV table. One common way such journal tables are fed is sketched below.
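  • For illustration only, journal tables of this kind are often fed by a row-level trigger; the application's actual audit mechanism is not described here, and the trigger name, journal columns, and operation flag below are all hypothetical.

       -- Hypothetical journaling trigger; illustration only.
       CREATE OR REPLACE TRIGGER tmsint_proc_coding_aud
         AFTER UPDATE OR DELETE ON tmsint_proc_coding
         FOR EACH ROW
       DECLARE
         v_operation VARCHAR2(10);
       BEGIN
         IF DELETING THEN
           v_operation := 'DELETE';
         ELSE
           v_operation := 'UPDATE';
         END IF;
         INSERT INTO tmsint_proc_coding_jn
                (coding_id, verbatim_term, tms_dt_content_id, jn_operation, jn_timestamp)
         VALUES (:OLD.coding_id, :OLD.verbatim_term, :OLD.tms_dt_content_id,
                 v_operation, SYSTIMESTAMP);
       END;
       /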
  • the system described herein utilizes Web Services technology and built-in data recovery/reload processes to manage the source data files and metadata and to handle accidental deletions, transfer interruptions, partial loads, or other failures that may occur. If the data is partially loaded or deleted in the extract temporary staging tables during the source data/metadata extract, the formatting, or the record creation in the data-processing INBOUND staging tables, the data can be removed/purged and reloaded additional times from the source EDC system. The integration has been written such that processing data additional times will not have an impact on previously processed data unless an update is required. During the processing of data into TMS, the data in the INBOUND table can be reprocessed and synchronized with the TMS data to ensure no loss of data. The purge-and-reload idea is sketched below.
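  • As a hedged sketch of the purge-and-reload idea, and not the shipped recovery routine, a rerun could first remove staging rows that were never processed and then repeat the extraction and processing calls; the PROCESSED_FLAG column is an assumption for this illustration.

       -- Hypothetical purge of a partially loaded extract.
       DELETE FROM tmsint_xfer_html_extract
        WHERE processed_flag = 'N';     -- drop rows never copied to INBOUND

       COMMIT;

       -- The extraction can then be re-run from the source EDC system,
       -- for example via the (assumed) calls sketched earlier:
       --   tmsint_adm.tmsint_xfer_utils.insert_extract(...);
       --   tmsint_adm.tmsint_proc_utils.process_html_extract(p_client => 'INV');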

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a method of collecting, monitoring and comparing data in a clinical analysis system with multiple participants and variables, as well as the integration of that collection and monitoring with a thesaurus dictionary database. The method uses one or more computing units to collect, validate, integrate and store the integrated data from multiple participants in a clinical study. The method also uses a remote main computer system, connected to the computing collection units, to receive the integrated data uploaded from the collection unit for further processing and to download new or modified data and collection instructions to the remote collection units, with cooperating software and an interface apparatus for downloading and entering new or updated data, the collection units and the main computer being synchronized so that the data in all connected units are the same.
PCT/US2017/046168 2016-08-09 2017-08-09 Intégration de système d'essai clinique de données médicales avec le système de codage oracle WO2018031697A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662372727P 2016-08-09 2016-08-09
US62/372,727 2016-08-09
US15/673,174 2017-08-09
US15/673,174 US20180060540A1 (en) 2016-08-09 2017-08-09 Medidata clinical trial system integration with oracle coding system

Publications (1)

Publication Number Publication Date
WO2018031697A1 true WO2018031697A1 (fr) 2018-02-15

Family

ID=61163331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/046168 WO2018031697A1 (fr) 2016-08-09 2017-08-09 Intégration de système d'essai clinique de données médicales avec le système de codage oracle

Country Status (1)

Country Link
WO (1) WO2018031697A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3785190A4 (fr) * 2018-04-24 2022-01-19 Edifecs, Inc. Rapprochement rapide d'erreurs et de goulots d'étranglement dans des flux de travaux dirigés par des données

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060147099A1 (en) * 2004-12-30 2006-07-06 R2 Technology, Inc. Medical image review workstation with integrated content-based resource retrieval
US20080016111A1 (en) * 2005-11-28 2008-01-17 Ronald Keen Transactional storage and workflow routing for medical image objects
US20100049740A1 (en) * 2008-08-21 2010-02-25 Akio Iwase Workflow template management for medical image data processing
KR20140099506A (ko) * 2011-12-22 2014-08-12 Covidien LP System and method for identifying a patient in a remote monitoring system
US20150178447A1 (en) * 2013-12-19 2015-06-25 Medidata Solutions, Inc. Method and system for integrating medical imaging systems and e-clinical systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17840235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17840235

Country of ref document: EP

Kind code of ref document: A1