GB2539898B - A data handling method - Google Patents

A data handling method

Info

Publication number
GB2539898B
GB2539898B (application GB1511380.6A)
Authority
GB
United Kingdom
Prior art keywords
data
data structure
field
prioritised
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
GB1511380.6A
Other versions
GB201511380D0 (en)
GB2539898A (en)
Inventor
Browne Gavin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Broadridge Financial Solutions Ltd
Original Assignee
Broadridge Financial Solutions Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadridge Financial Solutions Ltd filed Critical Broadridge Financial Solutions Ltd
Priority to GB1511380.6A priority Critical patent/GB2539898B/en
Publication of GB201511380D0 publication Critical patent/GB201511380D0/en
Publication of GB2539898A publication Critical patent/GB2539898A/en
Application granted granted Critical
Publication of GB2539898B publication Critical patent/GB2539898B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24568Data stream processing; Continuous queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/221Column-oriented storage; Management thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2282Tablespace storage structures; Management thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/289Object oriented databases

Description

A DATA HANDLING METHOD
TECHNICAL FIELD
The present invention relates to a data handling method for handling a general data structure in an efficient manner, and to a new data structure. The new data structure is optimised to enable the data handling method to achieve faster handling when high volumes of data structures have to be processed.
BACKGROUND
Generally, in computational data processing, data being processed are arranged in a data structure. Different programming languages utilise different data structures. One very common object oriented programming language is Java®. Java® has several different data structures which are accessed in different manners and are optimised for that type of access.
Data structures are sometimes created for the specific way in which they will be handled (accessed). For example, in Java®, an example data structure is a Collection. A Collection is generally a framework that provides an architecture to store and manipulate a group of objects. It can be considered to be an object that can hold references to other objects, each object having a behaviour and a state. The Collection has interfaces (classes) which declare the operations that can be performed on each type of Collection. Standard Java® libraries provide many types of Collections.
Figure 1 shows a type of Java® Collection known as an ArrayList 10. The ArrayList comprises a sequential index 12 and an array of values 14, each one being in a value field 16. Example elements of the value fields 16 are shown, though these are not limiting. ArrayLists 10 provide very fast write performance but are generally used when all entries in the Collection are required to be read at the same time. In order to read or write to the ArrayList, a program running on a processor iterates through each entry in the Collection, reading each item of the list sequentially, even if only a few items are being read or written.
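As an illustrative sketch (not taken from the patent; the class and the field values are invented for illustration), a single read from an ArrayList-style Collection costs a linear scan through the entries:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayListScan {
    // Linear scan: cost grows with the position of the target entry.
    static int indexOf(List<String> list, String target) {
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i).equals(target)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<String> values = new ArrayList<>();
        values.add("GBP");
        values.add("2015-06-29");
        values.add("SETTLED");
        System.out.println(indexOf(values, "SETTLED")); // prints 2
    }
}
```

The cost of `indexOf` grows with the position of the target entry, which is why ArrayLists suit whole-Collection reads rather than repeated single-field lookups.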
Figure 2 shows a type of Java® Collection data structure known as a HashMap 20. The HashMap 20 comprises a non-sequential index 22 and an array of values 24 corresponding to each index. Example elements of the Index field and the Value field are shown. In addition each HashMap 20 also comprises a corresponding key field 26. The key field 26 has a special relationship with the index 22 which is determined by a hash function 28. Accordingly, the use of a key field 26 as input into the hash function 28 will result in the generation of a unique index value corresponding to that key. A standard Java Map such as a HashMap 20 involves a large number of equals checks (comparisons) on all of the available keys when implementing the hash function. This provides a generally constant access time for reading individual entries because they do not require any iteration of the data structure to find the required data. However, HashMaps 20 can be slower than an ArrayList 10 for a small size of Collection. HashMaps 20 are typically used when a large number of entries are being searched for a single item.
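By way of contrast (a minimal sketch with invented field names, not the patent's code), a HashMap read locates a single entry via the key's hash, with no iteration over the Collection:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapLookup {
    // Build a small keyed record; the field names are illustrative only.
    static Map<String, String> buildRecord() {
        Map<String, String> record = new HashMap<>();
        record.put("currency", "GBP");
        record.put("tradeDate", "2015-06-29");
        record.put("status", "SETTLED");
        return record;
    }

    public static void main(String[] args) {
        // The hash of the key locates the entry directly: no iteration over
        // the map is needed to read a single field.
        System.out.println(buildRecord().get("status")); // prints SETTLED
    }
}
```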
ArrayLists 10 and HashMaps 20 have the advantage that they are implemented in a standardised way in order to be interoperable across different processing systems.
Figure 3 shows a processing system 30 in which a stream of data structures is sent along an external message bus 32. For example, the external message bus 32 may be a communications channel on which real-time information is provided about an entity. The main objective of the processing system 30 is to read data from the external message bus 32 in different ways to build up a full picture about the entity. The processing system 30 may read or write to the same data structure many times via different components 34, 36, 38, 40.
When the system 30 processes data structures 10, 20, it is necessary for the system 30 to transform the data structure from a memory representation of the object to a common data format suitable for transmission or storage.
In this example, two components 34, 36 are shown connected to the external message bus 32, where each component comprises a processor 42, 44. A first component 34 is a marshalling component which generates a serial representation of the data; this conversion to the common data format (typically an XML format) is called ‘marshalling’ of the data. The serial representation generated by the marshalling component is output to an internal topic bus 46. As is known in the art, marshalling transforms the memory representation of a data object to a data format suitable for storage or transmission. This conversion into the common format is required when moving data between different processes within a system and enables rapid interaction of the processes with the data. The marshalling and unmarshalling operations of components 34, 36 can be carried out by a well-known Java® API called JAXB (Java® Architecture for XML Binding). A second component 36 is an unmarshalling component which performs the converse transformation from the serial representation on the internal topic bus back to the external message bus.
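JAXB itself binds annotated Java classes to XML; purely as a toy illustration of the marshalling idea (this is not JAXB, and the tag names are invented), a memory representation of field/value pairs can be flattened to an XML string like so:

```java
public class ToyMarshaller {
    // Toy marshalling: turn an in-memory array of field/value pairs
    // into an XML text representation suitable for transmission.
    static String marshal(String rootTag, String[][] fields) {
        StringBuilder xml = new StringBuilder("<" + rootTag + ">");
        for (String[] f : fields) {
            xml.append("<").append(f[0]).append(">")
               .append(f[1])
               .append("</").append(f[0]).append(">");
        }
        return xml.append("</").append(rootTag).append(">").toString();
    }

    public static void main(String[] args) {
        String[][] trade = { { "currency", "GBP" }, { "status", "SETTLED" } };
        System.out.println(marshal("trade", trade));
        // prints <trade><currency>GBP</currency><status>SETTLED</status></trade>
    }
}
```

Unmarshalling is the converse operation: parsing such a string back into the in-memory pairs.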
Two additional components 38, 40 (named ‘Component2’ and ‘Component4’ respectively) are shown connected to the internal topic bus 46, each of which performs multiple processes on the data. Component2 38 is arranged to perform read and/or write operations on the data, and Component4 40 is arranged to perform read operations on the data and output results to an object. In other embodiments, there may be hundreds of components connected to the internal topic bus 46, where each component may perform different operations on the data structure.
Each of the components 34, 36, 38, 40 comprises a processor 42, 44, 48, 50, a configuration module 52 and a configuration datastore 54. Each configuration module 52 and configuration datastore 54 are connected to their respective processor 42, 44, 48, 50. Configuration data 56 is input into each component via the configuration module 52. Configuration data 56 comprises instructions on how to iterate through an ArrayList 10 and/or how to perform a hash function 28 for a HashMap 20. The configuration data 56 also includes a target type of object that the component is arranged to process. Configuration data 56 is stored in a configuration datastore 54 in each component.
The processor 42, 44, 48, 50 in each component can use the configuration data 56 to obtain data values 14, 24. For example, if the data structure is in the format of an ArrayList 10, a processor would iterate through all the data of the data structure until it finds the desired data entry. If the data structure is in the format of a HashMap 20, the processor would input the key into the hash function 28 to determine the unique index, then use the index to obtain the data entry.
Each component 34, 36, 38, 40 is configured to look for a specific type of data structure (Collection) on the internal topic bus 46. The type of data structure generally indicates which rules/processes need to run on what fields from the Collections are used in the processes. When a component 34, 36, 38, 40 finds a Collection which matches its specific target type, that Collection can be processed by the component which has previously been specifically configured for processing that Collection.
Having established how these prior art systems 30 generally work, some of their limitations are now discussed.
When processing a large volume of data structures, primarily with read operations that do not utilise all data fields, ArrayLists 10 perform inefficiently as the entire data structure must be iterated through to find the desired data. Similarly, HashMaps 20 do not perform efficiently when a plurality of data fields in a data structure is used by a processing system (although HashMaps may perform well when a single data field is desired from a large data structure).
Given that each component 34, 36, 38, 40 will typically carry out hundreds of microprocesses (instructions) which contribute to the overall process of the component, the number of data structure accesses for reading data fields will be high. The inefficiency of the existing prior art methods means that the processing time for each component is high. Previous attempts to address this processing overhead have been directed to providing more computing power. However, this increases cost and complexity and is not ideal.
Against this background, the present invention aims to provide a data handling method that provides improved read access whilst retaining the benefits of interoperability across different processing systems without substantially increasing cost or complexity.
SUMMARY OF THE INVENTION
The present invention resides in the appreciation that, in a complex multi-component processing engine, each data read that needs to be carried out is not completely independent of other data reads; namely, for any given application, a subset of all of the data fields can be determined which are the most common to all data searches for that application. Using this, it is possible to optimise the data processing by focussing on these most commonly accessed data fields and optimising them, such that the overall processing time is reduced in a system which has thousands of processing components, each of which needs to read the data structure multiple times (typically hundreds of times).
In an embodiment of the present invention, a new data structure is created which is similar to an ArrayList in that it has a sequential index and an array of values, each one being in a value field. In addition, the new data structure provides an array of keys, one for each value field, in a similar fashion to a HashMap. A subset is defined, in one embodiment, as the first several entries in the array, for example the first 20 out of 200 entries. These prioritised entries are determined to be the most commonly used fields in the reading of the Collections (or target data structures). In this regard, the data structure can act as a regular list for marshalling and unmarshalling, but can be accessed in a direct manner similar in some respects to a HashMap during component processing. Accordingly, when a data read is to be carried out, if the data field belongs to the subset of prioritised entries, then this data field can be accessed directly without the need for hash lookups or search iterations. If the data field is not in the subset, the remaining fields are searched, typically using an iterative search.
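A minimal sketch of how such a weighted structure might behave (the field names, the subset size of six and the class itself are assumptions for illustration, not the patent's implementation): prioritised field names resolve once to fixed indices, run-time reads of those fields are plain positional lookups, and only the remaining fields require an iterative search:

```java
import java.util.Arrays;

// Sketch: the first PRIORITISED entries sit at fixed, pre-agreed indices,
// so a component configured with those indices reads them without any search.
public class WeightedStructure {
    static final String[] FIELD_NAMES =
        { "id", "type", "currency", "status", "amount", "date",  // prioritised
          "note", "ref" };                                       // the rest
    static final int PRIORITISED = 6;

    final String[] values = new String[FIELD_NAMES.length];

    // Resolved once at configuration time, not on every read.
    static int accessorFor(String fieldName) {
        return Arrays.asList(FIELD_NAMES).indexOf(fieldName);
    }

    // Run-time read: direct for prioritised fields, iterative otherwise.
    String read(String fieldName, int accessor) {
        if (accessor >= 0 && accessor < PRIORITISED) {
            return values[accessor];                      // direct index lookup
        }
        for (int i = PRIORITISED; i < FIELD_NAMES.length; i++) {
            if (FIELD_NAMES[i].equals(fieldName)) {       // fallback search
                return values[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        WeightedStructure w = new WeightedStructure();
        w.values[3] = "SETTLED";
        int statusAccessor = accessorFor("status");       // configuration time
        System.out.println(w.read("status", statusAccessor)); // prints SETTLED
    }
}
```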
This weighted access to required data, which prioritises a subset of data fields of the Collection, enables the desired result to be found more quickly, more often. This has led to a significant increase in the speed of reading a Collection by each of the different components of a processing system, which has resulted in dramatic improvements to the processing speed of the system. At the same time, the data structure advantageously still retains the general format of a Collection in Java® and can still be used by standard third party utilities, such as JAXB, for marshalling and unmarshalling, for example.
More specifically, according to one aspect of the present invention there is provided a combination of a target data structure and a data processing engine for processing the target data structure. The target data structure has a plurality of data fields with corresponding indices. The target data structure also has a pre-determined set of field names having a predetermined prioritised subset of data field names including at least one prioritised field name and a corresponding index identifier.
The data processing engine has a plurality of different processing components. Each processing component is arranged to perform a different processing operation on the target data structure. Each processing component has a receiver arranged to receive the predetermined set of field names. Each processing component has a key generator arranged to establish, for each prioritised field name in the predetermined set of field names, an access key for directly accessing one of the data fields using the index identifier corresponding to the prioritised field name to determine a corresponding one of the indices, where the data field corresponds to the prioritised field name. Each processing component has a component-specific processor which is arranged to read a data value of one of the data fields directly using the access key when the data field to be read corresponds to a prioritised field name, and is arranged to generate an output based on the data value of the data field to be read.
The data processing engine includes an internal topic bus connected to the plurality of processing components. The data processing engine includes a marshalling component for marshalling the target data structure, obtained from an external message bus, into a marshalled format, and for placing the target data structure onto the internal topic bus such that the target data structure can be conveyed to each of the plurality of processing components and processed by the same. The data processing engine includes an unmarshalling component for unmarshalling the processed target data structure from the marshalled format on the internal topic bus to an unmarshalled format, and for outputting the processed target data structure onto the external message bus.
The predetermined prioritised subset of data field names is an optimised subset, optimised with respect to all of the plurality of different processing components which are arranged to utilise the optimised subset to process the target data structure such that an overall processing time of the target data structure by the plurality of processing components is reduced.
The term ‘directly’ is to be considered to mean without requiring conventional access processes for a data structure such as indexing (index searching) and hashing (e.g. using a HashMap). Rather the index value is predetermined and so the required data value can simply be looked up in the indexed data structure using the known index.
Preferably the data processing engine is arranged to: obtain the target data structure from the external message bus; and for each data field in the target data structure which corresponds to a prioritised field name, map that data field to a location corresponding to the index identifier for that prioritised field name.
This mapping advantageously means that the target data structure does not have to be in the same format as the weighted data structure. When the target data structure is obtained, if it does not have the prioritised fields in the index locations corresponding to the weighted data structure, the mapping step accounts for this by linking the two locations together, enabling the processing function of the component to be unaffected.
In a preferred embodiment, the data processing engine is arranged to: create an indexed weighted data structure; read the target data structure into the weighted data structure; and for each data field of the target data structure that corresponds to a prioritised data field, map that data field to an index of the weighted data structure corresponding to the index identifier for that prioritised field name.
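The mapping step described above might be sketched as follows (a hypothetical helper, not the patent's code): fields of the incoming target structure that match prioritised field names are copied to their pre-agreed index positions in a freshly created weighted structure, and any prioritised slot with no corresponding data in the target is left null:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the mapping step: prioritised field names define the index
// layout of the weighted structure; missing fields map to null values.
public class PrioritisedMapper {
    static List<String> map(Map<String, String> target,
                            List<String> prioritisedNames) {
        List<String> weighted = new ArrayList<>();
        for (String name : prioritisedNames) {
            weighted.add(target.get(name));  // null if absent from the target
        }
        return weighted;
    }

    public static void main(String[] args) {
        // "type" is absent from the target, so its slot is null.
        System.out.println(map(Map.of("id", "T1"), List.of("id", "type")));
    }
}
```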
An embodiment of the present invention is highly configurable and, in one application, can be used to model or process a complex entity. It can also handle large numbers of data structures, each with many fields. Furthermore, the present embodiments use industry standard tools (in this case the Java® Collection format) and have a low risk profile, namely they avoid the creation of complex customisations.
The indexed weighted data structure may be arranged to have a set of prioritised data fields corresponding to the pre-determined set of field names. Where this is the case, the indexed weighted data structure may be arranged to place ‘null’ values into any of the prioritised data fields which do not have any corresponding data available from the target data structure. In a described embodiment, the weighted data structure is a Java Collection data structure.
In an embodiment the marshalled format is an XML format and the unmarshalled, memory representation format is a Java format.
For each of the plurality of different processing components the respective receiver may be arranged to receive a predetermined target type value and the processing engine may be arranged to determine whether a value of a data structure type field of the target data structure matches the predetermined target type value to enable the respective processor to process the target data structure.
For each of the plurality of different processing components the respective receiver may be arranged to receive configuration data including configuration tasks. The respective processor may be arranged to create configuration constructs for each of the configuration tasks.
In an embodiment, the plurality of processors are arranged to run a plurality of micro processes on different fields of the weighted data structure. The plurality of processors may be arranged to run a plurality of different validation processes on the weighted data structure to ensure the data is valid. Also the plurality of processors may be arranged to run a plurality of different enrichment processes on the weighted data structure to generate new data values of data fields and writing the new data values to data fields of the weighted data structure.
At least one of the component-specific processors is arranged to create the output as an output data structure of a different type to that of the target data structure. Alternatively, at least one of the component-specific processors is arranged to: create the output as an output data structure of the same type as the target data structure; marshal the data into a marshalled format; and output the marshalled data structure.
The plurality of component-specific processors may be arranged to read a data value of a specific data field of the target data structure by searching the target data structure for the specific data field when the data field to be read does not correspond to a prioritised field name.
In this case, the plurality of component-specific processors may be arranged to search by being arranged to carry out an iterative search of data fields of the target data structure which do not correspond to any of the predetermined set of field names.
The present invention can process millions of target data structures per hour and operate much faster than conventional processing engines and systems (see later for comparative results). The speed-up is achieved by the above-described realisation that most of the required data reads for processes occurring at different components involve a small subset of common data fields. In fact, the inventor has determined that, in one field of use, more than 95% of all accesses can be traced to a relatively small subset of data fields.
BRIEF DESCRIPTION OF THE DRAWINGS
Figures 1 to 3 have already been described above by way of background, in which:
Figure 1 is a schematic view of the structure of an ArrayList;
Figure 2 is a schematic view of the structure of a HashMap; and
Figure 3 is a schematic block diagram of a prior art system for processing the ArrayList of Figure 1 and/or the HashMap of Figure 2.
An embodiment of the present invention will now be described in detail by way of example only, with reference to the remaining drawings, in which:
Figure 4 is a schematic view of the structure of a weighted data structure according to an embodiment of the invention;
Figure 5 is a schematic block diagram of a system for handling and processing the data structure of Figure 4;
Figure 6 is a flowchart of a data handling method according to an embodiment of the invention;
Figure 7 is a flowchart showing in detail a step of configuring components in the method of Figure 6;
Figure 8 is a flowchart showing in detail a step of running components in the method of Figure 6;
Figure 9 is a flowchart showing in detail a step of Process A in the method of Figure 8; and
Figure 10 is a flowchart showing in detail a step of Process B in the method of Figure 8.
DETAILED DESCRIPTION
Specific embodiments of the present invention are described below with reference to the figures.
Figure 4 shows a weighted data structure 100 according to an embodiment of the invention. Each weighted data structure 100, which in this embodiment is a Java® object in the form of a Collection, typically has between 150 and 200 data fields 102, namely this is the size of the index ‘n’ in Figure 4. Each data field 102 in the data structure comprises an index 104, a field name 106 (i.e. a key) and a data value 108. The data value 108 may be an integer, a floating point number, a time, a date, ASCII characters etc. The format of the weighted data structure 100 is sometimes termed a memory representation format because the format is suitable for representing the memory organisation of the data.
In the example of Figure 4, the data fields from index 0 to 5 (shaded in the figure for illustrative purposes) are defined as a set of prioritised data fields 110. The data values 108 in the set of prioritised data fields 110 are frequently accessed by processes using the data structure 100. The set of prioritised data fields 110 utilise the first six indexes. In other embodiments, the set of prioritised data fields utilise predetermined indexes in the data structure which may be non-sequential. In either case, this set of prioritised data fields 110 is a subset of all of the data fields of the weighted data structure 100.
It is to be appreciated that not all fields of the weighted data structure 100 are populated with meaningful data. In the cases where no data is available from the target data structure (namely the data structure being read from the internal topic bus 46) the data value at these fields can be set to ‘null’.
The weighted data structure 100 can be accessed by processes in accordance with a pre-determined configuration which associates the field names with the corresponding index for the set of prioritised data fields 110. Processes may perform actions (e.g. read and/or write) on a plurality of data fields 102 by referencing the field name 106 to obtain the corresponding data value 108. For field names in the set of prioritised data fields 110, the process looks up the corresponding index using the pre-determined configuration to directly access the data value. For field names that are not in the set of prioritised data fields 110, the process iterates through each data field that is not in the set of prioritised data fields until it finds the desired data field to access the data value (which is substantially similar to data access in an ArrayList 10).
Figure 5 shows a processing system 200 substantially similar in its general composition to the prior art processing system 30 described with reference to Figure 3. However, there are some differences in structure which enable substantial differences in the manner of operation of the system, which in turn lead to the increased performance of the present embodiment (see later). Accordingly, for the sake of brevity, only the differences between the prior art system 30 and the present embodiment are elaborated on below.
In addition to the processing system of Figure 3, Component2 38 and Component4 40 in the processing system 200 of Figure 5 each comprise an accessor datastore 202. In the embodiment of Figure 5, configuration data 56 received by each component 34, 36, 38, 40 comprises configuration tasks which each define a pre-determined set of field names relating to prioritised data fields 110. These field names are a predetermined subset of the total possible field names of the target data structure. The field names chosen to be in this subset are the most commonly used field names in the processing carried out across all of the components of the processing engine.
The processors of Component2 48 and Component4 50 are arranged to interpret each of the configuration tasks into a Java object or subroutine that provides direct access to a specific location (index) of the data structure without requiring any search of the data structure. The interpreted configuration tasks are stored in the accessor datastore 202.
Figure 6 shows a process 300 utilising the weighted data structure 100 according to an embodiment of the invention. The process begins with each component 34, 36, 38, 40 being configured at Step 302 using the configuration data 56. The configuration is then loaded at Step 304 into the processor 42, 44, 48, 50 of each component.
Following this, the component runs its process at Step 306 using the data values in the data structure 100. This process 306 may in fact comprise hundreds of micro processes operating on the same weighted data structure (for example implementing rules, carrying out validation processes or even enriching the data structure with newly calculated data), almost entirely requiring reads of the data structure. An output of the process is generated at Step 308.
Figure 7 shows the process of Step 302 of configuring each component in greater detail. The process 302 begins with the component receiving, at Step 400, the configuration data 56 at the configuration module 52. The configuration module 52 stores at Step 402 (in the configuration datastore 54) the type of object that the component 34, 36, 38, 40 is arranged to process. This is termed the target object type. Then the configuration module 52 stores at Step 404 the configuration tasks in the configuration datastore 54.
Following this, the processor selects at Step 406 the first configuration task. Then the processor creates at Step 408 configuration constructs for the first configuration task. A configuration construct may be a Java® object or subroutine and may be referred to as an ‘accessor’ or ‘access key’ in the discussion below. The configuration constructs for the first configuration task that are created in Step 408 are stored at Step 410 in the configuration datastore 54.
The processor then checks at Step 412 whether accessors have been created for each configuration task received at Step 400. If not, then the processor moves on at Step 414 to the next configuration task and the process returns to Step 408. Once accessors have been created for each task, then Step 302 of configuring the component is complete.
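By way of a hedged sketch (the class is hypothetical, not the patent's code), a configuration construct of the kind created at Step 408 can be a small object that captures a field's index at configuration time, so that each later read is a single positional lookup with no searching or hashing at run time:

```java
import java.util.List;

// Sketch: an accessor built once from a configuration task. The index is
// fixed when the accessor is created, so run-time reads are direct.
public class Accessor {
    private final int index;

    Accessor(int index) {
        this.index = index;
    }

    String read(List<String> weightedStructure) {
        return weightedStructure.get(index);  // direct positional access
    }

    public static void main(String[] args) {
        Accessor statusAccessor = new Accessor(3);      // built at Step 408
        List<String> data = List.of("T1", "trade", "GBP", "SETTLED");
        System.out.println(statusAccessor.read(data));  // prints SETTLED
    }
}
```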
Figure 8 shows the process of Step 306 of running a component process in greater detail. Firstly, data is obtained from the internal topic bus and unmarshalled at Step 500 into the weighted data structure. Unmarshalling uses the configuration constructs to determine the index position in which to place the data. Accordingly, the order of the data fields may change when the data is unmarshalled by the component. By this process, the exact locations of each of the prioritised data fields 110 are now known to the component without the need at run time for any searching of the weighted data structure.
Then the header of the data structure from the internal topic bus is read at Step 502 to obtain the data type. The data type is compared at Step 504 against the target data type that the component is arranged to process.
If the data type does not match the target data type, then the data structure is ignored at Step 506. This is because the component is not configured to process the current data structure. Following this, the process checks at Step 508 whether there are any more data structures on the internal topic bus that require processing. If not, then the process ends. If, after the check of Step 508, there are more data structures to be read, the process returns to Step 500 of unmarshalling the data structure.
If, following the check of Step 504, the data type matches the target data type, then Process A or Process B is carried out at Step 510. Process A is an example of a read-only process (described in more detail with reference to Figure 9) carried out by Component4 40, and Process B is an example of a read/write process (described in more detail with reference to Figure 10) carried out by Component2 38.
Figure 9 shows a process 600 of Step 510 (Process A) in greater detail. Process A begins with the accessors being run at Step 602 to obtain the data values from any of the set of prioritised data fields 110 that are used by the processor 50 of the component 40.
The processor 50 then checks at Step 604 whether any additional data fields (i.e. data fields not in the set of prioritised data fields 110) are required by the component. If the component 40 requires any additional data fields, then the processor 50 iterates at Step 606 through the data structure to find the desired data fields. Once all data fields have been identified through iteration at Step 606, or if after Step 604 no additional data fields are required, the data values are extracted at Step 608 from the data fields.
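The two read paths of Process A might be sketched as follows, assuming each data field is a simple name/value pair (all names here are illustrative): a prioritised field is read directly through its accessor index, while an additional field is found by iterating the structure.

```java
import java.util.List;

// Illustrative sketch of Process A's two read paths. A "field" is assumed
// here to be a simple name/value pair; names are not taken from the patent.
class Field {
    final String name;
    final double value;
    Field(String name, double value) { this.name = name; this.value = value; }
}

class ReadOnlyComponent {
    // Accessor created at configuration time: just a remembered index.
    static double readPrioritised(List<Field> data, int accessorIndex) {
        return data.get(accessorIndex).value;   // O(1), no searching
    }

    // Fallback for additional (non-prioritised) fields: iterate to find them.
    static double readByIteration(List<Field> data, String name) {
        for (Field f : data) {
            if (f.name.equals(name)) return f.value;
        }
        throw new IllegalArgumentException("field not found: " + name);
    }
}
```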
The processor 50 then executes its processes, which include carrying out calculations on the extracted data values to create results at Step 610. The processor is configured to create an output, which may be in the form of a new type of object, using the results. The generated output is then sent at Step 612 from the component in the form of the output object 60, ending the processes 600 and 300.
Figure 10 shows a process 700 of an alternative Step 510 (Process B) in greater detail. This process would be carried out at a different component which has been configured differently from a component executing Process A, e.g. component 38. Process B begins with the accessors being run at Step 702 to obtain the data values from any of the set of prioritised data fields 110 that are used by the component 38.
The processor 48 then checks at Step 704 whether any additional data fields (i.e. data fields not in the set of prioritised data fields 110) are required by the component 38. If the component requires any additional data fields, then the processor iterates (searches) at Step 706 through the data structure to find the desired data fields. Once all data fields have been identified through iterative searching at Step 706, or if after Step 704 no additional data fields are required, the data values are extracted at Step 708 from the data fields.
The extracted data is validated at Step 708 (using validation processes provided at the component) to ensure data integrity. Calculations are then performed on the extracted data values to create new data values at Step 710. These processes can be considered enrichment processes because they create new data which enriches the original data of the target data structure. The new data values are then written at Step 712 into the weighted data structure 100. The data structure is then marshalled at Step 714 into a serial representation of the data, and the marshalled data is subsequently written at Step 716 back to the internal topic bus, ending the process.
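The validate/enrich/write-back sequence of Process B might be sketched as follows; the field names, the validation rule, and the simple key=value serial format are assumptions for illustration only, not the patent's actual wire format.

```java
import java.util.Map;

// Hedged sketch of Process B (read/write): validate extracted values, derive
// a new "enrichment" value, write it back, then marshal the structure into a
// serial form for re-publication on the topic bus. All names are illustrative.
class EnrichmentSketch {
    static void enrich(Map<String, Double> record) {
        double price = record.get("price");
        double qty   = record.get("qty");
        if (price <= 0 || qty <= 0)                   // validation step
            throw new IllegalStateException("invalid data");
        record.put("notional", price * qty);          // new value written back
    }

    // Marshal into a simple serial representation before re-publishing.
    static String marshal(Map<String, Double> record) {
        StringBuilder sb = new StringBuilder();
        record.forEach((k, v) -> sb.append(k).append('=').append(v).append(';'));
        return sb.toString();
    }
}
```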
Using the weighted data structure with the set of prioritised data fields reduces data access times by components compared to using an ArrayList 10 or a HashMap 20, as the commonly accessed data fields can be accessed directly.
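To illustrate why direct access reduces access times: reading a named field from a list of name/value pairs requires a linear scan over the entries, whereas a prioritised field in the weighted structure is a plain index access. A hypothetical comparison (names and representations are illustrative only):

```java
import java.util.List;

// Contrast of the two access patterns. With an ArrayList of name/value pairs,
// every read of a named field is a linear scan; with the weighted structure,
// a prioritised read is a single array index. Names here are illustrative.
class AccessComparison {
    // ArrayList-style access: scan entries until the name matches.
    static int scanArrayList(List<String[]> fields, String name) {
        for (int i = 0; i < fields.size(); i++) {     // O(n) per read
            if (fields.get(i)[0].equals(name)) return i;
        }
        return -1;
    }

    // Weighted-structure access: the accessor index goes straight to the slot.
    static String direct(String[] slots, int accessorIndex) {
        return slots[accessorIndex];                   // O(1) per read
    }
}
```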
Table 1 shows indicative access counts for different operations (e.g. read, write, marshal, unmarshal) that a processing system may carry out on a data structure. Note that, for a given field, the number of write operations tends to be either zero or one throughout the lifecycle, so write performance is not a major contributor to processing time compared to total read time.
Table 1 - Typical data field access counts
Table 2 - Example test run times
The results of two tests are shown in Table 2. In Test 1, a component using five prioritised data fields, carrying out its process of generating an output 1 million times, took 20,626 milliseconds using an ArrayList and 13,802 milliseconds using a HashMap, compared to 3,370 milliseconds using the weighted data structure. This shows that the weighted data structure enabled the process to perform about four times faster than a HashMap, and six times faster than an ArrayList.
In Test 2, a component using five non-prioritised data fields, carrying out its process of generating an output 1 million times, took 27,478 milliseconds using an ArrayList and 13,643 milliseconds using a HashMap, compared to 3,843 milliseconds using the weighted data structure. Performance is improved even for non-prioritised data fields because the process does not have to iterate through the whole data structure; it can iterate through only the non-prioritised data fields. This shows that the weighted data structure enabled the process to perform about 3.5 times faster than a HashMap, and seven times faster than an ArrayList.
As the person skilled in the art will appreciate, modifications and variations to the above embodiments may be provided, and further embodiments may be developed, without departing from the spirit and scope of the disclosure.

Claims (16)

1. A combination of a target data structure and a data processing engine for processing the target data structure, the target data structure comprising a plurality of data fields with corresponding indices, and a predetermined set of field names having a predetermined prioritised subset of data field names including at least one prioritised field name and a corresponding index identifier, the data processing engine comprising a plurality of different processing components, each processing component being arranged to perform a different processing operation on the target data structure and comprising: a receiver arranged to receive the predetermined set of field names; a key generator arranged to establish, for each prioritised field name in the predetermined set of field names, an access key for directly accessing one of the data fields using the index identifier corresponding to the prioritised field name to determine a corresponding one of the indices, where the data field corresponds to the prioritised field name; and, a component-specific processor arranged to read a data value of one of the data fields directly using the access key when the data field to be read corresponds to a prioritised field name, and arranged to generate an output based on the data value of the data field to be read, the data processing engine further comprising: an internal topic bus connected to the plurality of processing components; a marshalling component for marshalling the target data structure into a marshalled format obtained from an external message bus and placing the target data structure onto the internal topic bus such that the target data structure can be conveyed to each of the plurality of processing components and processed by the same; and, an unmarshalling component for unmarshalling the processed target data structure from a marshalled format on the internal topic bus to an unmarshalled format, and outputting the processed target data structure onto the external 
message bus, wherein the predetermined prioritised subset of data field names is an optimised subset, optimised with respect to all of the plurality of different processing components which are arranged to utilise the optimised subset to process the target data structure such that an overall processing time of the target data structure by the plurality of processing components is reduced.
2. A combination according to Claim 1, the data processing engine being arranged to: obtain the target data structure from the external message bus; and for each data field in the target data structure which corresponds to a prioritised field name, map that data field to a location corresponding to the index identifier for that prioritised field name.
3. A combination according to Claim 2, wherein the data processing engine is arranged to: create an indexed weighted data structure; read the target data structure into the indexed weighted data structure; and, for each data field of the target data structure that corresponds to a prioritised data field, map that data field to an index of the indexed weighted data structure corresponding to the index identifier for that prioritised field name.
4. A combination according to Claim 3, wherein the indexed weighted data structure is arranged to include a set of prioritised data fields corresponding to the predetermined set of field names.
5. A combination according to Claim 4, wherein the indexed weighted data structure is arranged to place null values into any of the prioritised data fields which do not have any corresponding data available from the target data structure.
6. A combination according to any of Claims 3 to 5, wherein the indexed weighted data structure is a Java Collection data structure.
7. A combination according to any preceding claim, wherein the marshalled format is an XML format and the unmarshalled format is a Java format.
8. A combination according to any preceding claim, wherein for each of the plurality of different processing components the respective receiver is arranged to receive a predetermined target type value and the processing engine is arranged to determine whether a value of a data structure type field of the target data structure matches the predetermined target type value to enable the respective processor to process the target data structure.
9. A combination according to any preceding claim, wherein for each of the plurality of different processing components the respective receiver is arranged to receive configuration data including configuration tasks, and the respective processor is arranged to create configuration constructs for each of the configuration tasks.
10. A combination according to any of Claims 3 to 6, wherein the plurality of component-specific processors are arranged to run a plurality of micro processes on different fields of the indexed weighted data structure.
11. A combination according to any of Claims 3 to 6, wherein the plurality of component-specific processors are arranged to run a plurality of different validation processes on the indexed weighted data structure to ensure the data is valid.
12. A combination according to any of Claims 3 to 6, wherein at least one of the component-specific processors is arranged to run a plurality of different enrichment processes on the indexed weighted data structure to generate new data values of data fields and to write the new data values to data fields of the weighted data structure.
13. A combination according to any of Claims 1 to 12, wherein at least one of the component-specific processors is arranged to create the output as an output data structure of a different type to that of the target data structure.
14. A combination according to any of Claims 1 to 12, wherein at least one of the component-specific processors is arranged to: create the output as an output data structure of the same type as the target data structure; marshal the data into a marshalled format; and output the marshalled data structure.
15. A combination according to any preceding claim, wherein the plurality of component-specific processors are arranged to read a data value of a specific data field of the target data structure by being arranged to search the target data structure for the specific data field when the data field to be read does not correspond to a prioritised field name.
16. A combination according to Claim 15, wherein the plurality of component-specific processors are arranged to search by being arranged to carry out an iterative search of data fields of the target data structure which do not correspond to any of the predetermined set of field names.
GB1511380.6A 2015-06-29 2015-06-29 A data handling method Active GB2539898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1511380.6A GB2539898B (en) 2015-06-29 2015-06-29 A data handling method


Publications (3)

Publication Number Publication Date
GB201511380D0 GB201511380D0 (en) 2015-08-12
GB2539898A GB2539898A (en) 2017-01-04
GB2539898B GB2539898B (en) 2019-08-28

Family

ID=53872373

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1511380.6A Active GB2539898B (en) 2015-06-29 2015-06-29 A data handling method

Country Status (1)

Country Link
GB (1) GB2539898B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111538747B (en) * 2020-05-27 2023-04-14 支付宝(杭州)信息技术有限公司 Data query method, device and equipment and auxiliary data query method, device and equipment
CN112347146A (en) * 2020-10-22 2021-02-09 深圳前海微众银行股份有限公司 Index recommendation method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
US20060277178A1 (en) * 2005-06-02 2006-12-07 Wang Ting Z Table look-up method with adaptive hashing
US20130346274A1 (en) * 2012-06-25 2013-12-26 Liquid Holdings Group, Inc. Electronic financial trading platform with real-time data analysis and reporting


