WO2006104932A1 - Service modeling and using query plans for building and performance tuning - Google Patents
Service modeling and using query plans for building and performance tuning
- Publication number
- WO2006104932A1 (PCT/US2006/010913)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- query
- data services
- data
- services
- requestor
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24542—Plan optimisation
Definitions
- the current invention relates generally to accessing services on behalf of applications, and more particularly to a mechanism for modeling data services and using query plans for building and performance tuning.
- SOA Service Oriented Architecture
- IDE Integrated Development Environment
- BPM business process management
- FIGURES 1A-1B are functional block diagrams illustrating an example computing environment in which techniques for data service modeling, query plan generation and performance tuning may be implemented in one embodiment.
- FIGURES 2A - 2C are operational flow diagrams illustrating a high level overview of techniques for modeling data services of one embodiment of the present invention.
- FIGURES 2D - 2G are operational flow diagrams illustrating a high level overview of techniques for preparing a query plan for tuning a service in one embodiment of the present invention.
- FIGURES 3A-3B are screen shots illustrating a high level overview of an example view and model creation tool operable in one embodiment of the present invention.
- FIGURE 4 is a hardware block diagram of an example computer system, which may be used to embody one or more components of an embodiment of the present invention.
- mechanisms and methods for modeling data services make it possible for organizations to lessen dependence on service implementations by providing a unified view of disparate services to one or more requestors.
- Requestors may be users, proxies or automated entities.
- the view of data services provided to the requestor may be substantially independent of structure or format of the data services underlying the model.
- the data services underlying the model are mapped to the view. This ability of a liquid data framework to support modeling data services makes it possible to attain improved usage from computing resources in a computer system.
- multiple models of data services may be created, stored and used to increase flexibility in changing or adapting the organization's IT infrastructure.
- a method for modeling data services includes determining information of interest to at least one requestor.
- a data model for data services is created based upon the determination of which data services are relevant to the information of interest.
- the view of data services is substantially independent of structure or format of the data services underlying the model. Data services underlying the model are mapped to the view.
- model based request processing includes receiving a request to access at least one service in the view.
- a request to access at least one of a plurality of services underlying the data services model based upon the request is prepared by mapping at least one service in the request to at least one underlying service.
- the at least one underlying service is accessed to obtain a result set.
- a result set for the requestor is prepared.
- the result set for the requestor includes data selected from the result set(s) received from the at least one underlying service by mapping the data selected from the result set(s) received from the at least one underlying service to the view associated with the requestor.
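The mapping step described above — projecting fields from the underlying services' result sets onto the view associated with the requestor — can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names and the dictionary-based mapping are assumptions for the example.

```python
def map_result_to_view(result_rows, view_mapping):
    """Project each row from an underlying service's result set onto
    the field names of the view associated with the requestor.

    view_mapping maps view field name -> underlying source field name.
    Fields of the underlying result that are not in the mapping are
    simply not exposed to the requestor.
    """
    return [
        {view_field: row[source_field]
         for view_field, source_field in view_mapping.items()
         if source_field in row}
        for row in result_rows
    ]
```

For example, an underlying row such as `{"CUST_NM": "Acme", "CUST_ID": 7, "INTERNAL": "x"}` (hypothetical column names) mapped with `{"name": "CUST_NM", "id": "CUST_ID"}` yields only the view fields `name` and `id`, hiding internal columns from the requestor.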
- a query plan comprises steps to take to get data to satisfy a query.
- These mechanisms and methods for using query plans for building and performance tuning services make it possible to examine the query plan and response times for query execution.
- the ability to examine the query plans and response times for query execution makes it possible to improve query efficiency and apply caching more effectively.
- the invention provides a method for accessing a service.
- One embodiment of the method includes receiving a query.
- a preferred way for satisfying the query is determined from one or more possible ways for satisfying the query.
- the preferred way is provided as at least a portion of the query plan.
- Determining a preferred way for satisfying the query includes, in one embodiment, determining one or more ways for satisfying the query.
- a preferred way for satisfying the query that meets a performance criterion is selected and provided in a query plan.
- the query plan may be used to access one or more services to obtain a result set.
- the result set from accessing the service according to the query plan may be provided to a requestor, along with information about the time or resources used to perform the query.
- selecting a preferred way for satisfying the query and meeting a performance criterion can include selecting a technique such as reading each of the database tables into memory and then performing a join operation, if speed is preferred over low memory usage.
- selecting a technique such as reading a smaller one of the tables into memory and then requesting values from remaining tables as needed to complete a join operation could be selected if memory capacity would be constrained by at least one table.
- Another alternative technique such as requesting values from each of the tables as needed to complete a join operation could be selected if both tables are too large to be brought into memory.
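The three alternative join techniques above reduce to a choice driven by table sizes and the memory budget. A minimal sketch of that selection logic, under the assumption that sizes are expressed as row counts against a row-count budget (the function name and units are illustrative, not from the patent):

```python
def choose_join_strategy(left_rows, right_rows, memory_budget_rows):
    """Pick a join technique based on table sizes vs. available memory.

    Returns one of:
      "in-memory"  - read both tables into memory, then join (fastest)
      "index"      - read only the smaller table into memory, request
                     values from the other table as needed
      "streaming"  - request values from each table as needed
                     (both tables too large for memory)
    """
    if left_rows + right_rows <= memory_budget_rows:
        return "in-memory"
    if min(left_rows, right_rows) <= memory_budget_rows:
        return "index"
    return "streaming"
```

The ordering mirrors the text: prefer the fastest technique when memory permits, fall back to holding only the smaller table, and stream when neither table fits.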
- SQL pushdown techniques include deferring processing to the underlying SQL sources for operations such as string searches, comparison operations, local joins, sorting, aggregate functions, and grouping.
- Batched join processing techniques include passing join values from one data source to another data source in batches, which can reduce the number of SQL calls that would otherwise be needed for the join.
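Batched join processing can be illustrated by grouping join keys and issuing one call per batch instead of one call per row. In the sketch below, `fetch_by_keys` stands in for a single SQL call such as `SELECT ... WHERE id IN (...)`; the function and field names are assumptions for the example.

```python
def batched_join(left_rows, fetch_by_keys, key="id", batch_size=100):
    """Join left_rows against a remote source, passing join keys in
    batches: len(left_rows) rows cost ceil(len/batch_size) calls
    rather than one call per row."""
    keys = [row[key] for row in left_rows]
    matches = {}
    for start in range(0, len(keys), batch_size):
        # one remote call per batch of join values
        for right_row in fetch_by_keys(keys[start:start + batch_size]):
            matches.setdefault(right_row[key], []).append(right_row)
    # emit joined pairs, preserving left-row order
    return [(left, right)
            for left in left_rows
            for right in matches.get(left[key], [])]
```

With a batch size of 100, joining 1,000 left rows needs 10 calls to the second source instead of 1,000.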
- Index join techniques include fetching join targets in their entirety into memory in one call if one of the join tables is small (e.g. code table).
- Parallel data source requests employ parallelism to reduce latency for queries involving multiple data sources.
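The parallelism described above can be sketched with a thread pool: requests to independent sources run concurrently, so total latency approaches that of the slowest source rather than the sum of all sources. This is an illustrative sketch, not the engine's actual mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def query_sources_in_parallel(source_fetchers):
    """Issue requests to several data sources concurrently.

    source_fetchers is a list of zero-argument callables, each
    representing a request to one data source; results come back
    in the same order as the fetchers.
    """
    with ThreadPoolExecutor(max_workers=len(source_fetchers)) as pool:
        return list(pool.map(lambda fetch: fetch(), source_fetchers))
```

For a query spanning, say, a relational database and a web service, both requests are in flight at once instead of back to back.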
- a streaming API passes data as a continuous stream from the underlying data source to the consuming application.
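A streaming interface of this kind can be sketched as a generator that fetches rows in chunks and yields them one at a time, so the consuming application never holds the full result set in memory. The sketch assumes a DB-API-style cursor exposing `fetchmany`; it is an illustration, not the patent's API.

```python
def stream_results(cursor, chunk_size=500):
    """Yield rows as a continuous stream from the underlying source.

    Rows are pulled in chunks of chunk_size but delivered one at a
    time, keeping memory usage flat for arbitrarily large results.
    """
    while True:
        chunk = cursor.fetchmany(chunk_size)
        if not chunk:
            return
        yield from chunk
```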
- time-out instructions are wrapped around a portion of a query that depends upon unreliable data. These time-out instructions specify how long to wait for a response from the data source and what alternative content is to be returned to the caller if the time-out expires.
- a query plan viewer is provided to assist with creating efficient queries. The query plan viewer shows a compiled view of the query to enable users to improve queries. In one embodiment, optimization techniques may be used for speeding data access and transformations as well.
- performance criteria is intended to be broadly construed to include any condition placed upon time or resource usage.
- Some examples of performance criteria include without limitation a maximum query response time, an average response time for data queries, a peak usage or a maximum degradation of performance.
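Two of the example criteria above — a maximum query response time and an average response time — can be checked against recorded measurements with a short sketch. The thresholds and report fields are illustrative assumptions, not values from the patent.

```python
def check_performance_criteria(response_times_ms, max_response_ms, max_average_ms):
    """Evaluate recorded query response times against two sample
    performance criteria: a per-query maximum and a maximum average."""
    average = sum(response_times_ms) / len(response_times_ms)
    worst = max(response_times_ms)
    return {
        "average_ms": average,
        "worst_ms": worst,
        # both criteria must hold for the queries to be within bounds
        "within_sla": worst <= max_response_ms and average <= max_average_ms,
    }
```

A report like this is the kind of measurement an application could use to document compliance with a performance-based service level agreement.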
- an application may use query response times to provide a measurement for ensuring and documenting compliance with performance-based service level agreements (SLA).
- a business partner that has such a SLA can see the average response time of data queries, when peak usage occurs, what sources are degrading performance, and so on.
- the term service is intended to be broadly construed to include any computer resident application capable of providing services to a requestor or other recipient, including without limitation network based applications, web based server resident applications, web portals, search engines, photographic, audio or video information storage applications, e-Commerce applications, backup or other storage applications, sales/revenue planning, marketing, forecasting, accounting, inventory management applications and other business applications and other contemplated computer implemented services.
- the term result set is intended to be broadly construed to include any result provided by one or more services. Result sets may include multiple entries into a single document, file, communication or other data construct.
- the term view is intended to be broadly construed to include any mechanism that provides a presentation of data and/or services in a format suited for a particular application, service, client or process.
- the presentation may be virtualized, filtered, molded, or shaped.
- data returned by services to a particular application can be mapped to a view associated with that application (or service).
- Embodiments can provide multiple views of available services to enable organizations to compartmentalize or streamline access to services, increasing the security of the organization's IT infrastructure.
- query plan is intended to be broadly construed to include the steps to take to get data to satisfy a query. For example: go to source 1, get customer data; go to source 2, get order data; join the customer data with the order data.
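The three-step example plan above can be sketched as a tiny executor: fetch from each source, then join. The sources, field names and join key are assumptions for the illustration.

```python
def execute_plan(customer_source, order_source):
    """Execute the example plan: (1) get customer data from source 1,
    (2) get order data from source 2, (3) join customers with orders
    on customer_id."""
    customers = customer_source()   # step 1: source 1 -> {id: name}
    orders = order_source()         # step 2: source 2 -> list of orders
    return [                        # step 3: join on customer_id
        {"customer": customers[o["customer_id"]], "total": o["total"]}
        for o in orders
        if o["customer_id"] in customers
    ]
```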
- FIGS. 1A-1B are functional block diagrams illustrating an example computing environment in which techniques for data service modeling, query plan generation and performance tuning may be implemented in one embodiment.
- a liquid data framework 104 is used to provide a mechanism by which a set of applications, or application portals 94, 96, 98, 100 and 102, can integrate with, or otherwise access in a tightly coupled manner, a plurality of services.
- Such services may include a Materials Requirements and Planning (MRP) system 112, a purchasing system 114, a third-party relational database system 116, a sales forecast system 118 and a variety of other data- related services 120.
- one or more of the services may interact with one or more other services through the liquid data framework 104 as well.
- the liquid data framework 104 employs a liquid data integration engine 110 to process requests from the set of portals to the services.
- the liquid data integration engine 110 allows access to a wide variety of services, including data storage services, server-based or peer-based applications, Web services, and other services capable of being delivered by one or more computational devices, as contemplated in various embodiments.
- a services model 108 provides a structured view of the available services to the application portals 94, 96, 98, 100 and 102.
- the services model 108 provides a plurality of views 106 that may be filtered, molded, or shaped views of data and/or services into a format specifically suited for each portal application 94, 96, 98, 100 and 102.
- data returned by services to a particular application is mapped to the view 106 associated with that application (or service) by liquid data framework 104.
- Embodiments providing multiple views of available services can enable organizations to compartmentalize or streamline access to services, thereby increasing the security of the organization's IT infrastructure.
- services model 108 may be stored in a repository 122 of service models.
- Embodiments providing multiple services models can enable organizations to increase flexibility in changing or adapting the organization's IT infrastructure by lessening dependence on service implementations. Techniques for modeling data services implemented by liquid data framework 104 will be described below in greater detail with reference to FIGS. 2A-2C.
- the liquid data integration engine 110 includes an interface processing layer 140, a query compilation layer 150 and a query execution layer 160.
- the interface layer 140 includes a request processor 142, which takes the request 10 and processes this request into an XML query 50.
- Interface layer 140 also includes access control mechanism 144, which determines based upon a plurality of policies 20 whether the client, portal application, service or other process making the request 10 is authorized to access the resources and services required to satisfy the request. Provided that the client, application, service or other process is authorized to make the request 10, the interface layer sends the XML query 50 to the query compilation layer 150.
- a query parsing and analysis mechanism 152 receives the query 50 from the client applications, parses the query and sends the results of the parsing to a query rewrite optimizer 154.
- the query rewrite optimizer 154 determines whether the query can be rewritten in order to improve performance of servicing the query based upon one or more of execution time, resource use, efficiency or other performance criteria.
- the query rewrite optimizer 154 may rewrite or reformat the query based upon input from one or more of a source description 40 and a function description 30 if it is determined that performance may be enhanced by doing so.
- a runtime query plan generator 156 generates a query plan for the query provided by the query rewrite optimizer 154 based upon input from one or more of the source description 40 and the function description 30. Techniques for accessing services on behalf of a requestor implemented by runtime query plan generator 156 will be described below in greater detail with reference to FIGS. 2D - 2F.
- the query compilation layer 150 passes the query plan output from the runtime query plan generator 156 to a runtime query engine 162 in the query execution layer 160.
- the runtime query engine 162 is coupled with one or more functions 70 that may be used in conjunction with formulating queries and fetch requests to sources 52, which are passed on to the appropriate service(s).
- the service responds to the queries and fetch requests 52 with results from sources 54.
- the runtime query engine 162 of the query execution layer 160 translates the results into a format usable by the client or portal application, such as without limitation XML, in order to form the XML query results 56.
- a query result filter 170 in the interface layer 140 determines based upon filter parameters 90 what portion of the results will be passed back to the client or portal application, forming a filtered query response 58.
- filter parameters 90 may accompany service request 10 in one embodiment.
- query result filter 170 also determines based upon access policies implementing security levels 80 what portions of the filtered query response 58 a requestor is permitted to access and may redact the filtered query response accordingly.
- access policies implementing security levels 80 may be stored with policies 20 in one embodiment. When properly formed, the response is returned to the calling client or portal application.
- FIG. 2A is an operational flow diagram illustrating a high level overview of a technique for modeling data services of one embodiment of the present invention.
- the technique for modeling data services shown in FIG. 2A is operable with an application sending data, such as a Materials Requirements and Planning (MRP) system 112, a purchasing system 114, a third-party relational database system 116, a sales forecast system 118, or a variety of other data-related services 120 of FIG. 1A, for example.
- As shown in FIG. 2A, information of interest to at least one requestor is determined (block 202).
- a data model for data services is created based upon a determination of which data services are relevant to the information of interest (block 204).
- a view of data services available to the requestor is presented to the requestor (block 206).
- the view of data services is substantially independent of structure or format of the data services underlying the model, and wherein data services underlying the model are mapped to the view.
- the method illustrated by blocks 202-206 may be advantageously disposed in the interface processing layer 140, query compilation layer 150 and query execution layer 160 of FIG. 1B.
- FIG. 2B is an operational flow diagram illustrating a high level overview of a client process operable with the technique for accessing a service illustrated in FIG. 2A.
- the technique for exchanging data with data services using a data model shown in FIG. 2B is operable with an application sending or receiving data, such as applications 94, 96, 98, 100 and 102 of FIG. 1A, for example, or a service, such as a Materials Requirements and Planning (MRP) system 112, a purchasing system 114, a third-party relational database system 116, a sales forecast system 118, or a variety of other data-related services 120 of FIG. 1A.
- FIG. 2B a request to access at least one service in a view is sent.
- a result set is received.
- the result set comprises data selected from at least one of a plurality of result set(s) received from at least one of a plurality of services underlying the view by mapping the data selected from the result set(s) received from the at least one underlying service(s) to the at least one service indicated in the request.
- FIG. 2C is an operational flow diagram of an example of a technique for servicing a request to access a service, which may be used in conjunction with the technique illustrated in FIG. 2A.
- a request to access at least one service in the view is received (block 222).
- a request to access at least one of a plurality of services underlying the data services model based upon the request is prepared (block 224).
- the request is prepared by mapping at least one service in the request to at least one underlying service.
- the at least one underlying service is accessed to obtain a result set (block 226).
- a result set is prepared for the requestor (block 228).
- the result set for the requestor comprises data selected from the result set(s) received from the at least one underlying service by mapping the data selected from the result set(s) received from the at least one underlying service to the at least one service indicated by the request.
- FIG. 2D is an operational flow diagram illustrating a high level overview of a technique for preparing a query plan for tuning a service of one embodiment of the present invention.
- the technique for accessing a service shown in FIG. 2D is operable with an application sending data, such as a Materials Requirements and Planning (MRP) system 112, a purchasing system 114, a third-party relational database system 116, a sales forecast system 118, or a variety of other data-related services 120 of FIG. 1A, for example.
- a query is received from a requestor (block 232).
- a preferred way for satisfying the query is determined from one or more possible ways for satisfying the query (block 234).
- the preferred way is provided as at least a portion of the query plan (block 236).
- FIG. 2E is an operational flow diagram illustrating a high level overview of a client process operable with the technique for preparing a query plan for tuning a service illustrated in FIG. 2D.
- the technique for receiving data shown in FIG. 2E is operable with an application sending data, such as applications 94, 96, 98, 100 and 102 of FIG. 1A, for example, or a service, such as a Materials Requirements and Planning (MRP) system 112, a purchasing system 114, a third-party relational database system 116, a sales forecast system 118, or a variety of other data-related services 120 of FIG. 1A.
- a query is sent to a server (block 242).
- a result set of one or more services is received (block 244) from the server.
- the result set includes a portion that has been prepared by the server according to the server's determination of a preferred way for satisfying the query.
- an input specifying a change to the way the query was implemented for improving query efficiency is sent to the server (not shown in Fig. 2E for clarity).
- FIG. 2F is an operational flow diagram of an example of a technique for determining a preferred way for satisfying a query, which may be used in conjunction with the technique illustrated in FIG. 2D. As shown in FIG. 2F, at least one of a plurality of ways for satisfying the query is determined (block 252). A preferred way for satisfying the query and meeting performance criteria is selected from the plurality of ways (block 254). The selected way is provided in a query plan (block 256).
- FIG. 2G is an operational flow diagram illustrating a high level overview of an example embodiment implementing a query processing selection technique.
- a determination whether speed is more important than memory usage is made (block 262).
- this determination can be made in a variety of ways. For example, in some embodiments, information about speed, memory and other resource requirements may be solicited from an IT administrator or other such person. In other embodiments, parameters correlating the relative importance of speed, memory and other resource usage may be encoded in a configuration file or other data structure.
- the determination of parameters correlating the relative importance of speed, memory and other resource usage may be automated by processing designed to run test cases of the system in order to determine physical limitations, i.e., installed memory, processor clock speed, I/O devices and configurations or the like, of the underlying system. If speed performance is preferred over memory usage performance, then each of the plurality of tables is read into memory and then a join operation is performed (block 264). Otherwise, a determination whether memory capacity would be constrained by including only one table is made (block 266). If memory capacity would be constrained by including at least one table, then a smaller one of the plurality of tables is read into memory and values are requested from remaining tables as needed to complete a join operation (block 268).
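The decision flow of FIG. 2G (blocks 262-268) can be condensed into a short selection function. The return labels and the row-count units are assumptions for illustration; the branching mirrors the text above.

```python
def select_join_processing(prefer_speed, table_sizes, memory_capacity):
    """Query processing selection per FIG. 2G: speed-first reads every
    table into memory before joining; otherwise, if at least the
    smallest table fits in memory, read it in and request values from
    the remaining tables as needed; if no table fits, request values
    from each table as needed."""
    if prefer_speed:                              # block 262 -> 264
        return "read-all-tables-then-join"
    if min(table_sizes) <= memory_capacity:       # block 266 -> 268
        return "read-smallest-request-rest"
    return "request-values-as-needed"
```

The `prefer_speed` flag and `memory_capacity` parameter stand in for the configuration-file parameters or automated test-case measurements the text describes.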
- FIG. 3A is a screen shot illustrating a high level overview of an example view according to an example services model operable with the technique for modeling services illustrated in FIGS. 2A - 2C.
- a view 306 created for one or more data services may be used to display a presentation of data services available to a requestor interested in sales data related services.
- FIG. 3A illustrates a customer data services view and a support data view. Other views, not shown in FIG. 3A for clarity, may also be included by some embodiments.
- a modeling tool presentation 350 displays a plurality of information entities, such as a customer information entity 352, an order information entity 354 and a case information entity 356.
- an IT administrator can create business entities, capture relationships between entities and define mapping of logical entities to physical data sources and/or services.
- Model creation tools, such as that illustrated by FIG. 3B, can provide in various embodiments: XML Metadata Interchange (XMI) based interchange with Unified Modeling Language (UML) tools, an easier way to organize and present data services to developers, and a way to more rapidly create logical data model(s) that span multiple data sources and/or services.
- a data services model is created to logically organize the data services.
- the data services model comprises a critical link in the organization of the large quantity of data services in the typical enterprise. Without a data model, enterprises have only a list of potentially thousands of services, but no indication of which service is accessible to whom or where the service resides.
- One benefit of the Liquid Data framework is that users are enabled to create a data model to organize data services. Using the Liquid Data framework, users can define entities (like Customer, Order) in the information and define services relevant to the entities (like getCustomerByID).
- the data model can span multiple underlying sources of services. These multiple underlying sources can be integrated into a unified data model by the Liquid Data framework.
- In addition to organizing the services, the unified data model also enables users to define business rules for the data elements.
- the unified data model presents a single, unified view of underlying data services, regardless of the source, structure or format of the underlying data services. In this way, a data model becomes an effective way to solve the complexity of data discovery and aggregation.
- the invention encompasses in some embodiments, computer apparatus, computing systems and machine-readable media configured to carry out the foregoing methods.
- the present invention may be conveniently implemented using a conventional general purpose or a specialized digital computer or microprocessor programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
- the present invention includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
- the storage medium can include, but is not limited to, any type of rotating media including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
- the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human user or other mechanism utilizing the results of the present invention.
- software may include, but is not limited to, device drivers, operating systems, and user applications.
- FIG. 4 illustrates an exemplary processing system 400, which can comprise one or more of the elements of FIGS. 1A and 1B.
- In FIG. 4, an exemplary computing system is illustrated that may comprise one or more of the components of FIGS. 1A and 1B. While other alternatives might be utilized, it will be presumed for clarity's sake that components of the systems of FIGS. 1A and 1B are implemented in hardware, software or some combination by one or more computing systems consistent therewith, unless otherwise indicated.
- Computing system 400 comprises components coupled via one or more communication channels (e.g., bus 401) including one or more general or special purpose processors 402, such as a Pentium®, Centrino®, Power PC®, digital signal processor ("DSP"), and so on.
- System 400 components also include one or more input devices 403 (such as a mouse, keyboard, microphone, pen, and so on), and one or more output devices 404, such as a suitable display, speakers, actuators, and so on, in accordance with a particular application.
- System 400 also includes a computer readable storage media reader 405 coupled to a computer readable storage medium 406, such as a storage/memory device or hard or removable storage/memory media; such devices or media are further indicated separately as storage 408 and memory 409, which may include hard disk variants, floppy/compact disk variants, digital versatile disk (“DVD”) variants, smart cards, read only memory, random access memory, cache memory, and so on, in accordance with the requirements of a particular application.
- One or more suitable communication interfaces 407 may also be included, such as a modem, DSL, infrared, RF or other suitable transceiver, and so on for providing inter-device communication directly or via one or more suitable private or public networks or other components that may include but are not limited to those already discussed.
- Working memory 410 further includes operating system (“OS”) 411 elements and other programs 412, such as one or more of application programs, mobile code, data, and so on for implementing system 400 components that might be stored or loaded therein during use.
- the particular OS or OSs may vary in accordance with a particular device, features or other aspects in accordance with a particular application (e.g. Windows, WindowsCE, Mac, Linux, Unix or Palm OS variants, a cell phone OS, a proprietary OS, Symbian, and so on).
- Various programming languages or other tools can also be utilized, such as those compatible with C variants (e.g., C++, C#), the Java 2 Platform, Enterprise Edition (“J2EE”) or other programming languages in accordance with the requirements of a particular application.
- Other programs 412 may further, for example, include one or more of activity systems, education managers, education integrators, or interface, security, other synchronization, other browser or groupware code, and so on, including but not limited to those discussed elsewhere herein.
- a learning integration system or other component When implemented in software (e.g. as an application program, object, agent, downloadable, servlet, and so on in whole or part), a learning integration system or other component may be communicated transitionally or more persistently from local or remote storage to memory (SRAM, cache memory, etc.) for execution, or another suitable mechanism can be utilized, and components may be implemented in compiled or interpretive form. Input, intermediate or resulting data or functional elements may further reside more transitionally or more persistently in a storage media, cache or other volatile or non-volatile memory, (e.g., storage device 408 or memory 409) in accordance with a particular application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Operations Research (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention concerns mechanisms and methods for modeling data services and for using query plans for building and performance tuning services, which services are accessed on behalf of a requestor. These methods and mechanisms make it possible to examine query plans and response times for query execution. In other example embodiments, multiple data services models may be created, stored and used to change or adapt an organization's IT infrastructure.
Applications Claiming Priority (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US66590805P | 2005-03-28 | 2005-03-28 | |
US66576805P | 2005-03-28 | 2005-03-28 | |
US60/665,908 | 2005-03-28 | ||
US60/665,768 | 2005-03-28 | ||
US66607905P | 2005-03-29 | 2005-03-29 | |
US60/666,079 | 2005-03-29 | ||
US11/341,235 US20060224628A1 (en) | 2005-03-29 | 2006-01-27 | Modeling for data services |
US11/342,111 US7778998B2 (en) | 2005-03-28 | 2006-01-27 | Liquid data services |
US11/342,112 | 2006-01-27 | ||
US11/342,111 | 2006-01-27 | ||
US11/341,235 | 2006-01-27 | ||
US11/342,112 US20060218118A1 (en) | 2005-03-28 | 2006-01-27 | Using query plans for building and performance tuning services |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006104932A1 true WO2006104932A1 (fr) | 2006-10-05 |
Family
ID=37053695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2006/010913 WO2006104932A1 (fr) | Modeling services and using query plans for building and performance tuning |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2006104932A1 (fr) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6175837B1 (en) * | 1998-06-29 | 2001-01-16 | Sun Microsystems, Inc. | Object-relational mapping toll that processes views |
US6862594B1 (en) * | 2000-05-09 | 2005-03-01 | Sun Microsystems, Inc. | Method and apparatus to discover services using flexible search criteria |
- 2006-03-24: WO application PCT/US2006/010913 filed (published as WO2006104932A1), status: active Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060224628A1 (en) | Modeling for data services | |
US7487191B2 (en) | Method and system for model-based replication of data | |
US8712965B2 (en) | Dynamic report mapping apparatus to physical data source when creating report definitions for information technology service management reporting for peruse of report definition transparency and reuse | |
US8086615B2 (en) | Security data redaction | |
US7778998B2 (en) | Liquid data services | |
KR101085639B1 (ko) | System and method for efficient evaluation of queries that call table-valued functions | |
EP2182448A1 (fr) | Federated configuration data management | |
US9201700B2 (en) | Provisioning computer resources on a network | |
US9251222B2 (en) | Abstracted dynamic report definition generation for use within information technology infrastructure | |
AU2007238453A1 (en) | Search-based application development framework | |
WO2022029516A1 (fr) | Automated generation of ETL workflows | |
US20110131247A1 (en) | Semantic Management Of Enterprise Resourses | |
US11354332B2 (en) | Enabling data access by external cloud-based analytics system | |
CN113704300B (zh) | Data imprint technique for use by a data retrieval method | |
US20060218118A1 (en) | Using query plans for building and performance tuning services | |
US20060224692A1 (en) | Adhoc queries for services | |
US20060224556A1 (en) | SQL interface for services | |
US9170998B2 (en) | Generating simulated containment reports of dynamically assembled components in a content management system | |
Hoschek | A unified peer-to-peer database framework for XQueries over dynamic distributed content and its application for scalable service discovery | |
Ouzzani | Efficient delivery of web services | |
US20060224557A1 (en) | Smart services | |
US11893015B2 (en) | Optimizing query performance in virtual database | |
Dinda et al. | Nondeterministic queries in a relational grid information service | |
WO2006104932A1 (fr) | Modeling services and using query plans for building and performance tuning | |
US20140143278A1 (en) | Application programming interface layers for analytical applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
NENP | Non-entry into the national phase |
Ref country code: RU |
122 | Ep: pct application non-entry in european phase |
Ref document number: 06739603 Country of ref document: EP Kind code of ref document: A1 |