CN117314033A - Large line item event processing system - Google Patents

Large line item event processing system

Info

Publication number
CN117314033A
CN117314033A
Authority
CN
China
Prior art keywords
event
lli
request
database
tenant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310609579.0A
Other languages
Chinese (zh)
Inventor
M·阿胡贾
R·P·R·拉奥
G·乔杜里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAP SE
Original Assignee
SAP SE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/886,733 external-priority patent/US20230419251A1/en
Application filed by SAP SE filed Critical SAP SE
Publication of CN117314033A publication Critical patent/CN117314033A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/252Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/285Clustering or classification
    • G06F16/287Visualization; Browsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06312Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06313Resource planning in a project environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0633Workflow analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

In an example embodiment, an extensible solution is provided that identifies purchasing events as large line item events and reroutes requests for operations related to such events to a dedicated content service. The dedicated content service authenticates the request and causes data related to the large line item event to be stored in and/or retrieved from a document database for LLI event processing. The result is that operations for LLI events can be handled much faster than in prior art solutions.

Description

Large line item event processing system
This patent application claims priority to Indian provisional application No. 202211036710, filed on June 27, 2022, the entire contents of which are incorporated herein by reference.
Background
Strategic sourcing may be performed by a company to monitor and evaluate its purchasing strategies. A purchasing strategy may include determining from which entity to purchase an item that needs to be purchased. Strategic sourcing may include supply chain management, vendor development, contract negotiation, and outsourcing evaluation.
Drawings
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Fig. 1 is a diagram illustrating a lifecycle of an LLI event according to an example embodiment.
Fig. 2 is a block diagram illustrating a system for processing LLI events according to an example embodiment.
FIG. 3 is a block diagram illustrating a data model used by a document database in an example embodiment.
Fig. 4 is a screen shot illustrating a graphical user interface for adding a line item to an LLI event according to an example embodiment.
Fig. 5 is a screen shot illustrating a graphical user interface for adding a term to a line item of an LLI event according to an example embodiment.
FIG. 6 is a screen shot illustrating a graphical user interface for adding line item information according to an example embodiment.
FIG. 7 is a screen shot illustrating a graphical user interface for publishing a created event according to an example embodiment.
Fig. 8 is a screen shot illustrating a graphical user interface for viewing reports on data recorded via LLI events according to an example embodiment.
Fig. 9 is a flowchart illustrating a method of processing a request for an operation corresponding to an LLI event, according to an example embodiment.
Fig. 10 is a block diagram illustrating a software architecture that can be installed on any one or more of the devices described above.
FIG. 11 shows a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein.
Detailed Description
The following description discusses illustrative systems, methods, techniques, sequences of instructions, and computer program products. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various example embodiments of the present subject matter. It will be apparent, however, to one skilled in the art that the various example embodiments of the present subject matter may be practiced without these specific details.
An organization may wish to purchase one or more items. For example, the organization may wish to find a supplier that can offer the item(s) at the lowest price. Other factors can also influence the choice of supplier, such as past interactions, overall reputation, lead time, and the like. Strategic sourcing may be the process of attempting to find the best supplier for a set of one or more items that the organization wants to purchase. After the strategic analysis has been performed, a supplier can be selected and awarded the purchasing opportunity.
The input to strategic sourcing may be a purchasing event that includes a list of the line item(s) that an organization wants to purchase. A line item of a procurement event can be defined using a user interface having fixed fields to which values of the line item can be added. The data relating to the purchasing event is then stored in a relational database, where it can later be retrieved for subsequent purchasing events and/or for analysis.
However, technical problems are encountered with purchasing events having large amounts of data. Users expect an event to be processed in 30 seconds or less, but a purchasing event with a large amount of data (e.g., more than 2,000 line items and/or more than 125 participants) cannot be processed within that 30-second limit. This is because the server cache keeps the entire event in memory. Additionally, retrieving procurement event data having a large amount of data is also time consuming. As a result, the fixed-field graphical user interface typically limits input purchasing events to a "standard size," i.e., a purchasing event having fewer than 2,000 line items. Larger procurement events, known as Large Line Item (LLI) events, are handled separately via spreadsheet input and are not stored in the same database as standard event data, which limits reporting and analysis features.
However, there is a risk in changing the existing core procurement application (core sourcing application), because most events (greater than 90%) are standard size rather than LLI size, and the existing core procurement application is fairly stable and efficient for such standard-size events.
In an example embodiment, an extensible solution is provided that identifies purchasing events as large line item events and reroutes requests for operations related to such events to a dedicated content service. The dedicated content service authenticates the request and causes data related to the large line item event to be stored in and/or retrieved from the document database for LLI event processing. The result is that the operation of LLI events can be handled much faster than in prior art solutions.
In sourcing, an event moves through multiple processing steps, beginning with an organization creating the event and ending with the selection of a "winning" supplier. At each stage of the process, the event has a defined state that determines the actions a user can take. Fig. 1 is a diagram illustrating a lifecycle 100 of an LLI event, according to an example embodiment. The creation process 102 involves creating a draft LLI event 104, which can then be published. The monitoring process 106 involves a preview 108 of the bidding process and an open state 110 in which the LLI event is open for bidding. Once the bidding time has ended, the evaluation process 112 takes over and a pending selection 114 of a "winner" is made. Once the bid has been awarded, the event moves to a completed state 116; if the event is cancelled at any point during the monitoring process 106 or the evaluation process 112, it moves to a cancelled state 118.
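By way of a non-limiting illustration, the following sketch models the lifecycle just described as a small state machine; the enum values and method names are hypothetical and are not part of the claimed system.

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical model of the LLI event lifecycle shown in FIG. 1. */
enum EventState { DRAFT, PREVIEW, OPEN, PENDING_SELECTION, COMPLETED, CANCELLED }

final class EventLifecycle {
    // Allowed forward transitions; cancellation is possible during monitoring and evaluation.
    private static final Map<EventState, Set<EventState>> TRANSITIONS = Map.of(
        EventState.DRAFT, EnumSet.of(EventState.PREVIEW),
        EventState.PREVIEW, EnumSet.of(EventState.OPEN, EventState.CANCELLED),
        EventState.OPEN, EnumSet.of(EventState.PENDING_SELECTION, EventState.CANCELLED),
        EventState.PENDING_SELECTION, EnumSet.of(EventState.COMPLETED, EventState.CANCELLED),
        EventState.COMPLETED, EnumSet.noneOf(EventState.class),
        EventState.CANCELLED, EnumSet.noneOf(EventState.class));

    static boolean canTransition(EventState from, EventState to) {
        return TRANSITIONS.getOrDefault(from, Set.of()).contains(to);
    }
}
```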
At each of these event lifecycle stages there is functionality that has to handle large amounts of data within defined Service Level Agreements (SLAs). For example, the expected SLA for a content upload with 20,000 line items may be 30 seconds, and the expected SLA for a bid upload with 20,000 line items may be 30 seconds. Report generation with 20 supplier bids may have an expected SLA of 2 minutes, while report generation with 200 supplier bids may have an expected SLA of 10 minutes.
Fig. 2 is a block diagram illustrating a system 200 for processing LLI events according to an example embodiment. Graphical user interface 202 may allow a user to enter fixed line items for a purchasing event and/or to request a report regarding the purchasing event. It should be noted that the graphical user interface 202 described herein may be a server-based graphical user interface that communicates with a client device running a client-side portion of the procurement application 204 to perform the various requested graphical user interface operations. This is not mandatory, however. In other example embodiments, the graphical user interface 202 may be client-based and thus may be contained within, for example, the client-side portion of the procurement application 204.
The user can create a purchasing event using the procurement application 204. The procurement event can be an auction request, a request for information (RFI), a request for proposal, or a request for quotation, which can be collectively referred to as an RFx (i.e., a request for "X"). The procurement event can be sent from the graphical user interface 202 to the application core 206.
Application core 206 may receive event requests from the graphical user interface 202. Notably, an event request can be a request to provide or utilize data from any event lifecycle stage (such as the stages described above with respect to fig. 1). It is also worth noting that the event request may correspond to a standard event or to an LLI event. In an example embodiment, a standard event is defined as an event involving 2,000 or fewer line items, while an LLI event is defined as an event involving more than 2,000 line items. However, the threshold can vary depending on the implementation. For purposes of this disclosure, it is sufficient for the application core 206 to know how to distinguish between standard events and LLI events, because LLI event processing is abstracted away from the integrated components into a microservice-based stack backed by a document database.
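The routing decision described in this paragraph can be sketched as follows. The 2,000-line-item threshold is the example value given above; the class, record, and interface names are illustrative assumptions only.

```java
/** Hypothetical router inside the application core that separates standard and LLI requests. */
final class EventRequestRouter {
    private static final int LLI_THRESHOLD = 2_000; // example threshold from the description

    private final StandardEventHandler standardHandler; // backed by the relational database
    private final ContentServiceClient contentService;  // microservice stack backed by the document database

    EventRequestRouter(StandardEventHandler standardHandler, ContentServiceClient contentService) {
        this.standardHandler = standardHandler;
        this.contentService = contentService;
    }

    Object route(EventRequest request) {
        // Requests for events above the threshold are rerouted to the dedicated content service.
        if (request.lineItemCount() > LLI_THRESHOLD) {
            return contentService.handle(request);
        }
        return standardHandler.handle(request);
    }
}

record EventRequest(String eventId, int lineItemCount, String operation) {}
interface StandardEventHandler { Object handle(EventRequest request); }
interface ContentServiceClient { Object handle(EventRequest request); }
```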
In an example embodiment, the application core 206 may be S/4HANA™ Cloud from SAP SE of Frankfurt, Germany. S/4HANA™ is modular cloud enterprise resource planning (ERP) software. An in-memory database (also referred to as an in-memory database management system) is a database management system that relies primarily on main memory for computer data storage, in contrast to database management systems that employ disk storage mechanisms. In-memory databases are traditionally faster than disk-based databases because disk access is slower than memory access.
The application core 206 determines whether the event request corresponds to an LLI event or to a standard event. If it corresponds to an LLI event, the request is rerouted to the content service 208, which is a microservice-based stack backed by the document database. More specifically, a tenant authentication component 210 within the content service 208 authenticates the tenant of the LLI request by sending an OAuth token to an OAuth service 212 for validation. Once the token is validated, domain information is sent from the OAuth service 212 to the tenant authentication component 210, which then uses the domain information to obtain the tenant identification.
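A minimal sketch of the tenant-authentication flow just described is shown below, assuming a token-introspection style OAuth endpoint; the endpoint, response handling, and tenant-lookup logic are placeholders rather than the actual contract of the OAuth service 212.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical tenant authentication component (210): validates an OAuth token and derives a tenant id. */
final class TenantAuthenticator {
    private final HttpClient http = HttpClient.newHttpClient();
    private final URI oauthValidationUri; // assumed endpoint of the OAuth service (212)

    TenantAuthenticator(URI oauthValidationUri) {
        this.oauthValidationUri = oauthValidationUri;
    }

    /** Sends the token to the OAuth service and maps the returned domain to a tenant identifier. */
    String authenticate(String oauthToken) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(oauthValidationUri)
            .header("Authorization", "Bearer " + oauthToken)
            .GET()
            .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new SecurityException("Token validation failed: " + response.statusCode());
        }
        String domain = extractDomain(response.body()); // domain information returned by the OAuth service
        return lookupTenantId(domain);                   // tenant identification derived from the domain
    }

    private String extractDomain(String responseBody) { return responseBody.trim(); } // JSON parsing omitted
    private String lookupTenantId(String domain) { return "tenant-" + domain.hashCode(); } // placeholder mapping
}
```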
The tenant authentication component 210 then indicates that the LLI request has been validated, and the content storage component 213 forwards the LLI request to a representational state transfer (REST) application program interface (API) 214, which then uses a tenant-specific in-memory database instance 216 (identified using the tenant identification) to connect to a tenant profile (tenant schema) 218. The tenant schema 218 identifies content/bid data in the document database 220 and metadata in the relational database 222. This allows the REST API 214 to use a combination of content/bid data and metadata to perform any operations necessary to satisfy the LLI request. These operations may include create, read, update, and delete (CRUD) operations.
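The way the REST API 214 might combine the two data sources identified by the tenant schema 218 can be sketched as follows; the DocumentStore and MetadataStore interfaces stand in for the document database 220 and the relational database 222, and all names are hypothetical.

```java
import java.util.List;
import java.util.Map;

/** Hypothetical REST-facing handler that resolves a tenant schema and performs CRUD operations. */
final class LliContentApi {
    interface DocumentStore {                         // stands in for the document database (220)
        void upsert(String collection, Map<String, Object> document);
        List<Map<String, Object>> find(String collection, String eventId);
    }
    interface MetadataStore {                         // stands in for the relational database (222)
        Map<String, Object> metadataFor(String eventId);
    }
    record TenantSchema(DocumentStore content, MetadataStore metadata) {}

    private final Map<String, TenantSchema> schemasByTenant; // resolved via the tenant-specific instance (216)

    LliContentApi(Map<String, TenantSchema> schemasByTenant) {
        this.schemasByTenant = schemasByTenant;
    }

    /** Read operation: merges line-item/bid documents with relational metadata for the event. */
    Map<String, Object> readEvent(String tenantId, String eventId) {
        TenantSchema schema = schemasByTenant.get(tenantId);
        return Map.of(
            "metadata", schema.metadata().metadataFor(eventId),
            "lineItems", schema.content().find("eventItems", eventId));
    }

    /** Create/update operation: writes a line-item document into a collection. */
    void upsertLineItem(String tenantId, Map<String, Object> lineItemDocument) {
        schemasByTenant.get(tenantId).content().upsert("eventItems", lineItemDocument);
    }
}
```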
Notably, the line item data itself is stored in the document database 220. Two types of information are provided for the line item of an event: one from the purchasing organization and the other from the supplier organization that provides bids for the line items. Thus, the data is semi-structured in nature and varies according to the terms provided by the purchasing organization and the bid amount provided by the supplier. Because the large amount of data in LLI events requires faster reads and writes, the document database 220 is used for such storage rather than relying on the relational database 222.
The document database is a non-relational database that stores information as documents (as opposed to tables, as in relational databases). A document typically stores information about one object and any associated metadata, and is held as a record in the document database 220. A document stores its data in field-value pairs, and the values can be of various types and structures, including strings, numbers, dates, arrays, or objects. The documents can be stored in various formats, but in one example embodiment the documents are stored as JavaScript™ Object Notation (JSON) files. Documents may be grouped into collections in the document database 220.
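For concreteness, the sketch below shows what a single line-item document might look like when serialized as JSON; every field name and value is an illustrative assumption rather than the actual document schema.

```java
/** Illustrative shape of a line-item document for an LLI event; field names are hypothetical. */
final class ExampleLineItemDocument {
    static final String EXAMPLE_JSON = """
        {
          "eventId": "RFQ-2023-0042",
          "lineItemNumber": 1017,
          "description": "Stainless steel bolts, M8 x 40",
          "terms": {
            "quantity": 50000,
            "unitOfMeasure": "EA",
            "initialPrice": 0.12
          },
          "bids": [
            { "supplierId": "SUP-88", "price": 0.11, "leadTimeDays": 14 }
          ]
        }
        """;
}
```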
If report generation is required as part of the LLI request, a report generation component can generate the report. In addition, the content service 208 can communicate data retrieved via LLI requests to one or more other applications, such as an Optimization Workbench (OWB) 224, which is capable of performing bid analysis and optimization scenarios.
FIG. 3 is a block diagram illustrating a data model 300 used by the document database 220 in an example embodiment. Here, an event commit event 301 may use domain information 302 and event metadata 304 from the tables and create a document for each event entry in collection 306. An event publication event 308 may use payload information 310 and spreadsheet content data 312 to create a document for each event item in collection 314, for each event participant in collection 316, and for each event item value in collection 318. A bid submission event 320 can use spreadsheet bid data 322 to create a document for each event item value in collection 318. A content bid generation event 324 uses spreadsheet content generation data 326 and spreadsheet bid generation data 328 to create a document for each event item value in collection 318. Finally, a move-to-pending-selection event 330 can use spreadsheet bid ranking data 332 and event report data 334 to create a document for each event summary item value in collection 336.
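One way to picture the collections of data model 300 is as typed document shapes, as in the hypothetical sketch below; the field lists are illustrative and intentionally abbreviated.

```java
import java.time.Instant;

// Hypothetical document shapes mirroring the collections in FIG. 3; not the actual schema.
record EventEntry(String eventId, String domain, Instant submittedAt) {}                    // collection 306
record EventItem(String eventId, int lineItemNumber, String description) {}                 // collection 314
record EventParticipant(String eventId, String supplierId) {}                               // collection 316
record EventItemValue(String eventId, int lineItemNumber, String supplierId, double bid) {} // collection 318
record EventSummaryItemValue(String eventId, int lineItemNumber, int bidRank) {}            // collection 336
```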
When an LLI event is received, it can be in the form of a large spreadsheet in which each row represents a line item. A concurrency-handling framework (such as Akka™) may be used to perform repeated tasks concurrently (in parallel), such as validating each spreadsheet row, processing each spreadsheet row, and converting each spreadsheet row into a JSON document so that it can be saved in the document database. Akka™ is a toolkit and runtime that simplifies the construction of concurrent and distributed applications on the Java virtual machine. It focuses on actor-based concurrency, in which the actor is treated as the universal primitive of concurrent computation. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors can also modify their own private state, but can only affect each other indirectly through messaging (eliminating the need for lock-based synchronization).
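The description names Akka™ as one suitable concurrency framework. The sketch below uses the standard java.util.concurrent executor as a simplified stand-in to show the same idea: validating, converting, and saving spreadsheet rows in parallel. All class and method names are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Simplified stand-in for actor-based row processing: each spreadsheet row is handled in parallel. */
final class SpreadsheetRowProcessor {
    interface DocumentWriter { void save(String collection, Map<String, Object> jsonDocument); }

    private final ExecutorService pool = Executors.newFixedThreadPool(8); // several concurrent workers
    private final DocumentWriter writer;

    SpreadsheetRowProcessor(DocumentWriter writer) { this.writer = writer; }

    void process(List<Map<String, Object>> rows) throws InterruptedException {
        for (Map<String, Object> row : rows) {
            pool.submit(() -> {
                if (isValid(row)) {                          // 1. validate the row
                    Map<String, Object> doc = toDocument(row); // 2. convert it to a JSON-like document
                    writer.save("eventItems", doc);          // 3. save it in the document database
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }

    private boolean isValid(Map<String, Object> row) { return row.containsKey("lineItemNumber"); }
    private Map<String, Object> toDocument(Map<String, Object> row) { return Map.copyOf(row); }
}
```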
In an example embodiment, several concurrent threads are used in a round-robin fashion to implement the functions described above.
In an example embodiment, communications between the various components are performed asynchronously, rather than synchronously, via a message broker.
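A minimal sketch of this asynchronous pattern follows, using an in-process queue as a stand-in for a real message broker; in the described system an actual broker would sit between the components, and these names are assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** In-process stand-in for broker-mediated, asynchronous communication between components. */
final class AsyncChannel {
    record Message(String topic, String payload) {}

    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

    void publish(Message message) {                 // producer returns immediately (asynchronous)
        queue.offer(message);
    }

    void startConsumer(java.util.function.Consumer<Message> handler) {
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    handler.accept(queue.take());   // consumer processes messages as they arrive
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }
}
```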
As described above, the graphical user interface 202 may be implemented on the server side. For further efficiency, the graphical user interface 202 may utilize cursor-based paging. Here, line items lie along the vertical axis and bid values lie along the horizontal axis, and the user can scroll vertically or horizontally using a cursor.
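Cursor-based paging of this kind can be sketched as shown below; the page size, cursor encoding, and method names are illustrative assumptions rather than the actual implementation.

```java
import java.util.List;

/** Hypothetical cursor-based page of line items for the server-side grid. */
final class LineItemPager {
    record Page<T>(List<T> items, String nextCursor) {}

    private static final int PAGE_SIZE = 100;
    private final List<String> allLineItems; // stands in for a document-database query result

    LineItemPager(List<String> allLineItems) { this.allLineItems = allLineItems; }

    /** The cursor is simply the index of the next row; an opaque token would be used in practice. */
    Page<String> nextPage(String cursor) {
        int start = (cursor == null) ? 0 : Integer.parseInt(cursor);
        int end = Math.min(start + PAGE_SIZE, allLineItems.size());
        String next = (end < allLineItems.size()) ? Integer.toString(end) : null;
        return new Page<>(allLineItems.subList(start, end), next);
    }
}
```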
Fig. 4 is a screen shot illustrating a graphical user interface 400 for adding a line item to an LLI event according to an example embodiment. Here, the graphical user interface 400 provides the user with the ability to enter a line item by selecting the "Add" button 402. In addition, a term may be added to a line item by selecting the "Add term" button 404.
Fig. 5 is a screen shot illustrating a graphical user interface 500 for adding a term to a line item of an LLI event according to an example embodiment. Here, the graphical user interface 500 provides the user with the ability to enter the name of the term in name field 502, as well as the ability to specify various aspects of the term using drop-downs 504, 506, 508 and buttons 510, 512, 514, 516, 518, 520, 522, and 524.
Fig. 6 is a screen shot illustrating a graphical user interface 600 for adding line item information according to an example embodiment. Here, the graphical user interface 600 provides the user with the ability to enter the name of a line item in name field 602, as well as a description 604, a commodity 606, and a region 608. Further, check boxes 610 and 612 allow the user to indicate whether a response is required and whether the line item is visible to participants, respectively. In addition, initial values for various terms may be entered in fields 614, 616, and 618.
FIG. 7 is a screen shot illustrating a graphical user interface 700 for publishing a created event according to an example embodiment. Here, the graphical user interface 700 allows a user to publish a created LLI event by pressing the publish button 702.
Fig. 8 is a screen shot illustrating a graphical user interface 800 for viewing reports on data recorded via LLI events, according to an example embodiment. Here, the report may include a chart 802 of the supplier participants, an indication 804 of how much time is left in the event (here, the event is a bidding event), and an indication 806 of total item coverage. Other metrics may also be displayed in the graphical user interface 800 but are not shown here for simplicity. Notably, all of this information can be retrieved from standard events (stored in the relational database) and/or LLI events (stored in the document database). Thus, the graphical user interface 800 is essentially an integration with existing graphical user interfaces that previously allowed reporting only from standard events (stored in the relational database) and not from LLI events.
Fig. 9 is a flowchart illustrating a method 900 of processing a request for an operation corresponding to an LLI event, according to an example embodiment. At operation 902, a request for an operation of a Large Line Item (LLI) event is received from an application core, such as a procurement application core. The request has been rerouted from the application core based on a determination that the request corresponds to an LLI event, in accordance with a threshold defined for large line item events at the application core. The threshold may specify a particular number of line items for an event, such as 2,000, beyond which the event is considered an LLI event rather than a standard event.
At operation 904, an authorization token corresponding to the request for the operation of the LLI event is sent to an authorization service. At operation 906, domain information is received from the authorization service. The domain information may include a tenant identification. At operation 908, the tenant identification is used to access the REST API to obtain a tenant profile in an instance of the in-memory database corresponding to the tenant identification. The in-memory database comprises a non-relational document database and a relational database.
At operation 910, an operation is performed on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database. At operation 912, the results of the operation are received from the non-relational document database. At operation 914, the results are sent to the application core for reporting to the user via the graphical user interface.
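Putting the operations of method 900 together, a condensed and purely illustrative walk-through of the request path might look like the sketch below, which reuses the hypothetical TenantAuthenticator and LliContentApi classes sketched earlier and is not the actual implementation.

```java
import java.util.Map;

/** Condensed, hypothetical walk-through of operations 902 through 914. */
final class LliRequestFlow {
    private final TenantAuthenticator authenticator; // covers operations 904-906
    private final LliContentApi contentApi;          // covers operations 908-912

    LliRequestFlow(TenantAuthenticator authenticator, LliContentApi contentApi) {
        this.authenticator = authenticator;
        this.contentApi = contentApi;
    }

    /** Handles a rerouted LLI request (operation 902) and returns the result to the caller (operation 914). */
    Map<String, Object> handle(String oauthToken, String eventId) throws Exception {
        String tenantId = authenticator.authenticate(oauthToken); // token -> domain -> tenant identification
        return contentApi.readEvent(tenantId, eventId);           // CRUD against document + relational data
    }
}
```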
In view of the above-described embodiments of the subject matter, the present application discloses the following list of examples, wherein one feature of an example taken alone, or a combination of more than one feature of the example, optionally combined with one or more features of one or more further examples, constitutes a further example that also falls within the disclosure of this application:
example 1. A system, comprising:
At least one hardware processor; and
a computer-readable medium storing instructions that, when executed by at least one hardware processor, cause the at least one hardware processor to perform operations comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
performing an operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of the operation from the non-relational document database; and
the results are sent to the application core for reporting to the user via the graphical user interface.
Example 2 the system of example 1, wherein the threshold indicates a number of line items in the procurement event.
Example 3 the system of example 2, wherein the procurement event includes a plurality of line items, each line item having one or more terms defined by the procurement organization, and having a field for each term for completion by the bid organization.
Example 4 the system of any one of examples 1-3, wherein the operations further comprise:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from an authorization service.
Example 5 the system of example 4, wherein the domain information includes a tenant identification, and wherein the operations further comprise:
a representational state transfer (REST) Application Program Interface (API) is accessed using the tenant identity to obtain a tenant profile in an instance of the in-memory database corresponding to the tenant identity.
Example 6 the system of example 5, wherein the in-memory database comprises a non-relational document database and a relational database for storing metadata about LLI events.
Example 7 the system of any of examples 1-6, wherein performing the operation comprises performing the operation on multiple row items within the LLI event simultaneously via multiple concurrent threads.
Example 8. A method, comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
Performing the operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of the operation from the non-relational document database; and
the results are sent to the application core for reporting to the user via the graphical user interface.
Example 9. The method of example 8, wherein the threshold indicates a number of line items in the procurement event.
Example 10 the method of example 9, wherein the procurement event includes a plurality of line items, each line item having one or more terms defined by the procurement organization, and having a field for each term for completion by the bid organization.
Example 11. The method of any of examples 8-10, further comprising:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from an authorization service.
Example 12 the method of example 11, wherein the domain information includes a tenant identity, and wherein the method further comprises:
a representational state transfer (REST) Application Program Interface (API) is accessed using the tenant identity to obtain a tenant profile in an instance of the in-memory database corresponding to the tenant identity.
Example 13 the method of example 12, wherein the in-memory database comprises a non-relational document database and a relational database for storing metadata about LLI events.
Example 14 the method of any of examples 8-13, wherein performing the operation includes performing the operation on multiple row items within the LLI event simultaneously via multiple concurrent threads.
Example 15. A non-transitory machine-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
performing an operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of the operation from the non-relational document database; and
the results are sent to the application core for reporting to the user via the graphical user interface.
Example 16. The non-transitory machine-readable medium of example 15, wherein the threshold indicates a number of line items in the procurement event.
Example 17 the non-transitory machine-readable medium of example 16, wherein the procurement event comprises a plurality of line items, each line item having one or more terms defined by the procurement organization, and having a field for each term for completion by the bidding organization.
Example 18 the non-transitory machine-readable medium of any one of examples 15-17, wherein the operations further comprise:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from an authorization service.
Example 19 the non-transitory machine-readable medium of example 18, wherein the domain information includes a tenant identification, and wherein the operations further comprise:
a representational state transfer (REST) Application Program Interface (API) is accessed using the tenant identity to obtain a tenant profile in an instance of the in-memory database corresponding to the tenant identity.
Example 20. The non-transitory machine-readable medium of example 19, wherein the in-memory database comprises a non-relational document database and a relational database to store metadata about LLI events.
Fig. 10 is a block diagram 1000 illustrating a software architecture 1002, the software architecture 1002 being capable of being installed on any one or more of the devices described above. Fig. 10 is merely a non-limiting example of a software architecture, and it should be understood that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture 1002 is implemented by hardware, such as the machine 1100 of fig. 11, the machine 1100 comprising a processor 1110, a memory 1130, and input/output (I/O) components 1150. In this example architecture, the software architecture 1002 of fig. 10 can be conceptualized as a stack of layers, each of which may provide specific functionality. For example, the software architecture 1002 includes layers such as an operating system 1004, libraries 1006, frameworks 1008, and applications 1010. Operationally, consistent with some embodiments, application 1010 calls Application Program Interface (API) call 1012 through a software stack and receives message 1014 in response to API call 1012.
In various embodiments, the operating system 1004 manages hardware resources and provides common services. The operating system 1004 includes, for example, a kernel 1020, services 1022, and drivers 1024. Consistent with some embodiments, the kernel 1020 acts as an abstraction layer between the hardware and the other software layers. For example, the kernel 1020 provides memory management, processor management (e.g., scheduling), component management, networking and security settings, and other functionality. The services 1022 can provide other common services for the other software layers. The drivers 1024 are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 1024 may include display drivers, camera drivers, Bluetooth® or Bluetooth® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth.
In some embodiments, the libraries 1006 provide a low-level common infrastructure utilized by the applications 1010. The libraries 1006 can include system libraries 1030 (e.g., a C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 1006 can include API libraries 1032 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), MPEG Audio Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 1006 can also include a wide variety of other libraries 1034 to provide many other APIs to the applications 1010.
Framework 1008 provides a high-level public infrastructure that can be utilized by applications 1010. For example, the framework 1008 provides various Graphical User Interface (GUI) functions, advanced resource management, advanced location services, and the like. The framework 1008 can provide a wide variety of other APIs that can be utilized by the application 1010, some of which can be specific to a particular operating system 1004 or platform.
In an example embodiment, the applications 1010 include a home application 1050, a contacts application 1052, a browser application 1054, a book reader application 1056, a location application 1058, a media application 1060, a messaging application 1062, a gaming application 1064, and a broad assortment of other applications, such as a third-party application 1066. The applications 1010 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 1010, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 1066 (e.g., an application developed using an ANDROID™ or IOS™ Software Development Kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 1066 can invoke the API calls 1012 provided by the operating system 1004 to facilitate the functionality described herein.
FIG. 11 shows a pictorial representation of a machine 1100 in the form of a computer system in which a set of instructions may be executed to cause the machine 1100 to perform any one or more of the methods discussed herein. In particular, FIG. 11 shows a pictorial representation of a machine 1100 in the example form of a computer system in which instructions 1116 (e.g., software, programs, applications, applets, apps, or other executable code) for causing the machine 1100 to perform any one or more of the methods discussed herein may be executed. For example, instructions 1116 may cause machine 1100 to perform the method of fig. 9. Additionally or alternatively, instructions 1116 may implement fig. 1-9, etc. The instructions 1116 transform the generic, un-programmed machine 1100 into a specific machine 1100 programmed to perform the functions described and illustrated in the manner described. In alternative embodiments, machine 1100 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1100 may operate in the capacity of a server machine or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. Machine 1100 may include, but is not limited to, a server computer, a client computer, a Personal Computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a Personal Digital Assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart home appliance), other smart devices, a web device, a network router, a network switch, a bridge, or any machine capable of sequentially or otherwise executing instructions 1116 that specify actions to be taken by machine 1100. Furthermore, while only a single machine 1100 is illustrated, the term "machine" shall also be taken to include a collection of machines 1100 that individually or jointly execute instructions 1116 to perform any one or more of the methodologies discussed herein.
Machine 1100 may include a processor 1110, memory 1130, and I/O components 1150, which may be configured to communicate with each other, such as via bus 1102. In example embodiments, the processor 1110 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1112 and a processor 1114 that may execute instructions 1116. The term "processor" is intended to include a multi-core processor, which may include two or more separate processors (sometimes referred to as "cores") that may concurrently execute instructions 1116. Although fig. 11 shows multiple processors 1110, machine 1100 may include a single processor 1112 with a single core, a single processor 1112 with multiple cores (e.g., multi-core processor 1112), multiple processors 1112, 1114 with a single core, multiple processors 1112, 1114 with multiple cores, or any combination thereof.
Memory 1130 may include a main memory 1132, a static memory 1134, and a storage unit 1136, each accessible by processor 1110, such as via bus 1102. The main memory 1132, static memory 1134, and storage unit 1136 store instructions 1116 embodying any one or more of the methodologies or functions described herein. The instructions 1116 may also reside, completely or partially, within the main memory 1132, within the static memory 1134, within the storage unit 1136, within at least one of the processors 1110 (e.g., within a cache memory of a processor), or within any suitable combination thereof, during execution thereof by the machine 1100.
The I/O component 1150 can include a variety of components for receiving input, providing output, generating output, sending information, exchanging information, capturing measurements, and the like. The particular I/O components 1150 included in a particular machine will depend on the type of machine. For example, a portable machine such as a mobile phone will likely include a touch input device or other such input mechanism, while a no-peripheral server machine will likely not include such a touch input device. It should be appreciated that the I/O component 1150 may include many other components not shown in FIG. 11. The I/O components 1150 are grouped according to functionality, for simplicity of discussion below only, and the grouping is in no way limiting. In various example embodiments, the I/O components 1150 may include output components 1152 and input components 1154. The output component 1152 may include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., vibration motors, resistance mechanisms), other signal generators, and so forth. The input components 1154 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an optoelectronic keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, touchpad, trackball, joystick, motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen providing location and/or force of a touch or touch gesture, or other tactile input components), audio input components (e.g., a microphone), and the like.
In further example embodiments, the I/O components 1150 may include a biometric component 1156, a motion component 1158, an environment component 1160, or a location component 1162, among various other components. For example, the biometric component 1156 may include components for detecting expressions (e.g., hand expressions, facial expressions, voice expressions, body gestures, or eye tracking), measuring biological signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition), and the like. The motion components 1158 may include acceleration sensor components (e.g., accelerometers), gravity sensor components, rotation sensor components (e.g., gyroscopes), and so forth. The environmental components 1160 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors that detect hazardous gas concentrations or measure contaminants in the atmosphere for safety), or other components that may provide an indication, measurement, or signal corresponding to the surrounding physical environment. The location component 1162 can include a location sensor component (e.g., a Global Positioning System (GPS) receiver component), an altitude sensor component (e.g., an altimeter or barometer that detects air pressure from which altitude can be derived), an orientation sensor component (e.g., a magnetometer), and so forth.
Communication may be implemented using a wide variety of technologies. The I/O components 1150 may include communication components 1164 operable to couple the machine 1100 to a network 1180 or devices 1170 via a coupling 1182 and a coupling 1172, respectively. For example, the communication components 1164 may include a network interface component or another suitable device to interface with the network 1180. In further examples, the communication components 1164 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 1170 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB).
Moreover, the communication components 1164 may detect identifiers or include components operable to detect identifiers. For example, the communication components 1164 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar codes, multi-dimensional bar codes such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar codes, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 1164, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (i.e., 1130, 1132, 1134 and/or memories of the processors 1110) and/or storage units 1136 may store one or more sets of instructions 1116 and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., instructions 1116), when executed by processor 1110 (or multiple processors), cause various operations to implement the disclosed embodiments.
As used herein, the terms "machine storage medium," "device storage medium," and "computer storage medium" mean the same thing and may be used interchangeably. These terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the executable instructions and/or data. Accordingly, these terms should be considered to include, but are not limited to, solid-state memory as well as optical and magnetic media, including memory internal or external to the processor. Specific examples of machine, computer, and/or device storage media include non-volatile memory, including by way of example semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate array (FPGA), and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disk; CD-ROM and DVD-ROM discs. The terms "machine storage medium," "computer storage medium," and "device storage medium" specifically exclude carrier waves, modulated data signals, and other such medium, at least some of which are covered by the term "signal medium" as discussed below.
In various example embodiments, one or more portions of the network 1180 may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 1180 or a portion of the network 1180 may include a wireless or cellular network, and the coupling 1182 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 1182 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third generation partnership project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, other standards defined by various standard-setting organizations, other long-range protocols, or other data transfer technologies.
The instructions 1116 may be transmitted or received over the network 1180 using a transmission medium via a network interface device (e.g., a network interface component included in the communication component 1164) and utilizing any of several well-known transmission protocols (e.g., HTTP). Similarly, instructions 1116 may be transmitted or received to device 1170 via coupling 1172 (e.g., a peer-to-peer coupling) using a transmission medium. The terms "transmission medium" and "signal medium" mean the same thing and may be used interchangeably in this disclosure. The terms "transmission medium" and "signal medium" should be taken to include any intangible medium capable of storing, encoding or carrying instructions 1116 for execution by machine 1100, and include digital or analog communication signals or other intangible medium to facilitate communication of such software. Accordingly, the terms "transmission medium" and "signal medium" should be construed to include any form of modulated data signal, carrier wave, or the like. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The terms "machine-readable medium," "computer-readable medium," and "device-readable medium" mean the same thing and may be used interchangeably in this disclosure. These terms are defined to include both machine storage media and transmission media. Thus, the term includes both storage devices/media and carrier wave/modulated data signals.

Claims (20)

1. A system, comprising:
at least one hardware processor; and
a computer-readable medium storing instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
performing an operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of an operation from the non-relational document database; and
the results are sent to the application core for reporting to a user via a graphical user interface.
2. The system of claim 1, wherein the threshold indicates a number of line items in a procurement event.
3. The system of claim 2, wherein the procurement event comprises a plurality of line items, each line item having one or more terms defined by a procurement organization, and having a field for each term for completion by a bidding organization.
4. The system of claim 1, wherein the operations further comprise:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from the authorization service.
5. The system of claim 4, wherein the domain information comprises a tenant identification, and wherein the operations further comprise:
using the tenant identity to access a representational state transfer (REST) Application Program Interface (API) to obtain a tenant profile in an instance of an in-memory database corresponding to the tenant identity.
6. The system of claim 5, wherein the in-memory database comprises a non-relational document database and a relational database for storing metadata about LLI events.
7. The system of claim 1, wherein performing an operation comprises performing an operation on multiple row items within the LLI event simultaneously via multiple concurrent threads.
8. A method, comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
Performing the operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of the operation from the non-relational document database; and
the results are sent to the application core for reporting to a user via a graphical user interface.
9. The method of claim 8, wherein the threshold indicates a number of line items in a procurement event.
10. The method of claim 9, wherein the procurement event comprises a plurality of line items, each line item having one or more terms defined by a procurement organization, and having a field for each term for completion by a bidding organization.
11. The method of claim 8, further comprising:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from the authorization service.
12. The method of claim 11, wherein the domain information comprises a tenant identity, and wherein the method further comprises:
using the tenant identity to access a representational state transfer (REST) Application Program Interface (API) to obtain a tenant profile in an instance of an in-memory database corresponding to the tenant identity.
13. The method of claim 12, wherein the in-memory database comprises a non-relational document database and a relational database for storing metadata about LLI events.
14. The method of claim 8, wherein performing an operation comprises performing an operation on multiple row items within the LLI event simultaneously via multiple concurrent threads.
15. A non-transitory machine-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
receiving a request for an operation of a Large Line Item (LLI) event from an application core, wherein the request has been rerouted from the application core based on a determination that the request for the operation of the event corresponds to the LLI event in accordance with a threshold defined for the large line item event at the application core;
performing an operation on one or more documents corresponding to the LLI event, the one or more documents stored in a collection in a non-relational document database;
receiving a result of an operation from the non-relational document database; and
the results are sent to the application core for reporting to a user via a graphical user interface.
16. The non-transitory machine-readable medium of claim 15, wherein the threshold indicates a number of line items in a procurement event.
17. The non-transitory machine-readable medium of claim 16, wherein the procurement event comprises a plurality of line items, each line item having one or more terms defined by a procurement organization, and having a field for each term for completion by a bidding organization.
18. The non-transitory machine-readable medium of claim 15, wherein the operations further comprise:
transmitting an authorization token corresponding to a request for operation of the LLI event to an authorization service; and
domain information is received from the authorization service.
19. The non-transitory machine-readable medium of claim 18, wherein the domain information comprises a tenant identification, and wherein the operations further comprise:
using the tenant identity to access a representational state transfer (REST) Application Program Interface (API) to obtain a tenant profile in an instance of an in-memory database corresponding to the tenant identity.
20. The non-transitory machine-readable medium of claim 19, wherein the in-memory database comprises a non-relational document database and a relational database for storing metadata about LLI events.
CN202310609579.0A 2022-06-27 2023-05-26 Large-scale line project event processing system Pending CN117314033A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IN202211036710 2022-06-27
US17/886,733 US20230419251A1 (en) 2022-06-27 2022-08-12 Large line item event processing system
US17/886,733 2022-08-12

Publications (1)

Publication Number Publication Date
CN117314033A true CN117314033A (en) 2023-12-29

Family

ID=89272539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310609579.0A Pending CN117314033A (en) 2022-06-27 2023-05-26 Large-scale line project event processing system

Country Status (1)

Country Link
CN (1) CN117314033A (en)

Similar Documents

Publication Publication Date Title
US11507884B2 (en) Embedded machine learning
US11792733B2 (en) Battery charge aware communications
US20170337612A1 (en) Real-time recommendation of entities by projection and comparison in vector spaces
US9794322B2 (en) Web barcode scanner
KR20230078785A (en) Analysis of augmented reality content item usage data
CN112418976B (en) Method and system for redirecting to trusted device
US20190050920A1 (en) Dynamic group purchase flows using authorized temporal payment tokens
US11386485B2 (en) Capture device based confidence indicator
US20220114631A1 (en) Social network initiated listings
US10776177B2 (en) Optimization of parallel processing using waterfall representations
US10915851B2 (en) Generating a unified graphical user interface view from disparate sources
US11144943B2 (en) Draft completion system
CN111512599B (en) Adding images to draft documents via MMS
US11222376B2 (en) Instant offer distribution system
US20220180422A1 (en) Multi-dimensional commerce platform
CN117314033A (en) Large-scale line project event processing system
EP4300388A1 (en) Large line item event processing system
US20230419251A1 (en) Large line item event processing system
US20160364741A1 (en) Rewarding trusted persons based on a product purchase
CN112352257B (en) Instant quotation distribution system
KR20240154095A (en) Collaborative public user profile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination