CN114756227A - Resource release processing method and device - Google Patents


Info

Publication number
CN114756227A
Authority
CN
China
Prior art keywords
task
data
action
resource
task data
Prior art date
Legal status
Pending
Application number
CN202210390581.9A
Other languages
Chinese (zh)
Inventor
王昌亮
马帅
Current Assignee
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202210390581.9A priority Critical patent/CN114756227A/en
Publication of CN114756227A publication Critical patent/CN114756227A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/36 Software reuse
    • G06F 8/60 Software deployment
    • G06F 8/65 Updates
    • G06F 8/70 Software maintenance or management
    • G06F 8/71 Version control; Configuration management
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4488 Object-oriented
    • G06F 9/4492 Inheritance
    • G06F 9/4498 Finite state machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a resource release processing method and device, relating to the field of computer technology. One specific implementation of the method comprises: in response to a resource release instruction, selecting a rule component and an action component to pull the corresponding configuration data from a cache cluster; calling an execution engine, assembling the configuration data based on the rule component and the action component through a finite state machine, generating task data, and storing the task data in a task list; and receiving a resource request, acquiring a parameter field of the resource request, querying the task list according to the parameter field, and acquiring and sending the corresponding task data so that a client updates the resources of a running application program. The method and device thus address the lack of a complete, general release-management scheme for resources in an application program (APP).

Description

Resource release processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing resource release.
Background
At present, an application program (APP) may have both a hot-repair function and a dynamization function, which usually requires developing and maintaining two sets of interface services with similar capabilities. Moreover, if application A already provides a hot-repair service but application B does not, application B must re-develop an identical set of hot-repair services. In addition, an APP typically needs to verify a resource file through a gray (canary) release before performing a full release, yet at present only a single-purpose interface service with specific rule-filtering conditions can be developed for each such requirement.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
for similar resource release functions within the same APP, repeated development and maintenance are costly, and there is no standard life-cycle model or unified specification for resource release. Across different APPs, the same basic functions cannot be directly reused for a quick response, common capabilities are not consolidated, and they are not organized into modules and components for unified management and use.
Disclosure of Invention
In view of this, embodiments of the present invention provide a resource release processing method and apparatus, which can solve the problem in the prior art that there is no complete, general release-management scheme for resources in an application APP.
In order to achieve the above object, according to one aspect of the embodiments of the present invention, a resource release processing method is provided, comprising: in response to a resource release instruction, selecting a rule component and an action component to pull the corresponding configuration data from a cache cluster; calling an execution engine, assembling the configuration data based on the rule component and the action component through a finite state machine, generating task data, and storing the task data in a task list; and receiving a resource request, acquiring a parameter field of the resource request, querying the task list according to the parameter field, and acquiring and sending the corresponding task data so that a client updates the resources of a running application program.
Optionally, assembling the configuration data based on the rule component and the action component through the finite state machine comprises: constructing a finite state machine instance and loading the configuration data into memory; executing the rule component to filter the configuration data, and loading the release service class based on the filtered configuration data to generate task data; and loading the task data into memory, executing the loading of the task service class, and rendering the loaded task data through the action component.
Optionally, after loading the task data into memory, the method includes:
executing the rule component to filter the task data, so that the task service class is loaded according to the filtered task data.
Optionally, after generating the task data, the method includes:
mapping the task data to a task event abstract class and a task event listener through an event mechanism, so as to execute the implementation class of the task event listener corresponding to the task event abstract class, and performing state change and persistence operations on the task data in the task event listener.
Optionally, rendering the loaded task data through the action component includes:
configuring the action components in the action set of the task and encapsulating them as an action set interface; implementing the DefaultAction class through the interface of the execution unit of the task action logic, calling the action set interface, and obtaining an expression; and rendering the loaded task data according to the expression and storing the rendered action result in an action pool.
Optionally, before responding to the resource release instruction, the method includes:
synchronously updating the resource release configuration data corresponding to the application program in the database, together with the corresponding rule components and action components, to the cache cluster.
Optionally, after acquiring and sending the corresponding task data so that the client updates the resources of the running application program, the method further includes: receiving a data processing request sent by calling a data service interface, identifying the service type of the data processing request, determining the target object to be processed, and executing the data processing request; and sending the processing result to a query center and storing the processing result in the database through a timed task.
In addition, the invention further provides a resource release processing apparatus, comprising an acquisition module configured to, in response to a resource release instruction, select a rule component and an action component to pull the corresponding configuration data from a cache cluster, call an execution engine, assemble the configuration data based on the rule component and the action component through a finite state machine, generate task data, and store the task data in a task list; and a processing module configured to receive a resource request, acquire a parameter field of the resource request, query the task list according to the parameter field, and acquire and send the corresponding task data so that the client updates the resources of the running application program.
One embodiment of the above invention has the following advantages or benefits: according to the release configuration the user makes on a page, rule checking and action processing can be performed on resources (APK, JS, ZIP and the like) in any APP, supporting the integrated release flow in which a released resource first passes a small-scope gray (canary) safety verification and is then released in full. Conventional mainline delivery modes are provided (gray release, full release, AA release, latest stable version), and the whole life cycle of a gray release is managed, closing the loop of resource initialization, gray release, reporting, statistics and full delivery. The scheme is also deeply integrated with business scenarios and provides a general resource release flow for a variety of them (Android gray release, iOS gray release, hot repair, dynamization and the like), so it can be used after simple configuration, without repeated development. In addition, the rule component library is enriched through continuous accumulation, providing rule components for multiple dimensions (system version, APP version, city, crowd profile, black/white list, installation count and so on) that can be combined arbitrarily.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic view of a main flow of a processing method of resource release according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an execution engine architecture according to an embodiment of the present invention;
FIG. 3 is a timing diagram of the main flow of a response to a resource issue instruction according to an embodiment of the present invention;
fig. 4 is a schematic view of a main flow of a processing method of resource release according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of a resource publishing process according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main modules of a processing device for resource publishing in accordance with an embodiment of the present invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a main flow of a processing method of resource publishing according to a first embodiment of the present invention, and as shown in fig. 1, the processing method of resource publishing includes:
step S101, responding to the resource issuing instruction, selecting a rule component and an action component to pull corresponding configuration data from the cache cluster.
In an embodiment, steps S101, S102 and S103 may be performed in response to a resource release instruction of an application. An Application is the largest unit in the system; it can be associated with various types of resources, and various types of releases and tasks can be created under it. A Resource is the minimum unit selected for release when a task is created, for example an APK, JS or ZIP file resource. A Release is a basic unit of a release type under an application and represents a class of general service models already consolidated in the system, such as dynamization, hot repair and APP gray release; multiple rule components can be added and configured under a release and take effect globally for all tasks under that release.
In some embodiments, before performing step S101, the method includes: synchronously updating the resource release configuration data corresponding to the application program in the database, together with the corresponding rule components and action components, to the cache cluster. For example, the database DB shown in fig. 2 includes the persistent store MySQL, the cache cluster Redis and an Elasticsearch database (ES database for short); the preset resource release configuration data and the corresponding rule components and action components are persisted in MySQL and can be synchronized to the Redis cache cluster.
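As a rough illustration of this synchronization step, the following sketch assumes Spring Data Redis; the ReleaseConfig and ConfigRepository types and the "release:config:" key prefix are illustrative names introduced here, not taken from the patent.

    import org.springframework.data.redis.core.StringRedisTemplate;
    import java.util.List;

    public class ConfigSyncJob {

        /** Illustrative view of one row of release configuration stored in MySQL. */
        public record ReleaseConfig(long id, String json) {}

        /** Illustrative DAO that reads release, rule and action configuration from MySQL. */
        public interface ConfigRepository {
            List<ReleaseConfig> findByAppId(long appId);
        }

        private final ConfigRepository repository;
        private final StringRedisTemplate redis;

        public ConfigSyncJob(ConfigRepository repository, StringRedisTemplate redis) {
            this.repository = repository;
            this.redis = redis;
        }

        /** Pushes the latest configuration for one application to the Redis cache cluster. */
        public void syncToCache(long appId) {
            for (ReleaseConfig cfg : repository.findByAppId(appId)) {
                // One key per release; the delivery service later pulls it by id.
                redis.opsForValue().set("release:config:" + cfg.id(), cfg.json());
            }
        }
    }

Such a job could be triggered whenever the configuration is changed in the management console, so the delivery service always reads from the cache rather than from MySQL.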
Step S102, calling an execution engine, assembling the configuration data based on the rule component and the action component through the finite state machine, generating task data, and storing the task data in the task list.
In this embodiment, multiple Rules can be added and configured under a release and under a task, and rules are standardized into components for reuse across scenarios. For example, when rules are configured on the configuration page of a release or a task, multiple rule components can be selected and ordered to determine the priority of execution and checking. A rule can be set as a short-circuit rule or a non-short-circuit rule. When a short-circuit rule fails, rule-check failure information is returned directly and further execution is cancelled. When a non-short-circuit rule fails, execution continues with the next rule; if several rules are checked in association, the intermediate judgment information is stored in the TaskContext, the context object of the Task.
In addition, an Action is an execution unit of task action logic, for example calling a third-party component, encrypting data, or processing a result.
In some embodiments, the configuration data is assembled through the finite state machine based on the rule component and the action component, and the specific implementation comprises: constructing a finite state machine instance and loading the configuration data into memory; executing the rule component to filter the configuration data, and loading the release service class based on the filtered configuration data to generate task data; and loading the task data into memory, executing the loading of the task service class, and rendering the loaded task data through the action component. Further, after the task data is loaded into memory, rule filtering may first be performed on the task data before the task service class is loaded; specifically, the rule component is executed to filter the task data, so that the task service class is loaded according to the filtered task data. Preferably, the configuration data of the delivered resource is filtered by the rules; whether a rule is a short-circuit rule can be distinguished by writing an implementation class that inherits the abstract task rule, and the state judgments of non-short-circuit rule logic can be stored as temporary variables in the TaskContext (state context) inside the Task object, enabling logic judgments over associated context data. A Task is the minimum basic unit in the system; one task corresponds to one resource, multiple rules and actions can be added to it, and tasks can be divided into main tasks and subtasks by type.
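The short-circuit / non-short-circuit behaviour described above could be realized along the following lines. This is only a sketch under assumed names (AbstractTaskRule, RuleChain, AppVersionRule); TaskContext holds the intermediate state shared by associated rules, as in the description.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Temporary variables shared between associated rules (the state context of a Task). */
    class TaskContext {
        final Map<String, Object> state = new HashMap<>();
    }

    /** Base class for rule components; concrete rules inherit it and declare their short-circuit behaviour. */
    abstract class AbstractTaskRule {
        /** A short-circuit rule aborts the whole check chain as soon as it fails. */
        abstract boolean shortCircuit();
        /** Returns true if the data passes this rule; may read or write intermediate state in the context. */
        abstract boolean check(TaskContext ctx);
    }

    class RuleChain {
        /** Runs the ordered rule components; a failed short-circuit rule stops the chain immediately. */
        static boolean run(List<AbstractTaskRule> rules, TaskContext ctx) {
            boolean passed = true;
            for (AbstractTaskRule rule : rules) {
                if (rule.check(ctx)) {
                    continue;
                }
                if (rule.shortCircuit()) {
                    return false;   // short-circuit: return the failure and cancel further execution
                }
                passed = false;     // non-short-circuit: remember the failure and continue with the next rule
            }
            return passed;
        }
    }

    /** Example of a non-short-circuit rule component checking the APP-version dimension. */
    class AppVersionRule extends AbstractTaskRule {
        private final String minVersion;
        AppVersionRule(String minVersion) { this.minVersion = minVersion; }
        boolean shortCircuit() { return false; }
        boolean check(TaskContext ctx) {
            // Naive string comparison, good enough for a sketch.
            String version = (String) ctx.state.getOrDefault("appVersion", "");
            return version.compareTo(minVersion) >= 0;
        }
    }

Because each rule is a small class with a declared short-circuit flag and a shared context, new dimensions (city, black/white list, installation count and so on) can be added without touching the chain itself.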
In addition, in a further embodiment, after the task data is generated, the task data may be processed through an event mechanism. As shown in fig. 2, in the created FSM the task data TaskCode is mapped to the task event abstract class TaskEvent and the task event listener TaskEventListener through the event mechanism, so that the implementation class of the TaskEventListener corresponding to the TaskEvent is executed, and the state change and persistence of the task data TaskCode are performed inside the TaskEventListener.
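A minimal sketch of such an event mechanism is given below. Apart from the TaskEvent and TaskEventListener names taken from the description, the dispatcher and the concrete event subclass are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;

    /** Task event abstract class; each event carries the code of the task data it belongs to. */
    abstract class TaskEvent {
        final String taskCode;
        TaskEvent(String taskCode) { this.taskCode = taskCode; }
    }

    /** Illustrative concrete event fired once task data has been generated. */
    class TaskCreatedEvent extends TaskEvent {
        TaskCreatedEvent(String taskCode) { super(taskCode); }
    }

    /** Task event listener; its implementation class changes the task state and persists the task data. */
    interface TaskEventListener<E extends TaskEvent> {
        void onEvent(E event);
    }

    /** Maps each event type to the listener implementation that must be executed for it. */
    class TaskEventDispatcher {
        private final Map<Class<? extends TaskEvent>, TaskEventListener<? extends TaskEvent>> listeners = new HashMap<>();

        <E extends TaskEvent> void register(Class<E> type, TaskEventListener<E> listener) {
            listeners.put(type, listener);
        }

        @SuppressWarnings("unchecked")
        <E extends TaskEvent> void publish(E event) {
            TaskEventListener<E> listener = (TaskEventListener<E>) listeners.get(event.getClass());
            if (listener != null) {
                listener.onEvent(event);   // e.g. mark the task as "to be released" and persist it
            }
        }
    }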
As a further embodiment, the loaded task data is rendered through the action component, and the specific implementation comprises: as shown in fig. 2, the action components are configured in the action set ActionMap of the task and encapsulated as an action set interface. The DefaultAction class is implemented through the interface of the execution unit Action of the task action logic, the action set interface is called, and the expression is obtained. The loaded task data is rendered according to the expression, and the rendered action result is stored in the Action Pool; that is, the implementations of the Action interface, the execution unit of task action logic, are maintained in the Action Pool. Preferably, the action set ActionMap stores a state code StateCode and the corresponding ActionInfo list, and the action set interface can be called through the Spring Expression Language (SpEL). That is, when the Task moves to the next state, the expression of each ActionInfo in the ActionInfo list corresponding to the current Task starts to be executed. Each expression corresponds to a function implementation class of the Action interface, and the ActionInfo object is injected into the expression evaluation engine before execution.
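The expression-driven action execution could look roughly as follows, assuming the Spring Expression Language (SpEL) mentioned above; the ActionInfo shape, the map layout keyed by state code, and the example expression are illustrative assumptions rather than the patented implementation.

    import org.springframework.expression.Expression;
    import org.springframework.expression.ExpressionParser;
    import org.springframework.expression.spel.standard.SpelExpressionParser;
    import org.springframework.expression.spel.support.StandardEvaluationContext;
    import java.util.List;
    import java.util.Map;

    public class ActionRunner {

        /** Illustrative description of one configured action: a name and the SpEL expression to evaluate. */
        public record ActionInfo(String name, String expression) {}

        private final ExpressionParser parser = new SpelExpressionParser();

        /** Runs every ActionInfo registered in the action set for the given state code. */
        public void runActions(Map<String, List<ActionInfo>> actionMap, String stateCode, Object taskData) {
            // The loaded task data is the root object the expressions are evaluated against.
            StandardEvaluationContext ctx = new StandardEvaluationContext(taskData);
            for (ActionInfo info : actionMap.getOrDefault(stateCode, List.of())) {
                // The ActionInfo object is exposed to the expression engine before execution.
                ctx.setVariable("actionInfo", info);
                Expression exp = parser.parseExpression(info.expression());
                Object result = exp.getValue(ctx);   // e.g. "#actionInfo.name() + ': ' + toString()"
                // The rendered result would then be stored into the action pool.
                System.out.println(info.name() + " -> " + result);
            }
        }
    }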
With respect to step S101 and step S102, in a preferred embodiment, fig. 3 shows a timing diagram of the main flow of responding to a resource release instruction according to an embodiment of the present invention, which includes:
the finite state machine of the execution engine is passed through the gateway service in response to a resource issue instruction of an application of the front-end page. And acquiring the interface name exposed in the resource issuing instruction, and constructing a finite state machine instance. And pulling corresponding configuration data from the cache cluster according to the entry of the resource issuing instruction (namely the selected rule component and action component), and then initializing the configuration data to load the configuration data into the memory. And then, filtering the configuration data by executing rule verification under release based on the rule component, and returning a rule verification result. And loading the filtered configuration data into a publishing service class to generate task data, wherein the tasks can be multiple and form a task set. And filtering the task data by executing rule verification under the task based on the rule component, and returning a rule verification result. And loading a task service class according to the filtered task data, calling a downstream interface (such as an action set interface) for executing action configuration to render the loaded task data, and transmitting a rendered action result to a front-end page through a gateway service.
It should be noted that the rendered task data may be stored; the stored task data is in the to-be-released state, and the state of a task (for example to-be-released, in gray release, full release, paused, offline, completed, and so on) can be switched freely according to how the resource task is being released.
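The free switching of task states mentioned above can be pictured with the following sketch. The set of allowed transitions is an assumption for illustration; the description only names the states.

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    enum TaskState { TO_BE_RELEASED, GRAY, FULL_RELEASE, PAUSED, OFFLINE, COMPLETED }

    class TaskStateMachine {
        private static final Map<TaskState, Set<TaskState>> TRANSITIONS = new EnumMap<>(TaskState.class);
        static {
            TRANSITIONS.put(TaskState.TO_BE_RELEASED, EnumSet.of(TaskState.GRAY, TaskState.OFFLINE));
            TRANSITIONS.put(TaskState.GRAY, EnumSet.of(TaskState.FULL_RELEASE, TaskState.PAUSED, TaskState.OFFLINE));
            TRANSITIONS.put(TaskState.FULL_RELEASE, EnumSet.of(TaskState.PAUSED, TaskState.COMPLETED, TaskState.OFFLINE));
            TRANSITIONS.put(TaskState.PAUSED, EnumSet.of(TaskState.GRAY, TaskState.FULL_RELEASE, TaskState.OFFLINE));
            TRANSITIONS.put(TaskState.OFFLINE, EnumSet.noneOf(TaskState.class));
            TRANSITIONS.put(TaskState.COMPLETED, EnumSet.noneOf(TaskState.class));
        }

        private TaskState current = TaskState.TO_BE_RELEASED;

        /** Switches to the target state if the transition is allowed, otherwise keeps the current one. */
        boolean switchTo(TaskState target) {
            if (TRANSITIONS.getOrDefault(current, Set.of()).contains(target)) {
                current = target;
                return true;
            }
            return false;
        }

        TaskState current() { return current; }
    }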
Step S103, receiving a resource request, acquiring a parameter field of the resource request, querying the task list according to the parameter field, and acquiring and sending the corresponding task data so that the client updates the resources of the running application program.
In some embodiments, step S103 is followed by: receiving a data processing request sent by calling the data service interface, identifying the service type of the data processing request (for example a reporting service or a query service), determining the target object to be processed, and executing the data processing request; and sending the processing result to the query center and storing it in the database through a timed task. As shown in fig. 2, the present invention provides gray release delivery interfaces, for example for gray releases of resources such as APK installation packages, JS resources for micro applications, and hot repairs, with a finite state machine FSM at the core. It also provides data services such as a data reporting service (for example, reporting of tracking-point information) and a query service SearchService (for example, querying installation counts); these can be encapsulated as the data service interface, and the reported and queried data is sent to the query center and stored in the Elasticsearch database through a timed task.
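A minimal sketch of dispatching such a data processing request by service type (reporting versus query) is shown below; the request and handler shapes are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;

    public class DataServiceFacade {

        /** Illustrative shape of a request arriving through the data service interface. */
        public record DataRequest(String serviceType, String target, String payload) {}

        /** One handler per service type (e.g. reporting or querying). */
        public interface Handler {
            String handle(DataRequest request);
        }

        private final Map<String, Handler> handlers = new HashMap<>();

        public DataServiceFacade() {
            // e.g. tracking-point data reporting
            handlers.put("report", req -> "reported " + req.payload() + " for " + req.target());
            // e.g. installation-count query
            handlers.put("search", req -> "queried " + req.target());
        }

        /** Identifies the service type, picks the target handler and executes the request. */
        public String process(DataRequest request) {
            Handler handler = handlers.get(request.serviceType());
            if (handler == null) {
                throw new IllegalArgumentException("unknown service type: " + request.serviceType());
            }
            // The result would then be sent to the query center and persisted by a timed task.
            return handler.handle(request);
        }
    }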
Fig. 4 is a schematic diagram of the main flow of a resource release processing method according to a second embodiment of the present invention. As shown in fig. 4, the method includes:
Step S401, synchronously updating the resource release configuration data corresponding to the application program in the database, together with the corresponding rule components and action components, to the cache cluster.
Step S402, in response to a resource release instruction, selecting a rule component and an action component to pull the corresponding configuration data from the cache cluster.
Step S403, calling the execution engine, constructing a finite state machine instance, and loading the configuration data into memory.
Step S404, executing the rule component to filter the configuration data, and loading the release service class based on the filtered configuration data to generate task data.
Step S405, loading the task data into memory, executing the loading of the task service class, and then rendering the loaded task data through the action component.
Step S406, generating the task data according to the rendered processing result and storing it in the task list.
Step S407, receiving a resource request, acquiring a parameter field of the resource request, querying the task list according to the parameter field, and acquiring and sending the corresponding task data so that the client updates the resources of the running application program.
Step S408, receiving a data processing request sent by calling the data service interface, identifying the service type of the data processing request, determining the target object to be processed, and executing the data processing request.
Step S409, sending the processing result to the query center and storing it in the database through a timed task.
Fig. 5 is a schematic diagram of the resource publishing process according to an embodiment of the present invention. At the business layer, the process mainly involves configuration management (Gaea), the delivery service (Athena), the data service (Hermes), the MySQL service, the Redis cluster service, and the query center. The client Client can interact with the business layer for resource release through the gateway service, in which the proxy server Nginx is deployed. The data storage layer mainly relies on the MySQL database service and the Redis cluster service.
Configuration management (Gaea) may provide user operations and configuration tasks, persist data to the MySQL database (for example, background configuration information and user state records), and synchronize the latest configuration data to the Redis cache cluster (for example, delivery data, user data and metric data of the configuration). Application management within Gaea manages the accessed applications in a unified way. Resource management provides unified management and viewing of the resources selected for release when tasks are created, for example released APK, JS and ZIP file resources. Release management manages the types and records of resource releases in a unified way, a release being a general model of a class of services already consolidated in the system: dynamization, hot repair, APP gray release, and so on. Task management manages the tasks released by an application in a unified way, where one task corresponds to one resource and multiple rules and actions can be added; tasks can of course be divided into main tasks and subtasks by type. Rule management provides unified management and viewing of the abstracted rule components, which can be reused in different scenarios; they are managed in the form of a rule-component market, for example components for targeting rules, crowd labels, frequency and quantity control, black/white lists, APP versions, installation-count comparison, and so on, and multiple rules can be added and configured under a release and a task. Action management provides unified management and viewing of the abstracted action components, which can likewise be reused in different scenarios and added and configured under a release and a task; an action is an abstraction of a class of execution flows. In addition, the data dashboard gives a centralized display of the data running on the platform, such as the total number of accessed applications, the number of releases, the total number of tasks, and the total number of downloads and installations.
The delivery service (Athena) can provide the interface services: configuration data is pulled from the Redis cache cluster (the cache center), filtered by the execution engine, combined and processed by the action components, and the data assembled according to the conditions is finally delivered. The specific process is as follows. First, the release data is initialized: the corresponding configuration data is pulled from the cache cluster according to the input parameters of the resource release instruction and then initialized and loaded into memory. Next, the release rules are executed, that is, the configuration data is filtered by executing the rule checks under the release based on the rule components, and the release service class is then loaded: the filtered configuration data is loaded into the release service class to generate task data; there may be multiple tasks, forming a task set. The task set is loaded: the corresponding specific task service classes and the corresponding task set are executed. The task rules are executed: the task data is filtered by executing the rule checks under the task based on the rule components. Action processing is executed: the task service class is loaded according to the filtered task data, and the downstream interface that executes the action configuration is called to render the loaded task data. Return-value processing: the rendered action results are output in a unified way.
In addition, the delivery service (Athena) may also provide security authentication (for example, security verification of the resource release instruction) and metric statistics (for example, metrics and statistics on the data generated during resource release).
The data service (Hermes) can provide interface services and is mainly responsible for reporting and collecting data to the query center, synchronizing the data in the Redis cache cluster, and then persisting the data to the MySQL database through a timed task.
Fig. 6 is a schematic diagram of the main modules of the resource release processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the resource release processing apparatus 600 includes an acquisition module 601 and a processing module 602. The acquisition module 601, in response to a resource release instruction, selects a rule component and an action component to pull the corresponding configuration data from the cache cluster, calls the execution engine, assembles the configuration data based on the rule component and the action component through the finite state machine, generates task data, and stores it in the task list; the processing module 602 receives a resource request, acquires a parameter field of the resource request, queries the task list according to the parameter field, and acquires and sends the corresponding task data so that the client updates the resources of the running application program.
In some embodiments, the acquisition module 601 assembles the configuration data through the finite state machine based on the rule component and the action component by:
constructing a finite state machine instance and loading the configuration data into memory;
executing the rule component to filter the configuration data, and loading the release service class based on the filtered configuration data to generate task data;
and loading the task data into memory, executing the loading of the task service class, and rendering the loaded task data through the action component.
In some embodiments, after the acquisition module 601 loads the task data into memory, it executes the rule component to filter the task data, so that the task service class is loaded according to the filtered task data.
In some embodiments, after the acquisition module 601 generates the task data, it maps the task data to the task event abstract class and the task event listener through the event mechanism, so that the implementation class of the task event listener corresponding to the task event abstract class is executed, and the state change and persistence of the task data are performed in the task event listener.
In some embodiments, the acquisition module 601 renders the loaded task data through the action component by:
configuring the action components in the action set of the task and encapsulating them as an action set interface;
implementing the DefaultAction class through the interface of the execution unit of the task action logic, calling the action set interface, and obtaining an expression;
and rendering the loaded task data according to the expression and storing the rendered action result in the action pool.
In some embodiments, before responding to the resource release instruction, the acquisition module 601 synchronously updates the resource release configuration data corresponding to the application program in the database, together with the corresponding rule components and action components, to the cache cluster.
In some embodiments, after the processing module 602 acquires and sends the corresponding task data so that the client updates the resources of the running application program, the apparatus further:
receives a data processing request sent by calling the data service interface, identifies the service type of the data processing request, determines the target object to be processed, and executes the data processing request; and sends the processing result to the query center and stores it in the database through a timed task.
It should be noted that the resource release processing method and the resource release processing apparatus of the present invention correspond to each other in their specific implementation, so the repeated content is not described again.
Fig. 7 shows an exemplary system architecture 700 to which the resource release processing method or apparatus of embodiments of the present invention can be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 is the medium used to provide communications links between terminal devices 701, 702, 703 and the server 705. Network 704 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may interact with a server 705 via a network 704 using terminal devices 701,702, 703 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 701, 702, and 703.
The terminal devices 701, 702, 703 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for users utilizing the terminal devices 701, 702, 703. The backend management server may analyze and process the received data such as the product information query request, and feed back a processing result (for example, target push information and product information — just an example) to the terminal device.
It should be noted that the resource release processing method provided by the embodiments of the present invention is generally executed by the server 705; accordingly, the resource release processing apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 are merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
Referring now to fig. 8, a block diagram of a computer system 800 suitable for implementing a terminal device or a server of an embodiment of the present invention is shown. The terminal device shown in fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the computer system 800. The CPU 801, the ROM 802 and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. When executed by the central processing unit (CPU) 801, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or by hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising an acquisition module and a processing module. The names of these modules do not, in some cases, limit the modules themselves.
As another aspect, the present invention further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: in response to a resource release instruction, select a rule component and an action component to pull the corresponding configuration data from the cache cluster; call an execution engine, assemble the configuration data based on the rule component and the action component through a finite state machine, generate task data, and store the task data in a task list; and receive a resource request, acquire a parameter field of the resource request, query the task list according to the parameter field, and acquire and send the corresponding task data so that the client updates the resources of the running application program.
According to the technical solution of the embodiments of the present invention, the problem that there is currently no complete, general release-management scheme for resources in an application APP can be solved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A resource release processing method, characterized by comprising the following steps:
in response to a resource release instruction, selecting a rule component and an action component to pull corresponding configuration data from a cache cluster;
calling an execution engine, assembling the configuration data based on the rule component and the action component through a finite state machine, generating task data, and storing the task data in a task list; and
receiving a resource request, acquiring a parameter field of the resource request, querying the task list according to the parameter field, and acquiring and sending the corresponding task data so that a client updates resources of a running application program.
2. The method of claim 1, wherein assembling the configuration data based on the rule component and the action component through the finite state machine comprises:
constructing a finite state machine instance and loading the configuration data into memory;
executing the rule component to filter the configuration data, and loading a release service class based on the filtered configuration data to generate the task data; and
loading the task data into memory, executing the loading of a task service class, and rendering the loaded task data through the action component.
3. The method of claim 2, wherein after loading the task data into memory, the method comprises:
executing the rule component to filter the task data, so that the task service class is loaded according to the filtered task data.
4. The method of claim 2, wherein after generating the task data, the method comprises:
mapping the task data to a task event abstract class and a task event listener through an event mechanism, so as to execute the implementation class of the task event listener corresponding to the task event abstract class, and performing state change and persistence operations on the task data in the task event listener.
5. The method of claim 2, wherein rendering the loaded task data through the action component comprises:
configuring the action components in an action set of the task and encapsulating them as an action set interface;
implementing a DefaultAction class through an interface of an execution unit of the task action logic, calling the action set interface, and obtaining an expression; and
rendering the loaded task data according to the expression, and storing the rendered action result in an action pool.
6. The method of claim 1, wherein before responding to the resource release instruction, the method comprises:
synchronously updating resource release configuration data corresponding to the application program in a database, together with the corresponding rule components and action components, to the cache cluster.
7. The method according to any one of claims 1 to 6, wherein after acquiring and sending the corresponding task data so that the client updates the resources of the running application program, the method further comprises:
receiving a data processing request sent by calling a data service interface, identifying a service type of the data processing request, determining a target object to be processed, and executing the data processing request; and
sending the processing result to a query center, and storing the processing result in the database through a timed task.
8. A resource release processing apparatus, characterized by comprising:
an acquisition module configured to, in response to a resource release instruction, select a rule component and an action component to pull corresponding configuration data from a cache cluster, call an execution engine, assemble the configuration data based on the rule component and the action component through a finite state machine, generate task data, and store the task data in a task list; and
a processing module configured to receive a resource request, acquire a parameter field of the resource request, query the task list according to the parameter field, and acquire and send the corresponding task data so that a client updates resources of a running application program.
9. An electronic device, comprising:
one or more processors;
a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202210390581.9A 2022-04-14 2022-04-14 Resource release processing method and device Pending CN114756227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210390581.9A CN114756227A (en) 2022-04-14 2022-04-14 Resource release processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210390581.9A CN114756227A (en) 2022-04-14 2022-04-14 Resource release processing method and device

Publications (1)

Publication Number Publication Date
CN114756227A true CN114756227A (en) 2022-07-15

Family

ID=82330212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210390581.9A Pending CN114756227A (en) 2022-04-14 2022-04-14 Resource release processing method and device

Country Status (1)

Country Link
CN (1) CN114756227A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116009949A (en) * 2023-03-28 2023-04-25 税友软件集团股份有限公司 Numerical value acquisition method, device, equipment and storage medium
CN116009949B (en) * 2023-03-28 2023-08-29 税友软件集团股份有限公司 Numerical value acquisition method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110532020B (en) Data processing method, device and system for micro-service arrangement
CN109697075A (en) File updating method, system and device
CN112069265A (en) Configuration data synchronization method, service data system, computer system and medium
CN108108986B (en) Design method and device of customer relationship management system and electronic equipment
US20150012669A1 (en) Platform runtime abstraction
CN113448702A (en) Front-end-based micro-service design method
CN111831461A (en) Method and device for processing business process
CN108667660B (en) Method and device for route management and service routing and routing system
CN114706690B (en) Method and system for sharing GPU (graphics processing Unit) by Kubernetes container
CN112860343A (en) Configuration changing method, system, device, electronic equipment and storage medium
CN114756227A (en) Resource release processing method and device
CN111414154A (en) Method and device for front-end development, electronic equipment and storage medium
CN114185734A (en) Cluster monitoring method and device and electronic equipment
CN117609226A (en) Information stream data storage method and device, electronic equipment and readable medium
CN110807058B (en) Method and system for exporting data
CN111382953A (en) Dynamic process generation method and device
CN108334374A (en) The method and apparatus of component dynamic load and execution
CN111857736B (en) Cloud computing product generation method, device, equipment and storage medium
CN113296829A (en) Method, device, equipment and computer readable medium for processing service
CN114528140A (en) Method and device for service degradation
CN113778993A (en) Service data processing method and device
CN114070889A (en) Configuration method, traffic forwarding method, device, storage medium, and program product
CN112463616A (en) Chaos testing method and device for Kubernetes container platform
CN113779122A (en) Method and apparatus for exporting data
CN113050962A (en) Mobile service upgrading method, device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination