CN106776753B - Service data processing method and device

Service data processing method and device

Info

Publication number
CN106776753B
CN106776753B (application CN201611032547.5A)
Authority
CN
China
Prior art keywords
algorithm
service
time period
business
preset time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611032547.5A
Other languages
Chinese (zh)
Other versions
CN106776753A (en)
Inventor
王就堂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201611032547.5A priority Critical patent/CN106776753B/en
Priority to PCT/CN2017/073387 priority patent/WO2018086265A1/en
Publication of CN106776753A publication Critical patent/CN106776753A/en
Application granted granted Critical
Publication of CN106776753B publication Critical patent/CN106776753B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data

Abstract

The invention is applicable to the field of communication and provides a service data processing method and a service data processing device. The method comprises the following steps: after receiving an application service starting instruction, reading a business algorithm in an oracle database and data related to the business algorithm; caching the read business algorithm and data related to the business algorithm; receiving a service algorithm calling instruction, wherein the service algorithm calling instruction carries a unique identifier of a service algorithm; calling a cached corresponding business algorithm and data related to the business algorithm according to the business algorithm calling instruction; and processing the specified service data according to the called service algorithm and the data related to the service algorithm. By the method, downtime of the oracle database can be avoided.

Description

Service data processing method and device
Technical Field
The embodiment of the invention belongs to the field of communication, and particularly relates to a service data processing method and device.
Background
An Oracle database (Oracle) is a relational database management system from Oracle Corporation. It has good portability, is convenient to use and is powerful, and it is suitable for a wide range of environments, from large and medium-sized systems to small systems and microcomputers.
At present, algorithm programs such as the premium, cash value and dividend calculations of life insurance products are generally placed in an oracle database, and a front-end system then interacts with the oracle database directly. However, in service scenarios such as flash sales (emergency sales), a service peak may arise in a short time, so that excessive oracle database resources are consumed when the front-end system interacts with the oracle database; the oracle database then struggles to support the service effectively and may even go down.
Disclosure of Invention
The embodiment of the invention provides a service data processing method and device, aiming to solve the problem that, with the conventional approach, an oracle database struggles to effectively support service use and may go down as a direct result.
In a first aspect of the embodiments of the present invention, a method for processing service data is provided, where the method includes:
after receiving an application service starting instruction, reading a business algorithm in an oracle database and data related to the business algorithm;
caching the read business algorithm and data related to the business algorithm;
receiving a service algorithm calling instruction, wherein the service algorithm calling instruction carries a unique identifier of a service algorithm;
calling a cached corresponding business algorithm and data related to the business algorithm according to the business algorithm calling instruction;
and processing the specified service data according to the called service algorithm and the data related to the service algorithm.
In a second aspect of the embodiments of the present invention, a service data processing apparatus is provided, where the apparatus includes:
the service algorithm reading unit is used for reading a service algorithm in the oracle database and data related to the service algorithm after receiving an application service starting instruction;
the business algorithm caching unit is used for caching the read business algorithm and the data related to the business algorithm;
the service algorithm calling instruction receiving unit is used for receiving a service algorithm calling instruction, and the service algorithm calling instruction carries the unique identifier of the service algorithm;
the service algorithm calling unit is used for calling the cached corresponding service algorithm and the data related to the service algorithm according to the service algorithm calling instruction;
and the service data processing unit is used for processing the specified service data according to the called service algorithm and the data related to the service algorithm.
In the embodiment of the invention, the service algorithm in the oracle database and the data related to the service algorithm are cached in the server, so that the server can quickly call the service algorithm corresponding to a service algorithm calling instruction and the data related to that algorithm, which improves the calling speed of the service algorithm. Because the service algorithm and its related data are cached in the server, the consumption of oracle database resources is also reduced, and downtime of the oracle database is avoided.
Drawings
Fig. 1 is a flowchart of a service data processing method according to a first embodiment of the present invention;
fig. 2 is a structural diagram of a service data processing apparatus according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the embodiment of the invention, after an application service starting instruction is received, a business algorithm in an oracle database and data related to the business algorithm are read and cached. A business algorithm calling instruction is then received, the instruction carrying a unique identifier of a business algorithm; the cached corresponding business algorithm and its related data are called according to the instruction, and the specified business data is processed according to the called business algorithm and the data related to it.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Embodiment one:
fig. 1 shows a flowchart of a service data processing method according to a first embodiment of the present invention, which is detailed as follows:
step S11, after receiving the application service starting instruction, reading the business algorithm in the oracle database and the data related to the business algorithm.
The application service refers to a service corresponding to an application implemented by a server. For example, when the application is a life insurance product application, the application service is the service corresponding to that application. When a user opens the application corresponding to the life insurance product, an application service starting instruction is issued and received by the application. The server then issues a business algorithm reading instruction that carries the unique identifier of the application corresponding to the life insurance product. On receiving the business algorithm reading instruction, the oracle database searches for the business algorithms related to the application and the data related to those algorithms according to the carried unique identifier, and feeds the search result back to the server, so that the server reads the business algorithms and their related data from the oracle database.
The business algorithm includes, but is not limited to, algorithms related to the premium, cash value and dividend of life insurance products. The business algorithms involved in each product are usually different. For example, if the algorithm involved in a premium product corresponds to the formula A + B - C × D, then when reading this algorithm, the data involved in the four parameters A, B, C and D, such as the variables and their concrete values, also needs to be read at the same time.
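As an illustration of step S11, the following sketch shows one way a server might read an application's business algorithms and the data they involve from the Oracle database over plain JDBC. It is only a minimal sketch under assumed table and column names (T_ALGORITHM, T_ALGORITHM_PARAM, APP_ID, ALGO_ID, EXPRESSION, NAME, VALUE) and an assumed holder class; none of these identifiers come from the patent itself.

```java
import java.sql.*;
import java.util.*;

/** Minimal sketch: load an application's business algorithms and their
 *  parameter data from Oracle via plain JDBC. Table and column names are
 *  illustrative assumptions, not taken from the patent. */
public class AlgorithmLoader {

    public static Map<String, BusinessAlgorithm> load(Connection conn, String appId)
            throws SQLException {
        Map<String, BusinessAlgorithm> result = new HashMap<>();
        String algoSql = "SELECT ALGO_ID, EXPRESSION FROM T_ALGORITHM WHERE APP_ID = ?";
        try (PreparedStatement ps = conn.prepareStatement(algoSql)) {
            ps.setString(1, appId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.put(rs.getString("ALGO_ID"),
                               new BusinessAlgorithm(rs.getString("ALGO_ID"),
                                                     rs.getString("EXPRESSION")));
                }
            }
        }
        // For each algorithm, also read the data (variables and concrete values)
        // that its parameters, e.g. A, B, C and D, refer to.
        String paramSql = "SELECT NAME, VALUE FROM T_ALGORITHM_PARAM WHERE ALGO_ID = ?";
        for (BusinessAlgorithm algo : result.values()) {
            try (PreparedStatement ps = conn.prepareStatement(paramSql)) {
                ps.setString(1, algo.id);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        algo.params.put(rs.getString("NAME"), rs.getDouble("VALUE"));
                    }
                }
            }
        }
        return result;
    }

    /** Simple holder for an algorithm expression and its related data. */
    public static class BusinessAlgorithm {
        final String id;
        final String expression;                       // e.g. "A + B - C * D"
        final Map<String, Double> params = new HashMap<>();
        BusinessAlgorithm(String id, String expression) {
            this.id = id;
            this.expression = expression;
        }
    }
}
```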
And step S12, caching the read business algorithm and the data related to the business algorithm.
Optionally, in order to ensure that the server still has sufficient memory after caching the business algorithms, the business algorithms with a higher use frequency are cached preferentially. In this case, the step S12 specifically includes:
A1, judging whether the use frequency of the read business algorithm is greater than a preset frequency threshold according to the use frequency counted in advance. Specifically, each call of a business algorithm is counted as one use; the number of times the different business algorithms are used within a period of time is counted, and the use frequency of each business algorithm is then calculated from these counts. The preset frequency threshold may be set according to actual conditions: for example, when the traffic volume is small, the preset frequency threshold is also small, such as 15 or 20, and when the traffic volume is large, the preset frequency threshold is also large, such as 60 or 70, which is not limited herein.
A2, when the use frequency of the read business algorithm is greater than the preset frequency threshold, caching the business algorithm whose frequency exceeds the threshold and the data related to that business algorithm.
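A minimal sketch of this frequency-threshold variant (steps A1 and A2) is given below, assuming the use frequencies have already been counted elsewhere; the class name, the generic value type and the example threshold values are illustrative, not taken from the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of steps A1/A2: only algorithms whose pre-counted use
 *  frequency exceeds a preset threshold are cached. The generic type A stands
 *  for whatever object holds an algorithm and its related data. */
public class FrequencyFilteredCache<A> {

    private final Map<String, A> cache = new ConcurrentHashMap<>();
    private final int presetFrequencyThreshold;   // e.g. 15-20 for low traffic, 60-70 for high

    public FrequencyFilteredCache(int presetFrequencyThreshold) {
        this.presetFrequencyThreshold = presetFrequencyThreshold;
    }

    /** A1: compare the pre-counted use frequency against the threshold;
     *  A2: cache the algorithm and its related data only when it is exceeded. */
    public boolean cacheIfFrequent(String algorithmId, A algorithmWithData, int useFrequency) {
        if (useFrequency > presetFrequencyThreshold) {
            cache.put(algorithmId, algorithmWithData);
            return true;
        }
        return false;
    }

    public A get(String algorithmId) {
        return cache.get(algorithmId);
    }
}
```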
Or, optionally, in order to ensure that the server still has enough memory after the business algorithms are cached, the business algorithms with a higher use frequency are preferentially cached. In this case, the step S12 specifically includes:
A1', sorting the read business algorithms by use frequency according to the use frequency counted in advance;
A2', caching the top N business algorithms by use frequency and the data related to these business algorithms, wherein N is determined as follows: the use frequency of the business algorithm with the highest use frequency, the memory it occupies per call and the Central Processing Unit (CPU) resources it occupies per call are acquired, and the memory and CPU that this algorithm will occupy are estimated from these values (for example, assuming that the use frequency of a business algorithm X is 5 and a single call of X occupies 200 units of memory, then 5 calls of X occupy 5 × 200 = 1000 units of memory); it is then judged whether the estimated memory to be occupied by the business algorithm with the highest use frequency is larger than a preset memory cache threshold, and/or whether its estimated CPU occupation is larger than a preset CPU cache threshold; if so, only the business algorithm with the highest use frequency and its related data are cached; otherwise, the use frequency, occupied memory and occupied CPU of the business algorithm with the next-highest use frequency are acquired and processed with the same operation as applied to the business algorithm with the highest use frequency.
And step S13, receiving a service algorithm calling instruction, wherein the service algorithm calling instruction carries the unique identifier of the service algorithm.
Specifically, when a user needs to process specified service data through a certain function of an application service, after the certain function of the application service is started, a service algorithm calling instruction corresponding to the certain function is sent.
In other embodiments, optionally, in order to notify the user of information such as the memory that subsequently processing the service data will occupy, and thereby reduce the risk of oracle database downtime, the number of service algorithm call instructions received in step S13 may be used to estimate the service algorithm call instructions that will be received subsequently, and further to estimate information such as the memory that processing the service data will occupy. The user can then be prompted to perform emergency handling of the database resources before the service surges and the database resources become strained, so as to reduce the downtime risk of the database. Specifically, the method at this time includes:
B1, counting the number of all service algorithm call instructions received in a first preset time period and a second preset time period respectively, wherein the first preset time period and the second preset time period are adjacent time periods, and the first preset time period is earlier than the second preset time period. The specific durations of the first and second preset time periods may be set according to the actual situation. For example, when no promotional activity is running, the traffic volume is unlikely to increase or decrease suddenly, and the two periods may be set to longer durations; but when a promotional activity is launched, the traffic volume may increase within a short time, and the two periods may then be set to shorter durations so as to improve the accuracy of the subsequent estimates.
B2, estimating the number of service algorithm call instructions received in a third preset time period according to the counted number of all service algorithm call instructions received in the first preset time period and the second preset time period, and estimating the size of the memory and the size of the central processing unit to be occupied by processing the service data according to the estimated number of service algorithm call instructions received in the third preset time period, wherein the second preset time period is earlier than the third preset time period. Specifically, the growth rate from the first preset time period to the second preset time period is determined according to the counted numbers of all service algorithm call instructions received in the first and second preset time periods, the number of service algorithm call instructions received in the third preset time period is estimated according to the number received in the second preset time period and the determined growth rate, and the size of the memory and of the central processing unit to be occupied by processing the service data is then estimated from that number.
And B3, sending a prompt when the estimated size of the memory to be occupied by processing the service data is larger than a preset memory threshold, or when the estimated size of the central processing unit to be occupied by processing the service data is larger than a preset central processing unit threshold. Because a certain amount of memory and CPU is already occupied while the server runs, in order to ensure that the server can keep running normally and avoid downtime, a prompt is sent when either estimate exceeds its threshold, for example reminding the user to increase the server memory in time. The preset memory threshold and the preset central processing unit threshold are the memory and CPU values required for normal startup and operation of the server; for example, they may be the minimum memory and CPU values required for normal startup and operation of the server.
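A minimal sketch of steps B1 to B3 is shown below: the call-instruction counts of two adjacent periods give a growth rate, the count for the next period is projected from it, the memory and CPU needed to process that load are estimated, and a reminder is raised when a preset threshold is exceeded. The per-instruction cost figures and the reminder hook are illustrative assumptions.

```java
/** Minimal sketch of steps B1-B3: project the next period's call volume from
 *  the growth rate between two adjacent periods and warn before resources run out. */
public class LoadForecaster {

    private final long memoryPerInstruction;   // assumed average memory cost per call instruction
    private final long cpuPerInstruction;      // assumed average CPU cost per call instruction
    private final long presetMemoryThreshold;
    private final long presetCpuThreshold;

    public LoadForecaster(long memoryPerInstruction, long cpuPerInstruction,
                          long presetMemoryThreshold, long presetCpuThreshold) {
        this.memoryPerInstruction = memoryPerInstruction;
        this.cpuPerInstruction = cpuPerInstruction;
        this.presetMemoryThreshold = presetMemoryThreshold;
        this.presetCpuThreshold = presetCpuThreshold;
    }

    /** B1/B2: counts from the first and second preset time periods in,
     *  projected count for the third preset time period out. */
    public long projectThirdPeriodCount(long firstPeriodCount, long secondPeriodCount) {
        if (firstPeriodCount == 0) {
            return secondPeriodCount;                 // no growth rate can be derived
        }
        double growthRate = (double) secondPeriodCount / firstPeriodCount;
        return Math.round(secondPeriodCount * growthRate);
    }

    /** B3: estimate resource usage for the projected count and remind if needed. */
    public void checkAndRemind(long projectedCount) {
        long estimatedMemory = projectedCount * memoryPerInstruction;
        long estimatedCpu = projectedCount * cpuPerInstruction;
        if (estimatedMemory > presetMemoryThreshold || estimatedCpu > presetCpuThreshold) {
            // In a real system this would notify operations staff, e.g. to add server memory.
            System.out.println("Reminder: projected load may exhaust server resources "
                    + "(memory=" + estimatedMemory + ", cpu=" + estimatedCpu + ")");
        }
    }
}
```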
And step S14, calling the cached corresponding business algorithm and the data related to the business algorithm according to the business algorithm calling instruction.
Specifically, the corresponding business algorithm is searched for according to the unique identifier of the business algorithm carried by the business algorithm calling instruction, and the found business algorithm and the data related to it are called. Further, if a calling mode has been specified in advance for the data, the data related to the business algorithm is called in the specified calling mode; for example, if Structured Query Language (SQL) is specified for the data related to the business algorithm, the data is called through SQL.
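For step S14, one possible shape of the lookup is sketched below: the unique identifier carried by the call instruction is resolved against the local cache, and a pre-specified calling mode (such as SQL) is honoured when one exists. The CallingMode enum and the CachedAlgorithm holder are illustrative types, not part of the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of step S14: resolve the cached algorithm by the unique
 *  identifier carried in the call instruction and pick the data-calling mode. */
public class AlgorithmDispatcher {

    public enum CallingMode { DEFAULT, SQL }

    public static class CachedAlgorithm {
        final String expression;
        final Map<String, Object> relatedData;
        public CachedAlgorithm(String expression, Map<String, Object> relatedData) {
            this.expression = expression;
            this.relatedData = relatedData;
        }
    }

    private final Map<String, CachedAlgorithm> cache = new ConcurrentHashMap<>();
    private final Map<String, CallingMode> specifiedModes = new ConcurrentHashMap<>();

    /** Looks up the algorithm for the given unique identifier and returns it
     *  together with its related data, honouring a pre-specified calling mode. */
    public CachedAlgorithm call(String algorithmId) {
        CachedAlgorithm algo = cache.get(algorithmId);
        if (algo == null) {
            throw new IllegalStateException("Algorithm not cached: " + algorithmId);
        }
        CallingMode mode = specifiedModes.getOrDefault(algorithmId, CallingMode.DEFAULT);
        if (mode == CallingMode.SQL) {
            // Here the related data would be fetched with an SQL-style query over
            // the cached data set instead of a plain map read.
        }
        return algo;
    }
}
```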
And step S15, processing the specified service data according to the called service algorithm and the data related to the service algorithm.
Because the service algorithm and the data related to the service algorithm are cached in the server, the service algorithm and the corresponding data can be called quickly to process the specified service data, which greatly improves the processing speed of the specified service data. In addition, because the service algorithm and its related data are cached in the server, the consumption of oracle database resources is reduced, and downtime of the oracle database is avoided.
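As a concrete, made-up instance of step S15 using the illustrative formula A + B - C × D from the description: the cached related data supplies the parameter values and the algorithm produces the processed figure (for example, a premium). The numbers below are invented purely for illustration.

```java
/** Minimal worked example of step S15 for the illustrative formula A + B - C * D. */
public class PremiumCalculation {

    public static double evaluate(double a, double b, double c, double d) {
        return a + b - c * d;   // the cached algorithm expression
    }

    public static void main(String[] args) {
        // Parameter data assumed to have been cached alongside the algorithm.
        double result = evaluate(1000.0, 50.0, 2.0, 10.0);
        System.out.println("Processed service data (premium): " + result); // 1030.0
    }
}
```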
Optionally, while step S15 is being performed, the method includes:
and inquiring the oracle database regularly whether the new service algorithm and/or the data related to the service algorithm exist, and reading the new service algorithm and/or the data related to the service algorithm when the oracle database stores the new service algorithm and/or the data related to the service algorithm. When a new service algorithm appears in the oracle database, in order to enable the server to call the new service algorithm in time, the new service algorithm and data related to the new service algorithm need to be read in time; in addition, when some data are added to the original business algorithm of the oracle database, the server also needs to read some data added to the original business algorithm. Certainly, after the server reads the newly added service algorithm and/or the data related to the service algorithm, the data is cached in the server.
Optionally, to avoid that the memory of the server is always occupied, after the step S15, the method includes:
and deleting the cached business algorithm and the data related to the business algorithm.
In the first embodiment of the invention, after an application service starting instruction is received, a service algorithm in an oracle database and data related to the service algorithm are read and cached. A service algorithm calling instruction carrying a unique identifier of a service algorithm is then received, the cached corresponding service algorithm and its related data are called according to the instruction, and the specified service data is processed according to the called service algorithm and its related data. Because the service algorithm in the oracle database and its related data are cached in the server, the server can quickly call the service algorithm corresponding to the calling instruction and the related data, which improves the calling speed of the service algorithm; caching them in the server also reduces the consumption of oracle database resources and avoids downtime of the oracle database.
It should be understood that, in the embodiment of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present invention.
Embodiment two:
Fig. 2 shows a block diagram of a service data processing apparatus according to a second embodiment of the present invention. The apparatus may be applied to various mobile terminals, which may include user equipment communicating with one or more core networks via a radio access network (RAN). The user equipment may be a mobile telephone (or "cellular" telephone) or a computer with a mobile device, and may also be, for example, a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile apparatus that exchanges voice and/or data with the radio access network. For example, the mobile device may include a smart phone, a tablet computer, a home tutoring machine, a Personal Digital Assistant (PDA), a point of sale (POS) terminal or a vehicle-mounted computer. For convenience of explanation, only the portions related to the embodiments of the present invention are shown.
The service data processing apparatus 2 includes: the system comprises a service algorithm reading unit 21, a service algorithm caching unit 22, a service algorithm calling instruction receiving unit 23, a service algorithm calling unit 24 and a service data processing unit 25. Wherein:
and the business algorithm reading unit 21 is configured to read a business algorithm in the oracle database and data related to the business algorithm after receiving the application service starting instruction.
The business algorithm includes, but is not limited to, algorithms related to the premium, cash value and dividend of life insurance products. The business algorithms involved in each product are usually different. For example, if the algorithm involved in a premium product corresponds to the formula A + B - C × D, then when reading this algorithm, the data involved in the four parameters A, B, C and D, such as the variables and their concrete values, also needs to be read at the same time.
And the business algorithm caching unit 22 is configured to cache the read business algorithm and the data related to the business algorithm.
Optionally, in order to ensure that the server still has sufficient memory after the business algorithm is cached, the business algorithm with a higher use frequency is preferentially cached, at this time, the business algorithm caching unit 22 includes:
and the use frequency comparison module is used for judging whether the read use frequency of the service algorithm is greater than a preset frequency threshold value according to the use frequency counted in advance. Specifically, when the service algorithm is called, the service algorithm is judged to be used, the number of times of using different service algorithms in a period of time is counted, and then the using frequency of the different service algorithms is calculated according to the number of times of using the different service algorithms. The preset frequency threshold may be set according to actual conditions, for example, when the traffic volume is small, the preset frequency threshold is also small, such as 15, 20, and the like, and when the traffic volume is large, the preset frequency threshold is also large, such as 60, 70, and the like, which is not limited herein.
And the data caching module is used for caching the service algorithm larger than the preset frequency threshold value and the data related to the service algorithm when the use frequency of the read service algorithm is larger than the preset frequency threshold value.
Or, optionally, in order to ensure that the server still has sufficient memory after the business algorithms are cached, the business algorithms with a higher use frequency are preferentially cached, and in this case the business algorithm caching unit 22 includes:
and the use frequency sorting module is used for sorting the read use frequency of the business algorithm according to the use frequency counted in advance.
And a top-N business algorithm caching module, which is used for caching the top N business algorithms by use frequency and the data related to these business algorithms, wherein N is determined as follows: the use frequency of the business algorithm with the highest use frequency and the memory and CPU it occupies are acquired, and the memory and CPU that this algorithm will occupy are estimated from these values; it is then judged whether the estimated memory to be occupied by the business algorithm with the highest use frequency is larger than a preset memory cache threshold, and/or whether its estimated CPU occupation is larger than a preset CPU cache threshold; if so, only the business algorithm with the highest use frequency and its related data are cached; otherwise, the use frequency, occupied memory and occupied CPU of the business algorithm with the next-highest use frequency are acquired and processed with the same operation as applied to the business algorithm with the highest use frequency.
And the service algorithm calling instruction receiving unit 23 is configured to receive a service algorithm calling instruction, where the service algorithm calling instruction carries a unique identifier of a service algorithm.
Optionally, the service data processing apparatus includes:
and the business algorithm calling instruction counting unit is used for counting the number of all the business algorithm calling instructions received in a first preset time period and a second preset time period respectively, wherein the first preset time period and the second preset time period are adjacent time periods, and the first preset time period is earlier than the second preset time period. The specific duration of the first preset time period and the second preset time period may be set according to an actual situation, for example, the traffic volume may not suddenly increase or decrease in the absence of activity, at this time, the first preset time period and the second preset time period may be set to be longer durations, but the traffic volume may increase in a short time in the case of activity release, at this time, the first preset time period and the second preset time period may be set to be shorter durations, so as to improve the accuracy of subsequent pre-estimation values.
And the service algorithm calling instruction pre-estimating unit is used for estimating the number of service algorithm calling instructions received in a third preset time period according to the counted number of all service algorithm calling instructions received in the first preset time period and the second preset time period, and estimating the size of the memory and the size of the central processing unit to be occupied by processing the service data according to the estimated number of service algorithm calling instructions received in the third preset time period, wherein the second preset time period is earlier than the third preset time period. Specifically, the growth rate from the first preset time period to the second preset time period is determined according to the counted numbers of all business algorithm calling instructions received in the first and second preset time periods, the number of business algorithm calling instructions received in the third preset time period is then estimated according to the number received in the second preset time period and the determined growth rate, and the size of the memory and of the central processing unit to be occupied by processing the service data is estimated from that number.
And the message reminding unit is used for sending out a reminder when the estimated size of the memory to be occupied by processing the service data is larger than a preset memory threshold, or when the estimated size of the central processing unit to be occupied by processing the service data is larger than a preset central processing unit threshold.
And the service algorithm calling unit 24 is configured to call the cached corresponding service algorithm and the data related to the service algorithm according to the service algorithm calling instruction.
Specifically, the corresponding business algorithm is searched according to the unique identifier of the business algorithm carried by the business algorithm calling instruction, and the searched business algorithm and the data related to the business algorithm are called. Further, if the calling mode of the data is specified in advance, the data related to the business algorithm is called according to the specified calling mode, for example, if SQL is adopted for the data related to the specified business algorithm, the data related to the business algorithm is called through SQL.
And the service data processing unit 25 is used for processing the specified service data according to the called service algorithm and the data related to the service algorithm.
Optionally, the service data processing apparatus includes:
and the service algorithm updating unit is used for inquiring the oracle database at regular time whether the new service algorithm and/or the data related to the service algorithm exist, and reading the new service algorithm and/or the data related to the service algorithm when the oracle database stores the new service algorithm and/or the data related to the service algorithm. When a new service algorithm appears in the oracle database, in order to enable the server to call the new service algorithm in time, the new service algorithm and data related to the new service algorithm need to be read in time; in addition, when some data are added to the original business algorithm of the oracle database, the server also needs to read some data added to the original business algorithm. Certainly, after the server reads the newly added service algorithm and/or the data related to the service algorithm, the data is cached in the server.
Optionally, to avoid that the memory of the server is always occupied, the service data processing apparatus includes:
and the service algorithm deleting unit is used for deleting the cached service algorithm and the data related to the service algorithm.
In the second embodiment of the invention, the service algorithm in the oracle database and the data related to the service algorithm are cached in the server, so that the server can quickly call the service algorithm corresponding to the service algorithm calling instruction and the data related to that algorithm, which improves the calling speed of the service algorithm. Because the service algorithm and its related data are cached in the server, the consumption of oracle database resources is also reduced, and downtime of the oracle database is avoided.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method for processing service data, the method comprising:
after receiving an application service starting instruction, reading a business algorithm in an oracle database and data related to the business algorithm;
caching the read business algorithm and data related to the business algorithm locally;
receiving a service algorithm calling instruction, wherein the service algorithm calling instruction carries a unique identifier of a service algorithm;
if the calling mode of the data is specified in advance, calling the cached corresponding business algorithm and the data related to the business algorithm according to the business algorithm calling instruction and the calling mode of the data specified in advance, wherein the calling mode of the data specified in advance comprises the calling mode of a structured query language;
processing the specified service data according to the called service algorithm and the data related to the service algorithm;
respectively counting the number of all service algorithm call instructions received in a first preset time period and a second preset time period, wherein the first preset time period and the second preset time period are adjacent time periods, the first preset time period is earlier than the second preset time period, and the durations of the first preset time period and the second preset time period are inversely proportional to the service volume;
estimating the number of service algorithm call instructions received in a third preset time period according to the counted number of all the service algorithm call instructions received in the first preset time period and the second preset time period, and estimating the size of a memory and the size of a central processing unit to be occupied by processing service data according to the estimated number of the service algorithm call instructions received in the third preset time period, wherein the second preset time period is earlier than the third preset time period;
and sending out a prompt when the estimated size of the memory to be occupied by processing the service data is larger than a preset memory threshold, or when the estimated size of the central processing unit to be occupied by processing the service data is larger than a preset central processing unit threshold.
2. The method according to claim 1, wherein the caching the read business algorithm and the data related to the business algorithm specifically includes:
judging whether the use frequency of the read service algorithm is greater than a preset frequency threshold value or not according to the use frequency counted in advance;
and when the use frequency of the read service algorithm is greater than a preset frequency threshold, caching the service algorithm greater than the preset frequency threshold and data related to the service algorithm.
3. The method according to claim 1, wherein when processing the specified service data according to the called service algorithm and the data related to the service algorithm, the method comprises:
and inquiring the oracle database regularly whether the new service algorithm and/or the data related to the service algorithm exist, and reading the new service algorithm and/or the data related to the service algorithm when the oracle database stores the new service algorithm and/or the data related to the service algorithm.
4. The method according to any one of claims 1 to 3, characterized in that after processing the specified business data according to the called business algorithm and the data related to the business algorithm, the method comprises:
and deleting the cached business algorithm and the data related to the business algorithm.
5. A service data processing apparatus, characterized in that the apparatus comprises:
the service algorithm reading unit is used for reading a service algorithm in the oracle database and data related to the service algorithm after receiving an application service starting instruction;
the business algorithm caching unit is used for locally caching the read business algorithm and the data related to the business algorithm;
the service algorithm calling instruction receiving unit is used for receiving a service algorithm calling instruction, and the service algorithm calling instruction carries the unique identifier of the service algorithm;
the service algorithm calling unit is used for calling the cached corresponding service algorithm and the data related to the service algorithm according to the service algorithm calling instruction and the calling mode of the pre-designated data if the calling mode of the data is pre-designated, wherein the calling mode of the pre-designated data comprises the calling mode of a structured query language;
the service data processing unit is used for processing the specified service data according to the called service algorithm and the data related to the service algorithm;
the business algorithm calling instruction counting unit is used for respectively counting the number of all business algorithm calling instructions received in a first preset time period and a second preset time period, the first preset time period and the second preset time period are adjacent time periods, the first preset time period is earlier than the second preset time period, and the durations of the first preset time period and the second preset time period are inversely proportional to the service volume;
the service algorithm calling instruction pre-estimating unit is used for estimating the number of service algorithm calling instructions received in a third preset time period according to the number of all service algorithm calling instructions received in the first preset time period and the second preset time period, and estimating the size of a memory and the size of a central processing unit which are occupied by processing service data according to the estimated number of the service algorithm calling instructions received in the third preset time period, wherein the second preset time period is earlier than the third preset time period;
and the message reminding unit is used for sending out a reminder when the estimated size of the memory to be occupied by processing the service data is larger than a preset memory threshold, or when the estimated size of the central processing unit to be occupied by processing the service data is larger than a preset central processing unit threshold.
6. The apparatus of claim 5, wherein the business algorithm buffer unit comprises:
the using frequency comparison module is used for judging whether the using frequency of the read service algorithm is greater than a preset frequency threshold value according to the using frequency counted in advance;
and the data caching module is used for caching the service algorithm larger than the preset frequency threshold value and the data related to the service algorithm when the use frequency of the read service algorithm is larger than the preset frequency threshold value.
7. The apparatus of claim 5, wherein the apparatus comprises:
and the service algorithm updating unit is used for inquiring the oracle database at regular time whether the new service algorithm and/or the data related to the service algorithm exist, and reading the new service algorithm and/or the data related to the service algorithm when the oracle database stores the new service algorithm and/or the data related to the service algorithm.
8. The apparatus according to any one of claims 5 to 7, characterized in that it comprises:
and the service algorithm deleting unit is used for deleting the cached service algorithm and the data related to the service algorithm.
CN201611032547.5A 2016-11-14 2016-11-14 Service data processing method and device Active CN106776753B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611032547.5A CN106776753B (en) 2016-11-14 2016-11-14 Service data processing method and device
PCT/CN2017/073387 WO2018086265A1 (en) 2016-11-14 2017-02-13 Service data processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611032547.5A CN106776753B (en) 2016-11-14 2016-11-14 Service data processing method and device

Publications (2)

Publication Number Publication Date
CN106776753A CN106776753A (en) 2017-05-31
CN106776753B true CN106776753B (en) 2020-08-21

Family

ID=58970686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611032547.5A Active CN106776753B (en) 2016-11-14 2016-11-14 Service data processing method and device

Country Status (2)

Country Link
CN (1) CN106776753B (en)
WO (1) WO2018086265A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038667B (en) * 2017-12-21 2021-01-08 平安科技(深圳)有限公司 Policy generation method, device and equipment
CN115270275B (en) * 2022-08-17 2023-11-24 佛山市南海区微高软件有限公司 Window size input reminding method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625699B (en) * 2009-07-28 2013-01-09 大连新中连软件集团有限公司 Application software business control method and system based on business componentization
CN102137144B (en) * 2010-11-11 2015-04-08 华为终端有限公司 Method and system for configuration management of third-party software as well as management server
CN105446827B (en) * 2014-08-08 2018-12-14 阿里巴巴集团控股有限公司 Date storage method and equipment when a kind of database failure
CN104850509B (en) * 2015-04-27 2017-12-12 交通银行股份有限公司 A kind of operating method and system of banking business data memory cache

Also Published As

Publication number Publication date
WO2018086265A1 (en) 2018-05-17
CN106776753A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106569585B (en) A kind of method and terminal managing program process
CN108616654B (en) Message reminding method, device, terminal and computer readable storage medium
US20120271822A1 (en) System for establishing preferred contacts for a central user of a mobile communication device
CN107395697A (en) Push Channel Selection, information push method, device and equipment, computer-readable recording medium
CN104737157A (en) A federated database system
CN106776753B (en) Service data processing method and device
CN107395252A (en) Frequency-hopping method, frequency-hopping arrangement, terminal and baseband chip
CN109978114B (en) Data processing method, device, server and storage medium
US10609523B2 (en) Context and environmentally aware notifications on mobile devices
CN110633302B (en) Method and device for processing massive structured data
CN109195153B (en) Data processing method and device, electronic equipment and computer readable storage medium
US20030162559A1 (en) Mobile communications terminal, information transmitting system and information receiving method
CN101867886A (en) Information notification method and device
CN111026529B (en) Task stopping method and device of distributed task processing system
CN103841508A (en) User information obtaining method and information aggregation platform
CN113765771A (en) Instant message processing method and device
CN108347403B (en) Method and device for distributing intermediate communication identification
CN115640151B (en) Service calling method, device and storage medium
CN116132528B (en) Flight management message pushing method and device and electronic equipment
CN115665074B (en) Message flow-limiting sending method, device, equipment and storage medium
CN110418020B (en) List state information processing method and device, electronic terminal and storage medium
CN112306649B (en) Method and device for managing processes
CN115599515A (en) Service request processing method and device, computer equipment and storage medium
CN107783897B (en) Software testing method and device
CN117499533A (en) Message reminding method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant