CN114826806A - Service processing method, system, device and readable storage medium


Info

Publication number
CN114826806A
Authority
CN
China
Prior art keywords
service
type
user
historical
target
Prior art date
Legal status
Pending
Application number
CN202210258538.7A
Other languages
Chinese (zh)
Inventor
豆红雷
王健彪
刘征宇
Current Assignee
Hangzhou Huacheng Software Technology Co Ltd
Original Assignee
Hangzhou Huacheng Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Huacheng Software Technology Co Ltd
Priority to CN202210258538.7A
Publication of CN114826806A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2816Controlling appliance services of a home automation network by calling their functionalities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2805Home Audio Video Interoperability [HAVI] networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a service processing method, a device, computer equipment and a computer readable storage medium, which predict the next service type to be operated by a user after the current service operation according to the historical service operation information of the user and take the predicted service type as a target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type; before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region; and when the user actually carries out the next service operation, acquiring corresponding service data from the buffer under the condition that the service type of the service operation is the same as the target service type. The method and the device can save the interactive processing time of the operation terminal and the target device, and effectively improve the service processing efficiency.

Description

Service processing method, system, device and readable storage medium
Technical Field
The present application relates to the field of smart home technologies, and in particular, to a service processing method and apparatus, a computer device, and a computer-readable storage medium.
Background
In general, smart homes have many functions and application services, and a user can operate various services of the smart home at an operation end. For example, when the smart home device is a home camera and the operation terminal is a mobile phone APP, the user can implement service operations such as live broadcast, image capture, video viewing, and pan/tilt/zoom of the home camera on the mobile phone APP.
In the prior art, when a user performs a certain service operation, the operation terminal sends a service request to the target device and obtains the service data from it: the target device returns the service data to the operation terminal, and the operation terminal uses the returned service data to complete the service operation. However, with this approach each node on the path from the operation end to the target device takes processing time, and network transmission time is added on top, which results in low service processing efficiency.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a service processing method, a service processing apparatus, a computer device, and a computer readable storage medium to solve the problem of low service processing efficiency in the related art.
In a first aspect, an embodiment of the present application provides a service processing method, configured to process a service of a target device at an operation end, where the method includes the following steps:
predicting a next service type to be operated by the user after the current service operation according to the historical service operation information of the user, and taking the next service type as a target service type; the historical service operation information comprises the service types processed by the user, and the historical operation time and the historical operation times of each service type;
before the user performs the next service operation, acquiring the service data corresponding to the target service type and storing the service data in a cache region;
and when the user actually carries out the next service operation, acquiring corresponding service data from the cache region under the condition that the service type of the service operation is the same as the target service type.
In some embodiments, the predicting, according to the historical service operation information of the user, a service type to be operated next after the current service operation by the user, and taking the service type as a target service type, includes the following steps:
calculating the probability that the next service type to be operated by the user after the current service operation belongs to each service type in the historical service operation information when the service type of the current service operation of the user belongs to each service type in the historical service operation information according to the historical service operation information of the user, and generating a service association probability table;
and predicting the next service type to be operated by the user after the current service operation according to the service association probability table, and taking the service type as a target service type.
In some embodiments, after the obtaining and storing the service data corresponding to the target service type in the buffer, and before the obtaining the corresponding service data from the buffer, the method further includes:
and if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
In some embodiments, before the obtaining the corresponding service data from the buffer, the method further includes:
obtaining a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, wherein the time interval is used as a keep-alive time interval;
and in the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
In some embodiments, the obtaining, according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, a time interval between the operation time of the target service type and the operation time of the current service as a keep-alive time interval includes the following steps:
obtaining, according to the operation time of services of the same type as the current service and the operation time of services of the same type as the target service type in the historical service operation information of the user, the time intervals in all the historical service operation information between the operation time of a service of the same type as the target service type and the operation time of the service of the same type as the current service;
and screening out the maximum time interval within a preset time interval range from the time intervals between the operation time of the service with the same type as the target service and the operation time of the service with the same type as the current service in all the historical service operation information, wherein the maximum time interval is used as the keep-alive time interval.
In some embodiments, the obtaining service data corresponding to the target service type and storing the service data in a buffer before the user performs the next service operation includes the following steps:
before the user performs the next service operation, judging whether the historical operation times of the target service type are ranked above a preset rank;
if yes, acquiring the service data corresponding to the target service type and storing the service data in the cache region.
In some embodiments, the calculating, according to the historical service operation information of the user, a probability that a service type to be operated next by the user after a current service operation belongs to each service type in the historical service operation information when a service type of the current service operation of the user belongs to each service type in the historical service operation information, and generating a service association probability table includes:
according to the historical service operation information of the user, acquiring the service types whose historical operation times are ranked above a preset rank, taking these service types as hot services, and recording the historical operation time and the historical operation times of the hot services;
and when the service type of the current service operation of the user belongs to the hot service, calculating the probability that the next service type to be operated by the user after the current service operation belongs to the hot service, and forming the service association probability table.
In some of these embodiments, the method further comprises:
determining the operation time of the initial service according to the historical service operation time of the user;
and before the operation time of the initial service, acquiring service data corresponding to the initial service and storing the service data in the cache region.
In a second aspect, an embodiment of the present application provides a service processing apparatus, configured to process a service of a target device at an operation end, where the apparatus includes: the device comprises a prediction module, an acquisition module and an operation module;
the prediction module is used for predicting the next service type to be operated by the user after the current service operation according to the historical service operation information of the user and taking the next service type as a target service type; the historical service operation information comprises service types processed by a user, and historical operation time and historical operation times of each service type;
the obtaining module is used for obtaining the service data corresponding to the target service type and storing the service data in a cache area before the user performs the next service operation;
and the operation module is used for acquiring corresponding service data from the cache region under the condition that the service type of the service operation is the same as the target service type when the user actually performs the next service operation.
In a third aspect, there is provided in this embodiment a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, in the present embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect as described above.
According to the service processing method, the service processing device, the computer equipment and the computer readable storage medium, the next service type to be operated after the current service operation of the user is predicted according to the historical service operation information of the user, and the predicted service type is used as the target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type; before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region; and when the user actually carries out the next service operation, acquiring corresponding service data from the buffer under the condition that the service type of the service operation is the same as the target service type. According to the method and the device, the next service type to be operated after the current service operation of the user is predicted according to the historical service operation information of the user, and the service data corresponding to the predicted service type is stored in the cache region in advance, so that the interactive processing time of the operation end and the target device can be saved, and the service processing efficiency is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is an application scenario diagram of a service processing method provided according to an embodiment of the present application;
fig. 2 is a flowchart of a service processing method provided according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device provided according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
Fig. 1 is an application scenario diagram of a service processing method according to an embodiment of the present application. As shown in fig. 1, data transmission may be performed between a server 101 and a mobile terminal 102 via a network. The mobile terminal 102 is configured to collect historical service operation information of a user, transmit the collected historical service operation information to the server 101, predict a next service type to be operated by the user after a current service operation according to the historical service operation information of the user after the server 101 receives the historical service operation information of the user, and use the predicted service type as a target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type; before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region; and when the user actually carries out the next service operation, acquiring corresponding service data from the buffer under the condition that the service type of the service operation is the same as the target service type. The server 101 may be implemented by an independent server or a server cluster composed of a plurality of servers, and the mobile terminal 102 may be any display screen with an input function.
The embodiment provides a service processing method, configured to process a service of a target device at an operation end, as shown in fig. 2, where the method includes the following steps:
step S210, according to the historical business operation information of the user, predicting the next business type to be operated after the current business operation of the user, and taking the next business type as a target business type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type.
Specifically, the historical service operation information of the user may select the historical service operation information within a preset number of days from the current time, and the preset number of days may be set according to actual needs. For the same device and the same user, the business operation habit is not easy to change in a long period of time, the common business types are relatively concentrated, and the business operation types and the business operation sequence are basically fixed. Based on the service type prediction method, the service operation habit of the user can be known according to the historical service operation information of the user, and the next service type to be operated after the current service operation can be predicted according to the service operation habit of the user and is taken as the target service type.
Step S230, before the user performs the next service operation, obtaining the service data corresponding to the target service type and storing the service data in the cache region.
Specifically, the buffer is used for storing the service data, and the operation end can quickly obtain the service data from the buffer. As one of the embodiments, the buffer may be directly disposed at the operation end. After the target service type is predicted, the service data corresponding to the target service type can be obtained from the target equipment terminal and stored in the cache region.
Step S250, when the user actually performs the next service operation, and under the condition that the service type of the service operation is the same as the target service type, acquiring the corresponding service data from the buffer.
Specifically, when the user actually performs the next service operation, it may be determined whether the service type of the service operation is the same as the target service type, and under the condition that the service type of the service operation is the same as the target service type, the corresponding service data may be directly obtained from the cache region.
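To make the three steps concrete, a minimal client-side sketch is given below. The ServiceClient class, the device object with a request method and the predictor with a predict_next method are illustrative assumptions and not part of the present application; the sketch only shows the predict, prefetch and cache-hit flow of steps S210 to S250.

```python
# Illustrative sketch of steps S210-S250 (all names and APIs are assumptions).

class ServiceClient:
    def __init__(self, device, predictor):
        self.device = device          # target device (e.g. a home camera)
        self.predictor = predictor    # built from historical service operation information
        self.cache = {}               # cache region: service type -> service data

    def on_service_operation(self, current_type):
        # Serve the current operation: hit the cache if the right type was prefetched,
        # otherwise fall back to a normal request to the target device.
        data = self.cache.pop(current_type, None)
        if data is None:
            data = self.device.request(current_type)   # slow path: full round trip

        # Step S210: predict the next service type from the user's habits.
        target_type = self.predictor.predict_next(current_type)

        # Step S230: prefetch its service data into the cache region before the
        # user actually performs the next operation (in practice this would run
        # asynchronously rather than blocking the current operation).
        if target_type is not None:
            self.cache[target_type] = self.device.request(target_type)
        return data
```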
In the related art, when a user performs a certain service operation, the user sends a service request to a target device from an operation terminal, obtains service data from the target device, the target device returns the service data to the operation terminal, and the operation terminal obtains the service data returned from the target device to complete the service operation. However, when the prior art is used for performing service operation, the processing from the operation end to each node of the target device takes a long time, and there is network transmission time, which results in a problem of low service processing efficiency.
In order to solve the above problem, the present application provides a service processing method for processing a service of a target device at an operation end. Predicting a next service type to be operated by a user after the current service operation according to historical service operation information of the user, and taking the next service type as a target service type; the historical service operation information comprises service types processed by a user, and historical operation time and historical operation times of each service type; before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region; and when the user actually carries out the next service operation, acquiring corresponding service data from the buffer under the condition that the service type of the service operation is the same as the target service type. According to the method and the device, the service type to be operated next after the current service operation of the user is predicted according to the historical service operation information of the user, and the service data corresponding to the predicted service type is stored in the cache region in advance, so that the interactive processing time of the operation end and the target device can be saved, and the service processing efficiency is effectively improved.
In one embodiment, the step S210 predicts a next service type to be operated by the user after the current service operation according to the historical service operation information of the user, and takes the predicted service type as the target service type, including the following steps:
step S211, according to the historical service operation information of the user, calculating a probability that a next service type to be operated by the user after the current service operation belongs to each service type in the historical service operation information when the service type of the current service operation of the user belongs to each service type in the historical service operation information, and generating a service association probability table.
Step S212, according to the service association probability table, predicting the next service type to be operated by the user after the current service operation, and taking the next service type as the target service type.
Specifically, the historical service operation information of the user may be the service operation information within a preset number of days from the current time, and the preset number of days may be set according to actual needs, for example, 10 days or 20 days; for the same device, the service operation habit of the same user generally does not change easily over a long period of time. The historical service operation information of the user comprises the service types processed by the user, and the historical operation time and historical operation times of each service type. From this information it can be determined which service types tend to be operated next after the user operates a certain service, and how often each of them follows it, so that the probability of each service type being operated next after a given service can be obtained. Taking the smart home device being a home camera and the operation terminal being a mobile phone APP as an example, the service operation information within 10 days from the current time is taken as the historical service operation information of the user, and the service operation records within those 10 days are shown in table 1, where each entry represents a service type and its operation time. Service type A may represent opening the APP, service type B entering the main menu, service type C a live broadcast operation, service type D an image capture, service type E a first kind of pan-tilt (PTZ) operation, service type F exiting, and service type G a second kind of pan-tilt operation.
TABLE 1
A(7:00) D(7:05) E(7:15) C(7:21)
A(8:15) B(8:20) C(8:25) G(8:26) F(8:27) H(8:30)
A(9:01) B(9:03) C(9:07) G(9:12) F(9:20)
A(8:16) B(8:18) C(8:24) D(8:26)
A(9:02) B(9:05) C(9:10)
A(8:15) B(8:17) A(8:21) C(8:23)
A(9:02) B(9:06) C(9:08) E(9:11)
A(9:01) E(9:05) F(9:18)
A(9:03) B(9:07) C(9:08) G(9:15) F(9:20)
A(9:01) E(9:04) C(9:09) G(9:17) F(9:21)
As can be seen from table 1, the service types operated next after an A-type service within 10 days from the current time are B, D, E and C: the B-type service is operated next after the A-type service 7 times, the D-type service 1 time, the E-type service 2 times, and the C-type service 1 time. From the historical service operation information of the user, the probability of next operating a B-type service after an A-type service is therefore 7/(1+1+2+7) = 63.6%, the probability of next operating a D-type service is 1/(1+1+2+7) = 9.1%, the probability of next operating an E-type service is 2/(1+1+2+7) = 18.2%, and the probability of next operating a C-type service is 1/(1+1+2+7) = 9.1%. The service types operated next after a B-type service are C and A: the C-type service is operated next 6 times and the A-type service 1 time, so the probability of next operating a C-type service after a B-type service is 6/(1+6) = 85.7% and the probability of next operating an A-type service is 1/(1+6) = 14.3%. The probabilities of the next operated service type after each of the service types C, D, E, F and G can then be calculated in turn, thereby forming a service association probability table, as shown in table 2.
TABLE 2
[Table 2: service association probability table of next-operation probabilities for each service type; shown as an image in the original publication and not reproduced here]
According to the service association probability table, it can be known which service type has the highest probability of being operated next after the current service operation of the user, so that the next service type to be operated after the current service operation is predicted and used as the target service type. For example, assuming that the current service type is A, the service association probability table indicates that the probability of next operating a B-type service after an A-type service is the highest, so the next service type to be operated after the A-type service is predicted to be B; therefore, the B-type service is taken as the target service type to be operated next after the A-type service.
By forming the service association probability table, the next service type to be operated after the current service operation can be effectively and accurately predicted.
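As an illustration of how such a service association probability table could be built from an operation log shaped like table 1, the following sketch counts, for each pair of consecutive operations in a session, how often each service type follows each other type and normalises the counts into conditional probabilities. The function names and the session data structure are assumptions, not part of the present application.

```python
from collections import Counter, defaultdict

def build_association_table(history):
    """history: list of sessions, each a list of (service_type, operation_time) tuples."""
    transition_counts = defaultdict(Counter)
    for session in history:
        types = [service_type for service_type, _ in session]
        for current, nxt in zip(types, types[1:]):
            transition_counts[current][nxt] += 1
    # Normalise the counts of each row into conditional probabilities P(next | current).
    return {
        current: {nxt: count / sum(counts.values()) for nxt, count in counts.items()}
        for current, counts in transition_counts.items()
    }

def predict_target_type(table, current_type):
    # The predicted target service type is the most probable next type, if any.
    candidates = table.get(current_type)
    return max(candidates, key=candidates.get) if candidates else None
```

Applied to the ten sessions of table 1, this reproduces, for example, P(B|A) = 7/11 ≈ 63.6% and P(C|B) = 6/7 ≈ 85.7%, and predict_target_type(table, "A") returns "B".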
As one embodiment, the service data corresponding to every service type whose next-operation probability is greater than or equal to a preset threshold may be stored in the buffer. For example, if after an A-type service the probabilities of next operating a B-type service and a C-type service both exceed 30%, then when the current service type is A, the service data corresponding to both the B-type and the C-type services are stored in the buffer, as sketched below. This increases the probability that the service data stored in the buffer is valid service data, and further improves the service processing efficiency.
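For this variant, a hypothetical helper on top of the table built above could return every candidate whose probability meets the preset threshold; the 0.3 default mirrors the 30% example and is an assumption.

```python
def predict_candidate_types(table, current_type, threshold=0.3):
    # All next service types whose conditional probability reaches the preset threshold.
    return [nxt for nxt, prob in table.get(current_type, {}).items() if prob >= threshold]
```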
In one embodiment, after the step S230 acquires the service data corresponding to the target service type and stores the service data in the buffer, and before the step S250 acquires the corresponding service data from the buffer, the service processing method provided by the present application further includes the following steps:
step S240, if the service data corresponding to the target service type is updated, the updated service data is updated to the cache region.
Specifically, take the case where the smart home device is a home camera and the operation terminal is a mobile phone APP. When the mobile phone APP needs to obtain the service data corresponding to the target service type in advance, it sends the interface fGet1(..., ex) carrying an ex parameter to the home camera end; after the camera receives fGet1(..., ex), it returns the corresponding service data to the mobile phone APP end, and the mobile phone APP stores the service data returned by the home camera end in the cache region. After the service data corresponding to the target service type has been obtained and stored in the cache region, and before the corresponding service data is obtained from the cache region, whenever the relevant service data at the home camera end is updated, it is reported to the mobile phone APP end through the interface fSend(..., ex) carrying the ex parameter, and after receiving fSend(..., ex) the mobile phone APP end updates the updated service data to the cache region. Updating the cache region in time whenever the service data changes ensures the accuracy of the service data in the cache region, and thus the accuracy of the service data obtained by the user. As one implementation, if the target device receives an interface carrying the ex parameter, that interface service is processed with priority, ensuring that the service data in the cache region can be stored in time.
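The interaction around the ex-flagged interfaces might be organised as in the sketch below. fGet1 and fSend are the interface names used in this embodiment, but the class layout, the device_link object and the method signatures are illustrative assumptions only.

```python
class OperationEnd:
    """Mobile phone APP side: prefetches service data and absorbs pushed updates."""

    def __init__(self, device_link):
        self.device_link = device_link   # channel to the home camera end (assumed object)
        self.cache = {}                  # cache region: service type -> service data

    def prefetch(self, target_type):
        # fGet1(..., ex): request carrying the ex parameter; the camera answers with service data.
        data = self.device_link.fGet1(target_type, ex=True)
        self.cache[target_type] = data

    def on_fSend(self, service_type, updated_data, ex=True):
        # fSend(..., ex): the camera reports changed service data; refresh the cache region.
        if ex and service_type in self.cache:
            self.cache[service_type] = updated_data
```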
In one embodiment, before the step S250 acquires the corresponding service data from the buffer, the service processing method provided by the present application further includes the following steps:
obtaining a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, wherein the time interval is used as a keep-alive time interval;
and in the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
Specifically, assuming that the current service type is a and the predicted target service type is B, according to the operation time of the service of the same type as a and the operation time of the service of the same type as B in the historical service operation information, it can be predicted how long the service of type B will be operated after the service of type a is operated by means of probability statistics and the like, and the predicted time interval can be used as a keep-alive time interval. In addition, after the keep alive time interval, the service data in the buffer area can be released, i.e. the buffer area can be released in time.
In one embodiment, obtaining a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, and using the time interval as a keep-alive time interval, includes the following steps:
obtaining, according to the operation time of services of the same type as the current service and the operation time of services of the same type as the target service type in the historical service operation information of the user, the time intervals in all the historical service operation information between the operation time of a service of the same type as the target service type and the operation time of the service of the same type as the current service;
and screening out the maximum time interval within a preset time interval range from the time intervals between the operation time of the service with the same type as the target service and the operation time of the service with the same type as the current service in all the historical service operation information as the keep-alive time interval.
Specifically, assuming that the current service type is A and the predicted target service type is B, the time intervals between the operation time of each service of the same type as B and the operation time of the preceding service of the same type as A can be obtained from all the historical service operation information. For example, as shown in table 1 above, the A-type service is followed by a B-type operation 7 times, with time intervals of 5 minutes, 2 minutes, 2 minutes, 3 minutes, 2 minutes, 4 minutes and 4 minutes respectively, so the maximum time interval is 5 minutes. To avoid the case where a few A-type service operations in the historical service operation information are followed by B only after an abnormally large time interval, a preset time interval range is set, for example 10 minutes, and the maximum time interval not exceeding 10 minutes, namely 5 minutes, is taken as the keep-alive time interval from among the time intervals between the operation time of services of the same type as the target service type and the operation time of services of the same type as the current service in all the historical service operation information. Within the keep-alive time interval, if the service data corresponding to the target service type is updated, the updated service data is updated to the cache region; in this way, the performance consumed by continuously processing service data updates can be effectively avoided. In addition, after the keep-alive time interval, the service data in the buffer area can be released, that is, the buffer area can be freed in time.
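A sketch of this keep-alive computation is given below, assuming the same session structure as in the earlier sketch and operation times expressed in minutes; the 10-minute bound plays the role of the preset time interval range, and all names are assumptions.

```python
def keep_alive_interval(history, current_type, target_type, max_gap_minutes=10):
    """Largest current-to-target gap (in minutes) that stays within the preset range."""
    gaps = []
    for session in history:
        for (t1, time1), (t2, time2) in zip(session, session[1:]):
            if t1 == current_type and t2 == target_type:
                gaps.append(time2 - time1)
    # Discard abnormally large gaps, then keep the maximum of what remains.
    bounded = [gap for gap in gaps if gap <= max_gap_minutes]
    return max(bounded) if bounded else None
```

With the A-to-B gaps of table 1 (5, 2, 2, 3, 2, 4 and 4 minutes) this returns 5, the keep-alive interval used in the example.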
Specifically, take again the case where the smart home device is a home camera and the operation terminal is a mobile phone APP. When the mobile phone APP needs to obtain the service data corresponding to the target service type in advance, it sends the interface fGet1(..., ex, t) carrying the ex parameter to the home camera end, where t represents the keep-alive time interval. After the camera receives fGet1(..., ex, t), it returns the corresponding service data to the mobile phone APP end, and the mobile phone APP end stores the service data returned by the home camera end in the cache region. After the service data corresponding to the target service type has been obtained and stored in the cache region, and before the corresponding service data is obtained from the cache region, whenever the relevant service data at the home camera end is updated, it is reported to the mobile phone APP end through the interface fSend(..., ex, t) carrying the ex parameter, and after receiving fSend(..., ex, t) the mobile phone APP end updates the updated service data to the cache region. After the home camera end receives the keep-alive time interval t, it starts a countdown; as time passes the remaining keep-alive time decreases, and once it reaches 0, the service data corresponding to the target service type stored in the cache region and the data-reporting logic at the home camera end are released, effectively avoiding the performance consumed by continuously processing service data updates.
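On the device end, the countdown and release behaviour after receiving the keep-alive interval t could look roughly like the following threading-based sketch; the class, callback and attribute names are assumptions for illustration.

```python
import threading

class DeviceReporter:
    """Home camera side: pushes updates only while the keep-alive countdown is running."""

    def __init__(self, send_update):
        self.send_update = send_update   # callback performing fSend(..., ex, t) to the APP end
        self.active = {}                 # service type -> running countdown timer

    def start_keep_alive(self, service_type, t_seconds):
        # Start the countdown received with fGet1(..., ex, t).
        timer = threading.Timer(t_seconds, self._release, args=(service_type,))
        self.active[service_type] = timer
        timer.start()

    def on_data_changed(self, service_type, data):
        # Report the change only while the keep-alive interval has not expired.
        if service_type in self.active:
            self.send_update(service_type, data)

    def _release(self, service_type):
        # Countdown reached 0: drop the reporting logic for this service type.
        self.active.pop(service_type, None)
```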
In one embodiment, the step S230 of obtaining the service data corresponding to the target service type and storing the service data in the buffer before the user performs the next service operation includes the following steps:
step S231, before the user performs the next service operation, judging whether the historical operation times ranking of the target service type is more than a preset name time;
step S232, if yes, obtaining the service data corresponding to the target service type and storing the service data in the cache region.
Specifically, because the prediction is based on the historical service operation information of the user, the service type predicted to follow a certain service may in fact occur only rarely, even though its conditional probability is high. For example, as shown in table 2, according to the historical service operation information of the user, only the H-type service has ever been operated after the F-type service, so in theory the next service type to be operated after the F-type service is H; but the H-type service was operated only once in the entire historical service operation information, and its actual occurrence probability is very small. Taking the smart home device being a home camera and the operation terminal being a mobile phone APP as an example, the H-type service corresponds to restoring the home camera to factory settings from the mobile phone APP end; although only the factory-reset service has been operated after the F-type service, a home camera is in practice restored to factory settings only very rarely over a long period of time. Therefore, if the factory-reset service data were stored in the cache region and maintained there after every F-type operation, while the probability of actually performing the factory reset is low, the performance spent on maintaining service data in the cache region would largely be wasted. Taking table 1 above as an example, the historical operation times are ranked as follows: A (11 times), C (9 times), B (7 times), F (5 times), G (4 times), E (4 times), D (2 times) and H (1 time). In this embodiment the preset rank may be set according to actual requirements; for example, if the preset rank is the fifth place, then the service types ranked at or above the fifth place are A, C, B, F and G. Before the user performs the next service operation, it is judged whether the historical operation times of the target service type are ranked above the preset rank; if yes, the service data corresponding to the target service type is obtained and stored in the cache region, which effectively reduces the waste of performance on maintaining service data in the cache region. As one implementation, some sensitive service operations of the target device, for example factory reset, restart and data deletion, are not taken as target service types, so as to avoid data loss of the target device caused by improper operation.
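A sketch of the rank check of steps S231 and S232 is shown below; operation_counts is assumed to be a mapping from service type to historical operation times, and the tie between G and E in table 1 is broken arbitrarily by the sort.

```python
def should_prefetch(operation_counts, target_type, preset_rank=5):
    # Rank service types by historical operation times, most frequent first.
    ranked = sorted(operation_counts, key=operation_counts.get, reverse=True)
    return target_type in ranked[:preset_rank]

# Example with the counts of table 1:
counts = {"A": 11, "C": 9, "B": 7, "F": 5, "G": 4, "E": 4, "D": 2, "H": 1}
should_prefetch(counts, "H")   # False: H is ranked below the preset rank, so it is not cached
should_prefetch(counts, "C")   # True: C is a hot service and may be prefetched
```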
Further, in one embodiment, the step S211 includes the following steps:
according to the historical service operation information of the user, acquiring the service types whose historical operation times are ranked above a preset rank, taking these service types as hot services, and recording the historical operation time and the historical operation times of the hot services;
when the service type of the current service operation of the user belongs to the hot service, calculating the probability that the next service type to be operated after the current service operation of the user belongs to the hot service, and forming a service association probability table.
Specifically, take the historical service operation information in table 1 as an example. According to the historical service operation information, the historical operation times are ranked as follows: A (11 times), C (9 times), B (7 times), F (5 times), G (4 times), E (4 times), D (2 times) and H (1 time). In this embodiment the preset rank may be set according to actual requirements; for example, if the preset rank is the fifth place, then the service types ranked at or above the fifth place are A, C, B, F and G, and A, C, B, F and G are the hot services. According to the historical service operation information, when the service type of the current service operation of the user belongs to the hot services, the probability that the next service type to be operated after the current service operation belongs to each of the hot services can be calculated, forming the service association probability table shown in table 3.
TABLE 3
[Table 3: service association probability table restricted to the hot services; shown as an image in the original publication and not reproduced here]
Through the above steps, service types whose historical operation times are ranked below the preset rank can be excluded from the predicted target service types, which effectively reduces the waste of performance on maintaining service data in the cache region.
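Restricting the association table to the hot services could be sketched as below, reusing the hypothetical build_association_table from the earlier sketch. Since table 3 is not reproduced, whether the remaining probabilities are renormalised after dropping the non-hot columns is an assumption of this sketch.

```python
def build_hot_association_table(history, operation_counts, preset_rank=5):
    ranked = sorted(operation_counts, key=operation_counts.get, reverse=True)
    hot = set(ranked[:preset_rank])
    full_table = build_association_table(history)
    hot_table = {}
    for current, next_probs in full_table.items():
        if current not in hot:
            continue                      # only predict when the current type is a hot service
        kept = {nxt: p for nxt, p in next_probs.items() if nxt in hot}
        total = sum(kept.values())
        if total > 0:
            # Renormalise over the hot services only (an assumption of this sketch).
            hot_table[current] = {nxt: p / total for nxt, p in kept.items()}
    return hot_table
```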
In one embodiment, the service processing method provided by the present application further includes the following steps:
determining the operation time of the initial service according to the historical service operation time of the user;
and before the operation time of the initial service, acquiring service data corresponding to the initial service and storing the service data in a buffer area.
Specifically, since the initial service has no preceding operation of a corresponding service type, the initial service type cannot be predicted from the service type of a current operation; for example, the A-type service operated first in table 1 is the initial service. However, the operation time of the initial service can be determined according to the historical service operation time of the user. For example, the operation time of the A-type service in table 1 lies roughly between 7:00 and 9:00, so to ensure that the service data corresponding to the initial service is prepared in advance, the operation time of the initial service can be set slightly ahead; for instance, if the operation time of the initial service is determined to be 7:00, the service data corresponding to the initial service is stored in the buffer before 7:00. The service data corresponding to the initial service is thus prepared in advance, effectively improving the processing efficiency of the initial service.
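A small sketch of scheduling the initial-service prefetch slightly ahead of its earliest historical operation time is given below, assuming datetime.time values for operation times and a hypothetical five-minute lead; the function and parameter names are illustrative assumptions.

```python
import datetime

def initial_prefetch_time(history, initial_type, lead=datetime.timedelta(minutes=5)):
    """Return a time of day slightly ahead of the earliest historical occurrence."""
    times = [op_time
             for session in history
             for service_type, op_time in session
             if service_type == initial_type]
    if not times:
        return None
    earliest = min(times)                                       # e.g. 7:00 for A in table 1
    anchor = datetime.datetime.combine(datetime.date.today(), earliest)
    return (anchor - lead).time()                               # prefetch a little before then
```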
The embodiment also provides a service processing method, and the process includes the following steps:
step S310, according to the historical service operation information of the user, obtaining the service type with the historical operation times ranked above the preset name times, taking the service type as the hot service, and recording the historical operation time and the historical operation times of the hot service.
Step S320, when the service type of the current service operation of the user belongs to the hot service, calculating a probability that a next service type to be operated by the user after the current service operation belongs to the hot service, and forming a service association probability table.
Step S330, according to the service association probability table, predicting the next service type to be operated after the current service operation by the user, and taking the next service type as the target service type.
Step S340, before the user performs the next service operation, obtaining the service data corresponding to the target service type and storing the service data in the buffer area.
Step S350, according to the operation time of services of the same type as the current service and the operation time of services of the same type as the target service type in the historical service operation information of the user, obtaining the time intervals in all the historical service operation information between the operation time of a service of the same type as the target service type and the operation time of the service of the same type as the current service.
Step S360, from the time intervals between the operation time of services of the same type as the target service type and the operation time of services of the same type as the current service in all the historical service operation information, screening out the maximum time interval within a preset time interval range as the keep-alive time interval.
Step S370, within the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
Step S380, when the user actually performs the next service operation, and under the condition that the service type of the service operation is the same as the target service type, acquiring the corresponding service data from the buffer.
Fig. 3 is a schematic diagram of a service processing apparatus according to an embodiment of the present invention, and as shown in fig. 3, there is provided a service processing apparatus 30, which includes a prediction module 31, an acquisition module 32, and an operation module 33;
the prediction module 31 is configured to predict, according to historical service operation information of the user, a next service type to be operated after a current service operation of the user, and use the predicted service type as a target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type;
an obtaining module 32, configured to obtain service data corresponding to a target service type and store the service data in a cache area before a user performs a next service operation;
an operation module 33, configured to, when the user actually performs the next service operation, obtain corresponding service data from the buffer if the service type of the service operation is the same as the target service type.
The service processing device 30 predicts a next service type to be operated by the user after the current service operation according to the historical service operation information of the user, and takes the predicted service type as a target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type; before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region; and when the user actually carries out the next service operation, acquiring corresponding service data from the buffer under the condition that the service type of the service operation is the same as the target service type. According to the method and the device, the service type to be operated next after the current service operation of the user is predicted according to the historical service operation information of the user, and the service data corresponding to the predicted service type is stored in the cache region in advance, so that the interactive processing time of the operation end and the target device can be saved, and the service processing efficiency is effectively improved.
In one embodiment, the prediction module 31 is further configured to calculate, according to the historical service operation information of the user, a probability that a service type to be operated next by the user after the current service operation belongs to each service type in the historical service operation information when the service type of the current service operation of the user belongs to each service type in the historical service operation information, and generate a service association probability table;
and predicting the next service type to be operated by the user after the current service operation according to the service association probability table, and taking the next service type as a target service type.
In one embodiment, the service processing apparatus 30 further includes an updating module, and after the service data corresponding to the target service type is obtained and stored in the buffer, before the corresponding service data is obtained from the buffer, the updating module is configured to update the updated service data to the buffer if the service data corresponding to the target service type is updated.
In one embodiment, before the corresponding service data is obtained from the buffer, the updating module is further configured to obtain a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, and use the time interval as a keep-alive time interval;
and in the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
In one embodiment, the updating module is further configured to obtain, according to the operation time of services of the same type as the current service and the operation time of services of the same type as the target service type in the historical service operation information of the user, the time intervals in all the historical service operation information between the operation time of a service of the same type as the target service type and the operation time of the service of the same type as the current service;
and screening out the maximum time interval within a preset time interval range from the time intervals between the operation time of the service with the same type as the target service and the operation time of the service with the same type as the current service in all the historical service operation information as the keep-alive time interval.
In one embodiment, the obtaining module 32 is further configured to determine whether the ranking of the historical operation times of the target service type is above a preset ranking before the user performs the next service operation;
if yes, acquiring service data corresponding to the target service type and storing the service data in a cache region.
In one embodiment, the prediction module 31 is further configured to obtain, according to the historical service operation information of the user, the service types whose historical operation times are ranked above a preset rank, take these service types as hot services, and record the historical operation time and the historical operation times of the hot services;
when the service type of the current service operation of the user belongs to the hot service, calculating the probability that the next service type to be operated after the current service operation of the user belongs to the hot service, and forming a service association probability table.
In one embodiment, the service processing apparatus 30 further includes an initial service module, configured to determine the operation time of the initial service according to the historical service operation time of the user;
and before the operation time of the initial service, acquiring service data corresponding to the initial service and storing the service data in a buffer area.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The memory provides an environment for the operating system and the computer programs to run in the non-volatile storage medium. The database of the computer device is used for storing a preset configuration information set. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the service processing method described above.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the service processing method described above. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the configuration shown in fig. 4 is a block diagram of only part of the configuration relevant to the present application and does not constitute a limitation on the computer device to which the present application is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
predicting a next service type to be operated after the current service operation of the user according to the historical service operation information of the user, and taking the next service type as a target service type; the historical service operation information comprises service types processed by the user, and historical operation time and historical operation times of each service type;
before the user performs the next service operation, acquiring service data corresponding to the target service type and storing the service data in a cache region;
and when the user actually performs the next service operation, acquiring the corresponding service data from the cache region if the service type of that service operation is the same as the target service type.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the historical service operation information of the user, calculating, for each service type in the historical service operation information to which the current service operation of the user may belong, the probability that the next service type to be operated by the user after the current service operation belongs to each service type in the historical service operation information, and generating a service association probability table;
and predicting the next service type to be operated by the user after the current service operation according to the service association probability table, and taking the next service type as a target service type.
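A minimal sketch of how such a service association probability table might be built, assuming the historical service operation information can be reduced to an ordered list of operated service types; the function names are illustrative only.

    from collections import defaultdict

    def build_association_table(history):
        """history: service types in the order the user operated them.
        Returns table[current][nxt] = P(next service type is nxt | current service type is current)."""
        counts = defaultdict(lambda: defaultdict(int))
        for current, nxt in zip(history, history[1:]):
            counts[current][nxt] += 1
        table = {}
        for current, followers in counts.items():
            total = sum(followers.values())
            table[current] = {nxt: n / total for nxt, n in followers.items()}
        return table

    def predict_target_type(table, current_type):
        """Take the next service type with the highest associated probability as the target service type."""
        followers = table.get(current_type)
        return max(followers, key=followers.get) if followers else None

For example, with history = ['live_view', 'playback', 'live_view', 'playback', 'settings'] (hypothetical service types), predict_target_type(build_association_table(history), 'live_view') returns 'playback'.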
In one embodiment, after the service data corresponding to the target service type is acquired and stored in the cache region, and before the corresponding service data is acquired from the cache region, the processor further implements the following steps when executing the computer program:
and if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
In one embodiment, before the corresponding service data is acquired from the cache region, the processor, when executing the computer program, further performs the following steps:
obtaining a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, wherein the time interval is used as a keep-alive time interval;
and in the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
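As a sketch of the keep-alive behaviour, the loop below re-fetches the cached entry for the target service type whenever an update is detected, but only within the keep-alive time interval; the polling approach and the has_update_fn callback are assumptions made for illustration.

    import time

    def keep_cache_alive(cache, target_type, fetch_fn, has_update_fn,
                         keep_alive_seconds, poll_seconds=1.0):
        """Within the keep-alive time interval, refresh the cached service data
        for target_type whenever the underlying service data has been updated."""
        deadline = time.monotonic() + keep_alive_seconds
        while time.monotonic() < deadline:
            if has_update_fn(target_type):                  # e.g. a version number on the target device changed
                cache[target_type] = fetch_fn(target_type)  # update the cache region with the new data
            time.sleep(poll_seconds)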
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining, according to the operation times of services of the same type as the current service and the operation times of services of the same type as the target service in the historical service operation information of the user, the time intervals between the operation times of the services of the same type as the target service and the operation times of the services of the same type as the current service in all the historical service operation information;
and screening out the maximum time interval within a preset time interval range from the time intervals between the operation time of the service with the same type as the target service and the operation time of the service with the same type as the current service in all the historical service operation information as the keep-alive time interval.
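One reading of this screening step, sketched below, pairs each historical operation of the current service type with the immediately following operation of the target service type, discards intervals outside the preset range, and keeps the largest remaining interval; the adjacency assumption is ours and not stated in the application.

    def keep_alive_interval(history, current_type, target_type,
                            min_interval, max_interval):
        """history: ordered list of (timestamp_seconds, service_type).
        Returns the largest current->target interval within [min_interval, max_interval],
        or None when no interval falls inside the preset range."""
        intervals = [t2 - t1
                     for (t1, a), (t2, b) in zip(history, history[1:])
                     if a == current_type and b == target_type]
        candidates = [d for d in intervals if min_interval <= d <= max_interval]
        return max(candidates) if candidates else None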
In one embodiment, the processor, when executing the computer program, further performs the steps of:
before the user performs the next service operation, determining whether the ranking of the historical operation times of the target service type is above a preset rank;
if yes, acquiring service data corresponding to the target service type and storing the service data in a cache region.
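A small sketch of the guard on pre-caching: the target service type is pre-fetched only when its historical operation times rank above the preset rank, so rarely used services do not occupy the cache region. The convention that rank 1 is the most operated type is an assumption.

    from collections import Counter

    def should_precache(history, target_type, preset_rank):
        """history: list of service types the user has operated.
        True when target_type ranks within the top preset_rank types by operation count."""
        ranked = [t for t, _ in Counter(history).most_common()]
        return target_type in ranked[:preset_rank]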
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the historical service operation information of the user, acquiring the service types whose historical operation times rank above a preset rank, taking these service types as hot services, and recording the historical operation time and historical operation times of the hot services;
when the service type of the current service operation of the user belongs to the hot service, calculating the probability that the next service type to be operated after the current service operation of the user belongs to the hot service, and forming a service association probability table.
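Restricting the table to hot services keeps its size bounded. The sketch below selects the top-ranked service types as hot services and keeps only transitions between them; the resulting pairs could then be normalised into probabilities in the same way as the earlier association-table sketch. Function names are illustrative.

    from collections import Counter

    def hot_services(history, preset_rank):
        """history: list of operated service types; returns the top preset_rank types by operation count."""
        return {t for t, _ in Counter(history).most_common(preset_rank)}

    def hot_transitions(history, preset_rank):
        """Keep only transitions in which both the current and the next service type are hot services."""
        hot = hot_services(history, preset_rank)
        return [(a, b) for a, b in zip(history, history[1:]) if a in hot and b in hot]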
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the operation time of the initial service according to the historical service operation time of the user;
and before the operation time of the initial service, acquiring the service data corresponding to the initial service and storing the service data in the cache region.
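For the initial service, one plausible reading, sketched below, is to take the service type the user most often operates first in a session together with the typical time of day at which that happens, and to pre-fetch its data a short lead time before that moment; the session notion, the averaging and the five-minute lead are assumptions for this sketch.

    from collections import Counter
    from datetime import datetime, timedelta

    def initial_service_and_time(session_starts):
        """session_starts: list of (datetime, service_type) for the first operation of each session.
        Returns the most frequent first service type and its average start time of day."""
        if not session_starts:
            return None, None
        first_types = Counter(t for _, t in session_starts)
        initial_type = first_types.most_common(1)[0][0]
        seconds = [dt.hour * 3600 + dt.minute * 60 + dt.second
                   for dt, t in session_starts if t == initial_type]
        return initial_type, timedelta(seconds=sum(seconds) // len(seconds))

    def should_prefetch_now(now, start_time_of_day, lead=timedelta(minutes=5)):
        """True within the lead window before the typical operation time of the initial service."""
        start_today = datetime.combine(now.date(), datetime.min.time()) + start_time_of_day
        return start_today - lead <= now < start_today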
The storage medium predicts the next service type to be operated by the user after the current service operation according to the historical service operation information of the user and takes the predicted service type as the target service type; the historical service operation information includes the service types processed by the user, and the historical operation time and historical operation times of each service type. Before the user performs the next service operation, the service data corresponding to the target service type is acquired and stored in the cache region; and when the user actually performs the next service operation, the corresponding service data is acquired from the cache region if the service type of that service operation is the same as the target service type. Because the service type to be operated next after the current service operation is predicted from the historical service operation information of the user, and the service data corresponding to the predicted service type is stored in the cache region in advance, the interactive processing time between the operation end and the target device can be saved, and the service processing efficiency is effectively improved.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
The drawings show only examples or embodiments of the present application, and those skilled in the art can apply the present application to other similar cases on the basis of these drawings without creative effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of patent protection. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. A service processing method for processing a service of a target device at an operation side, the method comprising the steps of:
predicting a next service type to be operated by the user after the current service operation according to the historical service operation information of the user, and taking the next service type as a target service type; the historical service operation information comprises the service types processed by the user, and the historical operation time and the historical operation times of each service type;
before the user performs the next service operation, acquiring the service data corresponding to the target service type and storing the service data in a cache region;
and when the user actually carries out the next service operation, acquiring corresponding service data from the cache region under the condition that the service type of the service operation is the same as the target service type.
2. The service processing method according to claim 1, wherein the predicting a next service type to be operated by the user after the current service operation according to the historical service operation information of the user, and taking the next service type as the target service type, comprises the following steps:
according to the historical service operation information of the user, calculating, for each service type in the historical service operation information to which the current service operation of the user may belong, the probability that the next service type to be operated by the user after the current service operation belongs to each service type in the historical service operation information, and generating a service association probability table;
and predicting the next service type to be operated by the user after the current service operation according to the service association probability table, and taking the next service type as a target service type.
3. The service processing method according to claim 1 or 2, wherein after the acquiring the service data corresponding to the target service type and storing the service data in the cache region, and before the acquiring the corresponding service data from the cache region, the method further comprises:
and if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
4. The service processing method according to claim 1 or 2, wherein before the acquiring the corresponding service data from the cache region, the method further comprises:
obtaining a time interval between the operation time of the target service type and the operation time of the current service according to the operation time of the service of the same type as the current service in the historical service operation information of the user and the operation time of the service of the same type as the target service type, wherein the time interval is used as a keep-alive time interval;
and in the keep-alive time interval, if the service data corresponding to the target service type is updated, updating the updated service data to the cache region.
5. The service processing method according to claim 4, wherein the obtaining a time interval between the operation time of the target service type and the operation time of the current service as a keep-alive time interval according to the operation time of the service of the same type as the current service and the operation time of the service of the same type as the target service in the historical service operation information of the user comprises the following steps:
obtaining, according to the operation times of services of the same type as the current service and the operation times of services of the same type as the target service in the historical service operation information of the user, the time intervals between the operation times of the services of the same type as the target service and the operation times of the services of the same type as the current service in all the historical service operation information;
and screening out the maximum time interval within a preset time interval range from the time intervals between the operation time of the service with the same type as the target service and the operation time of the service with the same type as the current service in all the historical service operation information, wherein the maximum time interval is used as the keep-alive time interval.
6. The service processing method according to claim 1, wherein the acquiring, before the user performs the next service operation, the service data corresponding to the target service type and storing the service data in the cache region comprises the following steps:
before the user performs the next service operation, determining whether the ranking of the historical operation times of the target service type is above a preset rank;
if yes, acquiring the service data corresponding to the target service type and storing the service data in the cache region.
7. The service processing method according to claim 2, wherein the calculating, according to the historical service operation information of the user and for each service type in the historical service operation information to which the current service operation of the user may belong, the probability that the next service type to be operated by the user after the current service operation belongs to each service type in the historical service operation information, and generating the service association probability table, comprises the following steps:
according to the historical service operation information of the user, acquiring the service types whose historical operation times rank above a preset rank, taking these service types as hot services, and recording the historical operation time and historical operation times of the hot services;
and when the service type of the current service operation of the user belongs to the hot service, calculating the probability that the service type to be operated next by the user after the current service operation belongs to the hot service, and forming the service association probability table.
8. The service processing method according to claim 1, wherein the method further comprises:
determining the operation time of the initial service according to the historical service operation time of the user;
and before the operation time of the initial service, acquiring service data corresponding to the initial service and storing the service data in the cache region.
9. A service processing apparatus for processing a service of a target device at an operation end, the apparatus comprising: the device comprises a prediction module, an acquisition module and an operation module;
the prediction module is used for predicting the next service type to be operated by the user after the current service operation according to the historical service operation information of the user and taking the next service type as a target service type; the historical service operation information comprises service types processed by a user, and historical operation time and historical operation times of each service type;
the obtaining module is used for obtaining the service data corresponding to the target service type and storing the service data in a cache area before the user performs the next service operation;
and the operation module is used for acquiring corresponding service data from the cache region under the condition that the service type of the service operation is the same as the target service type when the user actually performs the next service operation.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the computer program is executed by the processor.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202210258538.7A 2022-03-16 2022-03-16 Service processing method, system, device and readable storage medium Pending CN114826806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210258538.7A CN114826806A (en) 2022-03-16 2022-03-16 Service processing method, system, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210258538.7A CN114826806A (en) 2022-03-16 2022-03-16 Service processing method, system, device and readable storage medium

Publications (1)

Publication Number Publication Date
CN114826806A true CN114826806A (en) 2022-07-29

Family

ID=82528913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210258538.7A Pending CN114826806A (en) 2022-03-16 2022-03-16 Service processing method, system, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114826806A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635319A (en) * 2016-03-03 2016-06-01 北京邮电大学 Data caching method and device
CN105824705A (en) * 2016-04-01 2016-08-03 广州唯品会网络技术有限公司 Task distribution method and electronic equipment
CN108763453A (en) * 2018-05-28 2018-11-06 浙江口碑网络技术有限公司 The page data processing method and device of Behavior-based control prediction
CN110187945A (en) * 2019-04-26 2019-08-30 平安科技(深圳)有限公司 Indicate information generating method, device, terminal and storage medium
CN112182295A (en) * 2019-07-05 2021-01-05 浙江宇视科技有限公司 Business processing method and device based on behavior prediction and electronic equipment
CN113157198A (en) * 2020-01-07 2021-07-23 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing a cache
CN113453036A (en) * 2020-03-24 2021-09-28 中国电信股份有限公司 Video resource caching method and edge streaming media server of content distribution network

Similar Documents

Publication Publication Date Title
US11146502B2 (en) Method and apparatus for allocating resource
KR100791628B1 (en) Method for active controlling cache in mobile network system, Recording medium and System thereof
CN110896404B (en) Data processing method and device and computing node
CN106599146B (en) Cache page processing method and device and cache page updating request processing method and device
CN110944219B (en) Resource allocation method, device, server and storage medium
WO2017193873A1 (en) Distributed processing system, data processing method, and control node apparatus
CN111818117A (en) Data updating method and device, storage medium and electronic equipment
CN113392041A (en) Application cache cleaning method, device, equipment and storage medium
CN109086158B (en) Abnormal cause analysis method and device and server
CN112506670A (en) Multi-node automatic operation and maintenance task processing method, system and storage medium
CN111159233A (en) Distributed caching method, system, computer device and storage medium
CN111459676A (en) Node resource management method, device and storage medium
CN114936086A (en) Task scheduler, task scheduling method and task scheduling device under multi-computing center scene
CN110738156A (en) face recognition system and method based on message middleware
CN108111591B (en) Method and device for pushing message and computer readable storage medium
CN114826806A (en) Service processing method, system, device and readable storage medium
CN106550021B (en) Push method and device for push message
CN110019372B (en) Data monitoring method, device, server and storage medium
CN113302593A (en) Task processing method, device and system, electronic equipment and storage medium
CN113965915B (en) Data processing method and electronic equipment
CN115658745A (en) Data processing method, data processing device, computer equipment and computer readable storage medium
CN113783921A (en) Method and device for creating cache component
CN112363940A (en) Data processing method and device, storage medium and server
CN113965535B (en) Message push optimization method, device, equipment and readable storage medium
CN114884974B (en) Data multiplexing method, system and computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination