CN117527906A - Data scheduling processing method and device and electronic equipment

Data scheduling processing method and device and electronic equipment

Info

Publication number
CN117527906A
Authority
CN
China
Prior art keywords
network
priority
data packet
scheduling
current
Prior art date
Legal status
Pending
Application number
CN202311511824.0A
Other languages
Chinese (zh)
Inventor
徐士立
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311511824.0A
Publication of CN117527906A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a data scheduling processing method and apparatus and an electronic device, applicable to scenarios such as cloud technology, artificial intelligence, intelligent transportation and assisted driving. The method includes: switching a target application program from the foreground to background operation in response to a switching instruction; performing scene type analysis on identification information of the current running scene and state information of the client object to obtain current scene type information; acquiring a current network scheduling strategy corresponding to the current scene type information; when the current network scheduling strategy is a scheduling strategy for maintaining network communication, determining the processing priority of the network data packets in the target application program and acquiring, from the operating system, target network resources for background operation of the target application program; and scheduling the network data packets according to their processing priority and the target network resources. Embodiments of the application can reduce the occupation of system resources and improve the user experience.

Description

Data scheduling processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of computers, and particularly relates to a data scheduling processing method and device and electronic equipment.
Background
An intelligent terminal can run a plurality of application programs (Applications, APPs). By switching among them, the application program currently in use can be brought to the foreground, while application programs not currently in use can be switched to the background.
To protect the experience of the foreground application, the network communication priority of background applications is usually restricted, and background applications may even be denied network access altogether, which impairs the functions of the background applications and degrades the user experience.
Disclosure of Invention
In order to solve the technical problems, the application provides a data scheduling processing method, a data scheduling processing device and electronic equipment.
In one aspect, the present application proposes a data scheduling processing method, where the method includes:
switching the target application program from the foreground to the background operation in response to the switching instruction;
acquiring identification information of a current running scene and state information of a client object corresponding to the target application program, and performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information;
acquiring a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy;
determining the processing priority of a network data packet in the target application program under the condition that the current network scheduling strategy is a scheduling strategy for maintaining network communication, and sending a network resource request to an operating system so that the operating system responds to the network resource request to analyze the current network state of the operating system and obtain target network resources for background operation of the target application program;
receiving the target network resource sent by the operating system;
and scheduling the network data packet according to the processing priority of the network data packet and the target network resource.
In another aspect, the present application proposes a data scheduling processing method, where the method includes:
receiving a network resource request sent by a client under the condition that the current network scheduling strategy is a scheduling strategy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to the current scene type information, which is acquired from a preset network scheduling policy by the client; the current scene type information is obtained by performing scene type analysis on identification information of a current operation scene and state information of a client object when a client responds to a switching instruction and switches a target application program from a foreground to a background operation;
analyzing the current network state of the local operating system in response to the network resource request to obtain a target network resource for background operation of the target application program;
sending the target network resource to the client so that the client performs scheduling processing on the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
In another aspect, the present application proposes a data scheduling processing apparatus, including:
the switching response module is used for switching the target application program from the foreground to the background operation in response to the switching instruction;
the scene type analysis module is used for acquiring the identification information of the current running scene and the state information of the client object corresponding to the target application program, and performing scene type analysis on the identification information of the current running scene and the state information of the client object to acquire current scene type information;
the current network scheduling strategy acquisition module is used for acquiring a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy;
The priority processing and request sending module is used for determining the processing priority of the network data packet in the target application program under the condition that the current network scheduling policy is the scheduling policy for maintaining network communication, and sending a network resource request to an operating system so that the operating system responds to the network resource request to analyze the current network state of the operating system and obtain target network resources for background operation of the target application program;
a network resource receiving module, configured to receive the target network resource sent by the operating system;
and the scheduling module is used for scheduling the network data packet according to the processing priority of the network data packet and the target network resource.
In another aspect, the present application proposes a data scheduling processing apparatus, including:
the request receiving module is used for receiving a network resource request sent by the client under the condition that the current network scheduling strategy is a scheduling strategy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to the current scene type information, which is acquired from a preset network scheduling policy by the client; the current scene type information is obtained by performing scene type analysis on identification information of a current operation scene and state information of a client object when a client responds to a switching instruction and switches a target application program from a foreground to a background operation;
The network resource generation module is used for responding to the network resource request to analyze the current network state of the local operating system so as to obtain a target network resource for background operation of the target application program;
the network resource sending module is used for sending the target network resource to the client so that the client can schedule the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
In another aspect, the application proposes an electronic device for data scheduling processing, where the electronic device includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or at least one program is loaded and executed by the processor to implement a data scheduling processing method as described above.
In another aspect, the present application proposes a computer readable storage medium having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to implement a data scheduling processing method as described above.
In another aspect, the present application proposes a computer program product comprising a computer program which, when executed by a processor, implements a data scheduling processing method as described above.
According to the data scheduling processing method and apparatus and the electronic device provided by the application, when the target application program is switched from the foreground to the background, scene type analysis is performed on the identification information of the current running scene and the state information of the client object to obtain the current scene type information, and the current network scheduling strategy corresponding to the current scene type information is acquired from the preset network scheduling strategies. When the current network scheduling strategy is a scheduling strategy for maintaining network communication, target network resources available for background operation of the target application program are acquired from the operating system, and the network data packets are scheduled according to their processing priority and the target network resources. In this way, when the target application program is switched to background operation, the current scene type information is determined from the identification information of the current running scene and the state information of the client object, and the network scheduling strategy corresponding to that scene type is determined. For scene types that need to keep network access, because operating system resources are limited and must also be tilted toward the foreground application program, the processing priorities of the network data packets are ranked and the target network resources available for background operation are negotiated with the operating system; the network data packets are then scheduled according to their processing priority and the target network resources, so that the occupation of system resources is reduced while the network communication capability of the background application program and the user experience are not impaired.
Drawings
To illustrate the technical solutions and advantages of the embodiments of the present application or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an implementation environment of a data scheduling processing method according to an exemplary embodiment.
Fig. 2 is a flow chart diagram of a data scheduling processing method according to an exemplary embodiment.
Fig. 3 is a flow diagram illustrating a process for obtaining a corresponding preset network scheduling policy from a server according to an exemplary embodiment.
Fig. 4 is a second flow chart illustrating a data scheduling processing method according to an exemplary embodiment.
Fig. 5 is a flow diagram illustrating an update of a preset network scheduling policy according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a data scheduling processing method according to an exemplary embodiment.
Fig. 7 is a flow chart diagram showing a data scheduling processing method according to an exemplary embodiment.
Fig. 8 is a block diagram one of a data scheduling processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram two of a data scheduling processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram of a hardware structure of a terminal according to an exemplary embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "first," "second," and the like in the description and the claims of the embodiments of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic diagram illustrating an implementation environment of a data scheduling processing method according to an exemplary embodiment. As shown in fig. 1, the implementation environment may at least include an operating system 01, a client 02, and a server 03, where the operating system 01, the client 02, and the server 03 may be directly or indirectly connected through a wired or wireless communication manner, and the embodiment of the present application is not limited herein.
Alternatively, the operating system 01 may include:
and the real-time monitoring module is used for monitoring the running condition of the system network module in real time in the running process of the system and determining how many network resources can be used for the APP running in the background.
And the APP interaction module receives the connection request of the APP and returns network resources which are currently available for background running of the APP to the system according to the request of the APP.
And the network resource scheduling module is used for determining which network resources can be allocated to the background running APP according to the real-time network resource condition returned by the real-time monitoring module and the network resource requirement of the foreground APP.
Alternatively, the client 02 may include:
and the network communication module processes the network request according to the scheduling scheme determined by the strategy control module and adopts different processing strategies such as network connection suspension, normal network communication and the like.
And the system interaction module is used for interacting with the operating system at fixed time when the APP is switched to the background, and confirming the current network resource quantity which can be allocated to the APP by the operating system.
And the network policy control module downloads a corresponding control policy from the server, and determines whether the network packet of the APP can be normally transmitted and received or is stopped to be transmitted and received when the APP is switched to the background according to the scene type, the processing priority of the corresponding network data packet and the available network resource quantity distributed by the operating system.
Alternatively, the server 03 may include:
and the network communication module is used for receiving the network request of the client, issuing a network scheduling strategy and processing normal logic communication.
And the priority management module is used for processing the priority information of various network data packets preset by a developer and storing the priority information for scheduling by the policy management module.
And the scene classification module is used for processing various game scene classification information preset by a developer and storing the information for scheduling by the strategy management module.
And the policy management module determines a network scheduling policy according to the corresponding data stored by the priority management module and the scene classification module and sends the network scheduling policy to the client through the network communication module according to the request of the client.
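By way of illustration only, the following sketch shows one possible shape for the operating-system-side modules described above (real-time monitoring, APP interaction and network resource scheduling). All class names, method names and numeric values are hypothetical and are not taken from the application.

```python
# Illustrative sketch of the OS-side modules; every name and value here is assumed.

class RealTimeMonitor:
    """Monitors the system network module and reports spare capacity."""

    def __init__(self, total_bandwidth_kbps: int):
        self.total_bandwidth_kbps = total_bandwidth_kbps
        self.foreground_usage_kbps = 0  # updated as foreground traffic is observed

    def spare_capacity_kbps(self) -> int:
        # Capacity not currently consumed by foreground traffic.
        return max(self.total_bandwidth_kbps - self.foreground_usage_kbps, 0)


class NetworkResourceScheduler:
    """Decides how much of the spare capacity background APPs may use."""

    def __init__(self, monitor: RealTimeMonitor, background_share: float = 0.3):
        self.monitor = monitor
        self.background_share = background_share  # assumed policy knob

    def allocate_for_background(self) -> int:
        # Keep most of the spare capacity in reserve for the foreground APP.
        return int(self.monitor.spare_capacity_kbps() * self.background_share)


class AppInteraction:
    """Answers a background APP's network-resource request."""

    def __init__(self, scheduler: NetworkResourceScheduler):
        self.scheduler = scheduler

    def handle_resource_request(self, app_id: str) -> dict:
        return {"app_id": app_id,
                "granted_kbps": self.scheduler.allocate_for_background()}


if __name__ == "__main__":
    monitor = RealTimeMonitor(total_bandwidth_kbps=20_000)
    monitor.foreground_usage_kbps = 15_000
    os_api = AppInteraction(NetworkResourceScheduler(monitor))
    print(os_api.handle_resource_request("game_app"))  # granted_kbps: 1500
```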
It should be noted that, in the embodiment of the present application, the APP may be various types of APPs, which is not limited in particular. For example, the APP may be a game APP, a short video APP, a shopping APP, a news APP, or the like.
Taking game APP as an example, the client 02 may be a game client, in which the game APP is installed. Correspondingly, the network policy control module in the client 02 is configured to download a corresponding control policy from the game server, and determine, when the game APP switches to the background, whether the network packet of the game APP can be normally transmitted or received or suspended during the background operation according to the game scene type, the priority of the corresponding data packet, and the amount of available network resources allocated by the system. By way of example, the game client may be, but is not limited to, a smart phone, tablet, notebook, desktop computer, smart speaker, smart voice interaction device, smart home appliance, smart watch, vehicle terminal, aircraft, etc.
Taking game APP as an example, the operating system may be an operating system corresponding to a game client.
Taking the game APP as an example, the server 03 may be a game server. Correspondingly, the network communication module in the game server receives network requests from the game client, issues the network scheduling strategy, and handles normal game logic communication; the priority management module processes the priority information of the various network data packets preset by the game developer and stores it for scheduling by the policy management module; the scene classification module processes the various game scene classification information preset by the game developer and stores it for scheduling by the policy management module; and the policy management module determines the network scheduling strategy from the data stored by the priority management module and the scene classification module and, on request from the game client, delivers it to the game client through the network communication module.
The game server may be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
It should be noted that fig. 1 is only an example. In other scenarios, other implementation environments may also be included.
It will be appreciated that in the specific embodiments of the present application, related data such as status information of a client object, etc. is related to user information, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of related data needs to comply with related laws and regulations and standards of related countries and regions.
Fig. 2 is a flowchart of a data scheduling processing method according to an exemplary embodiment. The method may be used in the implementation environment of fig. 1. This specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one possible execution order and does not represent the only order of execution. When implemented in an actual system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment). As shown in fig. 2, the method may include:
S101, the client responds to a switching instruction to switch the target application program from the foreground to the background to run.
In the embodiments of the application, the switching instruction may be triggered by the client object or triggered automatically by the system in certain scenarios. Taking triggering by the client object as an example, when the client object needs to switch the target application program from the foreground to the background while using it, a switching instruction is triggered, and the client switches the target application program from the foreground to background operation in response to the switching instruction.
S103, the client acquires the identification information of the current operation scene and the state information of the client object, and performs scene type analysis on the identification information of the current operation scene and the state information of the client object to obtain current scene type information.
After the client switches the target application program from the foreground to the background, the client may acquire identification information of the current running scene and status information of the client object.
The current running scene may refer to the scene in which the target application program is currently located. Taking a game class APP as the target application program, the current running scene may be, for example, a download class scene, a hang-up class scene, or a sightseeing class scene. The identification information uniquely identifies the current running scene and may be, for example, an identification number (id) of the current scene.
The state information may refer to a state of the client object in the current running scenario. It should be noted that the state information is set according to the current operation scenario. For example, if the current running scene is a background download class scene, the state information of the client object may be that the client object downloads normally or that the client object pauses downloading. The client object may refer to a user using the target application.
After obtaining the identification information of the current running scene and the state information of the client object, the client performs scene type analysis on them to obtain the current scene type information. For example, if the current running scene is a download class scene and the state information of the client object is that the client object is downloading normally, the current scene type information may be the background download class.
S105, the client acquires a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy.
In this embodiment of the present application, preset network scheduling policies corresponding to different scene type information may be cached in the client in advance, and after determining the current scene type information, the client may obtain the current network scheduling policy corresponding to the current scene type information from the cached preset network scheduling policies.
S107, under the condition that the current network scheduling strategy is the scheduling strategy for maintaining network communication, the client determines the processing priority of the network data packet in the target application program and sends a network resource request to the operating system.
S109, the operating system responds to the network resource request to analyze the current network state of the operating system, and target network resources for background operation of the target application program are obtained.
S1011, the operating system sends the target network resource to the client.
S1013, the client performs scheduling processing on the network data packet according to the processing priority of the network data packet and the target network resource.
In the embodiments of the present application, when the current network scheduling policy is a scheduling policy for maintaining network communication, operating system resources are limited and must also be tilted toward the foreground application program. The client therefore ranks the processing priorities of the network data packets and negotiates with the operating system the target network resources that can be used for background operation of the target application program: the client sends a network resource request to the operating system, and the operating system analyzes its current network state in response to the request, obtains the target network resources for background operation of the target application program, and sends them to the client. Finally, the client schedules the network data packets according to their processing priority and the target network resources.
Therefore, when the target application program is switched to background operation, the current scene type information is determined from the identification information of the current running scene and the state information of the client object, and the network scheduling strategy corresponding to that scene type is determined. For scene types that need to keep network access, because operating system resources are limited and tilted toward the foreground application program, the processing priorities of the network data packets are ranked and the target network resources available for background operation of the target application program are negotiated with the operating system. Finally, the network data packets are scheduled according to their processing priority and the target network resources, so that the occupation of system resources is reduced while the network communication capability of the background application program and the user experience are not impaired.
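As a non-authoritative illustration of steps S101 to S1013, the sketch below strings the client-side steps together. The policy labels, helper callables and example values are assumptions made for the example, not part of the claimed method.

```python
# Minimal sketch of the client-side flow S101-S1013; all helpers are hypothetical.

KEEP_COMMUNICATION = "keep"        # assumed policy labels
SUSPEND_COMMUNICATION = "suspend"


def on_switch_to_background(scene_id, object_state, cached_policies,
                            classify_scene, rank_packets,
                            request_os_resources, schedule):
    # S103: scene type analysis on the scene id and the client-object state.
    scene_type = classify_scene(scene_id, object_state)

    # S105: look up the current policy in the cached preset policies.
    policy = cached_policies.get(scene_type, SUSPEND_COMMUNICATION)
    if policy != KEEP_COMMUNICATION:
        return []  # nothing to schedule when communication is suspended

    # S107: rank packet priorities, then ask the OS for background resources.
    prioritised_packets = rank_packets()
    granted_kbps = request_os_resources()   # S109-S1011

    # S1013: schedule packets by priority within the granted resources.
    return schedule(prioritised_packets, granted_kbps)


if __name__ == "__main__":
    sent = on_switch_to_background(
        scene_id="update_scene",
        object_state="downloading",
        cached_policies={"background_download": KEEP_COMMUNICATION},
        classify_scene=lambda s, st: "background_download",
        rank_packets=lambda: [("resource_chunk", 5), ("leaderboard", 1)],
        request_os_resources=lambda: 1_500,
        schedule=lambda pkts, kbps: [name for name, prio in
                                     sorted(pkts, key=lambda p: -p[1])],
    )
    print(sent)  # ['resource_chunk', 'leaderboard']
```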
In an alternative embodiment, before the step S101, that is, before the client switches the target application program from the foreground to the background running in response to the switching instruction, the client may further acquire the corresponding preset network scheduling policy from the server in advance, and fig. 3 is a schematic flow chart illustrating that the corresponding preset network scheduling policy is acquired from the server according to an exemplary embodiment, and as shown in fig. 3, the acquiring, in advance, the corresponding preset network scheduling policy from the server may include:
S001, the client responds to the service starting instruction and sends a scheduling policy acquisition request to the server.
S003, the server responds to a scheduling policy acquisition request to acquire a preset network scheduling policy from the storage module; the preset network scheduling policy comprises network scheduling policies corresponding to different scene type information.
S005, the server sends a preset network scheduling strategy to the client.
S007, caching a preset network scheduling strategy by the client.
Optionally, the APP developer may preset the preset network scheduling policy, which may include network scheduling policies corresponding to different scene type information, and store it in a storage module of the server.
Taking the game class APP as an example, the different scene type information may include, but is not limited to: background download, hang-up, sightseeing, temporary passive background switching, game exit, other types, and the like.
Background download class: in some scenarios the client object needs to download a large number of resource packages from the CDN and can enter the next scene only after the download is completed and loaded; typical examples are a version update before entering the game and a resource package download before entering an instance. Because the waiting time is long, the client object may switch the game APP to the background to handle other things. In this case network communication is important, and the download data packets of the relevant resource packages in particular should be guaranteed preferentially.
Hang-up class: for some specific tasks no operation by the client object is needed, but the game must stay on-hook to remain online; the data packets related to the relevant game logic and the heartbeat packets that keep the network connection alive can be given a higher priority.
Sightseeing class: in a sightseeing scene the client object only watches the game of teammates or other client objects. When the game is switched to the background, most data packets can be interrupted because the interface is not visible, but the corresponding network connection must be maintained, so the heartbeat packets need to be kept with high priority.
Temporary passive background-switch class: the client object is interrupted by a text message, a phone call, or a notification from another APP, and the game APP is temporarily switched to background operation; since the client object generally switches back soon, normal network communication should be kept as far as possible.
Game exit class: in certain scenes the client object's actual intention in switching to the background is to exit the game, and the related network communication can be interrupted directly.
Other classes: for scenes that do not belong to the above types or cannot be specifically classified, the priority of the game network data packets is set according to their preset influence on the user experience.
Respective corresponding network scheduling policies may be preset for the above scenario type information, and the network scheduling policies may include, but are not limited to: suspending network communication, partially maintaining network communication, and maintaining normal network communication.
Suspend network communication: for example, for the game exit class there is no need to maintain normal network communication in the background, and all network communication can be suspended directly.
Maintain part of the network communication: for example, for the sightseeing class, the hang-up class and the other classes, which levels of network data packets need to be transmitted and received normally and which can be suspended directly can be set according to the actual scene, so that the experience is not affected when the client object switches back to the foreground while too many additional system network resources are not occupied.
Maintain normal network communication: for example, the background download class and the temporary passive background-switch class need all logic to keep running normally after the game is switched to the background so that the client object can return to the game at any time; all data packets should therefore be transmitted and received normally, and scheduling can be performed according to packet priority depending on how scarce network resources are.
Optionally, in step S001 above, before any switching takes place, the client object starts the game normally, which triggers a service start instruction, and the client sends a scheduling policy acquisition request to the server in response to that instruction. In step S003, the server obtains the pre-stored preset network scheduling policy from the storage module in response to the request. In steps S005-S007, the server sends the preset network scheduling policy to the client, the client caches it so that it is available once the application program is switched to the background, and other game logic is then loaded for the client object. In this way, when the client object starts the game, the preset network scheduling policies corresponding to the different scene types are obtained from the server and cached; after the target application program is switched to the background, the current network scheduling policy corresponding to the current scene type information can be read directly from the cache, which improves the efficiency and accuracy of obtaining the current network scheduling policy and further ensures that the network communication capability of the background application program and the user experience are not impaired.
The client object can perform normal business logic after the client finishes caching the preset network scheduling policy.
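A minimal sketch of the S001-S007 exchange follows, assuming the server response is simply a mapping from scene type to one of the three policy kinds described above; the function names, policy labels and scene-type keys are illustrative only.

```python
# Sketch of prefetching and caching the preset network scheduling policy at start-up.
# The fetch callable and the policy content are assumptions for illustration.

_policy_cache = {}


def fetch_preset_policies(fetch):
    """S001/S003/S005/S007: request the preset policies from the server and cache them."""
    global _policy_cache
    _policy_cache = fetch()               # server returns {scene_type: policy}
    return _policy_cache


def current_policy(scene_type):
    """S105: read the policy for the current scene type from the local cache."""
    return _policy_cache.get(scene_type, "suspend")


if __name__ == "__main__":
    # Stand-in for the server response described in the text.
    fake_server = lambda: {
        "background_download": "keep",
        "temporary_background": "keep",
        "hang_up": "partial",
        "sightseeing": "partial",
        "other": "partial",
        "exit_game": "suspend",
    }
    fetch_preset_policies(fake_server)
    print(current_policy("background_download"))  # keep
    print(current_policy("exit_game"))            # suspend
```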
It should be noted that, in the step S103, the scene type analysis is performed on the identification information of the current running scene and the state information of the client object to obtain the current scene type information, which may be implemented in various manners, and is not limited herein.
Fig. 4 is a second flow chart of a data scheduling processing method according to an exemplary embodiment, as shown in fig. 4, in an embodiment, performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information may include:
S1031, the client acquires preset mapping information; the preset mapping information characterizes the mapping relationship among the identification information of the running scene, the state information of the client object, and the type identification information of the scene type information.
S1033, the client determines target type identification information corresponding to the identification information of the current operation scene and the state information of the client object according to preset mapping information, and determines the scene type information corresponding to the target type identification information as current scene type information.
In this embodiment, preset mapping information representing the mapping relationship among the identification information of the running scene, the state information of the client object, and the type identification information of the scene type information may be established in advance. After the client obtains the identification information of the current running scene and the state information of the client object, it can determine, from the preset mapping information, the target type identification information mapped to that identification information and state information, and determine the scene type information corresponding to the target type identification information as the current scene type information. Because the preset mapping information accurately represents this mapping relationship, determining the current scene type information from it improves the accuracy with which the current scene type information is identified and therefore the accuracy of the scheduling processing of the network data packets.
The preset mapping information may be a preset list that records the mapping relationship among the identification information of the running scene, the state information of the client object, and the type identification information of the scene type information. Therefore, after the target application program is switched from the foreground to the background, the client can look up in the preset list which scene type the identification information of the current running scene and the state information of the client object belong to.
Taking the game APP as an example, Table 1 is a preset list shown in an exemplary embodiment. As shown in Table 1, the preset list records not only the mapping relationship among the identification information of the running scene, the state information of the client object, and the type identification information of the scene type information, but also the corresponding field names, field types, resource descriptions, remarks, and the like.
TABLE 1 Preset List
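The lookup against the preset list could be as simple as the sketch below. The entries are hypothetical, since the content of Table 1 is not reproduced here.

```python
# Sketch of the preset mapping lookup (S1031/S1033); the table entries are invented.

PRESET_MAPPING = {
    # (scene id, client-object state) -> scene type id
    ("update_scene", "downloading"): "background_download",
    ("idle_task",    "on_hook"):     "hang_up",
    ("spectate",     "watching"):    "sightseeing",
}


def classify_scene(scene_id: str, object_state: str, default: str = "other") -> str:
    """Map the current scene id and client-object state to a scene type."""
    return PRESET_MAPPING.get((scene_id, object_state), default)


print(classify_scene("update_scene", "downloading"))  # background_download
print(classify_scene("unknown_scene", "unknown"))     # other
```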
In other embodiments, performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain the current scene type information may further include: acquiring a historical data scheduling process, acquiring the identification information of the historical running scene and the historical state information of the client object used in that historical data scheduling process, and determining the historical scene type information corresponding to that identification information and historical state information as the current scene type information.
It should be noted that, in the step S105, the client may acquire the current network scheduling policy corresponding to the current scene type information from the preset network scheduling policies, which is not limited in particular.
In an embodiment, in the step S105, the obtaining, by the client, a current network scheduling policy corresponding to the current scene type information from the preset network scheduling policies may include: the client acquires a current network scheduling strategy corresponding to the current scene type information from the cached preset network scheduling strategy.
In this embodiment, the client obtains the preset network scheduling policy from the server in advance and caches it, where the preset network scheduling policy includes network scheduling policies corresponding to different scene type information. After the target application program is switched to the background, the client can therefore obtain the current network scheduling policy corresponding to the current scene type information directly from the cached preset network scheduling policy, without having to request the preset network scheduling policy from the server after the switch. Reading the current network scheduling policy directly from the cache improves the efficiency and accuracy of obtaining it and further ensures that the network communication capability of the background application program and the user experience are not impaired.
In another embodiment, the client may not acquire the preset network scheduling policy from the server in advance, and store the preset network scheduling policy. In the step S105, the client obtains a current network scheduling policy corresponding to the current scene type information from the preset network scheduling policies, and may further include: after the target application program is switched to the background, the client sends a scheduling policy acquisition request to the server; the server responds to a scheduling policy acquisition request and acquires a preset network scheduling policy from the storage module; the server sends a preset network scheduling strategy to the client, and the client acquires the current network scheduling strategy corresponding to the current scene type information from the preset network scheduling strategy.
In step S107, the client determines the processing priority of the network packet in the target application, which is not particularly limited, and may be implemented in various manners.
In one embodiment, as further shown in fig. 4, in the step S107, the number of the network packets is plural, and the determining, by the client, the processing priority of the network packet in the target application may include:
S1071, the client acquires priority associated data; the priority association data includes at least one of influence data of interruption of each network data packet on the client object, link information of each network data packet, and network resource requirement information of the operating system.
S1073, the client analyzes the priority of each network data packet according to the priority related data to obtain the processing priority of each network data packet.
In this embodiment, after the target application program is switched from the foreground to the background, its priority on the operating system side is no longer the highest. To prevent the target application program from being throttled by the system in a way that affects the user experience, the client may rank the processing priorities of its network data packets, reduce unnecessary low-priority network communication as much as possible, and ensure that the data packets with the greatest influence on the user experience can still be transmitted and received normally.
In one embodiment, the client may analyze the priority of each network packet through the priority association data to obtain the processing priority of each network packet. Optionally, the priority association data includes at least one of impact data of interruption of each network data packet on the client object, link information of each network data packet, and network resource requirement information of the operating system.
Here, the "influence data of the interruption of each network data packet on the client object" may refer to the effect on the user experience when that network data packet is interrupted. The "link information of each network data packet" may refer to the network link over which the packet is transmitted; a network link may be a long link or a short link. A long link means that multiple data packets can be sent consecutively over one link, and if no data packets are sent while the link is held open, both sides are required to send link-detection packets. A short link means that a link is established when the two communicating parties have data to exchange and is disconnected once the data transmission is completed, i.e., each link completes the transmission of only one service.
The "network resource requirement information of the operating system" refers to the amount of network resources required by the operating system in normal operation.
In another embodiment, the client may further obtain a historical processing priority of each network packet after the target application switches from the foreground to the background in the historical time, and generate the processing priority of each network packet according to the historical processing priority of each network packet.
In an optional embodiment, in S1073, the analyzing, by the client, the priority of each network data packet according to the priority association data to obtain the processing priority of each network data packet may include: in the case that the priority association data includes link information of each network data packet, the client sets a processing priority of the network data packet of which the link information is a long link as a first priority, and sets a processing priority of the network data packet of which the link information is a short link as a second priority; wherein the first priority is greater than the second priority.
In this embodiment, the recovery cost of a short link after an interruption is small, whereas restoring a long link may involve a series of preconditions such as authentication and state data, so its recovery cost after an interruption is large. Therefore, with other conditions equal, long links are given higher priority based on the cost of restoring communication: the client can set the processing priority of network data packets whose link information is a long link to the first priority and that of packets whose link information is a short link to the second priority, with the first priority greater than the second. In this way the priority of a network data packet can be set based on its recovery cost after interruption, further reducing the occupation of system resources and improving the user experience.
The embodiment of the application does not specifically limit the first priority and the second priority. For example, the first priority may be a high priority and the second priority may be a low priority.
It should be noted that the processing priorities of different types of network packets on a long link may also differ; for example, the heartbeat packets of a long link may be given a somewhat higher priority than ordinary data packets.
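A minimal sketch of the link-based rule above follows, assuming numeric priority values (a higher number meaning a higher priority) and a small bump for long-link heartbeat packets; both are illustrative choices.

```python
# Long-link packets get the first (higher) priority, short-link packets the second.

FIRST_PRIORITY, SECOND_PRIORITY = 2, 1


def priority_by_link(link_type: str, is_heartbeat: bool = False) -> int:
    if link_type == "long":
        # Heartbeat packets of a long link may sit slightly above ordinary packets.
        return FIRST_PRIORITY + (1 if is_heartbeat else 0)
    return SECOND_PRIORITY


assert priority_by_link("long", is_heartbeat=True) > priority_by_link("long")
assert priority_by_link("long") > priority_by_link("short")
```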
In another optional embodiment, in S1073, the analyzing, by the client, the priority of each network data packet according to the priority association data to obtain the processing priority of each network data packet may include: when the priority related data comprises network resource demand information of an operating system and the network resource demand information is larger than a preset resource demand threshold, the client sets the processing priority of the network data packet meeting the first preset condition as a first priority and sets the processing priority of the network data packet meeting the second preset condition as a second priority; the first preset condition is at least one of a packet quantity smaller than or equal to a preset packet quantity threshold and a transceiving frequency smaller than or equal to a preset transceiving threshold, the second preset condition is at least one of a packet quantity larger than a preset packet quantity threshold and a transceiving frequency larger than a preset transceiving threshold, and the first priority is larger than the second priority.
In this embodiment, when the system network is congested and other conditions are the same, the larger the number of network data packets and the more frequently they are transmitted and received, the lower the corresponding priority; the fewer the packets and the lower the frequency, the higher the priority. Accordingly, the processing priority of network data packets whose packet quantity is less than or equal to the preset packet quantity threshold and whose transceiving frequency is less than or equal to the preset transceiving threshold can be set to the first priority, and that of packets whose packet quantity exceeds the threshold and whose transceiving frequency exceeds the threshold can be set to the second priority, with the first priority greater than the second. In this way the processing priority can be set according to packet quantity and transceiving frequency, so that packets with smaller quantities and lower frequencies obtain higher priority while packets with larger quantities and higher frequencies obtain lower priority; different kinds of network data packets can then be scheduled reasonably, further reducing the occupation of system resources and improving the user experience.
This embodiment does not specifically limit the first priority and the second priority. Illustratively, the first priority may be a high priority and the second priority may be a low priority.
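One reading of the congestion rule above is sketched below; the threshold values are placeholders, and the rule is applied only when the operating system's resource demand exceeds the preset resource demand threshold.

```python
# Sketch of the congestion-based rule; all thresholds are placeholder values.

RESOURCE_DEMAND_THRESHOLD_KBPS = 10_000
PACKET_SIZE_THRESHOLD_BYTES = 4_096
TRANSCEIVE_FREQUENCY_THRESHOLD_HZ = 5

FIRST_PRIORITY, SECOND_PRIORITY = 2, 1


def priority_under_congestion(os_demand_kbps, packet_bytes, transceive_hz):
    if os_demand_kbps <= RESOURCE_DEMAND_THRESHOLD_KBPS:
        return None  # the rule only applies when the system network is congested
    small = packet_bytes <= PACKET_SIZE_THRESHOLD_BYTES
    infrequent = transceive_hz <= TRANSCEIVE_FREQUENCY_THRESHOLD_HZ
    return FIRST_PRIORITY if (small and infrequent) else SECOND_PRIORITY


print(priority_under_congestion(12_000, 512, 1))       # 2: small and infrequent
print(priority_under_congestion(12_000, 65_536, 20))   # 1: large and frequent
```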
In another optional embodiment, in S1073, the analyzing, by the client, the priority of each network data packet according to the priority association data to obtain the processing priority of each network data packet may include:
in the case that the priority association data includes influence data of interruption of each network data packet to the client object, the client sets processing priority of the network data packet of which influence data is greater than a preset influence threshold as a first priority, and sets processing priority of the network data packet of which influence data is less than or equal to the preset influence threshold as a second priority.
In this embodiment, when the target application program is switched from the foreground to the background, for some types of network data packets the influence of an interruption on the user experience is greater than the preset influence threshold: the client object has to wait for their transmission to complete before performing the next operation, so the processing priority of these network data packets may be set to the first priority. For other network data packets the influence of an interruption on the user experience is less than or equal to the preset influence threshold: after switching to the background the user cannot perceive the change in the data, so the corresponding network data packets can be interrupted and their processing priority set to the second priority. In this way the priority of a network data packet can be set on the basis of the influence data of its interruption on the client object, further reducing the occupation of system resources and improving the user experience.
This embodiment does not particularly limit the first priority and the second priority. Illustratively, the first priority may be a high priority and the second priority may be a low priority.
In other embodiments, where the priority association data includes impact data of interruption of each network data packet on the client object, link information of each network data packet, and network resource requirement information of the operating system, the determining, by the client, the processing priority of the network data packet in the target application program may include:
if a certain network data packet is a long link data packet (e.g., a heartbeat data packet with a long link) or the influence data of the interrupt on the client object is that the influence on the client object is serious (e.g., a resource download request packet in a download scenario), the processing priority of the network data packet is set to the highest priority. If the impact data of the interruption of a certain network data packet on the client object has a relatively large impact on the client, but the reconnection cost is relatively low, the processing priority can be set to be high. If the impact data of an interruption of a certain network data packet on a client object is that the impact on the client is general or the cost of reconnection is relatively small, the processing priority may be set to a medium priority. If the impact data of the interrupt of a certain network data packet on the client object has a certain impact on the client object, the processing priority may be a normal priority. If the impact data of the interrupt of a certain network data packet on the client object is that the impact on the client object is small, the processing priority is the lowest priority.
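Purely as an illustration of such a combined grading, the sketch below maps an impact score and the link information to five tiers; the thresholds, the 0-to-1 impact scale and the omission of the reconnection-cost factor are simplifications made for the example.

```python
# Five illustrative tiers; thresholds and the impact scale are assumptions.

HIGHEST, HIGH, MEDIUM, NORMAL, LOWEST = 5, 4, 3, 2, 1


def combined_priority(impact: float, link_type: str, is_heartbeat: bool) -> int:
    """impact in [0, 1]: estimated effect on the client object if the packet is interrupted."""
    if (link_type == "long" and is_heartbeat) or impact >= 0.8:
        return HIGHEST   # e.g. a long-link heartbeat or a resource download request packet
    if impact >= 0.6:
        return HIGH
    if impact >= 0.4:
        return MEDIUM
    if impact >= 0.2:
        return NORMAL
    return LOWEST


print(combined_priority(0.9, "short", False))  # 5
print(combined_priority(0.1, "short", False))  # 1
```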
In the following, taking a game APP as an example, the above determination of the processing priority of network data packets in the target application program is described:
When the game APP is switched to the background, the following factors influence the priority division:
1. The impact on the user experience (i.e., the influence data of the interruption of each network data packet on the client object).
When an application program is switched from the foreground to the background, interrupting different types of network data packets affects the user experience to very different degrees. Interrupting some network data packets has little influence on the user experience (namely, the influence data is smaller than the preset influence threshold). For example, interaction information, leaderboard information and the like displayed in a game are no longer visible after switching to the background, and the user cannot perceive the change of the data because the interface is invisible, so the corresponding network data packets can be interrupted and set to a lower priority (namely, the second priority).
In some situations, the user needs to wait for a network transmission to complete before performing the next operation, and may switch the application to the background while waiting. In this case, the processing priority of the network data packets blocking the user may be set to a high priority (i.e., the first priority). For example, when the user is waiting for the game APP to finish downloading a resource package, the processing priority of the corresponding download data packets may be set to a high priority.
2. The re-establishment cost of long links (i.e., the link information of each network data packet).
A game has various network connections. Some are short links, whose recovery cost after interruption is low; others are long links, which may involve a series of preconditions such as authentication and state data, so their recovery cost after interruption is high. Therefore, network data packets of long links are given a higher processing priority (i.e., the first priority); in particular, with other conditions being equal, the heartbeat data packet of a long link is given a higher priority than an ordinary data packet because of the cost of restoring communication.
3. The amount of network resources required (i.e., the network resource requirement information of the operating system).
When the system network is congested, the larger the number of background network data packets and the more frequent their transmission and reception, the lower the corresponding processing priority.
In this way, the transmission and reception frequency and the packet amount of background network data packets can be reduced, and different packet amounts are given different processing priorities, so that the network data packets of more modules can be processed and the user experience can be improved.
Based on the above factors, the network data packets of the game APP can be classified into 5 levels, where L5 is the highest priority and L1 is the lowest priority. It should be noted that the division into 5 levels is mainly to control the implementation complexity; in practice, fewer or more levels may be used.
In implementation, a processing priority can be preset for each network data packet according to the above influencing factors and continuously optimized during the actual running of the game; a minimal classification sketch is given after the list below.
L1, the lowest priority, has the smallest influence on the user experience, such as various stateless short-link data and auxiliary data in the game.
L2, ordinary priority, has a certain influence on the user experience, such as various leaderboard data displayed in the game; the user cannot see it after switching to the background, so it only needs to be requested again when switching back to the foreground, and the impact on the experience is relatively controllable.
L3, medium priority, has a relatively small impact on the user returning to the game or a low reconnection cost, such as network data packets reporting the current running state of the game.
L4, high priority, has a relatively large influence on the user returning to the game, such as various short links transmitting core data in the game; although these are very important, the reconnection cost is relatively low, so the processing priority is set lower than L5.
L5, the highest priority, such as the heartbeat packet of the main long connection in the game and the resource download request packet in a download scenario; their interruption can seriously affect the experience of the user when returning to the game, for example requiring the user to log in again or to download the resource again.
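The following is a minimal, non-limiting sketch of the five-level classification above; the packet attributes and the decision order are assumptions chosen to mirror the textual rules, and an actual game APP would tune them per packet type.

```kotlin
// Illustrative sketch of the five-level classification; attribute names are assumed.
enum class PacketLevel { L1, L2, L3, L4, L5 }

data class GamePacket(
    val isLongLinkHeartbeat: Boolean,   // heartbeat of the main long connection
    val isDownloadRequest: Boolean,     // resource download request the user waits on
    val carriesCoreData: Boolean,       // core game data over a short link
    val isStateReport: Boolean,         // running-state reporting
    val isLeaderboardOrSocial: Boolean  // data invisible while in the background
)

fun classify(p: GamePacket): PacketLevel = when {
    p.isLongLinkHeartbeat || p.isDownloadRequest -> PacketLevel.L5 // interruption forces re-login or re-download
    p.carriesCoreData                            -> PacketLevel.L4 // important, but reconnection is cheap
    p.isStateReport                              -> PacketLevel.L3 // moderate impact on returning to the game
    p.isLeaderboardOrSocial                      -> PacketLevel.L2 // simply re-requested on foreground
    else                                         -> PacketLevel.L1 // stateless short links, auxiliary data
}

fun main() {
    val heartbeat = GamePacket(true, false, false, false, false)
    println(classify(heartbeat)) // L5
}
```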
In step S109, the manner in which the operating system, in response to the network resource request, analyzes its current network state to obtain the target network resource for background operation of the target application program is not particularly limited.
In one embodiment, if the current network state is associated with the network packet amount, in the step S109, the operating system analyzes the current network state of the operating system in response to the network resource request to obtain the target network resource for the background operation of the target application program, which may include:
the operating system determines a first amount of network data packets that can be processed per unit time in response to the network resource request.
The operating system determines a second amount of network data packets that are required by a foreground application currently running in the foreground when running and a third amount of network data packets that are required by the operating system when running.
The operating system determines candidate network data packet quantity which can be used by a background application program running in the background at present according to the first network data packet quantity, the second network data packet quantity and the third network data packet quantity; the background application includes a target application.
The operating system obtains a historical network packet amount used by the background application at a historical time.
And the operating system distributes network data packet quantity for the background application program according to the historical network data packet quantity to obtain target network resources for background operation of the target application program.
In this embodiment, when the operating system receives the network resource request sent by the client, the operating system may first determine the current network state of the system and dynamically evaluate the target network resources available to the background APP, where the current network state is associated with the network data packet amount.
Alternatively, the operating system may determine a first amount of network data packets that can be processed per unit time in response to the network resource request. The unit time may be set according to actual business requirements, e.g., every minute, every second, etc.
Optionally, the operating system determines a second network data packet amount required by a foreground application currently running in the foreground at runtime and a third network data packet amount required by the operating system at runtime. The foreground application refers to an APP that has not currently been switched to background operation, and the third network data packet amount required by the operating system at runtime refers to the data packet amount required by the operating system itself for normal operation.
Optionally, the operating system may calculate the sum of the second network data packet amount and the third network data packet amount to obtain a network data packet amount sum, and calculate the difference between the first network data packet amount and this sum to obtain the candidate network data packet amount that can be used by background applications currently running in the background. In other embodiments, the operating system may further evaluate the impact weights of the first network data packet amount, the second network data packet amount and the third network data packet amount on the network state, for example weight 1, weight 2 and weight 3 respectively. In this case, the operating system may calculate the product of the second network data packet amount and weight 2 to obtain a first product, calculate the product of the third network data packet amount and weight 3 to obtain a second product, and calculate the sum of the first product and the second product to obtain a network data packet amount sum; it may then calculate the product of the first network data packet amount and weight 1 to obtain a third product, and calculate the difference between the third product and the network data packet amount sum to obtain the candidate network data packet amount that can be used by background applications currently running in the background. The background applications currently running in the background include the target application.
Optionally, the operating system may obtain each background application and the historical network data packet amount used by each background application in a historical time. The operating system may calculate the ratio of the historical network data packet amount used by each background application in the historical time to the total historical network data packet amount to obtain the proportion of each background application, and then calculate the product of each proportion and the candidate network data packet amount to obtain the network data packet amount required by each background application to run in the background. Since the background applications include the target application, the network data packet amount required by the target application to run in the background can be obtained.
For example, suppose the background applications are application 1, application 2 and application 3, and their historical network data packet amounts are network data packet amount 1, network data packet amount 2 and network data packet amount 3 respectively. The sum of network data packet amount 1, network data packet amount 2 and network data packet amount 3 is calculated to obtain the total historical network data packet amount. The ratio of network data packet amount 1 to the total gives a first proportion, the ratio of network data packet amount 2 to the total gives a second proportion, and the ratio of network data packet amount 3 to the total gives a third proportion. The product of the first proportion and the candidate network data packet amount gives the network data packet amount required by application 1 to run in the background, the product of the second proportion and the candidate network data packet amount gives the network data packet amount required by application 2 to run in the background, and the product of the third proportion and the candidate network data packet amount gives the network data packet amount required by application 3 to run in the background.
In this way, the candidate network data packet amount available to background applications can be accurately determined from the first, second and third network data packet amounts. Because the historical network data packet amount reflects the historical actual demand of each background application, allocating the candidate network data packet amount according to the historical amounts improves the allocation precision, and thus further improves the precision and efficiency of network data packet scheduling.
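As an illustration of the packet-amount evaluation described above, the following sketch uses the unweighted variant (candidate amount = first amount minus second amount minus third amount) and allocates the candidate amount in proportion to historical usage; all names and numbers are assumptions for this example.

```kotlin
// Minimal sketch of the packet-amount evaluation; names and numbers are illustrative.
fun candidatePacketAmount(first: Long, second: Long, third: Long): Long =
    (first - second - third).coerceAtLeast(0L)

fun allocateByHistory(candidate: Long, historical: Map<String, Long>): Map<String, Long> {
    val total = historical.values.sum()
    if (total == 0L) return historical.mapValues { 0L }
    // Each background app receives a share proportional to its historical packet amount.
    return historical.mapValues { (_, used) -> candidate * used / total }
}

fun main() {
    val candidate = candidatePacketAmount(first = 1_000, second = 600, third = 100) // 300
    val history = mapOf("app1" to 50L, "app2" to 30L, "app3" to 20L)
    println(allocateByHistory(candidate, history)) // {app1=150, app2=90, app3=60}
}
```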
In another embodiment, if the current network state is associated with bandwidth information, in the step S109, the operating system analyzes the current network state of the operating system in response to the network resource request to obtain the target network resource for the background running of the target application program, which may include:
and the operating system responds to the network resource request and determines the first bandwidth information corresponding to the client.
The operating system determines second bandwidth information of the client at the current time.
The operating system determines candidate bandwidth information which can be used by a background application program running in the background at present according to the first bandwidth information and the second bandwidth information; the background application includes a target application.
The operating system obtains historical bandwidth information used by the background application at a historical time.
And the operating system distributes bandwidth information for the background application program according to the historical bandwidth information to obtain target network resources for background operation of the target application program.
In this embodiment, when the operating system receives the network resource request sent by the client, the operating system may first determine a current network state of the system, and the system dynamically evaluates the target network resources available to the background APP, where the current network state may be associated with bandwidth information.
Optionally, the operating system may determine, in response to the network resource request, the first bandwidth information corresponding to the client. The first bandwidth information corresponding to the client may be a total bandwidth corresponding to the client.
Alternatively, the operating system may determine second bandwidth information of the client at the current time. The second bandwidth information may be used to identify the amount of data that is passing through the link during the current time.
Optionally, the operating system may calculate the difference between the first bandwidth information and the second bandwidth information to obtain the candidate bandwidth information that can be used by background applications currently running in the background. In other embodiments, the operating system may further evaluate the impact weights of the first bandwidth information and the second bandwidth information on the network state, for example weight 4 and weight 5 respectively. In this case, the operating system may calculate the product of the first bandwidth information and weight 4 to obtain a third product, calculate the product of the second bandwidth information and weight 5 to obtain a fourth product, and calculate the difference between the third product and the fourth product to obtain the candidate bandwidth information. The background applications currently running in the background include the target application.
Optionally, the operating system may obtain each background application and the historical bandwidth information used by each background application in a historical time. The operating system may calculate the ratio of the historical bandwidth information used by each background application in the historical time to the total historical bandwidth information to obtain the proportion of each background application, and then calculate the product of each proportion and the candidate bandwidth information to obtain the bandwidth information required by each background application to run in the background. Since the background applications include the target application, the bandwidth information required by the target application to run in the background can be obtained.
For example, suppose the background applications are application 1, application 2 and application 3, and their historical bandwidth information is historical bandwidth information 1, historical bandwidth information 2 and historical bandwidth information 3 respectively. The sum of historical bandwidth information 1, historical bandwidth information 2 and historical bandwidth information 3 is calculated to obtain the total historical bandwidth information. The ratio of historical bandwidth information 1 to the total gives a fourth proportion, the ratio of historical bandwidth information 2 to the total gives a fifth proportion, and the ratio of historical bandwidth information 3 to the total gives a sixth proportion. The product of the fourth proportion and the candidate bandwidth information gives the bandwidth information required by application 1 to run in the background, the product of the fifth proportion and the candidate bandwidth information gives the bandwidth information required by application 2 to run in the background, and the product of the sixth proportion and the candidate bandwidth information gives the bandwidth information required by application 3 to run in the background.
In this way, the candidate bandwidth information available to background applications can be accurately determined from the first bandwidth information and the second bandwidth information. Because the historical bandwidth information reflects the historical actual demand of each background application, allocating bandwidth information according to the historical bandwidth information improves the allocation precision, and thus further improves the precision and efficiency of network data packet scheduling.
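A parallel, equally illustrative sketch for the bandwidth case is given below; the units and names are assumed for the example only, and the allocation mirrors the packet-amount sketch above.

```kotlin
// Hedged sketch: candidate bandwidth = total client bandwidth minus bandwidth
// currently in use, then distributed in proportion to historical usage.
fun candidateBandwidth(totalKbps: Double, inUseKbps: Double): Double =
    (totalKbps - inUseKbps).coerceAtLeast(0.0)

fun allocateBandwidth(candidate: Double, historical: Map<String, Double>): Map<String, Double> {
    val total = historical.values.sum()
    return if (total == 0.0) historical.mapValues { 0.0 }
           else historical.mapValues { (_, used) -> candidate * used / total }
}
```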
When the operating system analyzes the current network state to obtain the target network resource, it may consider the network data packet amount alone, consider the bandwidth information alone, or consider both the network data packet amount and the bandwidth information. When both are considered, the packet-amount processing and the bandwidth processing described above may be performed simultaneously.
Taking a game APP as an example, except in resource download scenarios, the packet amount is large while the bandwidth usage is small most of the time, so the bottleneck is generally the packet amount; for a game APP, the target network resource can therefore be determined mainly based on the packet amount.
In an alternative embodiment, the APP development client may further update the preset network scheduling policy stored in the storage module in the server, and fig. 5 is a schematic flow chart illustrating updating the preset network scheduling policy according to an exemplary embodiment, and as shown in fig. 5, the updating the preset network scheduling policy may include:
S201, a server receives a strategy updating request; the policy updating request carries an updated preset network scheduling policy, the updated preset network scheduling policy is determined based on updated scene type information, and the updated scene type information is obtained by updating scene type information represented by preset mapping information based on the running condition of a target application program in the background.
S203, the server responds to the strategy updating request, and updates the preset network scheduling strategy stored in the storage module based on the updated preset network scheduling strategy.
In this embodiment, the APP development client may update the preset network scheduling policy in the storage module of the server at regular or irregular intervals. The APP development client may obtain the running condition of the target application in the background, and update the existing scene type information and the processing priority of each network data packet according to the running condition to obtain updated scene type information. The APP development client can then send a policy update request to the server, where the policy update request carries the updated preset network scheduling policy; the server, in response to the policy update request, updates the preset network scheduling policy stored in the storage module based on the updated preset network scheduling policy, and the updated preset network scheduling policy in the storage module is subsequently issued to clients for use. In this way, the scene type information can be updated according to the actual running condition of the target application in the background, so that clients schedule network data packets using the updated scene type information, which improves the scheduling accuracy of network data packets and the user experience.
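Purely as an illustration of the exchange in S201 and S203, the following sketch models the update request and the server-side replacement of the stored policy; the request shape, the storage interface and the map key are assumptions, not the actual server implementation.

```kotlin
// Hedged sketch of the policy-update exchange (S201/S203); all types are assumed.
data class SchedulingPolicy(val policiesBySceneType: Map<String, String>)

data class PolicyUpdateRequest(val updatedPolicy: SchedulingPolicy)

class PolicyStore {
    @Volatile var current: SchedulingPolicy = SchedulingPolicy(emptyMap()) // storage module contents
}

class PolicyServer(private val store: PolicyStore) {
    // S203: replace the stored preset policy with the updated one carried in the request.
    fun handleUpdate(request: PolicyUpdateRequest) {
        store.current = request.updatedPolicy
    }
}
```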
In step S1013, the client performs the scheduling process on the network data packet according to the processing priority of the network data packet and the target network resource, which may be implemented in various manners.
In one embodiment, as further shown in fig. 4, if the number of network data packets is plural, in step S1013, the client performs scheduling processing on the network data packets according to the processing priority of the network data packets and the target network resource, and may include:
S10131, when the client determines that the target network resource does not meet the preset network resource condition, the client schedules the network data packets whose processing priority is greater than the preset priority threshold, and suspends the scheduling operation of, or discards, the network data packets whose processing priority is less than or equal to the preset priority threshold.
S10133, under the condition that the client determines that the network resource meets the preset network resource condition, scheduling processing is carried out on each network data packet.
In this embodiment, when the client determines that the target network resource meets the preset network resource condition, each network data packet is scheduled normally. When the client determines that the target network resource does not meet the preset network resource condition, the client preferentially schedules the network data packets whose processing priority is greater than the preset priority threshold, and at the same time suspends the scheduling operation of, or discards, the network data packets whose processing priority is less than or equal to the preset priority threshold.
The preset network resource condition and the preset priority threshold may be set according to the actual service requirement, which is not limited specifically.
In this way, network data packets can be scheduled according to the priority assigned by the client and the amount of target network resources available from the operating system, which ensures the network experience of the foreground APP while preserving the network communication capability of the background APP to the greatest extent, and the user experience of the background APP can be appropriately improved in different scenarios.
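The scheduling decision in S10131/S10133 can be sketched as follows; the boolean resource check, the integer threshold and the choice between pausing and discarding are simplifying assumptions for this example.

```kotlin
// Illustrative sketch of the S10131/S10133 decision; thresholds and flags are assumed.
enum class Action { SCHEDULE, PAUSE, DISCARD }

data class PrioritizedPacket(val id: String, val priority: Int)

fun decide(
    packet: PrioritizedPacket,
    resourcesSufficient: Boolean,
    priorityThreshold: Int,
    discardLowPriority: Boolean = false
): Action = when {
    resourcesSufficient                 -> Action.SCHEDULE // S10133: schedule every packet normally
    packet.priority > priorityThreshold -> Action.SCHEDULE // S10131: keep high-priority traffic
    discardLowPriority                  -> Action.DISCARD  // S10131: drop low-priority traffic
    else                                -> Action.PAUSE    // S10131: suspend low-priority traffic
}
```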
In other embodiments, the client may further obtain the scheduling condition of each network data packet at a historical time, and schedule each network data packet at the current time according to the historical processing priority and the historical network resources of each network data packet in that scheduling condition.
In an alternative embodiment, the method may further include:
and under the condition that the current network scheduling policy is a scheduling policy for suspending network communication, the client side suspends the scheduling operation of the network data packet.
In this embodiment, for a scheduling policy that allows network communication to be suspended, all network communication of the APP in the terminal may be directly interrupted, so as to ensure the use experience of the foreground APP.
The following describes the data scheduling processing method taking a game APP as an example, where the client is a game client and the server is a game server:
1. When a user starts the game, the corresponding preset network scheduling policy is acquired from the game server in advance:
1. the user normally starts the game.
2. The game client requests a network scheduling policy. And the game client responds to the service starting instruction and sends a scheduling policy acquisition request to the game server.
3. The game server issues a preset network scheduling policy. The game server responds to a scheduling policy acquisition request and acquires a preset network scheduling policy from a storage module; the preset network scheduling strategies comprise network scheduling strategies corresponding to different scene type information; the game server sends a preset network scheduling policy to the game client.
4. The game client locally caches a preset network scheduling policy. After receiving a preset network scheduling strategy issued by a game server, the game client locally caches the strategy for the game APP to be used when being switched to the background, and normally loads other game logic for a user.
5. The user plays normally. After the game client finishes policy caching, the user continues normal game logic.
2. At the game server, the game development client may also update the corresponding preset network scheduling policy:
1. The APP development client updates the preset network scheduling policy at regular or irregular intervals. According to the actual running condition of the game APP in the background, the scene type information and the priority of each network data packet are updated, and the preset network scheduling policy is updated according to the updated scene type information to better improve the user experience.
2. The game server stores the updated preset network scheduling policy. The game server receives a policy update request; the policy updating request carries the updated preset network scheduling policy, and the game server responds to the policy updating request and updates the preset network scheduling policy stored in the storage module based on the updated preset network scheduling policy.
3. When the user switches the game APP to the background, the game APP schedules its network data packets according to the policy cached in the previous flow (a minimal sketch of this background scheduling loop is given after the list):
1. the user switches the game APP from the foreground to the background. And the game client responds to the switching instruction, and when the game APP is switched from the foreground to the background to run, the game client starts a background network scheduling strategy.
2. The game client determines current scene type information. The game client acquires the identification information of the current running scene and the state information of the client object corresponding to the target application program, and performs scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information.
3. The game client determines whether network communication needs to be interrupted according to the current scene type information. The game client acquires a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy.
4. Aiming at the scene type capable of interrupting network communication, the scheduling module directly interrupts all network communication of the game APP.
5. For the scene type that can not interrupt network communication, the game client determines the processing priority of the network data packet in the target application program.
6. The game client acquires the target network resource for background running of the game APP. The game client sends a network resource request to the operating system; the operating system, in response to the network resource request, analyzes its current network state to obtain the target network resource (packet amount and bandwidth) for background running of the target application program, and sends the target network resource to the client. To improve processing efficiency, the game client makes this call at regular intervals (e.g., once every 5 seconds) during background operation. Of course, in some embodiments, the call may also be made in real time.
7. And processing the corresponding network data packet. The game client side dispatches the network data packets according to the processing priority of the network data packets and the target network resources: under the condition that the game client determines that the target network resource does not meet the preset network resource condition, scheduling the network data packet with the processing priority greater than the preset priority threshold, and stopping the scheduling operation of the network data packet with the processing priority less than or equal to the preset priority threshold or discarding the network data packet with the processing priority less than or equal to the preset priority threshold; and under the condition that the game client determines that the network resource meets the preset network resource condition, scheduling each network data packet.
8. It is determined whether the background running state has ended, that is, whether the APP has been switched back to foreground operation; if it is still running in the background, the corresponding background network data packets continue to be processed.
9. If the operation is switched to the foreground operation, the normal network communication logic is restored.
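A rough sketch of the background loop in steps 1 to 9 above is given below; the interface names, the packet-budget representation and the 5-second polling interval are assumptions rather than a prescribed implementation.

```kotlin
// Hedged sketch of the background scheduling loop; interfaces and interval are assumed.
interface GameOs {
    fun queryTargetNetworkResource(): Long   // target packet budget for the next interval
}

interface Scheduler {
    fun schedule(budget: Long)               // schedule background packets within the budget
    fun resumeNormal()                       // restore normal network communication logic
}

fun runBackgroundScheduling(
    os: GameOs,
    scheduler: Scheduler,
    stillInBackground: () -> Boolean,
    pollIntervalMs: Long = 5_000
) {
    while (stillInBackground()) {                    // step 8: stop once switched back to the foreground
        val budget = os.queryTargetNetworkResource() // step 6: periodic query of the target resource
        scheduler.schedule(budget)                   // step 7: schedule packets within the budget
        Thread.sleep(pollIntervalMs)
    }
    scheduler.resumeNormal()                         // step 9: restore normal network communication
}
```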
The following describes the data scheduling processing method with the client as an execution body:
fig. 6 is a flowchart third illustrating a data scheduling processing method according to an exemplary embodiment, and as shown in fig. 6, the data scheduling processing method may include:
S301, switching the target application program from the foreground to the background operation in response to the switching instruction.
S303, acquiring identification information of a current running scene and state information of a client object corresponding to the target application program, and performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information.
S305, acquiring a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy.
S307, under the condition that the current network scheduling policy is a scheduling policy for maintaining network communication, determining the processing priority of the network data packet in the target application program, and sending a network resource request to an operating system, so that the operating system responds to the network resource request to analyze the current network state of the operating system, and a target network resource for background operation of the target application program is obtained.
S309, receiving the target network resource sent by the operating system.
S3011, scheduling the network data packet according to the processing priority of the network data packet and the target network resource.
In an optional embodiment, the performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information includes:
acquiring preset mapping information; the preset mapping information characterizes the mapping relation among the identification information of the operation scene, the state information of the client object and the type identification information of the scene type information.
And determining target type identification information corresponding to the identification information of the current running scene and the state information of the client object according to the preset mapping information, and determining scene type information corresponding to the target type identification information as the current scene type information.
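For illustration, the mapping lookup described above can be sketched as follows; the key structure and the example entries are assumptions and not part of the preset mapping information itself.

```kotlin
// Hedged sketch of the preset-mapping lookup; key shape and entries are assumed.
data class SceneKey(val sceneId: String, val clientObjectState: String)

fun currentSceneType(
    mapping: Map<SceneKey, String>,   // preset mapping information -> type identification information
    sceneId: String,
    clientObjectState: String
): String? = mapping[SceneKey(sceneId, clientObjectState)]

fun main() {
    val mapping = mapOf(
        SceneKey("battle", "in-match") to "keep-communication",
        SceneKey("lobby", "idle") to "may-suspend"
    )
    println(currentSceneType(mapping, "battle", "in-match")) // keep-communication
}
```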
In an alternative embodiment, before the switching of the target application from the foreground to the background operation in response to the switching instruction, the method further comprises:
responding to a service starting instruction, sending a scheduling policy acquisition request to a server, so that the server responds to the scheduling policy acquisition request and acquires the preset network scheduling policy from a storage module; the preset network scheduling strategies comprise network scheduling strategies corresponding to different scene type information;
Receiving the preset network scheduling strategy sent by the server;
caching the preset network scheduling strategy;
the obtaining the current network scheduling policy corresponding to the current scene type information from the preset network scheduling policy includes:
and acquiring a current network scheduling strategy corresponding to the current scene type information from the cached preset network scheduling strategy.
In an alternative embodiment, the number of the network data packets is plural, and the determining the processing priority of the network data packet in the target application program includes:
acquiring priority associated data; the priority associated data comprises at least one of influence data of interruption of each network data packet on a client object, link information of each network data packet and network resource demand information of an operating system;
and analyzing the priority of each network data packet according to the priority related data to obtain the processing priority of each network data packet.
In an optional embodiment, the analyzing the priority of each network data packet according to the priority related data to obtain the processing priority of each network data packet includes:
Setting the processing priority of the network data packet with the link information being a long link as a first priority and setting the processing priority of the network data packet with the link information being a short link as a second priority in the case that the priority-related data includes the link information of each network data packet;
wherein the first priority is greater than the second priority.
In an optional embodiment, the analyzing the priority of each network data packet according to the priority related data to obtain the processing priority of each network data packet includes:
setting the processing priority of the network data packet meeting the first preset condition as a first priority and setting the processing priority of the network data packet meeting the second preset condition as a second priority under the condition that the priority related data comprises network resource demand information of an operating system and the network resource demand information is larger than a preset resource demand threshold;
the first preset condition is at least one of a packet quantity smaller than or equal to a preset packet quantity threshold and a transceiving frequency smaller than or equal to a preset transceiving threshold, the second preset condition is at least one of a packet quantity larger than the preset packet quantity threshold and a transceiving frequency larger than the preset transceiving threshold, and the first priority is larger than the second priority.
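A minimal sketch of this rule, under assumed field names and thresholds, is given below; where a packet would satisfy both preset conditions, the first preset condition is given precedence in this sketch.

```kotlin
// Illustrative sketch: when the operating system's network resource demand exceeds its
// threshold, packets with a small amount or low send/receive frequency are favored.
enum class PacketPriority { FIRST, SECOND }

data class PacketStats(val packetAmount: Long, val sendRecvFrequency: Double)

fun priorityUnderPressure(
    stats: PacketStats,
    packetAmountThreshold: Long = 100,
    frequencyThreshold: Double = 10.0
): PacketPriority =
    if (stats.packetAmount <= packetAmountThreshold || stats.sendRecvFrequency <= frequencyThreshold)
        PacketPriority.FIRST    // first preset condition: small packet amount or low frequency
    else
        PacketPriority.SECOND   // second preset condition: large packet amount and high frequency
```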
In an optional embodiment, the analyzing the priority of each network data packet according to the priority related data to obtain the processing priority of each network data packet includes:
setting the processing priority of the network data packet with the influence data larger than a preset influence threshold value as a first priority and setting the processing priority of the network data packet with the influence data smaller than or equal to the preset influence threshold value as a second priority under the condition that the priority related data comprises the influence data of the interruption of each network data packet to the client object;
wherein the first priority is greater than the second priority.
In an optional embodiment, the number of the network data packets is a plurality, and the scheduling processing of the network data packets according to the processing priority of the network data packets and the target network resource includes:
under the condition that the target network resource does not meet the preset network resource condition, carrying out scheduling processing on the network data packets with the processing priority greater than a preset priority threshold, and stopping scheduling operation on the network data packets with the processing priority less than or equal to the preset priority threshold, or discarding the network data packets with the processing priority less than or equal to the preset priority threshold;
And under the condition that the network resource meets the preset network resource condition, scheduling each network data packet.
In an alternative embodiment, the method may further comprise:
and under the condition that the current network scheduling strategy is a scheduling strategy for suspending network communication, suspending the scheduling operation of the network data packet.
The following describes the data scheduling processing method using the operating system as an execution body:
fig. 7 is a flowchart illustrating a data scheduling processing method according to an exemplary embodiment, and as shown in fig. 7, the data scheduling processing method may include:
s401, receiving a network resource request sent by a client under the condition that a current network scheduling strategy is a scheduling strategy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to the current scene type information, which is acquired from a preset network scheduling policy by the client; the current scene type information is obtained by performing scene type analysis on the identification information of the current operation scene and the state information of the client object when the client responds to the switching instruction and switches the target application program from the foreground to the background operation.
S403, responding to the network resource request, analyzing the current network state of the local operating system, and obtaining the target network resource for background operation of the target application program.
S405, sending the target network resource to the client so that the client can schedule the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
In an alternative embodiment, the current network state is associated with a network packet amount, and the analyzing the current network state of the local operating system in response to the network resource request to obtain the target network resource for background operation of the target application program includes:
determining a first network data packet quantity which can be processed in unit time in response to the network resource request;
determining a second network data packet quantity required by a foreground application program currently running in the foreground when running and a third network data packet quantity required by the operating system when running;
determining a candidate network data packet amount that can be used by a background application currently running in the background according to the first network data packet amount, the second network data packet amount and the third network data packet amount; the background application program comprises the target application program;
Acquiring the historical network data packet quantity used by the background application program in the historical time;
and distributing the candidate network data packet quantity to the background application program according to the historical network data packet quantity to obtain a target network resource for background operation of the target application program.
In an optional embodiment, the current network state is associated with bandwidth information, and the analyzing the current network state of the local operating system in response to the network resource request, to obtain the target network resource for background running of the target application program includes:
responding to the network resource request, and determining first bandwidth information corresponding to the client;
determining second bandwidth information of the client at the current time;
determining candidate bandwidth information which can be used by a background application program running in the background at present according to the first bandwidth information and the second bandwidth information; the background application program comprises the target application program;
acquiring historical bandwidth information used by the background application program in historical time;
and distributing bandwidth information to the background application program according to the historical bandwidth information to obtain target network resources for background operation of the target application program.
Fig. 8 is a block diagram one of a data scheduling processing apparatus according to an exemplary embodiment, which may include, as shown in fig. 8:
a switching response module 501, configured to switch the target application from the foreground to the background running in response to the switching instruction;
a scene type analysis module 503, configured to obtain identification information of a current running scene and state information of a client object corresponding to the target application, and perform scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information;
a current network scheduling policy obtaining module 505, configured to obtain a current network scheduling policy corresponding to the current scene type information from a preset network scheduling policy;
a priority processing and request sending module 507, configured to determine a processing priority of a network data packet in the target application program when the current network scheduling policy is a scheduling policy for maintaining network communication, and send a network resource request to an operating system, so that the operating system analyzes a current network state of the operating system in response to the network resource request, and obtains a target network resource for background operation of the target application program;
A network resource receiving module 509, configured to receive the target network resource sent by the operating system;
and the scheduling module 5011 is configured to schedule the network data packet according to the processing priority of the network data packet and the target network resource.
In an alternative embodiment, the scene type analysis module includes:
the mapping information acquisition unit is used for acquiring preset mapping information; the preset mapping information characterizes the mapping relation among the identification information of the operation scene, the state information of the client object and the type identification information of the scene type information.
And the scene type information generating unit is used for determining target type identification information corresponding to the identification information of the current running scene and the state information of the client object according to the preset mapping information, and determining the scene type information corresponding to the target type identification information as the current scene type information.
In an alternative embodiment, the apparatus further comprises:
the starting instruction response module is used for responding to the service starting instruction and sending a scheduling policy acquisition request to a server so that the server responds to the scheduling policy acquisition request and acquires the preset network scheduling policy from the storage module; the preset network scheduling strategies comprise network scheduling strategies corresponding to different scene type information.
And the preset network scheduling policy receiving module is used for receiving the preset network scheduling policy sent by the server.
And the caching module is used for caching the preset network scheduling strategy.
Correspondingly, the current network scheduling policy obtaining module is further configured to obtain a current network scheduling policy corresponding to the current scene type information from the cached preset network scheduling policies.
In an alternative embodiment, the number of the network data packets is plural, and the priority processing and request sending module includes:
the associated data acquisition unit is used for acquiring the priority associated data; the priority association data includes at least one of influence data of interruption of each network data packet on the client object, link information of each network data packet, and network resource requirement information of the operating system.
And the priority analysis unit is used for analyzing the priority of each network data packet according to the priority associated data to obtain the processing priority of each network data packet.
In an alternative embodiment, the priority analysis unit comprises:
a first analysis subunit, configured to set, in a case where the priority association data includes link information of each network packet, a processing priority of a network packet whose link information is long link as a first priority, and a processing priority of a network packet whose link information is short link as a second priority;
Wherein the first priority is greater than the second priority.
In an alternative embodiment, the priority analysis unit comprises:
a second analysis subunit, configured to set, when the priority related data includes network resource requirement information of an operating system and the network resource requirement information is greater than a preset resource requirement threshold, a processing priority of a network packet that satisfies a first preset condition as a first priority, and set, as a second priority, a processing priority of a network packet that satisfies a second preset condition;
the first preset condition is at least one of a packet quantity smaller than or equal to a preset packet quantity threshold and a transceiving frequency smaller than or equal to a preset transceiving threshold, the second preset condition is at least one of a packet quantity larger than the preset packet quantity threshold and a transceiving frequency larger than the preset transceiving threshold, and the first priority is larger than the second priority.
In an alternative embodiment, the priority analysis unit comprises:
a third analysis subunit, configured to set, in a case where the priority association data includes impact data of interruption of each network data packet on the client object, a processing priority of a network data packet whose impact data is greater than a preset impact threshold as a first priority, and set a processing priority of a network data packet whose impact data is less than or equal to the preset impact threshold as a second priority;
Wherein the first priority is greater than the second priority.
In an alternative embodiment, the number of the network data packets is a plurality, and the scheduling module includes:
and the first scheduling unit is used for scheduling the network data packets with the processing priority greater than a preset priority threshold value under the condition that the target network resource does not meet the preset network resource condition, and stopping the scheduling operation of the network data packets with the processing priority less than or equal to the preset priority threshold value, or discarding the network data packets with the processing priority less than or equal to the preset priority threshold value.
And the second scheduling unit is used for scheduling each network data packet under the condition that the network resource meets the preset network resource condition.
In an alternative embodiment, the apparatus further comprises:
and the suspension module is used for suspending the scheduling operation of the network data packet under the condition that the current network scheduling strategy is the scheduling strategy for suspending network communication.
Fig. 9 is a block diagram two of a data scheduling processing apparatus according to an exemplary embodiment, and as shown in fig. 9, the data scheduling processing apparatus may include:
A request receiving module 601, configured to receive a network resource request sent by a client, where a current network scheduling policy is a scheduling policy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to the current scene type information, which is acquired from a preset network scheduling policy by the client; the current scene type information is obtained by performing scene type analysis on the identification information of the current operation scene and the state information of the client object when the client responds to the switching instruction and switches the target application program from the foreground to the background operation.
And the network resource generating module 603 is configured to analyze a current network state of the local operating system in response to the network resource request, so as to obtain a target network resource for background running of the target application program.
A network resource sending module 605, configured to send the target network resource to the client, so that the client performs scheduling processing on the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
In an alternative embodiment, the current network state is associated with a network packet amount, and the network resource generating module includes:
and the first network data packet quantity determining unit is used for determining the first network data packet quantity which can be processed in unit time in response to the network resource request.
And the second and third network data quantity determining units are used for determining the second network data packet quantity required by the foreground application program currently running in the foreground in the running process and the third network data packet quantity required by the operating system in the running process.
A candidate network data packet amount determining unit configured to determine, based on the first network data packet amount, the second network data packet amount, and the third network data packet amount, a candidate network data packet amount that can be used by a background application currently running in the background; the background application includes the target application.
And the historical network data packet quantity acquisition unit is used for acquiring the historical network data packet quantity used by the background application program in the historical time.
And the target network resource generating unit is used for distributing the candidate network data packet quantity to the background application program according to the historical network data packet quantity to obtain target network resources for background operation of the target application program.
In an alternative embodiment, the current network state is associated with bandwidth information, and the network resource generating module includes:
and the first bandwidth information determining unit is used for responding to the network resource request and determining the first bandwidth information corresponding to the client.
And the second bandwidth information determining unit is used for determining the second bandwidth information of the client at the current time.
A candidate bandwidth information determining unit configured to determine candidate bandwidth information that can be used by a background application currently running in the background, based on the first bandwidth information and the second bandwidth information; the background application includes the target application.
And the historical bandwidth information acquisition unit is used for acquiring historical bandwidth information used by the background application program in historical time.
And the target network resource generating unit is used for distributing bandwidth information to the background application program according to the historical bandwidth information to obtain target network resources for background operation of the target application program.
It should be noted that the device embodiments provided in the embodiments of the present application are based on the same inventive concept as the method embodiments described above.
The embodiment of the application also provides an electronic device for data scheduling processing, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the data scheduling processing method provided by any embodiment.
Embodiments of the present application also provide a computer readable storage medium that may be provided in a terminal to store at least one instruction or at least one program for implementing a data scheduling processing method in a method embodiment, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the data scheduling processing method as provided in the method embodiment described above.
Optionally, in the embodiments of the present specification, the storage medium may be located in at least one network server among a plurality of network servers of the computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The memory of the embodiments of the present specification may be used to store software programs and modules, and the processor executes various functional applications and data scheduling processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for functions, and the like, and the data storage area may store data created according to the use of the device, and the like. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the data scheduling processing method provided by the above method embodiment.
The data scheduling processing method provided in the embodiments of the present application may be executed in a terminal, a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 10 is a block diagram of a hardware structure of a server according to an exemplary embodiment. As shown in fig. 10, the server 700 may vary considerably in configuration or performance and may include one or more central processing units (Central Processing Unit, CPU) 710 (the central processing unit 710 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 730 for storing data, and one or more storage media 720 (e.g., one or more mass storage devices) for storing application programs 723 or data 722. The memory 730 and the storage media 720 may be transitory or persistent. The program stored in a storage medium 720 may include one or more modules, each of which may include a series of instruction operations on the server. Further, the central processing unit 710 may be configured to communicate with the storage media 720 and execute, on the server 700, the series of instruction operations in the storage media 720. The server 700 may also include one or more power supplies 760, one or more wired or wireless network interfaces 750, one or more input/output interfaces 740, and/or one or more operating systems 721, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The input/output interface 740 may be used to receive or transmit data via a network. A specific example of the network described above may include a wireless network provided by a communication provider of the server 700. In one example, the input/output interface 740 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the input/output interface 740 may be a Radio Frequency (RF) module that communicates with the internet wirelessly.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 10 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, server 700 may also include more or fewer components than shown in fig. 10, or have a different configuration than shown in fig. 10.
It should be noted that the order of the foregoing embodiments of the present application is for description only and does not imply any preference among the embodiments. The foregoing description covers specific embodiments of the present specification; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the device and server embodiments are described relatively briefly because they are substantially similar to the method embodiments, and the relevant parts of the description of the method embodiments apply to them.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing description of the preferred embodiments of the present application is not intended to be limiting, but rather is intended to cover any and all modifications, equivalents, alternatives, and improvements within the spirit and principles of the present application.

Claims (15)

1. A data scheduling processing method, the method comprising:
switching a target application program from running in the foreground to running in the background in response to a switching instruction;
acquiring identification information of a current running scene and state information of a client object corresponding to the target application program, and performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information;
acquiring a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy;
determining the processing priority of a network data packet in the target application program under the condition that the current network scheduling strategy is a scheduling strategy for maintaining network communication, and sending a network resource request to an operating system so that the operating system responds to the network resource request to analyze the current network state of the operating system and obtain a target network resource for background operation of the target application program;
receiving the target network resource sent by the operating system;
and scheduling the network data packet according to the processing priority of the network data packet and the target network resource.
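For readability, a minimal Python sketch of the client-side flow recited in claim 1 follows; every name, policy value, and threshold in it is an illustrative assumption rather than a disclosed implementation.

```python
# Hypothetical end-to-end client flow mirroring claim 1; every class, value, and rule
# here is an illustrative assumption, not an implementation disclosed by the application.
KEEP_NETWORK, SUSPEND_NETWORK = "keep", "suspend"

PRESET_POLICIES = {            # scene type information -> network scheduling policy
    "idle_lobby": SUSPEND_NETWORK,
    "in_match": KEEP_NETWORK,
}


def scene_type(scene_id, client_state):
    # Scene type analysis: a lookup keyed by running-scene id and client-object state.
    mapping = {("lobby", "waiting"): "idle_lobby", ("match", "playing"): "in_match"}
    return mapping.get((scene_id, client_state), "idle_lobby")


def packet_priority(packet):
    # Example rule only: traffic on long-lived links outranks short-lived traffic.
    return 2 if packet["link"] == "long" else 1


def schedule_on_background_switch(scene_id, client_state, packets, granted_kbps):
    policy = PRESET_POLICIES[scene_type(scene_id, client_state)]
    if policy == SUSPEND_NETWORK:
        return []                                  # suspend scheduling of network packets
    ranked = sorted(packets, key=packet_priority, reverse=True)
    if granted_kbps < 100:                         # granted resource below an assumed condition
        ranked = [p for p in ranked if packet_priority(p) > 1]
    return ranked                                  # packets in the order they will be sent


if __name__ == "__main__":
    pkts = [{"id": 1, "link": "long"}, {"id": 2, "link": "short"}]
    print(schedule_on_background_switch("match", "playing", pkts, granted_kbps=50))
```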
2. The method according to claim 1, wherein the performing scene type analysis on the identification information of the current running scene and the state information of the client object to obtain current scene type information includes:
acquiring preset mapping information; the preset mapping information characterizes the mapping relation among the identification information of the running scene, the state information of the client object, and the type identification information of the scene type information;
and determining target type identification information corresponding to the identification information of the current running scene and the state information of the client object according to the preset mapping information, and determining scene type information corresponding to the target type identification information as the current scene type information.
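A two-level dictionary lookup is one straightforward way to realize the preset mapping information described in claim 2; the sketch below is illustrative, and the table contents and identifiers are invented.

```python
# Invented example of the two-level mapping in claim 2.
PRESET_MAPPING = {
    # (running-scene identification, client-object state) -> type identification information
    ("battle_scene", "in_combat"): "T1",
    ("lobby_scene", "idle"): "T2",
}
TYPE_TO_SCENE_INFO = {
    # type identification information -> scene type information
    "T1": "realtime_interactive",
    "T2": "non_interactive",
}


def current_scene_type(scene_id, client_state, default="non_interactive"):
    type_id = PRESET_MAPPING.get((scene_id, client_state))
    return TYPE_TO_SCENE_INFO.get(type_id, default)


print(current_scene_type("battle_scene", "in_combat"))   # -> realtime_interactive
```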
3. The method of claim 1, wherein before the target application program is switched from running in the foreground to running in the background in response to the switching instruction, the method further comprises:
responding to a service starting instruction, sending a scheduling policy acquisition request to a server, so that the server responds to the scheduling policy acquisition request and acquires the preset network scheduling strategy from a storage module; the preset network scheduling strategy comprises network scheduling strategies corresponding to different scene type information;
receiving the preset network scheduling strategy sent by the server;
caching the preset network scheduling strategy;
the obtaining the current network scheduling policy corresponding to the current scene type information from the preset network scheduling policy includes:
and acquiring a current network scheduling strategy corresponding to the current scene type information from the cached preset network scheduling strategy.
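The fetch-and-cache behaviour of claim 3 can be pictured as a small cache object that is filled once when the service starts; the sketch below assumes an in-memory dictionary cache and a placeholder fetch function.

```python
# A sketch of the fetch-and-cache behaviour in claim 3; the in-memory dictionary and the
# fetch_from_server callable are stand-ins for the client cache and the server RPC.
class PolicyCache:
    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server
        self._cached = {}

    def on_service_start(self):
        # Sent once in response to the service starting instruction: the server reads the
        # preset network scheduling policies from its storage module and returns them.
        self._cached = self._fetch()

    def policy_for(self, scene_type_info):
        # Later lookups hit the local cache instead of going back to the server.
        return self._cached.get(scene_type_info)


cache = PolicyCache(lambda: {"realtime_interactive": "keep", "non_interactive": "suspend"})
cache.on_service_start()
print(cache.policy_for("realtime_interactive"))   # -> keep
```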
4. The method of claim 1, wherein the number of the network data packets is plural, and wherein determining the processing priority of the network data packets in the target application program comprises:
acquiring priority-related data; the priority-related data comprises at least one of influence data of interruption of each network data packet on a client object, link information of each network data packet, and network resource demand information of an operating system;
and analyzing the priority of each network data packet according to the priority-related data to obtain the processing priority of each network data packet.
5. The method according to claim 4, wherein the analyzing the priority of each network data packet according to the priority-related data to obtain the processing priority of each network data packet comprises:
setting the processing priority of the network data packet whose link information indicates a long link as a first priority and setting the processing priority of the network data packet whose link information indicates a short link as a second priority, in the case that the priority-related data includes the link information of each network data packet;
wherein the first priority is greater than the second priority.
6. The method according to claim 4, wherein the analyzing the priority of each network data packet according to the priority-related data to obtain the processing priority of each network data packet comprises:
setting the processing priority of the network data packet meeting a first preset condition as a first priority and setting the processing priority of the network data packet meeting a second preset condition as a second priority, in the case that the priority-related data comprises network resource demand information of the operating system and the network resource demand information is greater than a preset resource demand threshold;
wherein the first preset condition is at least one of the packet quantity being less than or equal to a preset packet quantity threshold and the transceiving frequency being less than or equal to a preset transceiving threshold, the second preset condition is at least one of the packet quantity being greater than the preset packet quantity threshold and the transceiving frequency being greater than the preset transceiving threshold, and the first priority is greater than the second priority.
7. The method according to claim 4, wherein the analyzing the priority of each network data packet according to the priority-related data to obtain the processing priority of each network data packet comprises:
setting the processing priority of the network data packet whose influence data is greater than a preset influence threshold as a first priority and setting the processing priority of the network data packet whose influence data is less than or equal to the preset influence threshold as a second priority, in the case that the priority-related data comprises the influence data of the interruption of each network data packet on the client object;
wherein the first priority is greater than the second priority.
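Claims 5 to 7 each derive the processing priority from a different kind of priority-related data; the following sketch paraphrases the three rules side by side, with all thresholds and field names chosen only for illustration.

```python
# Hypothetical restatement of the rules in claims 5-7; thresholds and field names are assumed.
FIRST, SECOND = 2, 1      # the first priority outranks the second priority


def priority_by_link(pkt):
    # Claim 5: packets on long links get the first priority, short links the second.
    return FIRST if pkt["link"] == "long" else SECOND


def priority_by_resource_demand(pkt, os_demand, demand_threshold=0.8,
                                qty_threshold=10, freq_threshold=5):
    # Claim 6 only constrains the case where the operating system's demand exceeds the
    # preset threshold; defaulting to the first priority otherwise is an assumption here.
    if os_demand <= demand_threshold:
        return FIRST
    small_or_quiet = (pkt["quantity"] <= qty_threshold) or (pkt["frequency"] <= freq_threshold)
    return FIRST if small_or_quiet else SECOND


def priority_by_impact(pkt, impact_threshold=0.5):
    # Claim 7: packets whose interruption would noticeably affect the client object come first.
    return FIRST if pkt["impact"] > impact_threshold else SECOND


print(priority_by_link({"link": "long"}),
      priority_by_resource_demand({"quantity": 3, "frequency": 2}, os_demand=0.9),
      priority_by_impact({"impact": 0.7}))          # -> 2 2 2
```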
8. The method according to any one of claims 1 to 7, wherein the number of the network data packets is plural, and wherein the scheduling the network data packets according to the processing priority of the network data packets and the target network resource includes:
under the condition that the target network resource does not meet a preset network resource condition, performing scheduling processing on the network data packets whose processing priority is greater than a preset priority threshold, and stopping the scheduling operation on, or discarding, the network data packets whose processing priority is less than or equal to the preset priority threshold;
and under the condition that the target network resource meets the preset network resource condition, performing scheduling processing on each network data packet.
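The dispatch rule of claim 8 can be summarised as: schedule everything when the target network resource satisfies the preset condition, otherwise schedule only packets above the priority threshold and pause or discard the rest. A hypothetical sketch, with invented threshold values, follows.

```python
# Sketch of the dispatch rule in claim 8; the resource condition and priority threshold
# values are placeholders.
def dispatch(packets, priorities, target_resource_kbps,
             resource_condition_kbps=200, priority_threshold=1, discard_low=False):
    if target_resource_kbps >= resource_condition_kbps:
        # The target network resource meets the preset condition: schedule everything.
        return {"schedule": list(packets), "pause": [], "discard": []}
    high = [p for p in packets if priorities[p] > priority_threshold]
    low = [p for p in packets if priorities[p] <= priority_threshold]
    return {"schedule": high,
            "pause": [] if discard_low else low,
            "discard": low if discard_low else []}


print(dispatch(["heartbeat", "upload"], {"heartbeat": 2, "upload": 1}, target_resource_kbps=50))
```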
9. The method according to any one of claims 1 to 7, further comprising:
and under the condition that the current network scheduling strategy is a scheduling strategy for suspending network communication, suspending the scheduling operation of the network data packet.
10. A data scheduling processing method, the method comprising:
receiving a network resource request sent by a client under the condition that a current network scheduling policy is a scheduling policy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to current scene type information and is acquired by the client from a preset network scheduling policy; the current scene type information is obtained by the client by performing scene type analysis on identification information of a current running scene and state information of a client object when the client, in response to a switching instruction, switches a target application program from running in the foreground to running in the background;
analyzing the current network state of the local operating system in response to the network resource request to obtain a target network resource for background operation of the target application program;
sending the target network resource to the client so that the client performs scheduling processing on the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
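A minimal sketch of the operating-system-side handling in claim 10 is given below; analyze_network_state and reply stand in for whatever measurement and return channel the operating system actually uses and are assumptions of this sketch.

```python
# Hypothetical operating-system-side handler for claim 10; analyze_network_state and reply
# are assumed stand-ins for the OS measurement and the return channel to the client.
def handle_network_resource_request(app_name, current_policy, analyze_network_state, reply):
    if current_policy != "keep_network_communication":
        return                                       # requests only arrive under this policy
    state = analyze_network_state()                  # current network state of the local OS
    target_resource = state["available_kbps"]        # resource granted for background operation
    reply(app_name, target_resource)                 # the client then schedules its packets


handle_network_resource_request(
    "game",
    "keep_network_communication",
    analyze_network_state=lambda: {"available_kbps": 300},
    reply=lambda app, res: print(app, res))          # -> game 300
```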
11. The method of claim 10, wherein the current network state is associated with a network data packet quantity, and wherein the analyzing the current network state of the local operating system in response to the network resource request to obtain a target network resource for background operation of the target application program comprises:
determining a first network data packet quantity which can be processed in unit time in response to the network resource request;
determining a second network data packet quantity required by a foreground application program currently running in the foreground when running and a third network data packet quantity required by the operating system when running;
determining a candidate network data packet amount that can be used by a background application currently running in the background according to the first network data packet amount, the second network data packet amount and the third network data packet amount; the background application program comprises the target application program;
acquiring the historical network data packet quantity used by the background application program in the historical time;
and distributing the candidate network data packet quantity to the background application program according to the historical network data packet quantity to obtain a target network resource for background operation of the target application program.
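The packet-quantity budgeting of claim 11 reduces to simple arithmetic: subtract the foreground and operating-system needs from the per-unit-time capacity, then split the remainder among background applications. The sketch below assumes a history-proportional split, which is one possible reading of the distributing step.

```python
# Packet-quantity budgeting as described in claim 11; the history-proportional split is an
# assumed reading of the distributing step.
def background_packet_budget(capacity_per_sec, foreground_need, os_need, history, target_app):
    candidate = max(capacity_per_sec - foreground_need - os_need, 0)  # left for background apps
    total_history = sum(history.values()) or 1                        # avoid division by zero
    return candidate * history.get(target_app, 0) // total_history    # target app's share


print(background_packet_budget(capacity_per_sec=1000, foreground_need=600, os_need=100,
                               history={"game": 30, "sync": 10}, target_app="game"))  # -> 225
```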
12. The method of claim 10, wherein the current network state is associated with bandwidth information, and wherein the analyzing the current network state of the local operating system in response to the network resource request to obtain the target network resource for background operation of the target application program comprises:
responding to the network resource request, and determining first bandwidth information corresponding to the client;
determining second bandwidth information of the client at the current time;
determining candidate bandwidth information which can be used by a background application program running in the background at present according to the first bandwidth information and the second bandwidth information; the background application program comprises the target application program;
acquiring historical bandwidth information used by the background application program in historical time;
and distributing bandwidth information to the background application program according to the historical bandwidth information to obtain target network resources for background operation of the target application program.
13. A data scheduling processing apparatus, the apparatus comprising:
the switching response module is used for switching a target application program from running in the foreground to running in the background in response to a switching instruction;
the scene type analysis module is used for acquiring the identification information of the current running scene and the state information of the client object corresponding to the target application program, and performing scene type analysis on the identification information of the current running scene and the state information of the client object to acquire current scene type information;
the current network scheduling strategy acquisition module is used for acquiring a current network scheduling strategy corresponding to the current scene type information from a preset network scheduling strategy;
the priority processing and request sending module is used for determining the processing priority of the network data packet in the target application program under the condition that the current network scheduling policy is the scheduling policy for maintaining network communication, and sending a network resource request to an operating system so that the operating system responds to the network resource request to analyze the current network state of the operating system and obtain a target network resource for background operation of the target application program;
a network resource receiving module, configured to receive the target network resource sent by the operating system;
and the scheduling module is used for scheduling the network data packet according to the processing priority of the network data packet and the target network resource.
14. A data scheduling processing apparatus, the apparatus comprising:
the request receiving module is used for receiving a network resource request sent by a client under the condition that a current network scheduling policy is a scheduling policy for maintaining network communication; the current network scheduling policy is a network scheduling policy corresponding to current scene type information and is acquired by the client from a preset network scheduling policy; the current scene type information is obtained by the client by performing scene type analysis on identification information of a current running scene and state information of a client object when the client, in response to a switching instruction, switches a target application program from running in the foreground to running in the background;
the network resource generation module is used for responding to the network resource request to analyze the current network state of the local operating system so as to obtain a target network resource for background operation of the target application program;
the network resource sending module is used for sending the target network resource to the client so that the client can schedule the network data packet according to the processing priority of the network data packet in the target application program and the target network resource; the processing priority of the network data packet is determined by the client.
15. An electronic device for data scheduling processing, characterized in that the electronic device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to perform the data scheduling processing method according to any one of claims 1 to 9 or claims 10 to 12.
CN202311511824.0A 2023-11-14 2023-11-14 Data scheduling processing method and device and electronic equipment Pending CN117527906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311511824.0A CN117527906A (en) 2023-11-14 2023-11-14 Data scheduling processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311511824.0A CN117527906A (en) 2023-11-14 2023-11-14 Data scheduling processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117527906A true CN117527906A (en) 2024-02-06

Family

ID=89752579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311511824.0A Pending CN117527906A (en) 2023-11-14 2023-11-14 Data scheduling processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117527906A (en)

Similar Documents

Publication Publication Date Title
CN109120426B (en) Network slice management method and device and computer readable storage medium
CN112162865B (en) Scheduling method and device of server and server
US10726035B2 (en) Database access control method and apparatus
CN109246229B (en) Method and device for distributing resource acquisition request
US9417908B2 (en) Managing data delivery based on device state
CN110958281B (en) Data transmission method and communication device based on Internet of things
CN108600005A (en) A method of defence micro services avalanche effect
CN116547958A (en) Method, system and computer readable medium for ranking process of network function selection
EP3264723B1 (en) Method, related apparatus and system for processing service request
CN110166524B (en) Data center switching method, device, equipment and storage medium
CN110602180B (en) Big data user behavior analysis method based on edge calculation and electronic equipment
EP3306866A1 (en) Message processing method, device and system
CN108829519A (en) Method for scheduling task, cloud platform and computer readable storage medium based on cloud platform
US20230112127A1 (en) Electronic device for deploying application and operation method thereof
US20230275976A1 (en) Data processing method and apparatus, and computer-readable storage medium
CN103677983A (en) Scheduling method and device of application
CN115499447A (en) Cluster master node confirmation method and device, electronic equipment and storage medium
US11144359B1 (en) Managing sandbox reuse in an on-demand code execution system
US20220053373A1 (en) Communication apparatus, communication method, and program
CN110868323A (en) Bandwidth control method, device, equipment and medium
CN111356182A (en) Resource scheduling and processing method and device
CN112398802B (en) Data downloading method and related equipment
CN114598659A (en) Rule base optimization method and device
CN117527906A (en) Data scheduling processing method and device and electronic equipment
CN109587068A (en) Flow switching method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination