CN116560809A - Data processing method and device, equipment and medium - Google Patents

Data processing method and device, equipment and medium

Info

Publication number
CN116560809A
Authority
CN
China
Prior art keywords
target
target data
data
queue
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210702527.3A
Other languages
Chinese (zh)
Inventor
徐士立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210702527.3A
Publication of CN116560809A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5021 Priority
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application disclose a data processing method, a data processing apparatus, a data processing device, and a data processing medium. The method includes the following steps: if it is detected that a task thread is created in a virtual scene, determining the priority of the task thread according to the type of the task thread, and determining the importance of target data associated with the task thread; selecting, according to the priority of the task thread, a first target queue matching that priority from a plurality of queues corresponding to the virtual scene; adding the target data to a designated position in the first target queue according to the importance of the target data, the designated position matching the importance of the target data; and executing the target data in the first target queue, and adjusting the target data of at least one of the plurality of queues according to the execution condition of the target data in the first target queue. The technical solution of the embodiments of the present application improves the rationality of data processing, so that services can be executed in time.

Description

Data processing method and device, equipment and medium
This application is a divisional application of the application with application number 202210103894.1, entitled "Data processing method, device, equipment and medium in virtual scene", filed on 28 January 2022.
Technical Field
The present invention relates to the field of computer technology, and in particular, to a data processing method, a data processing apparatus, an electronic device, and a computer readable medium.
Background
Currently, as the amount of data grows explosively, the processing of services faces ever greater challenges. In the related art, corresponding services are generally processed through the threads contained in a process, where the data associated with each thread is processed sequentially in creation order. In some special scenarios, however, processing sequentially in creation order cannot satisfy the delay required by the services well, which may cause services to not be processed in time.
It can be seen that how to reasonably process data so that services can be handled in time is a problem to be solved.
Disclosure of Invention
In order to solve the technical problems, embodiments of the present application provide a data processing method, apparatus, device, and medium, so as to improve the rationality of data processing at least to a certain extent, and enable a service to be executed in time.
According to an aspect of the embodiments of the present application, a data processing method is provided. The method includes: determining the importance of target data associated with a task thread according to a preset mapping relation table of data types associated with the task thread and importance; selecting, according to the priority of the task thread, a first target queue matching the priority of the task thread from a plurality of queues; adding the target data to a designated position in the first target queue according to the importance of the target data, the designated position matching the importance of the target data; and processing the target data in the first target queue, and adjusting the position of the target data in the first target queue according to the processing condition of the target data in the first target queue.
According to an aspect of the embodiments of the present application, there is provided a data processing apparatus, the apparatus including: the detection and determination module is configured to determine the importance of the target data associated with the task thread according to a mapping relation table of the data type and the importance associated with the preset task thread; a selecting module configured to select a first target queue matching the priority of the task thread from a plurality of queues according to the priority of the task thread; an adding module configured to add the target data to a specified position in the first target queue according to the importance of the target data, the specified position matching the importance of the target data; the processing and adjusting module is configured to process the target data in the first target queue and adjust the position of the target data in the first target queue according to the processing condition of the target data in the first target queue.
In one embodiment of the present application, based on the foregoing solution, the adding module includes: a determining unit configured to determine a position matching the importance of the target data from the first target queue according to the importance of the target data, and take the matched position as the specified position; an adding unit configured to add the target data to the first target queue at the specified position; the higher the importance of the target data is, the closer the designated position is to the head position in the first target queue.
In one embodiment of the present application, based on the foregoing solution, the processing and adjusting module includes: a detection unit configured to detect whether or not a congestion occurs in the first target queue; and the adjusting unit is configured to adjust the position of the target data in the first target queue according to the processing condition of the target data in the first target queue if the first target queue is blocked.
In one embodiment of the present application, based on the foregoing solution, the detection unit is specifically configured to: acquiring the quantity of target data contained in the first target queue; if the number is greater than or equal to a preset number threshold, determining that the first target queue is blocked; and if the number is smaller than the preset number threshold, determining that the first target queue is not blocked.
In one embodiment of the present application, based on the foregoing solution, the processing condition includes a waiting processing duration; the adjusting unit is specifically configured to: acquire the waiting processing duration of each target data contained in the first target queue; and adjust the position of the target data in the first target queue according to the waiting processing duration of each target data contained in the first target queue.
In an embodiment of the present application, based on the foregoing solution, the adjusting unit is further specifically configured to: select, from the first target queue, target data whose waiting processing duration exceeds a preset waiting duration threshold according to the waiting processing duration of each target data contained in the first target queue; and adjust the target data exceeding the preset waiting duration threshold from its designated position to a target position in the first target queue; wherein the target position is closer to the head-of-queue position in the first target queue than the designated position.
In an embodiment of the present application, based on the foregoing solution, the adjusting unit is further specifically configured to: take the time at which each target data is added to the designated position in the first target queue as its start time; and start timing from the start time to obtain the waiting processing duration of each target data.
In an embodiment of the present application, based on the foregoing solution, the detecting and determining module is further configured to: and determining the priority of the task thread according to the type of the task thread.
In an embodiment of the present application, based on the foregoing solution, the adjusting unit is further specifically configured to: dequeuing the target data exceeding the threshold value of the preset waiting time period from the first target queue and adding the target data into a temporary queue; selecting a second target queue matched with the importance of the target data exceeding the preset waiting time threshold from the queues according to the importance of the target data exceeding the preset waiting time threshold in the temporary queues; dequeuing the target data exceeding the preset waiting time threshold from the temporary queue, and adding the target data to the second target queue.
In one embodiment of the present application, based on the foregoing solution, the detecting and determining module is specifically configured to: determining the priority matched with the type of the task thread according to a mapping relation table of the type and the priority of the preset task thread; the mapping relation table of the types and the priorities of the preset task threads is preset with a plurality of types of task threads and priorities corresponding to the task threads respectively.
In one embodiment of the present application, based on the foregoing scheme, the data processing apparatus further includes: the acquisition module is configured to acquire attribute data of the executed task threads in a preset historical time period; wherein the executed task threads comprise a plurality of types of task threads; the determining module is configured to determine the priorities of the task threads of the multiple types according to the attribute data of the executed task threads so as to generate a mapping relation table of the types and the priorities of the preset task threads.
In one embodiment of the present application, based on the foregoing solution, the multiple types of task threads include: at least two of a critical task thread, an auxiliary task thread, a reporting data task thread and a third party task thread; the determining module is specifically configured to: if the attribute data of the executed task thread represents that the service requirement time delay is higher than the set time delay, setting the priority of the executed task thread as a first priority; if the attribute data of the executed task thread represents that the service requirement time delay is lower than the set time delay, setting the priority of the executed task thread as a second priority; wherein the first priority is higher than the second priority.
In one embodiment of the present application, based on the foregoing solution, the detecting and determining module is specifically configured to: acquiring the type of target data associated with the task thread; determining importance matched with the type of target data associated with a task thread according to a mapping relation table of the type of the data associated with the task thread and the importance; the mapping relation table of the data types and the importance of the preset task threads is preset with a plurality of types of data associated with the task threads and the importance of the plurality of types of data respectively.
According to one aspect of embodiments of the present application, embodiments of the present application provide an electronic device comprising one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the data processing method as described above.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, causes the computer to perform the data processing method as described above.
According to one aspect of embodiments of the present application, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement a data processing method as described above.
In the technical scheme provided in the embodiment of the application:
On the one hand, the enqueuing of the target data is determined by combining the priority of the task thread with the importance of the target data associated with the task thread, so the target data can be enqueued more accurately; correspondingly, since data in a queue is processed in order, the processing of the target data becomes more reasonable, and the service can be processed in time.
On the other hand, during the processing of the target data, the position of the target data in the queue can be adjusted in real time according to its processing condition, so that the actual processing condition of the target data is taken into account. This further makes the processing of the target data more reasonable, ensures that the service can be processed in time, and makes the method suitable for various application scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of an exemplary implementation environment in which the techniques of embodiments of the present application may be applied.
Fig. 2 is a flowchart illustrating a data processing method in a virtual scenario according to an exemplary embodiment of the present application.
Fig. 3 is a flowchart illustrating a data processing method in a virtual scenario according to an exemplary embodiment of the present application.
Fig. 4 is a flowchart of step S201 in the embodiment shown in fig. 2 in an exemplary embodiment.
Fig. 5 is a flowchart of step S203 in the embodiment shown in fig. 2 in an exemplary embodiment.
FIG. 6 is a schematic diagram of a first target queue shown in an exemplary embodiment of the present application.
Fig. 7 is a flowchart of step S204 in the embodiment shown in fig. 2 in an exemplary embodiment.
Fig. 8 is a flowchart of step S701 in the embodiment shown in fig. 7 in an exemplary embodiment.
Fig. 9 is a flowchart of step S702 in the embodiment shown in fig. 7 in an exemplary embodiment.
Fig. 10 is a flowchart of step S902 in the embodiment shown in fig. 9 in an exemplary embodiment.
Fig. 11 is a flowchart of step S1002 in the embodiment shown in fig. 10 in an exemplary embodiment.
FIG. 12 is a schematic diagram of a data processing architecture in a virtual scenario illustrated by an exemplary embodiment of the present application.
Fig. 13 is a schematic diagram of a network packet format shown in an exemplary embodiment of the present application.
Fig. 14 is a flowchart illustrating a data processing method in a virtual scene according to an exemplary embodiment of the present application.
Fig. 15 is a block diagram of a data processing apparatus in a virtual scene shown in an exemplary embodiment of the present application.
Fig. 16 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In this application, the term "plurality" means two or more. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Currently, as the amount of data grows explosively, the processing of services faces ever greater challenges. In the related art, corresponding services are generally processed through the threads contained in a process, where the data associated with each thread is processed sequentially in creation order. In some special scenes, such as virtual scenes, however, processing sequentially in creation order cannot satisfy the service requirement delay well, and services may not be processed in time.
To this end, the embodiments of the present application provide a data processing method in a virtual scene. Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary implementation environment of the present application. The implementation environment includes a terminal device 101 and a server 102, where the terminal device 101 and the server 102 communicate through a wired or wireless network.
It should be understood that the number of terminal devices 101 and servers 102 in fig. 1 is merely illustrative. There may be any number of terminal devices 101 and servers 102 as practical.
The terminal device 101 corresponds to the client side and may be any electronic device having a user input interface, where the user input interface includes but is not limited to a touch screen, a keyboard, physical keys, an audio pickup device, and the like, and the electronic device includes but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like.
The server 102 corresponds to the server side and may be a server providing various services. It may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms, which is not limited herein.
In some embodiments of the present application, the data processing method in the virtual scene may be performed by the terminal device 101, and accordingly, the data processing apparatus in the virtual scene is configured in the terminal device 101. Optionally, if the terminal device 101 detects that a task thread is created in the virtual scene, determining a priority of the task thread according to a type of the task thread, and determining importance of target data associated with the task thread; then selecting a first target queue matched with the priority of the task thread from a plurality of queues corresponding to the virtual scene according to the priority of the task thread; then adding the target data to a designated position in a first target queue according to the importance of the target data, wherein the designated position is matched with the importance of the target data; and then processing the target data in the first target queue, and adjusting the target data of at least one queue in the plurality of queues according to the processing condition of the target data in the first target queue.
In some embodiments of the present application, the data processing method in the virtual scene may be executed by the server 102, and accordingly, the data processing apparatus in the virtual scene is configured in the server 102. Optionally, if the server 102 detects that a task thread is created in the virtual scene, determining a priority of the task thread according to a type of the task thread, and determining importance of target data associated with the task thread; then selecting a first target queue matched with the priority of the task thread from a plurality of queues corresponding to the virtual scene according to the priority of the task thread; then adding the target data to a designated position in a first target queue according to the importance of the target data, wherein the designated position is matched with the importance of the target data; and then processing the target data in the first target queue, and adjusting the target data of at least one queue in the plurality of queues according to the processing condition of the target data in the first target queue.
By implementing the technical solution of the embodiments of the present application, both the priority of the created task thread and the importance of the target data associated with it are considered, so the scheduling granularity is finer. The enqueuing of the target data is therefore determined by combining the priority of the created task thread with the importance of its associated target data, which makes the enqueuing of the target data more accurate; and since data in a queue is processed in order, the processing of the target data becomes more reasonable, so that services can be processed in time. In addition, during the processing of the target data, the queue holding the target data can be adjusted in real time according to its processing condition, which further makes the processing of the target data more reasonable, so that services can be processed in time.
Various implementation details of the technical solutions of the embodiments of the present application are set forth in detail below:
referring to fig. 2, fig. 2 is a flowchart illustrating a data processing method in a virtual scenario, which may be performed by the terminal device 101 or the server 102 illustrated in fig. 1, according to an embodiment of the present application. As shown in fig. 2, the data processing method in the virtual scene at least includes steps S201 to S204, and is described in detail as follows:
In step S201, if it is detected that a task thread is created in the virtual scenario, the priority of the task thread is determined according to the type of the task thread, and the importance of the target data associated with the task thread is determined.
In this embodiment of the present application, the virtual scenario refers to a scenario that is presented by installing and deploying a virtual application in an operating system provided by the terminal device 101 or the server 102, and starting and running the virtual application. Optionally, virtual applications include, but are not limited to, gaming applications, virtual Reality (VR) applications, augmented Reality (Augmented Reality, AR) applications, and the like; accordingly, the virtual scene corresponding to the game application is a game scene, the virtual scene corresponding to the virtual reality application is a virtual reality scene, and the virtual scene corresponding to the augmented reality application is an augmented reality scene.
Task threads in the embodiments of the present application refer to threads that execute tasks, i.e. task threads are used to execute a certain task or tasks. It can be understood that a thread is the smallest unit that an operating system can perform operation scheduling, and is contained in a process and is the actual operation unit in the process; one thread refers to a single sequential control flow in a process, and multiple threads may be concurrent in a process, each thread executing different tasks in parallel.
The task thread types in the embodiment of the application include, but are not limited to, critical task threads, auxiliary task threads, reporting data task threads, third-party task threads, and other task threads. Critical task threads are generally task threads with a high service requirement delay and have a great influence on the virtual scene; for a game application, for example, they may be the main game task thread, a game logic processing task thread, or the like. Auxiliary task threads are usually task threads with a relatively high service requirement delay and have a relatively large influence on the virtual scene. Reporting data task threads are generally task threads with a moderate service requirement delay and have a moderate influence on the virtual scene. Third-party task threads are external task threads that are accessed; for a game application, for example, they may be task threads initiated by an integrated third-party SDK, and the level of their service requirement delay may need to be determined according to the specific function provided by that SDK. Other threads are task threads other than the above types and are generally task threads with a low service requirement delay.
In the embodiment of the application, the target data associated with the task thread refers to data having an association relationship with the task thread. It is understood that each task thread corresponds to data when created, and the data may be set inside the task thread, or may be data inside a process where the task thread is located, etc.
In one embodiment of the present application, the process of determining the priority of the task thread according to the type of the task thread in step S201 may include the following steps, which are described in detail below:
determining the priority matched with the type of the task thread according to a mapping relation table of the type and the priority of the preset task thread; the method comprises the steps of presetting a mapping relation table of the types and the priorities of task threads, wherein the preset mapping relation table is provided with a plurality of types of task threads and priorities corresponding to the task threads.
That is, in an alternative embodiment, the process of determining the priority of the task thread may be to obtain the type of the task thread, and then search a mapping relation table of the type of the preset task thread and the priority according to the type of the task thread, so as to determine the priority matched with the type of the task thread.
In an alternative embodiment, a mapping relation table of the type and the priority of the preset task thread is preset; for example, please refer to table 1, which is an exemplary mapping table of the type and the priority of the preset task thread.
TABLE 1
Preset task thread type    Priority
A P1
B P2
C P3
D P4
…… ……
It will be appreciated that, as shown in table 1, the preset task thread may be of A, B, C, D types, wherein the priority is set to be P1> P2> P3> P4 in the order from high to low; meanwhile, if the type of the newly created task thread is B, the priority of the newly created task thread can be determined to be P2 according to the mapping relation table of the type and the priority of the preset task thread shown in the table 1.
Thus, by implementing alternative embodiments, the priority of the newly created task thread may be quickly and accurately determined.
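By way of illustration only, the table lookup described above can be sketched in Python as follows; the names THREAD_PRIORITY_MAP and resolve_priority are hypothetical and are not taken from the application, and the mapping contents merely echo the example of Table 1.

# Minimal sketch, assuming the example mapping of Table 1; names are illustrative.
THREAD_PRIORITY_MAP = {"A": "P1", "B": "P2", "C": "P3", "D": "P4"}

def resolve_priority(thread_type: str, default: str = "P4") -> str:
    """Return the priority matched with the type of a newly created task thread."""
    return THREAD_PRIORITY_MAP.get(thread_type, default)

# Per the example above, a newly created task thread of type B resolves to P2.
assert resolve_priority("B") == "P2"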
In an embodiment of the present application, referring to fig. 3, before determining the process of determining the priority matching the type of the task thread according to the mapping relation table of the type of the preset task thread and the priority, the following steps S301 to S302 may be further included, which are described in detail below:
step S301, obtaining attribute data of executed task threads corresponding to virtual scenes in a preset historical time period; wherein the executed task threads comprise a plurality of types of task threads;
step S302, determining priorities of a plurality of task threads according to the attribute data of the executed task threads so as to generate a mapping relation table of the types and the priorities of the preset task threads.
That is, in an alternative embodiment, the generating process of the mapping relationship table of the type and the priority of the preset task thread may be that attribute data of the executed task thread corresponding to the virtual scene in the preset history period is obtained, where the executed task thread includes a plurality of types of task threads, and then the priorities of the plurality of types of task threads are determined according to the attribute data of the executed task thread.
The preset history time period in the optional embodiment can be flexibly set; for example, the last month, half year, etc.
In this alternative embodiment, the executed task thread is still essentially a task thread, but is an executed task thread corresponding to the virtual scene in the acquired history period.
Wherein, the attribute data in the alternative embodiment refers to data related to the attribute of the task thread; wherein the attribute data includes, but is not limited to, data related to task thread execution, such as business requirement latency, etc.
The attribute data of the executed task thread corresponding to the virtual scene in the preset historical time period, which is obtained in the optional embodiment, may be specific to one or more users, and generally, the more the number of users is, the more accurate the determined priority of the task thread is.
For example, attribute data of a plurality of types of executed task threads corresponding to 1000 users is acquired for the virtual scene over the last month; assuming that the virtual scene is game scene 1 and the executed task threads are of types A, B, C, and D, then the attribute data of the type-A, type-B, type-C, and type-D executed task threads of these 1000 users for game scene 1 are obtained.
In this way, by implementing the alternative embodiment, the mapping relation table of the type and the priority of the preset task thread is generated according to the attribute data of the executed task thread corresponding to the virtual scene in the preset historical time period, so that the generated mapping relation table of the type and the priority of the preset task thread is more accurate.
In one embodiment of the present application, the plurality of types of task threads include: at least two of a critical task thread, an auxiliary task thread, a reporting data task thread and a third party task thread; the process of determining the priorities of the task threads of multiple types according to the attribute data of the executed task threads in step S302 may include the following steps, which are described in detail below:
if the attribute data of the executed task thread represents that the service requirement time delay is higher than the set time delay, setting the priority of the executed task thread as a first priority;
if the attribute data of the executed task thread represents that the service requirement time delay is lower than the set time delay, setting the priority of the executed task thread as a second priority; wherein the first priority is higher than the second priority.
That is, in an alternative embodiment, if the attribute data of the executed task thread characterizes the service requirement delay to be higher than the set delay, that is, the executed task thread cannot be executed later than the set delay, the priority of the executed task thread may be set to be the first priority; if the attribute data of the executed task thread characterizes the service requirement latency less than the set latency, i.e., the execution of the executed task thread may be later than the set latency, the priority of the executed task thread may be set to a second priority, where the first priority is higher than the second priority. In short, the service demand delay is positively correlated with the priority.
The specific value of the set time delay in the optional embodiment may be one or more, which may be flexibly set according to a specific application scenario; meanwhile, the first priority and the second priority are only for representing the relationship between the two, but are not limited to specific contents, for example, the first priority is "high", the second priority may be "medium", "lower", "low", etc., and this may be specifically set according to the set delay.
For example, continuing the above example, if, for the type-A executed task threads, the service requirement delay is higher than the set delay for a preset proportion (for example, 80%, 90%, etc.) of the users, the service needs to be processed in time for most users, so the priority of the type-A executed task thread may be set to "high"; the other types of executed task threads are handled similarly and are not described further here.
For example, please refer to table 2, which is another exemplary mapping table of the type and the priority of the task thread.
TABLE 2
Preset task thread type    Priority
Critical task thread    High
Reporting data task thread c1    High
Reporting data task thread c2    Higher
Auxiliary task thread    Medium
Reporting data task thread c4    Medium
Reporting data task thread c3    Lower
Third-party task thread    Low
It can be understood that, as shown in table 2, the critical task thread and the reporting data task thread c1 belong to a class of task threads with high service requirement delay, and the corresponding priorities are "high"; the reporting data task thread c2 belongs to a task thread with higher service requirement time delay, and the corresponding priority is higher; the auxiliary task thread and the reporting data task thread c4 belong to a class of task threads with moderate service requirement time delay, and the corresponding priority is 'medium'; the reported data task thread c3 belongs to a task thread with lower service requirement time delay, and the corresponding priority is lower; the third-party task thread belongs to a class of task threads with low service requirement delay, and the corresponding priority is low.
In this way, by implementing the alternative embodiment, the priority of the executed task thread is set according to the relationship between the service requirement time delay and the set time delay, which is characterized by the attribute data of the executed task thread, so that the set priority of the executed task thread is more accurate and meets the virtual scene requirement.
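A minimal sketch of how such a table might be generated from the attribute data is given below, assuming a per-sample flag that records whether the service requirement delay is characterized as higher than the set delay; the field names, function name, proportion, and priority labels are assumptions made for illustration only.

# Hypothetical sketch: derive a type-to-priority table from executed-thread
# attribute data gathered over a preset historical period.
from collections import defaultdict

def build_priority_table(executed_threads, ratio=0.8):
    """executed_threads: iterable of dicts such as
    {"type": "A", "requirement_above_set_delay": True}, where the flag records
    whether the attribute data characterizes the service requirement delay as
    higher than the set delay for that sample (field names are assumed)."""
    samples = defaultdict(list)
    for t in executed_threads:
        samples[t["type"]].append(bool(t["requirement_above_set_delay"]))
    table = {}
    for thread_type, flags in samples.items():
        share = sum(flags) / len(flags)
        # First priority ("high") when most samples exceed the set delay,
        # otherwise a lower second priority (label values are placeholders).
        table[thread_type] = "high" if share >= ratio else "medium"
    return table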
In one embodiment of the present application, referring to fig. 4, the process of determining the importance of the target data associated with the task thread in step S201 may include the following steps S401 to S402, which are described in detail below:
step S401, obtaining the type of target data associated with a task thread;
step S402, determining importance matched with the type of target data associated with the task thread according to a mapping relation table of the type of the data associated with the task thread and the importance; the method comprises the steps of presetting a mapping relation table of data types and importance of task threads, wherein the mapping relation table is preset with a plurality of types of data associated with the task threads and the importance of the plurality of types of data respectively.
That is, in an alternative embodiment, the process of determining the importance of the target data associated with the task thread may be as follows: the type of the target data associated with the task thread is acquired, and the mapping relation table of data types associated with the task thread and importance is searched according to that type, so as to determine the importance matched with the type of the target data associated with the task thread.
In an alternative embodiment, a mapping relation table of the data types and the importance associated with the preset task threads is preset; for example, please refer to table 3, which is an exemplary mapping relationship table of data types and importance associated with preset task threads.
TABLE 3
It will be appreciated that, as shown in table 3, for a task thread of type A the associated data types are a1, a2, and a33, and for a task thread of type B the associated data types are b1 and b22, where the order of importance from high to low is k1 > k2 > k3. Meanwhile, if the type of the newly created task thread is A and the type of its associated target data is a1, it can be determined, according to the mapping relation table of data types and importance associated with the preset task thread shown in table 3, that the priority of the newly created task thread is P1 and the importance of the associated target data is k1.
In this way, by implementing the alternative embodiment, the importance of the target data associated with the newly created task thread can be quickly and accurately determined.
It should be noted that, the mapping relationship table of the type and the priority of the preset task thread described above and the mapping relationship table of the data type and the importance associated with the preset task thread described herein may be the same table or may be different tables; in an alternative embodiment, a mapping relationship table of the data types and the importance associated with the corresponding preset task threads may be set for different task threads, that is, the above table 3 may be split into multiple tables according to the task thread types. In practical application, the method can be flexibly set according to specific application scenes.
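By way of illustration, the per-thread-type importance lookup could be sketched as follows; the mapping contents merely echo the example of Table 3, and the assignment of importance labels to individual data types, as well as the names used, are assumptions made for illustration.

# Hypothetical sketch of the data-type-to-importance mapping, nested per task
# thread type to reflect that the table may be split by thread type.
DATA_IMPORTANCE_MAP = {
    "A": {"a1": "k1", "a2": "k2", "a33": "k3"},
    "B": {"b1": "k1", "b22": "k2"},
}

def resolve_importance(thread_type: str, data_type: str, default: str = "k3") -> str:
    """Return the importance matched with the type of the target data
    associated with the given task thread."""
    return DATA_IMPORTANCE_MAP.get(thread_type, {}).get(data_type, default)

# Per the example above, type-A data of type a1 resolves to importance k1.
assert resolve_importance("A", "a1") == "k1"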
Step S202, selecting a first target queue matched with the priority of the task thread from a plurality of queues corresponding to the virtual scene according to the priority of the task thread.
In the embodiment of the present application, if it is detected that a task thread is created in a virtual scenario, the priority of the task thread is determined according to the type of the task thread, and the importance of target data associated with the task thread is determined, and then, according to the priority of the task thread, a first target queue matched with the priority of the task thread may be selected from a plurality of queues corresponding to the virtual scenario.
In this embodiment of the present application, the first target queue refers to a queue that matches a priority of a task thread, where in this embodiment of the present application, a plurality of queues are provided, where each queue corresponds to a priority; for example, the queues corresponding to the virtual scene are set to have 1-5, at this time, a queue matching the priority of the task thread is selected from the queues 1-5 according to the priority of the task thread, and at this time, the selected queue matching the priority of the task thread is the first target queue.
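A minimal sketch of this queue selection is given below, assuming one queue per priority level; the priority labels and names are illustrative only.

# Hypothetical sketch: one queue per priority level, selected by the priority
# of the newly created task thread.
from collections import deque

PRIORITY_QUEUES = {p: deque() for p in ("P1", "P2", "P3", "P4", "P5")}

def select_first_target_queue(thread_priority: str) -> deque:
    """Pick, from the queues corresponding to the virtual scene, the queue
    that matches the priority of the task thread."""
    return PRIORITY_QUEUES[thread_priority]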
In step S203, the target data is added to the first target queue at a designated position according to the importance of the target data, and the designated position matches the importance of the target data.
According to the priority of the task thread, a first target queue matched with the priority of the task thread is selected from a plurality of queues corresponding to the virtual scene, and then the target data can be added to a designated position in the first target queue according to the importance of the target data, wherein the designated position is matched with the importance of the target data.
In one embodiment of the present application, referring to fig. 5, the process of adding the target data to the specified position in the first target queue according to the importance of the target data in step S203 may include the following steps S501 to S502, which are described in detail below:
step S501, determining a position matched with the importance of the target data from a first target queue according to the importance of the target data, and taking the matched position as a designated position;
step S502, adding target data to a designated position in a first target queue; the higher the importance of the target data is, the closer the designated position is to the head position in the first target queue.
That is, the process of adding the target data to the first target queue in the alternative embodiment may be to determine a position matching the importance of the target data from the first target queue according to the importance of the target data, and to take the matched position as a designated position, and then add the target data to the first target queue at the designated position, wherein the higher the importance of the target data, the closer the designated position is to the head position in the first target queue.
Wherein, the higher the importance of the target data in the alternative embodiment, the closer the matched position (i.e. the designated position) is to the head position in the first target queue. It can be understood that the elements in the queue (i.e. the target data) are dequeued from the head of the queue and enqueued from the tail of the queue, so that the position of the target data in the first target queue can be adjusted according to the importance of the target data, and the target data near the head of the queue in the first target queue has shorter waiting processing duration.
For example, referring to fig. 6, an exemplary first target queue is shown, where the first target queue has 1-10 corresponding positions, one position corresponds to adding one target data, it is understood that the first position is 1, and the last position is 10; and determining the position matched with the importance of the target data as 3 from the first target queue according to the importance of the target data, wherein the position 3 is the designated position, and adding the target data to the position 3 in the first target queue.
In this way, by implementing the alternative embodiment, the target data can be quickly and accurately added to the designated position in the first target queue, and the designated position is matched with the importance of the target data, so that the service requirement time delay of the same task thread for different target data can be met, the granularity is finer, and the virtual scene requirement is met.
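A minimal sketch of this enqueueing rule follows, assuming each target data item carries its importance label; the names and labels are illustrative only.

# Hypothetical sketch: enqueue target data at a position matched to its
# importance, with more important data placed closer to the head of the queue.
from collections import deque

IMPORTANCE_ORDER = {"k1": 0, "k2": 1, "k3": 2}  # placeholder labels, high to low

def add_to_specified_position(queue: deque, item: dict) -> None:
    """item is assumed to look like {"data": ..., "importance": "k1"}.
    Scan from the head and insert before the first element whose importance
    is lower than the new item's, so higher importance lands nearer the head."""
    rank = IMPORTANCE_ORDER[item["importance"]]
    for index, existing in enumerate(queue):
        if IMPORTANCE_ORDER[existing["importance"]] > rank:
            queue.insert(index, item)
            return
    queue.append(item)  # lowest importance (or empty queue): enqueue at the tail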
Step S204, processing the target data in the first target queue, and adjusting the target data of at least one of the queues according to the processing condition of the target data in the first target queue.
According to the method and the device for processing the target data, the target data are added to the designated position in the first target queue according to the importance of the target data, then the target data in the first target queue can be processed, and the target data of at least one queue in the plurality of queues can be adjusted according to the processing condition of the target data in the first target queue.
In this embodiment of the present application, the processing of the target data in the first target queue is to perform an operation on the target data, where the operation includes, but is not limited to, performing a logic calculation according to the target data to obtain new data, transferring the target data, and so on.
In one embodiment of the present application, referring to fig. 7, the process of adjusting the target data of at least one of the queues in step S204 according to the processing condition of the target data in the first target queue may include the following steps S701 to S702, which are described in detail below:
step S701, detecting whether the first target queue is blocked;
in step S702, if the first target queue is blocked, the target data of at least one of the queues is adjusted according to the processing condition of the target data in the first target queue.
That is, in an alternative embodiment, the process of adjusting the target data of at least one of the queues according to the processing condition of the target data in the first target queue may be to detect whether the first target queue is blocked, if the first target queue is blocked, the target data of at least one of the queues may be adjusted according to the processing condition of the target data in the first target queue, and if the first target queue is not blocked, no processing may be performed.
In this case, in an alternative embodiment, whether the first target queue is blocked may be detected periodically, for example, whether the first target queue is blocked once every 1 minute, 2 minutes, or the like. Optionally, whether the first target queue is blocked or not can be detected at random, and in practical application, flexible adjustment can be performed according to specific application scenes.
It should be noted that, in the embodiment of the present application, there are a plurality of queues, so in practical application the target data in the other queues (i.e. the queues other than the first target queue) are also processed. In short, the target data in each queue are polled in turn, where the polling order is the priority order of the queues; for example, if the virtual scene has corresponding queues 1-5 whose priorities run from high to low, the target data in queues 1-5 are polled in that order.
Thus, by implementing the alternative embodiment, only when the first target queue is blocked, the target data of at least one queue of the plurality of queues is adjusted according to the processing condition of the target data in the first target queue, so that certain system resources can be saved.
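A minimal sketch of the polling order described in the note above, assuming the queues are kept in a mapping ordered from the highest priority to the lowest; the names are illustrative only.

# Hypothetical sketch: visit the queues from the highest priority to the
# lowest and process the target data in each queue from the head.
def poll_queues(priority_queues, process):
    """priority_queues: mapping from priority label to a deque, assumed to be
    ordered from the highest priority to the lowest (e.g. built as P1..P5).
    process: callable applied to each target data item taken from a queue head."""
    for queue in priority_queues.values():
        while queue:
            process(queue.popleft())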
In one embodiment of the present application, referring to fig. 8, the process of detecting whether the first target queue is blocked in step S701 may include the following steps S801 to S803, which are described in detail below:
step S801, acquiring the number of target data contained in a first target queue;
step S802, if the number is greater than or equal to a preset number threshold, determining that the first target queue is blocked;
in step S803, if the number is smaller than the preset number threshold, it is determined that the first target queue is not blocked.
That is, in an alternative embodiment, the process of detecting whether the first target queue is blocked may be to acquire the amount of target data included in the first target queue, compare the acquired amount of target data included in the first target queue with a preset amount threshold, determine that the first target queue is blocked if the amount is greater than or equal to the preset amount threshold, and determine that the first target queue is not blocked if the amount is less than the preset amount threshold.
In an alternative embodiment, the preset number of thresholds may be flexibly set according to a specific application scenario, for example, may be set to 10, 20, 30, or the like.
For example, suppose the number of target data contained in the first target queue is 10 and the preset number threshold is 10; since the number of target data contained in the first target queue (10) is equal to the preset number threshold (10), it may be determined that the first target queue is blocked. If the number of target data contained in the first target queue were smaller than the preset number threshold of 10, it would be determined that the first target queue is not blocked.
In this way, by implementing the optional embodiment, according to the obtained relation between the number of the target data contained in the first target queue and the preset number threshold, whether the first target queue is blocked or not can be quickly and accurately determined, so as to provide support for adjusting the target data of at least one queue of the multiple queues according to the processing condition of the target data in the first target queue.
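A minimal sketch of the congestion check follows, with an illustrative threshold value; in practice the check might be invoked periodically or at random, as described above.

# Hypothetical sketch: the first target queue is considered blocked when the
# amount of target data it holds reaches a preset number threshold.
PRESET_COUNT_THRESHOLD = 10  # illustrative value; flexibly set per scenario

def is_blocked(queue) -> bool:
    """Return True if the queue holds at least the preset number of items."""
    return len(queue) >= PRESET_COUNT_THRESHOLD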
In one embodiment of the present application, referring to fig. 9, the processing condition includes a waiting processing duration; the process of adjusting the target data of at least one of the queues according to the processing condition of the target data in the first target queue in step S702 may include the following steps S901 to S902, which are described in detail below:
Step S901, acquiring a waiting processing duration of each target data contained in the first target queue;
in step S902, the target data of at least one of the queues is adjusted according to the waiting duration of each target data included in the first target queue.
That is, in an alternative embodiment, the process of adjusting the target data of at least one of the plurality of queues according to the processing condition of the target data in the first target queue may be to obtain a waiting processing duration of each target data included in the first target queue, and then adjust the target data of at least one of the plurality of queues according to the waiting processing duration of each target data included in the first target queue.
The waiting duration of the target data in the alternative embodiment refers to the duration of adding the target data to the first target queue, and may be counted from the time of adding the target data to the first target queue. It can be understood that the longer the waiting processing time of the target data is, the longer the processing time of the target data is delayed, and the longer the waiting processing time of the corresponding service is; therefore, in order to enable the target data to be processed as soon as possible, that is, to enable the corresponding service to be processed in time, the target data of at least one of the queues may be adjusted according to the waiting processing duration of each target data included in the first target queue.
In this way, by implementing the alternative embodiment, the target data of at least one queue in the plurality of queues is flexibly adjusted in combination with the waiting processing time of the target data, so that the phenomenon that part of the target data cannot be processed later and thus the timely processing of the service is affected can be avoided.
In an embodiment of the present application, referring to fig. 10, the process of adjusting the target data of at least one of the queues according to the waiting duration of each target data included in the first target queue in step S902 may include the following steps S1001 to S1002, which are described in detail below:
step S1001, selecting, from a first target queue, target data exceeding a preset waiting duration threshold according to waiting duration of each target data included in the first target queue;
in step S1002, the target data exceeding the threshold of the preset waiting duration is dequeued from the first target queue, and added to at least one of the queues to adjust the target data in the at least one queue.
That is, in an alternative embodiment, the process of adjusting the target data of at least one of the plurality of queues according to the waiting duration of each target data included in the first target queue may be that, according to the waiting duration of each target data included in the first target queue, the target data exceeding the preset waiting duration threshold is selected from the first target queue, and then the target data exceeding the preset waiting duration threshold is dequeued from the first target queue and added to at least one of the plurality of queues to adjust the target data in at least one of the queues.
In an optional embodiment, the preset waiting time threshold may be flexibly set according to a specific application scenario, for example, may be set to 1 minute, 2 minutes, and so on.
In an alternative embodiment, the target data is dequeued from the first target queue and added to at least one of the queues, so that the adjustment of the target data involves both the first target queue (the amount of target data in the first target queue is reduced) and the other queue (the amount of target data in that queue is increased).
For example, the first target queue includes 10 target data 1-10, where the waiting duration of the target data 2, 6-7 exceeds the preset waiting duration threshold, and then the target data 2, 6-7 is selected from the target data 1-10, and then the target data 2, 6-7 is dequeued from the first target queue and added to at least one of the queues.
In this way, by implementing this alternative embodiment, the target data to be adjusted can be determined quickly and accurately from the relationship between its waiting processing duration and the preset waiting duration threshold, which in turn supports adjusting the target data of at least one of the queues accordingly.
It should be noted that, in other embodiments, after the target data is selected, the target data need not be dequeued from the first target queue; only the position of the target data in the first target queue needs to be adjusted, wherein the longer the waiting processing duration of the target data, the closer the target data is moved to the head position in the first target queue.
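Purely for illustration, and reusing the QueueItem and waiting_duration names from the sketch above, the selection of over-threshold target data and the two adjustment strategies just described (moving the data to another queue, or keeping it in the first target queue and moving it toward the head) might look as follows; the threshold value and helper names are assumptions of this sketch, not definitions of this application:

```python
from collections import deque
from typing import List

WAIT_THRESHOLD_SECONDS = 60.0   # e.g. 1 minute; flexibly set per application scenario

def select_over_threshold(queue: deque, now: float) -> List[QueueItem]:
    """Select the target data whose waiting processing duration exceeds the preset threshold."""
    return [item for item in queue if waiting_duration(item, now) > WAIT_THRESHOLD_SECONDS]

def move_to_other_queue(src: deque, dst: deque, items: List[QueueItem]) -> None:
    """Strategy 1: dequeue the over-threshold target data from the first target queue (src)
    and add it to another of the plurality of queues (dst)."""
    for item in items:
        src.remove(item)
        dst.append(item)

def move_toward_head(queue: deque, items: List[QueueItem]) -> None:
    """Strategy 2 (alternative): keep the target data in the same queue but move it
    closer to the head, so that longer-waiting data is processed sooner."""
    for item in items:
        queue.remove(item)
        queue.appendleft(item)
```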
In one embodiment of the present application, referring to fig. 11, the process of dequeuing the target data exceeding the preset waiting duration threshold from the first target queue and adding the target data to at least one of the queues in step S1002 may include the following steps S1101 to S1103, which are described in detail below:
step S1101, dequeuing target data exceeding a preset waiting duration threshold from a first target queue and adding the target data to a temporary queue;
step S1102, selecting a second target queue matched with the importance of the target data exceeding the preset waiting time period threshold from a plurality of queues according to the importance of the target data exceeding the preset waiting time period threshold in the temporary queues;
in step S1103, the target data exceeding the threshold value of the preset waiting duration is dequeued from the temporary queue and added to the second target queue.
That is, in an alternative embodiment, the process of dequeuing the target data exceeding the preset waiting duration threshold from the first target queue and adding the target data exceeding the preset waiting duration threshold to at least one of the plurality of queues may be that the target data exceeding the preset waiting duration threshold is dequeued from the first target queue and added to the temporary queue, then a second target queue matching the importance of the target data exceeding the preset waiting duration threshold is selected from the plurality of queues according to the importance of the target data exceeding the preset waiting duration threshold in the temporary queue, and then the target data exceeding the preset waiting duration threshold is dequeued from the temporary queue and added to the second target queue.
The temporary queue in this alternative embodiment refers to a temporarily created queue used to temporarily store the target data dequeued from the first target queue.
Wherein, the second target queue in the alternative embodiment refers to a queue matched with the importance of the target data exceeding the preset waiting duration threshold; alternatively, the number of second target queues selected from the plurality of queues may be one or more.
For example, continuing the foregoing example, suppose that the target data exceeding the preset waiting duration threshold are 2 and 6-7, that queues 1-5 are the queues corresponding to the virtual scene, and that queue 2 is the first target queue; at this time, the target data 2 and 6-7 exceeding the preset waiting duration threshold are dequeued from the first target queue 2 and added to the temporary queue. Suppose further that the importance of target data 2 matches queue 1 and the importance of target data 6-7 matches queue 3; queues 1 and 3 are then selected from queues 1-5, and the selected queues 1 and 3 are the second target queues. Then, the target data 2 exceeding the preset waiting duration threshold is dequeued from the temporary queue and added to the second target queue 1, and the target data 6-7 exceeding the preset waiting duration threshold are dequeued from the temporary queue and added to the second target queue 3.
In this way, by implementing this alternative embodiment, the target data of at least one of the queues is adjusted by means of the temporary queue, so that adjustment errors caused by an excessive amount of target data exceeding the preset waiting duration threshold can be avoided, and the adjustment accuracy is improved.
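As a sketch only, the temporary-queue flow described above could be written as follows; the mapping from importance levels to queues (queues_by_importance) is an assumption introduced for this example, and QueueItem is reused from the earlier sketch:

```python
from collections import deque
from typing import Dict, Iterable

def rebalance_via_temporary_queue(first_target_queue: deque,
                                  queues_by_importance: Dict[int, deque],
                                  over_threshold: Iterable[QueueItem]) -> None:
    """Dequeue the over-threshold target data into a temporary queue, then route each item
    to the second target queue matching the item's importance."""
    temporary_queue: deque = deque()              # temporarily created queue
    for item in over_threshold:
        first_target_queue.remove(item)           # dequeue from the first target queue
        temporary_queue.append(item)              # add to the temporary queue
    while temporary_queue:
        item = temporary_queue.popleft()          # dequeue from the temporary queue
        second_target_queue = queues_by_importance[item.importance]  # importance-matched queue
        second_target_queue.append(item)          # add to the second target queue
```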
In the embodiments of the application, because both the priority of the created task thread and the importance of the target data associated with the created task thread are considered, the granularity is finer; determining how the target data is enqueued by combining the priority of the created task thread with the importance of its associated target data therefore makes the enqueuing of the target data more accurate, and processing the target data in queue order accordingly makes its processing more reasonable, so that the service can be processed in time. In addition, during the processing of the target data, the queue in which the target data resides can be adjusted in real time according to the processing condition of the target data, which further makes the processing of the target data more reasonable, so that the service can be processed in a timely manner.
One specific application scenario of the embodiments of the present application is described in detail below:
In the embodiment of the application, the virtual scene takes a game scene as an example; referring to fig. 12, fig. 12 is a schematic diagram illustrating a data processing architecture in a virtual scenario according to an embodiment of the present application. As shown in fig. 12, the data processing architecture in the virtual scenario includes:
the thread identification module 1201 is configured to detect a task thread newly created in the game scene in real time, determine the priority of the task thread according to a mapping relation table of the type and the priority of the task thread, and determine the importance of the target data associated with the task thread according to a mapping relation table of the data type and the importance associated with the task thread.
Alternatively, the thread identification module 1201 may create a separate identification thread, and use that identification thread to detect, in real time, each task thread created in the game scene.
The network communication module 1202 is configured to provide network communication capability, and all network communication requests of the task thread are forwarded into the queue through the module. Optionally, the target data is allocated to the corresponding queue according to the priority of the task thread marked by the thread identification module 1201 and the importance of the target data associated with the marked task thread.
It can be understood that the network communication request corresponding to the task thread carries a network data packet. Optionally, referring to fig. 13, an exemplary format of a network data packet is shown; the network data packet shown in fig. 13 may include fields such as a protocol version number (version), a protocol header length (head_len), a service type, a total packet length (pac_len), a reassembly identifier, a flag, a segment offset, a lifetime, a protocol code, a check code, a 32-bit source address, a 32-bit destination address, and service data; among these, fields such as the application priority (app_priority), the task thread priority (thread_priority), and the importance of the target data associated with the task thread may be added to the options. It will be appreciated that the application priority may be set to 10 levels, where game-like applications may be set to the highest level 10; the task thread priority may likewise be set to 10 levels; the importance of the target data associated with the task thread may be set to 5 levels; and so on.
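Purely as an illustration of how the option fields mentioned above could be represented, the following sketch gathers them into one structure (the field name data_importance and the range checks are assumptions of this sketch; only app_priority and thread_priority are named in the format above):

```python
from dataclasses import dataclass

@dataclass
class PacketOptions:
    """Optional fields carried in the options area of the network data packet."""
    app_priority: int      # application priority, e.g. 1-10; game-like applications set to the highest level 10
    thread_priority: int   # priority of the task thread issuing the request, e.g. 1-10
    data_importance: int   # importance of the target data associated with the task thread, e.g. 1-5

    def __post_init__(self) -> None:
        # Sanity checks matching the example level ranges described in this scenario.
        assert 1 <= self.app_priority <= 10
        assert 1 <= self.thread_priority <= 10
        assert 1 <= self.data_importance <= 5
```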
A plurality of queues 1203 are used for storing target data, wherein each queue corresponds to a priority.
The sending module 1204 is used for scanning each queue in order of queue priority, so as to ensure that the target data in the high-priority queues can be executed preferentially.
The detection module 1205 is configured to periodically scan each queue; when target data in a queue has been backlogged for a long time (i.e. its waiting processing duration is too long), the module determines the importance of that target data, dequeues the target data of high importance, and enqueues it in a higher-priority queue (one or more levels higher), so as to ensure that low-priority target data can still be executed preferentially when its importance is high.
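One pass of such a periodic scan might be sketched as follows, reusing the QueueItem and waiting_duration names from the earlier sketches; the assumptions here are that larger priority numbers mean higher priority and that the waiting threshold and importance cut-off are configuration values:

```python
import time
from collections import deque
from typing import Dict

def detection_scan(queues_by_priority: Dict[int, deque],
                   wait_threshold: float,
                   importance_cutoff: int) -> None:
    """Promote long-waiting, high-importance target data to the next higher-priority queue."""
    now = time.monotonic()
    levels = sorted(queues_by_priority)                      # assumed: larger number = higher priority
    for idx, level in enumerate(levels[:-1]):                # highest-priority queue has nowhere to promote to
        queue = queues_by_priority[level]
        higher_queue = queues_by_priority[levels[idx + 1]]   # queue one priority level higher
        for item in list(queue):                             # copy so items can be removed while iterating
            if (waiting_duration(item, now) > wait_threshold
                    and item.importance >= importance_cutoff):
                queue.remove(item)                           # dequeue from the backlogged queue
                higher_queue.append(item)                    # enqueue in the higher-priority queue
```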
Optionally, referring to fig. 14, a flowchart of an exemplary method for processing data in a virtual scene is described in detail as follows:
step S1401, polling a plurality of queues in a game scene;
step S1402, respectively acquiring waiting processing duration of each target data contained in each queue;
step S1403, dequeuing the obtained target data with the waiting duration exceeding the preset waiting duration threshold from the original queue, and enqueuing the target data in the temporary queue;
step S1404, selecting a queue matching the importance of the target data exceeding the preset waiting duration threshold from the plurality of queues according to the importance of the target data exceeding the preset waiting duration threshold in the temporary queues;
In step S1405, the target data exceeding the preset waiting duration threshold is dequeued from the temporary queue and enqueued in a queue matching the importance of the target data exceeding the preset waiting duration threshold.
It should be understood that polling a plurality of queues in the game scene and adjusting the target data accordingly is used here only as an example; the process is similar to adjusting the target data of a single queue (i.e. the first target queue) in the foregoing embodiments, so for the remaining implementation details of steps S1401 to S1405, reference is made to the description of the foregoing embodiments, which is not repeated here.
According to the method and the device, each target data in the game scene is reasonably enqueued according to the priority and the importance of the target data, and accordingly, the target data are sequentially processed based on the sequence in the queue, so that the high-priority target data and the low-priority and high-importance target data in the game scene can be timely processed, the service corresponding to the target data can be timely processed, and the use experience of a user is greatly improved.
Fig. 15 is a block diagram of a data processing apparatus in a virtual scenario illustrated in one embodiment of the present application. As shown in fig. 15, the data processing apparatus in the virtual scene includes:
a detection and determination module 1501 configured to determine a priority of a task thread according to a type of the task thread and determine importance of target data associated with the task thread if it is detected that the task thread is created in the virtual scene;
a selecting module 1502, configured to select, according to the priority of the task thread, a first target queue that matches the priority of the task thread from a plurality of queues corresponding to the virtual scene;
an adding module 1503 configured to add the target data to a specified position in the first target queue according to the importance of the target data, the specified position matching the importance of the target data;
the processing and adjusting module 1504 is configured to process the target data in the first target queue, and adjust the target data of at least one of the queues according to the processing condition of the target data in the first target queue.
In one embodiment of the present application, the adding module 1503 includes:
a determining unit configured to determine a position matching the importance of the target data from the first target queue according to the importance of the target data, and to take the matched position as a designated position;
An adding unit configured to add target data to a specified position in the first target queue; the higher the importance of the target data is, the closer the designated position is to the head position in the first target queue.
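As an illustrative sketch of the adding unit's behaviour (the helper name and the descending-importance ordering policy are assumptions of this example, reusing the QueueItem sketch above):

```python
from collections import deque

def add_at_designated_position(queue: deque, item: QueueItem) -> None:
    """Insert the target data so that higher-importance data sits closer to the head of
    the first target queue, i.e. the queue stays ordered by descending importance."""
    for index, existing in enumerate(queue):
        if item.importance > existing.importance:
            queue.insert(index, item)     # designated position: before the first less-important item
            return
    queue.append(item)                    # no less-important item found: add at the tail
```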
In one embodiment of the present application, the processing and adjustment module 1504 includes:
a detection unit configured to detect whether or not a first target queue is blocked;
and the adjusting unit is configured to adjust the target data of at least one queue of the plurality of queues according to the processing condition of the target data in the first target queue if the first target queue is blocked.
In one embodiment of the present application, the detection unit is specifically configured to:
acquiring the quantity of target data contained in a first target queue;
if the number is greater than or equal to a preset number threshold, determining that the first target queue is blocked;
if the number is less than the preset number threshold, determining that the first target queue is not blocked.
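A minimal sketch of this blockage check, with an assumed value for the preset number threshold, could be:

```python
from collections import deque

PRESET_COUNT_THRESHOLD = 1000   # assumed value; set according to the application scenario

def is_blocked(first_target_queue: deque, threshold: int = PRESET_COUNT_THRESHOLD) -> bool:
    """The first target queue is considered blocked once the amount of target data it holds
    reaches the preset number threshold."""
    return len(first_target_queue) >= threshold
```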
In one embodiment of the present application, the processing condition includes a waiting processing duration; the adjustment unit is specifically configured to:
acquiring waiting processing time length of each target data contained in a first target queue;
and adjusting the target data of at least one queue in the plurality of queues according to the waiting processing time length of each target data contained in the first target queue.
In an embodiment of the present application, the adjusting unit is further specifically configured to:
selecting target data exceeding a preset waiting time threshold from the first target queue according to the waiting time of each target data contained in the first target queue;
dequeuing target data exceeding a preset waiting duration threshold from the first target queue, and adding the target data to at least one of the queues to adjust the target data in the at least one queue.
In an embodiment of the present application, the adjusting unit is further specifically configured to:
dequeuing target data exceeding a preset waiting time threshold from a first target queue and adding the target data to a temporary queue;
selecting a second target queue matched with the importance of the target data exceeding the preset waiting time threshold from the queues according to the importance of the target data exceeding the preset waiting time threshold in the temporary queues;
and dequeuing the target data exceeding the preset waiting duration threshold from the temporary queue and adding the target data to a second target queue.
In one embodiment of the present application, the detection and determination module 1501 is specifically configured to:
determine the priority matched with the type of the task thread according to a preset mapping relation table of task thread types and priorities; wherein the preset mapping relation table of task thread types and priorities is preset with a plurality of types of task threads and the priority corresponding to each type.
In one embodiment of the present application, the data processing apparatus in a virtual scene further includes:
the acquisition module is configured to acquire attribute data of the executed task threads corresponding to the virtual scene in a preset historical time period; wherein the executed task threads comprise a plurality of types of task threads;
the determining module is configured to determine priorities of a plurality of task threads according to the attribute data of the executed task threads so as to generate a mapping relation table of the types and the priorities of the preset task threads.
In one embodiment of the present application, the plurality of types of task threads include: at least two of a critical task thread, an auxiliary task thread, a reporting data task thread and a third party task thread; the determining module is specifically configured to:
if the attribute data of the executed task thread represents that the service requirement time delay is higher than the set time delay, setting the priority of the executed task thread as a first priority;
If the attribute data of the executed task thread represents that the service requirement time delay is lower than the set time delay, setting the priority of the executed task thread as a second priority; wherein the first priority is higher than the second priority.
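A hedged sketch of how such a mapping relation table might be generated from the attribute data of executed task threads (the attribute encoding, the boolean latency_sensitive flag, and the numeric priority values are all assumptions of this example):

```python
from typing import Dict, Iterable, Tuple

FIRST_PRIORITY = 1    # assumed encoding: smaller number = higher priority
SECOND_PRIORITY = 2

def build_priority_table(executed_threads: Iterable[Tuple[str, bool]]) -> Dict[str, int]:
    """Build the task-thread-type -> priority mapping table from attribute data of the task
    threads executed in a preset historical time period. Each entry is (thread_type,
    latency_sensitive), where latency_sensitive reflects how the service's latency requirement
    compares with the set time delay."""
    table: Dict[str, int] = {}
    for thread_type, latency_sensitive in executed_threads:
        table[thread_type] = FIRST_PRIORITY if latency_sensitive else SECOND_PRIORITY
    return table
```

For example, under these assumptions, build_priority_table([("critical_task", True), ("reporting_data_task", False)]) would map the critical task thread type to the first priority and the reporting data task thread type to the second priority.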
In one embodiment of the present application, the detection and determination module 1501 is specifically configured to:
acquiring the type of target data associated with a task thread;
determining the importance matched with the type of the target data associated with the task thread according to a preset mapping relation table of data types associated with task threads and importance; wherein the preset mapping relation table of data types associated with task threads and importance is preset with a plurality of types of data associated with task threads and the importance corresponding to each of the plurality of types.
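A minimal sketch of this lookup, with an entirely illustrative mapping table (the data type names and importance levels below are assumptions, not content of this application):

```python
from typing import Dict

# Illustrative preset mapping table between data types associated with task threads and importance.
DATA_TYPE_IMPORTANCE: Dict[str, int] = {
    "battle_state": 5,
    "chat_message": 3,
    "telemetry_report": 1,
}

def importance_of(data_type: str, default: int = 1) -> int:
    """Look up the importance matched with the type of the target data associated with the task thread."""
    return DATA_TYPE_IMPORTANCE.get(data_type, default)
```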
It should be noted that, the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit perform the operation has been described in detail in the method embodiment, which is not repeated herein.
The embodiment of the application also provides electronic equipment, which comprises: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the electronic equipment realizes the data processing method in the virtual scene.
Fig. 16 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
It should be noted that, the computer system 1600 of the electronic device shown in fig. 16 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 16, the computer system 1600 includes a central processing unit (Central Processing Unit, CPU) 1601 that can perform various appropriate actions and processes, such as performing the method in the above-described embodiment, according to a program stored in a Read-Only Memory (ROM) 1602 or a program loaded from a storage section 1608 into a random access Memory (Random Access Memory, RAM) 1603. In the RAM 1603, various programs and data required for system operation are also stored. The CPU 1601, ROM 1602, and RAM 1603 are connected to each other by a bus 1604. An Input/Output (I/O) interface 1605 is also connected to bus 1604.
The following components are connected to the I/O interface 1605: an input portion 1606 including a keyboard, a mouse, and the like; an output portion 1607 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 1608 including a hard disk or the like; and a communication section 1609 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1609 performs communication processing via a network such as the internet. The drive 1610 is also connected to the I/O interface 1605 as needed. A removable medium 1611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 1610 so that a computer program read out therefrom is installed into the storage section 1608 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1609, and/or installed from the removable media 1611. When executed by a Central Processing Unit (CPU) 1601, the computer program performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of data processing in a virtual scenario as before. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
Another aspect of the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the data processing method in the virtual scene provided in the above embodiments.
The foregoing is merely a preferred exemplary embodiment of the present application and is not intended to limit the embodiments of the present application, and those skilled in the art may make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method of data processing, the method comprising:
determining the importance of target data associated with the task thread according to a mapping relation table of the data type associated with the preset task thread and the importance;
selecting a first target queue matched with the priority of the task thread from a plurality of queues according to the priority of the task thread;
adding the target data to a designated position in the first target queue according to the importance of the target data, wherein the designated position is matched with the importance of the target data;
and processing the target data in the first target queue, and adjusting the position of the target data in the first target queue according to the processing condition of the target data in the first target queue.
2. The method of claim 1, wherein the adding the target data at the specified location in the first target queue according to the importance of the target data comprises:
determining a position matched with the importance of the target data from the first target queue according to the importance of the target data, and taking the matched position as the designated position;
Adding the target data to the first target queue at the specified location; the higher the importance of the target data is, the closer the designated position is to the head position in the first target queue.
3. The method of claim 1, wherein adjusting the location of the target data in the first target queue based on the processing of the target data in the first target queue comprises:
detecting whether the first target queue is blocked;
and if the first target queue is blocked, adjusting the position of the target data in the first target queue according to the processing condition of the target data in the first target queue.
4. The method of claim 3, wherein the detecting whether the first target queue is blocked comprises:
acquiring the quantity of target data contained in the first target queue;
if the number is greater than or equal to a preset number threshold, determining that the first target queue is blocked;
and if the number is smaller than the preset number threshold, determining that the first target queue is not blocked.
5. A method as claimed in claim 3, wherein the processing condition includes a waiting processing duration; the adjusting the position of the target data in the first target queue according to the processing condition of the target data in the first target queue comprises:
Acquiring waiting processing time length of each target data contained in the first target queue;
and adjusting the position of the target data in the first target queue according to the waiting processing time of each target data contained in the first target queue.
6. The method of claim 5, wherein adjusting the location of the target data in the first target queue according to the waiting duration of each target data contained in the first target queue comprises:
selecting target data exceeding a preset waiting time threshold from the first target queue according to the waiting time of each target data contained in the first target queue;
adjusting the target data exceeding a preset waiting time threshold from a designated position to a target position in the first target queue; wherein the target position is closer to a head-of-queue position in the first target queue than the designated position.
7. The method of claim 6, wherein the obtaining a waiting processing time period for each target data included in the first target queue comprises:
taking, for each target data, the time at which the target data is added to the designated position in the first target queue as a start time;
and timing from the start time to obtain the waiting processing duration of each target data.
8. A method as claimed in any one of claims 1 to 7, wherein prior to said selecting a first target queue from a plurality of queues that matches the priority of the task thread in accordance with the priority of the task thread, the method further comprises:
and determining the priority of the task thread according to the type of the task thread.
9. The method of claim 8, wherein said determining the priority of the task thread based on the type of task thread comprises:
determining the priority matched with the type of the task thread according to a mapping relation table of the type and the priority of the preset task thread; the mapping relation table of the types and the priorities of the preset task threads is preset with a plurality of types of task threads and priorities corresponding to the task threads respectively.
10. The method of claim 9, wherein before determining the priority matching the type of task thread based on the mapping table of the type of task thread and the priority, the method further comprises:
Acquiring attribute data of executed task threads in a preset historical time period; wherein the executed task threads comprise a plurality of types of task threads;
and determining the priorities of the task threads of the multiple types according to the attribute data of the executed task threads so as to generate a mapping relation table of the types and the priorities of the preset task threads.
11. The method of claim 10, wherein the plurality of types of task threads comprise: at least two of a critical task thread, an auxiliary task thread, a reporting data task thread and a third party task thread; the determining the priorities of the task threads according to the attribute data of the executed task threads comprises the following steps:
if the attribute data of the executed task thread represents that the service requirement time delay is higher than the set time delay, setting the priority of the executed task thread as a first priority;
if the attribute data of the executed task thread represents that the service requirement time delay is lower than the set time delay, setting the priority of the executed task thread as a second priority; wherein the first priority is higher than the second priority.
12. The method according to any one of claims 1 to 7, wherein determining the importance of the target data associated with the task thread according to a mapping table of data types associated with preset task threads and importance includes:
acquiring the type of target data associated with the task thread;
determining importance matched with the type of the target data associated with the task thread according to a mapping relation table of the data type and the importance associated with the preset task thread; the mapping relation table of the data types and the importance of the preset task threads is preset with a plurality of types of data associated with the task threads and the importance of the plurality of types of data respectively.
13. A data processing apparatus, the apparatus comprising:
the detection and determination module is configured to determine the importance of the target data associated with the task thread according to a mapping relation table of the data type and the importance associated with the preset task thread;
a selecting module configured to select a first target queue matching the priority of the task thread from a plurality of queues according to the priority of the task thread;
An adding module configured to add the target data to a specified position in the first target queue according to the importance of the target data, the specified position matching the importance of the target data;
the processing and adjusting module is configured to process the target data in the first target queue and adjust the position of the target data in the first target queue according to the processing condition of the target data in the first target queue.
14. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the electronic device, cause the electronic device to implement a data processing method as claimed in any one of claims 1 to 12.
15. A computer readable medium on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the data processing method according to any one of claims 1 to 12.
16. A computer program product comprising computer instructions which, when executed by a processor, implement a data processing method as claimed in any one of claims 1 to 12.
