CN107168794A - Method and apparatus for processing data requests - Google Patents
Method and apparatus for processing data requests
- Publication number
- CN107168794A (publication number); CN201710331764.2A (application number)
- Authority
- CN
- China
- Prior art keywords
- thread
- request
- partition
- data
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
Abstract
The invention discloses a method and apparatus for processing data requests. The method includes: allocating multiple threads; assigning at least one partition to each of the multiple threads, where each of the at least one partition is used to receive a predetermined number of data requests; and processing, via the multiple threads, the data requests received in the at least one partition corresponding to each thread. The invention solves the technical problem in the related art that each game partition is provisioned with its own server, so that when game partition loads are unbalanced the resources of some servers are wasted.
Description
Technical field
The present invention relates to the computer field, and in particular to a method and apparatus for processing data requests.
Background art
Owing to the particular nature of game servers, many games need to open multiple partitions (for example, QQ Zone 1 and QQ Zone 2), and each zone requires at least one corresponding virtual machine, container, or physical server. Because the number of players and the load differ from zone to zone, server computing capacity is seriously wasted. For internationalized mobile games, which open many partitions across many countries and regions, each with a small player base, the waste of server resources is even more severe.
Considering rack costs, the prior art generally does not adopt the scheme of reducing the configuration of each physical server while increasing the number of physical machines; instead it makes fuller use of server resources by reducing virtual machine configurations. For example, 2 to 8 virtual machines may be run on one physical server, carrying 2 to 8 game zones.
Although virtualization can increase the number of virtual machine servers, the number of players in each game zone is unpredictable, so the load on each zone's virtual machine is unstable: it is easy for the virtual machine serving one zone to be busy while the virtual machines serving other zones sit idle.
For the problem in the related art that each game partition is provisioned with its own server, so that unbalanced game partition loads waste the resources of some servers, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the invention provide a method and apparatus for processing data requests, so as to at least solve the technical problem in the related art that each game partition is provisioned with its own server, wasting the resources of some servers when game partition loads are unbalanced.
According to one aspect of the embodiments of the invention, a method for processing data requests is provided, including: allocating multiple threads; assigning at least one partition to each of the multiple threads, where each of the at least one partition is used to receive a predetermined number of data requests; and processing, via the multiple threads, the data requests received in the at least one partition corresponding to each thread.
According to another aspect of the embodiments of the invention, an apparatus for processing data requests is also provided, including: a first allocation unit, configured to allocate multiple threads; a second allocation unit, configured to assign at least one partition to each of the multiple threads, where each of the at least one partition is used to receive a predetermined number of data requests; and a processing unit, configured to process, via the multiple threads, the data requests received in the at least one partition corresponding to each thread.
In the embodiments of the invention, a server is divided into multiple threads, and each of the multiple threads is assigned at least one partition, where each partition can receive a predetermined number of data requests; the data requests received in each thread's partitions are then processed by the allocated threads. By assigning multiple partitions to a single server and letting the multiple threads process their corresponding data requests, the technical problem in the related art that each game partition is provisioned with its own server, wasting the resources of some servers when game partition loads are unbalanced, can be solved, thereby achieving the technical effect of fully utilizing server resources and saving server operating and maintenance costs without affecting the user experience.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the invention and form part of this application; the schematic embodiments of the invention and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of the hardware environment of a method for processing data requests according to an embodiment of the present invention;
Fig. 2 is a flow chart of an optional method for processing data requests according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the framework of an optional game service according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an optional game server internal structure according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an optional multithreaded game server system according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of an optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of another optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of another optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of another optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of another optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of another optional apparatus for processing data requests according to an embodiment of the present invention;
Fig. 12 is a structural block diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and the like in the description, claims, and above-mentioned drawings are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "comprising" and "having", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, and may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
Embodiment 1
According to an embodiment of the present invention, an embodiment of a method for processing data requests is provided.
Optionally, in this embodiment, the above method for processing data requests can be applied in a hardware environment consisting of a server 102 and a terminal 104 as shown in Fig. 1. As shown in Fig. 1, the server 102 is connected to the terminal 104 via a network, which includes but is not limited to a wide area network, metropolitan area network, or local area network; the terminal 104 is not limited to a PC, mobile phone, tablet, etc. The method for processing data requests of the embodiment of the present invention may be executed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. When executed by the terminal 104, the method may also be performed by a client installed on it.
Fig. 2 is a flow chart of an optional method for processing data requests according to an embodiment of the present invention. As shown in Fig. 2, the method may include the following steps:
Step S202: allocate multiple threads;
Step S204: assign at least one partition to each of the multiple threads, where each of the at least one partition is used to receive a predetermined number of data requests;
Step S206: process, via the multiple threads, the data requests received in the at least one partition corresponding to each thread.
Through the above steps S202 to S206, the server is divided into multiple threads, each thread in the multithreading is assigned at least one partition, each partition can receive a predetermined number of data requests, and the data requests received in each thread's partitions are then processed by the allocated threads. Thus multiple partitions are assigned to a single server and the data requests corresponding to each thread are processed by the multiple threads, which can solve the technical problem in the related art that each game partition is provisioned with its own server, wasting the resources of some servers when game partition loads are unbalanced, and achieves the technical effect of fully utilizing server resources and saving server operating and maintenance costs without affecting the user experience.
In the technical solution provided by step S202, a thread, also known as a lightweight process (LWP), is the smallest unit of a program's execution flow. A thread is an entity within a process and is the basic unit independently scheduled and dispatched by the system. A thread owns no system resources of its own, only the few resources essential to its execution, but it shares all the resources owned by the process with the other threads belonging to the same process. The server can allocate multiple threads according to actual usage demand, each thread independently handling the data requests assigned to it.
It should be noted that a process is a running activity of a program in a computer on a certain data set; it is the basic unit of resource allocation and scheduling by the system and the foundation of the operating system architecture.
Optionally, the above embodiments of the present invention can be applied in a server.
It should be noted that the processor is the control core and computation core of the server, used to interpret the server's instructions and process the data in the server. The processor processes data mainly through its cores; the more cores a processor has, the stronger its computing capability.
As an optional embodiment, allocating multiple threads may include: obtaining the number of cores of a processor, where the processor is the processor of a single server; and binding one thread to each core.
With the above embodiment of the present invention, by obtaining the processor's core count and binding one thread to each core according to that count, each thread has a dedicated core to process the data in the thread, thereby guaranteeing the processing speed of each thread's data.
As an optional example, when the processor in the server has 4 cores, the server allocates 4 threads and binds one thread to each core of the processor. For example, the 4 cores are core 1, core 2, core 3, and core 4, and the 4 threads are thread 1, thread 2, thread 3, and thread 4; thread 1 can be bound to core 1, thread 2 to core 2, thread 3 to core 3, and thread 4 to core 4.
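The one-thread-per-core binding described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the names are hypothetical, and actual CPU pinning (shown in `apply_affinity`) relies on `os.sched_setaffinity`, which is Linux-specific.

```python
import os
import threading

def bind_threads_to_cores(num_cores):
    """Build a one-to-one core -> thread binding, as in the 4-core example."""
    binding = {}
    for core in range(num_cores):
        # One worker thread per core; the target is a placeholder here.
        worker = threading.Thread(name=f"thread-{core + 1}", target=lambda: None)
        binding[core] = worker
    return binding

def apply_affinity(core):
    """Pin the calling thread to a single core (a worker would call this on startup).

    On Linux this uses sched_setaffinity; elsewhere it is a no-op.
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core})

binding = bind_threads_to_cores(4)
print({core: t.name for core, t in binding.items()})
# {0: 'thread-1', 1: 'thread-2', 2: 'thread-3', 3: 'thread-4'}
```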
As an optional embodiment, after one thread is bound to each core, the embodiment may further include: obtaining the processor's memory resource amount; and distributing the processor's memory resources among the multiple threads.
It should be noted that the running speed of the server is affected not only by the running speed of the processor cores but also by the amount of memory resources available to the processor.
With the above embodiment of the present invention, after one thread is bound to each core, the processor's memory resources are allocated so that each thread receives a corresponding amount of memory, ensuring that every thread can process the data in the thread quickly.
Optionally, the processor's memory resources can be divided evenly among the multiple threads so that each thread receives an equal share of memory.
As an optional example, when there are 4 threads, the processor's memory resources can be divided into 4 equal parts, each thread receiving 1/4 of the original processor's memory resources.
Optionally, each thread can instead be allocated a corresponding amount of memory according to actual demand, so that each thread can meet its own operating needs.
As an optional example, when there are 4 threads and the ratio of memory resources required at runtime by thread 1, thread 2, thread 3, and thread 4 is 3:2:1:4, the processor's memory resources can be divided into 10 equal parts, with thread 1 receiving 3/10 of the original processor's memory, thread 2 receiving 2/10, thread 3 receiving 1/10, and thread 4 receiving 4/10.
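The demand-proportional memory split in the 3:2:1:4 example can be expressed as a small helper; the 4000 MB budget below is a hypothetical figure chosen only for illustration.

```python
def allocate_memory(total_mb, demand_ratio):
    """Split a memory budget among threads in proportion to their demand ratio."""
    total_parts = sum(demand_ratio)
    return [total_mb * part / total_parts for part in demand_ratio]

# The 3:2:1:4 example from the text, with a hypothetical 4000 MB budget:
# the ratio sums to 10 parts, so the shares are 3/10, 2/10, 1/10, 4/10.
shares = allocate_memory(4000, [3, 2, 1, 4])
print(shares)  # [1200.0, 800.0, 400.0, 1600.0]
```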
As an optional embodiment, every two threads among the multiple threads are communicatively connected.
With the above embodiment of the present invention, communication connections are established between every pair of threads so that the multiple threads can communicate with one another. According to these communication connections, data can be scheduled between the multiple threads: if one of the threads fails, the data in that thread can be processed by the other threads, ensuring that the multiple threads keep running normally.
Optionally, when one of the multiple threads becomes overloaded, the data that thread was processing can be distributed, according to the communication connection relationships, to other threads for processing.
As an optional example, the server includes two threads, thread 1 and thread 2, which are communicatively connected. If thread 1 becomes overloaded at runtime, the data exceeding thread 1's capacity is assigned to thread 2 over the communication connection, and thread 2 processes the data requests that exceeded thread 1's load.
Optionally, when one of the multiple threads fails, the data that thread was processing can be distributed, according to the communication connection relationships, to other threads for processing.
As an optional example, the server includes two threads, thread 1 and thread 2, which are communicatively connected. If thread 1 fails at runtime, all of thread 1's work is assigned to thread 2, and thread 2 processes the data in thread 1.
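The failover behavior above, reassigning a failed thread's work to its communicatively connected peers, might be sketched like this; the queue layout and the round-robin policy are illustrative assumptions, not part of the patent.

```python
def redistribute(queues, failed):
    """Move all pending requests from a failed thread's queue to its peers (round-robin)."""
    peers = [name for name in queues if name != failed]
    moved = queues[failed]
    queues[failed] = []
    for i, request in enumerate(moved):
        queues[peers[i % len(peers)]].append(request)
    return queues

# Two-thread example from the text: thread 1 fails, thread 2 takes over its work.
queues = {"thread-1": ["req-a", "req-b", "req-c"], "thread-2": []}
print(redistribute(queues, "thread-1"))
# {'thread-1': [], 'thread-2': ['req-a', 'req-b', 'req-c']}
```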
In the technical solution provided by step S204, each partition can be used to receive a predetermined number of data requests, and each thread can be assigned at least one partition; the number of data requests a thread receives is determined by the number of partitions in the thread and the predetermined number of data requests corresponding to each partition.
As an optional example, in the case where every partition can receive the same number of data requests: if the server includes two threads and each thread is assigned 4 partitions, each of which can receive A data requests, then each thread can receive 4A data requests and the server can receive 2 × 4A = 8A data requests.
As an optional example, in the case where the partitions can receive different numbers of data requests: suppose the server includes two threads, thread 1 and thread 2, where the partitions in thread 1 are partition 1 and partition 2 and the partitions in thread 2 are partition 3 and partition 4; partition 1 can receive A data requests, partition 2 can receive B, partition 3 can receive C, and partition 4 can receive D. Then thread 1 can receive A + B data requests, thread 2 can receive C + D data requests, and the server can receive A + B + C + D data requests.
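The capacity arithmetic above (A + B per thread, A + B + C + D for the whole server) can be checked with a short helper; the numeric capacities below stand in for A, B, C, and D and are purely hypothetical.

```python
def server_capacity(threads):
    """Sum per-partition request capacities per thread and for the whole server."""
    per_thread = {name: sum(partitions.values()) for name, partitions in threads.items()}
    return per_thread, sum(per_thread.values())

# Hypothetical capacities standing in for A, B, C, D in the text.
threads = {
    "thread-1": {"partition-1": 1000, "partition-2": 1500},
    "thread-2": {"partition-3": 800, "partition-4": 1200},
}
per_thread, total = server_capacity(threads)
print(per_thread, total)  # {'thread-1': 2500, 'thread-2': 2000} 4500
```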
As an optional embodiment, assigning at least one partition to each of the multiple threads can include: detecting each thread's amount of free memory resources; and, when a thread whose free memory exceeds a first predetermined threshold is detected, allocating a new partition to the thread whose free memory exceeds the first predetermined threshold.
With the above embodiment of the present invention, when a thread whose free memory exceeds the first predetermined threshold is detected among the multiple threads, a new partition is allocated to that thread, and the data requests of the partition occupying the larger amount of memory are distributed to the new partition, so that each thread runs with sufficient memory resources. It should be noted here that the first predetermined threshold can be set according to actual demand and is not specifically limited here.
Optionally, when a new partition is allocated to a thread whose free memory exceeds the first predetermined threshold, the data requests in the partition with the larger memory footprint can be evenly redistributed into the new partition. For example, if the number of data requests in the partition with the larger memory footprint is 2A, then A of those data requests are distributed into the new partition.
Optionally, when a new partition is allocated to a thread whose free memory exceeds the first predetermined threshold, the data requests in the partition with the larger memory footprint can also be distributed into the new partition according to actual demand. For example, if the number of data requests in that partition is 3A and, after A of them have been reassigned, the free memory of the thread holding the partition is still below the first predetermined threshold, then another A data requests of that partition are distributed into the new partition.
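The free-memory-driven allocation of a new partition might look like the following sketch; the threshold, the memory figures, and the batch size are hypothetical, and the overloaded partition is modeled simply as a list of pending requests.

```python
def grow_partitions(free_memory, threshold, overloaded, batch):
    """Assign a new partition to each thread whose free memory exceeds the threshold,
    and move a batch of requests from the overloaded partition into it."""
    new_partitions = {}
    for thread, free in free_memory.items():
        if free > threshold:
            # Carve a batch of requests out of the overloaded partition.
            moved, overloaded = overloaded[:batch], overloaded[batch:]
            new_partitions[thread] = moved
    return new_partitions, overloaded

# Hypothetical free-memory figures (MB) and a first threshold of 1024 MB.
free = {"thread-1": 512, "thread-2": 2048}
new_parts, remaining = grow_partitions(free, 1024, ["r1", "r2", "r3", "r4"], 2)
print(new_parts, remaining)  # {'thread-2': ['r1', 'r2']} ['r3', 'r4']
```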
In the technical solution provided by step S206, each partition can receive its corresponding data requests. Each thread can process the data requests received in the at least one partition of that thread, or the data received in the at least one partition of each thread can be processed across the multiple threads.
As an optional example, each thread can independently process the requests received by the at least one partition in that thread; for example, when partition 1 in thread 1 receives data request A, data request A can be processed by thread 1.
As an optional example, a thread can also independently process the requests received by at least one partition of another thread; for example, when partition 1 in thread 1 receives data request A, partition 1 can be transferred to thread 2, and thread 2 then processes data request A.
As an optional embodiment, processing, via the multiple threads, the data requests received in the at least one partition corresponding to each thread includes: detecting the number of data requests pending or in process for each thread; and, when the number of data requests pending or in process for a first thread is detected to exceed a second predetermined threshold, transferring the data requests in the at least one partition corresponding to the first thread to a second thread for processing, where the multiple threads include the first thread and the second thread, and the number of data requests pending or in process for the second thread is below a third predetermined threshold.
With the above embodiment of the present invention, when the number of data requests pending or in process for the first thread exceeds the second predetermined threshold, the data requests in the at least one partition corresponding to the first thread can be transferred to a second thread whose pending or in-process request count is below the third predetermined threshold. This keeps the number of data requests pending or in process for the first thread within the second predetermined threshold, distributing the data requests in the first thread above the second predetermined threshold to a second thread below the third threshold, and ensures that every thread can process data requests quickly. It should be noted here that both the second and third predetermined thresholds can be set according to actual demand and are not specifically limited here.
As an optional example, when the number of data requests pending or in process for the first thread is 1,200 and the number for the second thread is 300, if the second threshold is set to 1,000 and the third threshold to 600, then the 200 data requests in the first thread above the second threshold can be assigned to the second thread, which is below the third threshold, for processing.
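The threshold-based transfer in the 1,200/300 example can be sketched as a watermark rebalance; the "first idle-enough thread" policy is an illustrative simplification of the scheduling described above.

```python
def rebalance(loads, high, low):
    """Move requests above the `high` watermark from busy threads to threads below `low`."""
    donors = {t: n - high for t, n in loads.items() if n > high}
    receivers = [t for t, n in loads.items() if n < low]
    for donor, excess in donors.items():
        target = receivers[0]          # simplest policy: first idle-enough thread
        loads[donor] -= excess
        loads[target] += excess
    return loads

# Second threshold 1000, third threshold 600, as in the example above.
loads = {"thread-1": 1200, "thread-2": 300}
print(rebalance(loads, high=1000, low=600))  # {'thread-1': 1000, 'thread-2': 500}
```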
As an optional embodiment, processing, via the multiple threads, the data requests received in the at least one partition corresponding to each thread includes: judging whether a target data request received by a third thread belongs to a partition corresponding to the third thread; when the received target data request belongs to a partition corresponding to the third thread, processing the target data request by the third thread; and when the received target data request does not belong to a partition corresponding to the third thread, transferring the target data request to a fourth thread for processing, where the multiple threads include the third thread and the fourth thread, and the target data request belongs to a partition corresponding to the fourth thread.
With the above embodiment of the present invention, it is necessary to judge whether the target data request received by the third thread belongs to a partition corresponding to the third thread. If so, the third thread processes the target data request; if not, the target data request is transferred to the fourth thread, to whose corresponding partition the target data request belongs, for processing. This ensures that each thread processes the target data requests belonging to its own corresponding partitions.
Optionally, when one of the multiple threads obtains a target data request, it judges whether the target data request belongs to that thread; if so, the thread processes the target data request, and if not, the target data request is transferred to the thread to which it belongs.
As an optional example, target data request A belongs to partition 15, and the thread corresponding to partition 15 is thread 2. When thread 1 receives target data request A, it transfers target data request A to thread 2, and thread 2 processes target data request A.
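The ownership check that routes a target data request to the thread owning its partition can be sketched as a simple lookup; the partition-to-thread mapping below is hypothetical, echoing the partition 15 example.

```python
def route(request_partition, partition_owner, receiver):
    """Return the thread that should process a request received by `receiver`."""
    owner = partition_owner[request_partition]
    # If the receiving thread owns the partition it processes the request itself;
    # otherwise the request is transferred to the owning thread.
    return receiver if owner == receiver else owner

# Hypothetical mapping echoing the text: partition 15 belongs to thread 2.
partition_owner = {15: "thread-2", 1: "thread-1"}
print(route(15, partition_owner, "thread-1"))  # thread-2
print(route(1, partition_owner, "thread-1"))   # thread-1
```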
The present invention also provides a preferred embodiment: a framework optimization scheme for game services. By using a high-performance server, server resources can be utilized to the greatest extent, and the server's performance can support a higher number of concurrent online players and a better user experience, while reducing server cost and operating cost.
For example, a game launched in region A, with around 50,000 players online in total, might be divided into 20 partitions using 20 virtual machines. Because players are unpredictable, the players of some partition may be very active, overloading one virtual machine and hurting the user experience, while other virtual machines have few players online and their computing capacity is completely wasted. If instead a single high-performance server (carrying 60,000 concurrent players) serves the players of all 20 partitions, then no matter which partition's users are active, the performance of the whole system is unaffected.
Fig. 3 is a schematic diagram of the framework of an optional game service according to an embodiment of the present invention. As shown in Fig. 3, users enter the network through a gateway server, and all user messages are processed in the game server.
Fig. 4 is a schematic diagram of an optional game server internal structure according to an embodiment of the present invention. As shown in Fig. 4, the internal structure of a game server with 40 partitions is provided, where the partitions are named QQ Zone 1, QQ Zone 2, QQ Zone 3, ..., QQ Zone 40. In this internal structure, a single server carries the users of 40 partitions; the single server contains 4 threads, each thread contains 10 partitions, and each partition can accommodate 1,000 players online, so the single server can accommodate 40,000 players online in total.
The above embodiment of the present invention provides a resource allocation scheme for a game server: the game server process allocates threads according to the number of CPU (central processing unit) cores, binding one thread to each core as a computing unit, and each computing unit holds the data structures of multiple game zones. Each thread sets its CPU affinity to bind to a specific core, so that the whole system achieves its maximum performance.
It should be noted that affinity refers to the property of forcibly restricting a thread to run on a subset of the available CPUs. It exposes, to some extent, the process/thread scheduling policy of a multiprocessor system to the systems programmer, helping the programmer implement a custom scheduling policy to obtain higher performance in particular scenarios.
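On Linux, the one-thread-per-core binding described above can be sketched with `os.sched_setaffinity`, which on Linux applies to the calling thread. This is a sketch under the assumption of a Linux host; the worker function and bookkeeping are hypothetical, not part of the patent.

```python
# Sketch: bind one worker thread per available CPU core (Linux-specific).
import os
import threading

def worker(core_id, report):
    os.sched_setaffinity(0, {core_id})          # pin this thread to one core
    report[core_id] = os.sched_getaffinity(0)   # record the effective CPU set

report = {}
threads = []
for core in sorted(os.sched_getaffinity(0)):    # one thread per available core
    t = threading.Thread(target=worker, args=(core, report))
    t.start()
    threads.append(t)
for t in threads:
    t.join()
```

Each worker ends up restricted to exactly one core, which is the "computing unit" arrangement the embodiment describes.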
According to the above embodiment of the present invention, a load balancing scheme for the game server is further provided, in which the server process tracks the load of the multiple partitions in each thread and schedules partitions across threads. For example, if the partitions 1-5 managed by thread 1 have many online users, the server process can reassign some of the busy partitions to a comparatively idle thread.
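A minimal sketch of this cross-thread scheduling decision follows. The load metric (sum of online users per thread) and the threshold are hypothetical; the patent leaves the exact measure open.

```python
# Sketch: move the busiest partition from the most loaded thread to the idlest one.
def rebalance(assignment, partition_load, threshold):
    """assignment: thread id -> list of partition ids.
    Returns the moved partition id, or None if no thread exceeds the threshold."""
    loads = {t: sum(partition_load[p] for p in parts)
             for t, parts in assignment.items()}
    busy = max(loads, key=loads.get)
    idle = min(loads, key=loads.get)
    if loads[busy] <= threshold or busy == idle:
        return None                                    # nothing to do
    moved = max(assignment[busy], key=lambda p: partition_load[p])
    assignment[busy].remove(moved)
    assignment[idle].append(moved)                     # idle thread takes over
    return moved
```

The caller would run this periodically; in the text's terms, the idle thread "takes over the data" of the moved partition.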
Optionally, because one server can be responsible for many partitions, the load is essentially stable across the whole server. For example, when deployed on a server with 32 GB of memory and 12 cores, a single physical server can carry about 300,000 online users (for a common mobile game). For a game whose single partition carries about 1,000 concurrent online users, about 300 partitions can be opened, and for most games a single machine can carry all online users (not counting other servers such as PvP servers).
According to the above embodiment of the present invention, a scheme for automatically opening new zones is further provided. Because a single physical server has ample computing resources, the need to open new partitions can be met automatically. When the partitions already open hold relatively many users, only some adjustments to internal memory structures are required, without any change to the server deployment, and a new partition can be opened automatically.
The above embodiment of the present invention avoids the one-partition-one-server approach used by traditional game servers. By making full use of the multi-core capability of the CPU in a single physical server and virtualizing partitions at the application layer, the online-user capacity of a single server is greatly improved. By choosing an appropriate server type, a single server can meet the carrying demand of most mobile games.
Fig. 5 is a schematic diagram of an optional multithreaded game server system according to an embodiment of the present invention. As shown in Fig. 5, the server system is first initialized: memory resources of the current server are allocated to thread 1, thread 2, thread 3 and thread 4 respectively. Each thread then performs its own initialization, such as initializing thread resources (for example the communication queues), initializing the data in the thread, initializing the data of the game partitions it manages, and starting to process user requests. Initializing the communication queues and related resources establishes the communication links between the multiple threads.
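The initialization sequence above can be sketched as follows, using the thread and partition counts from the Fig. 4 example (4 threads, 10 partitions each); the function and data layout are hypothetical illustrations, not the patented code.

```python
# Sketch: initialize a multithreaded server with per-thread communication queues
# and per-thread partition data.
import queue

def init_server(num_threads=4, partitions_per_thread=10):
    # Step 1: per-thread communication queues establish the inter-thread links.
    queues = {t: queue.Queue() for t in range(1, num_threads + 1)}
    # Step 2: each thread initializes the data of the partitions it manages.
    partitions = {
        t: {p: {"online": 0}
            for p in range((t - 1) * partitions_per_thread + 1,
                           t * partitions_per_thread + 1)}
        for t in range(1, num_threads + 1)
    }
    return queues, partitions
```

After this setup each thread would enter its request-processing loop, reading from its own queue.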
Optionally, after the above initialization is completed, the multiple threads can be scheduled. For example, when the load on the computing unit of thread 1 is excessive, game partition 2 can be withdrawn from thread 1 and scheduled into thread 3; thread 3 takes over the data of game partition 2, completing the load balancing. After the load balancing is completed, the user's request data is assigned to thread 3 for processing, and the process is not perceived by the user.
Optionally, when the capacity of a game partition is detected to be close to a threshold, an open-zone request can be issued automatically or manually to allocate a new zone. For example, when thread 1 detects that the capacity of a game partition is close to the threshold, an open-zone request is sent to thread 4 automatically or manually, and a new game partition is allocated in thread 4. After the partition resources are allocated, the partition is initialized and begins to serve externally.
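The threshold check and open-zone step might look like the sketch below; the capacity of 1,000 users matches the Fig. 4 example, but the 90% trigger and all names are hypothetical.

```python
# Sketch: open a new partition when any existing one approaches its capacity.
def maybe_open_zone(partitions, capacity=1000, threshold=0.9):
    """partitions: partition id -> current online user count.
    Returns the newly opened partition id, or None if no partition is near full."""
    if any(online >= capacity * threshold for online in partitions.values()):
        new_id = max(partitions) + 1   # allocate a new zone
        partitions[new_id] = 0         # initialize it, then serve externally
        return new_id
    return None
```

Because only this in-memory structure changes, no change to the server deployment is needed, which is the point the text makes.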
In the above embodiment of the present invention, the game server interacts with third-party servers, such as the gateway server (user requests) and the database server, through a message routing mechanism. The routing mechanism is responsible for putting messages into the shared-memory queue corresponding to each thread. After the game server adjusts a game partition, the routing mechanism is notified to modify the route, and subsequent messages are sent to the queue corresponding to the correct thread.
For example, messages of game partition 2 are initially put into the queue of thread 1; after load balancing occurs, the messages are put into the queue of thread 3. Because the messages in a queue lag somewhat (note that the route is modified only after the game server has completed the change), each thread of the game server performs a secondary distribution for messages that do not belong to it, exchanging them through the inter-thread communication queues.
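The route update plus secondary distribution can be sketched as follows, continuing the partition-2 example. The route table, message shape, and `drain` helper are hypothetical names for illustration.

```python
# Sketch: route messages by a partition->thread table; after a migration the
# route is updated, and a stale message already queued at the old thread is
# re-forwarded ("secondary distribution") through the inter-thread queues.
from collections import deque

route = {2: 1}                                   # partition 2 owned by thread 1
queues = {t: deque() for t in (1, 2, 3, 4)}

def route_message(msg):
    queues[route[msg["partition"]]].append(msg)

route_message({"partition": 2, "body": "m1"})    # delivered to thread 1
route[2] = 3                                     # migration done: update route
route_message({"partition": 2, "body": "m2"})    # new messages go to thread 3

def drain(thread_id):
    """A thread handles its own queued messages and re-forwards stale ones."""
    handled = []
    while queues[thread_id]:
        msg = queues[thread_id].popleft()
        if route[msg["partition"]] == thread_id:
            handled.append(msg["body"])
        else:
            queues[route[msg["partition"]]].append(msg)  # secondary distribution
    return handled

drain(1)   # thread 1 finds stale "m1" and forwards it to thread 3
```

Thread 3 then processes both the newly routed message and the re-forwarded stale one, so no message is lost during the migration.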
The above embodiment of the present invention can save a large amount of server resources in a multi-partition game, improve the user experience, and make server operation easier.
The above embodiment of the present invention is applicable to game servers with a large number of zones. For users, it brings a better and smoother experience; for the game operator, it saves server operation and maintenance costs. Because one server can carry many game partitions, fluctuations of the number of online users among the multiple game partitions have little influence on the server, and from the perspective of the server as a whole, the curve of the total number of online users is stable and controllable.
It should be noted that, for brevity of description, the foregoing method embodiments are each expressed as a series of action combinations, but those skilled in the art should know that the present invention is not limited by the described action sequence, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that in essence contributes to the prior art can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
Embodiment 2
According to an embodiment of the present invention, a data request processing apparatus for implementing the above data request processing method is further provided. Fig. 6 is a schematic diagram of an optional data request processing apparatus according to an embodiment of the present invention. As shown in Fig. 6, the apparatus may include: a first allocation unit 61 for allocating multiple threads; a second allocation unit 63 for allocating at least one partition to each thread of the multiple threads, where each partition of the at least one partition is used to receive a predetermined quantity of data requests; and a processing unit 65 for processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread.
It should be noted that the first allocation unit 61 in this embodiment may be used to perform step S202 in Embodiment 1 of the present application, the second allocation unit 63 in this embodiment may be used to perform step S204 in Embodiment 1 of the present application, and the processing unit 65 in this embodiment may be used to perform step S206 in Embodiment 1 of the present application.
It should be noted here that the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the content disclosed in Embodiment 1 above. It should also be noted that the above modules, as a part of the apparatus, may run in the hardware environment shown in Fig. 1, and may be implemented by software or by hardware.
As an optional embodiment, as shown in Fig. 7, the second allocation unit 63 may include: a first detection module 631 for detecting the free memory resource amount of each thread; and a division module 633 for, when a thread whose free memory resource amount exceeds a first predetermined threshold is detected, allocating a new partition to the thread whose free memory resource amount exceeds the first predetermined threshold.
As an optional embodiment, as shown in Fig. 8, the processing unit 65 may include: a second detection module 651 for detecting the quantity of data requests to be processed or being processed by each thread; and a transfer module 653 for, when it is detected that the quantity of data requests to be processed or being processed by a first thread exceeds a second predetermined threshold, transferring the data requests in the at least one partition corresponding to the first thread to a second thread for processing, where the multiple threads include the first thread and the second thread, and the quantity of data requests to be processed or being processed by the second thread is less than a third predetermined threshold.
As an optional embodiment, as shown in Fig. 9, the processing unit 65 may include: a judging module 655 for judging whether a target data request received by a third thread belongs to the partition corresponding to the third thread; a first processing module 657 for processing the target data request through the third thread when it is judged that the received target data request belongs to the partition corresponding to the third thread; and a second processing module 659 for transferring the target data request to a fourth thread for processing when it is judged that the received target data request does not belong to the partition corresponding to the third thread, where the multiple threads include the third thread and the fourth thread, and the target data request belongs to the partition corresponding to the fourth thread.
As an optional embodiment, as shown in Fig. 10, the first allocation unit 61 may include: a first acquisition module 611 for acquiring the number of cores of a processor, where the processor is the processor of one server; and a binding module 613 for binding one thread to each core.
As an optional embodiment, as shown in Fig. 11, this embodiment may further include: a second acquisition module 615 for acquiring the memory resource amount of the processor after one thread has been bound to each core; and a distribution module 617 for allocating the memory resource amount of the processor to the multiple threads.
As an optional embodiment, any two threads of the multiple threads are communicatively connected.
Through the above modules, multiple partitions can be allocated to a single server, and the data requests corresponding to each thread are processed by the multiple threads. This can solve the technical problem in the related art that, because each game zone is provided with a separate server, the resources of some servers are wasted when the load of the game partitions is unbalanced, thereby achieving the technical effect of making full use of server resources and saving server operation and maintenance costs without affecting the user experience.
Embodiment 3
According to an embodiment of the present invention, a terminal for implementing the above data request processing method is further provided. Fig. 12 is a structural block diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 12, the terminal may include one or more processors 201 (only one is shown in the figure), a memory 203 and a transmission device 205; as shown in Fig. 12, the terminal may further include an input/output device 207.
The memory 203 may be used to store software programs and modules, such as the program instructions/modules corresponding to the data request processing method and apparatus in the embodiments of the present invention. The processor 201 runs the software programs and modules stored in the memory 203 to perform various functional applications and data processing, that is, to implement the above data request processing method. The memory 203 may include a high-speed random access memory and may also include a nonvolatile memory, such as one or more magnetic storage devices, a flash memory or another nonvolatile solid-state memory. In some examples, the memory 203 may further include memories remotely located relative to the processor 201, and these remote memories may be connected to the terminal through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 205 is used to receive or send data via a network. Specific examples of the network may include wired networks and wireless networks. In one example, the transmission device 205 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and a router through a network cable so as to communicate with the Internet or a local area network. In one example, the transmission device 205 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
Specifically, the memory 203 is used to store an application program.
The processor 201 may call the application program stored in the memory 203 to perform the following steps: allocating multiple threads; allocating at least one partition to each thread of the multiple threads, where each partition of the at least one partition is used to receive a predetermined quantity of data requests; and processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread.
The processor 201 is further configured to perform the following steps: detecting the free memory resource amount of each thread; and when a thread whose free memory resource amount exceeds a first predetermined threshold is detected, allocating a new partition to the thread whose free memory resource amount exceeds the first predetermined threshold.
The processor 201 is further configured to perform the following steps: detecting the quantity of data requests to be processed or being processed by each thread; and when it is detected that the quantity of data requests to be processed or being processed by a first thread exceeds a second predetermined threshold, transferring the data requests in the at least one partition corresponding to the first thread to a second thread for processing, where the multiple threads include the first thread and the second thread, and the quantity of data requests to be processed or being processed by the second thread is less than a third predetermined threshold.
The processor 201 is further configured to perform the following steps: judging whether a target data request received by a third thread belongs to the partition corresponding to the third thread; when it is judged that the received target data request belongs to the partition corresponding to the third thread, processing the target data request through the third thread; and when it is judged that the received target data request does not belong to the partition corresponding to the third thread, transferring the target data request to a fourth thread for processing, where the multiple threads include the third thread and the fourth thread, and the target data request belongs to the partition corresponding to the fourth thread.
The processor 201 is further configured to perform the following steps: acquiring the number of cores of a processor, where the processor is the processor of one server; and binding one thread to each core.
The processor 201 is further configured to perform the following steps: acquiring the memory resource amount of the processor; and allocating the memory resource amount of the processor to the multiple threads.
With the embodiments of the present invention, a data request processing scheme is provided. Multiple partitions can be allocated to a single server, and the data requests corresponding to each thread are processed by the multiple threads. This can solve the technical problem in the related art that, because each game zone is provided with a separate server, the resources of some servers are wasted when the load of the game partitions is unbalanced, thereby achieving the technical effect of making full use of server resources and saving server operation and maintenance costs without affecting the user experience.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, and details are not repeated here.
Those skilled in the art can understand that the structure shown in Fig. 12 is only schematic. The terminal may be a terminal device such as a smartphone (for example an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID) or a PAD. Fig. 12 does not limit the structure of the above electronic device. For example, the terminal may further include more or fewer components than those shown in Fig. 12 (such as a network interface or a display device), or have a configuration different from that shown in Fig. 12.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the hardware related to the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.
Embodiment 4
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store the program code for performing the data request processing method.
Optionally, in this embodiment, the storage medium may be located on at least one of the multiple network devices in the network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1, allocating multiple threads;
S2, allocating at least one partition to each thread of the multiple threads, where each partition of the at least one partition is used to receive a predetermined quantity of data requests;
S3, processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread.
Optionally, the storage medium is further configured to store program code for performing the following steps: detecting the free memory resource amount of each thread; and when a thread whose free memory resource amount exceeds a first predetermined threshold is detected, allocating a new partition to the thread whose free memory resource amount exceeds the first predetermined threshold.
Optionally, the storage medium is further configured to store program code for performing the following steps: detecting the quantity of data requests to be processed or being processed by each thread; and when it is detected that the quantity of data requests to be processed or being processed by a first thread exceeds a second predetermined threshold, transferring the data requests in the at least one partition corresponding to the first thread to a second thread for processing, where the multiple threads include the first thread and the second thread, and the quantity of data requests to be processed or being processed by the second thread is less than a third predetermined threshold.
Optionally, the storage medium is further configured to store program code for performing the following steps: judging whether a target data request received by a third thread belongs to the partition corresponding to the third thread; when it is judged that the received target data request belongs to the partition corresponding to the third thread, processing the target data request through the third thread; and when it is judged that the received target data request does not belong to the partition corresponding to the third thread, transferring the target data request to a fourth thread for processing, where the multiple threads include the third thread and the fourth thread, and the target data request belongs to the partition corresponding to the fourth thread.
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring the number of cores of a processor, where the processor is the processor of one server; and binding one thread to each core.
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring the memory resource amount of the processor; and allocating the memory resource amount of the processor to the multiple threads.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, and details are not repeated here.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media that can store program code, such as a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disc.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on such an understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the embodiments each have their own emphasis. For a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are only schematic. For example, the division of the units is only a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of units or modules through some interfaces, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above is only the preferred embodiment of the present invention. It should be noted that, for a person of ordinary skill in the art, several improvements and modifications can also be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (14)
1. A data request processing method, characterized by comprising:
allocating multiple threads;
allocating at least one partition to each thread of the multiple threads, wherein each partition of the at least one partition is used to receive a predetermined quantity of data requests;
processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread.
2. The method according to claim 1, characterized in that allocating at least one partition to each thread of the multiple threads comprises:
detecting the free memory resource amount of each thread;
when a thread whose free memory resource amount exceeds a first predetermined threshold is detected, allocating a new partition to the thread whose free memory resource amount exceeds the first predetermined threshold.
3. The method according to claim 1, characterized in that processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread comprises:
detecting the quantity of data requests to be processed or being processed by each thread;
when it is detected that the quantity of data requests to be processed or being processed by a first thread exceeds a second predetermined threshold, transferring the data requests in the at least one partition corresponding to the first thread to a second thread for processing, wherein the multiple threads comprise the first thread and the second thread, and the quantity of data requests to be processed or being processed by the second thread is less than a third predetermined threshold.
4. The method according to claim 1, characterized in that processing, through the multiple threads, the data requests received in the at least one partition corresponding to each thread comprises:
judging whether a target data request received by a third thread belongs to the partition corresponding to the third thread;
when it is judged that the received target data request belongs to the partition corresponding to the third thread, processing the target data request through the third thread;
when it is judged that the received target data request does not belong to the partition corresponding to the third thread, transferring the target data request to a fourth thread for processing, wherein the multiple threads comprise the third thread and the fourth thread, and the target data request belongs to the partition corresponding to the fourth thread.
5. The method according to any one of claims 1 to 4, characterized in that allocating multiple threads comprises:
acquiring the number of cores of a processor, wherein the processor is the processor of one server;
binding one thread to each core.
6. The method according to claim 5, characterized in that after binding one thread to each core, the method further comprises:
acquiring the memory resource amount of the processor;
allocating the memory resource amount of the processor to the multiple threads.
7. The method according to any one of claims 1 to 4, characterized in that any two threads of the multiple threads are communicatively connected.
8. a kind of processing unit of request of data, it is characterised in that including:
First allocation unit, for distributing multiple threads;
Second allocation unit, for distributing at least one subregion for each thread in the multiple thread, wherein, it is described at least
Each subregion in one subregion is used for the request of data for receiving predetermined quantity;
Processing unit, for by the multiple thread, being received at least one described subregion corresponding to each thread
Request of data is handled.
9. device according to claim 8, it is characterised in that second allocation unit includes:
First detection module, the free memory stock number for detecting each thread;
Division module, in the case where detecting the presence of thread of the free memory stock number more than the first predetermined threshold, being
The thread that the free memory stock number is more than first predetermined threshold distributes new subregion.
10. device according to claim 8, it is characterised in that the processing unit includes:
Second detection module, the number for detecting request of data that is to be processed needed for each thread or handling
Amount;
Shift module, the quantity for request of data that is to be processed needed for first thread is detected or handling exceedes
In the case of second predetermined threshold, the request of data at least one corresponding subregion of the first thread is transferred to the second line
Handled in journey, wherein, the multiple thread is included needed for the first thread and second thread, second thread
The quantity of request of data that is to be processed or handling is less than the 3rd predetermined threshold.
11. The apparatus according to claim 8, characterised in that the processing unit comprises:
a judging module, configured to judge whether a target data request received by a third thread belongs to the partition corresponding to the third thread;
a first processing module, configured to, when it is judged that the received target data request belongs to the partition corresponding to the third thread, process the target data request through the third thread; and
a second processing module, configured to, when it is judged that the received target data request does not belong to the partition corresponding to the third thread, transfer the target data request to a fourth thread for processing, wherein the multiple threads comprise the third thread and the fourth thread, and the target data request belongs to the partition corresponding to the fourth thread.
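The routing logic of claim 11 can be sketched with a partition-ownership table. The modulo placement rule and the thread names are illustrative assumptions; the patent does not specify how requests map to partitions:

```python
# Which thread owns which partition (assumed static assignment).
owner_of_partition = {0: "thread-3", 1: "thread-3", 2: "thread-4", 3: "thread-4"}

def partition_of(request_id, n_partitions=4):
    # Deterministic placement of a request into a partition (assumption).
    return request_id % n_partitions

def route(request_id, receiving_thread):
    """Process locally if the request's partition belongs to the receiving
    thread; otherwise transfer it to the owning thread."""
    owner = owner_of_partition[partition_of(request_id)]
    if owner == receiving_thread:
        return "%s processes %d" % (receiving_thread, request_id)
    return "forwarded %d to %s" % (request_id, owner)

print(route(5, "thread-3"))   # partition 1 is owned by thread-3: handled locally
print(route(10, "thread-3"))  # partition 2 is owned by thread-4: forwarded
```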
12. The apparatus according to any one of claims 8 to 11, characterised in that the first allocation unit comprises:
a first acquisition module, configured to acquire the number of cores of a processor, wherein the processor is a processor of a server; and
a binding module, configured to bind one thread to each core.
13. The apparatus according to claim 12, characterised in that the apparatus further comprises:
a second acquisition module, configured to acquire the memory resource amount of the processor after one thread has been bound to each core; and
a distribution module, configured to distribute the memory resource amount of the processor to the multiple threads.
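Claims 12 and 13 together amount to: count the cores, bind one thread per core, then divide the processor's memory among those threads. The even split and the `MEMORY_MB` figure below are assumptions; real core binding would use an affinity call such as `os.sched_setaffinity` on Linux, which is omitted here for portability:

```python
import os

MEMORY_MB = 8192  # assumed total memory resource amount of the server's processor

def plan_threads(total_memory_mb):
    cores = os.cpu_count() or 1          # number of processor cores
    share = total_memory_mb // cores     # memory resource amount per thread
    # One logical thread is planned per core; the binding itself is not shown.
    return [{"core": c, "memory_mb": share} for c in range(cores)]

plan = plan_threads(MEMORY_MB)
print(len(plan), plan[0]["memory_mb"])
```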
14. The apparatus according to any one of claims 8 to 11, characterised in that any two threads of the multiple threads are communicatively connected to each other.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710331764.2A CN107168794A (en) | 2017-05-11 | 2017-05-11 | Method and apparatus for processing data requests |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107168794A true CN107168794A (en) | 2017-09-15 |
Family
ID=59815930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710331764.2A Pending CN107168794A (en) | 2017-05-11 | 2017-05-11 | Method and apparatus for processing data requests
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107168794A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103365718A (en) * | 2013-06-28 | 2013-10-23 | 贵阳朗玛信息技术股份有限公司 | Thread scheduling method, thread scheduling device and multi-core processor system |
CN105786447A (en) * | 2014-12-26 | 2016-07-20 | 乐视网信息技术(北京)股份有限公司 | Method and apparatus for processing data by a server, and a server |
- 2017-05-11: CN application CN201710331764.2A filed, published as CN107168794A (en); status: active, Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840877A (en) * | 2017-11-24 | 2019-06-04 | 华为技术有限公司 | A graphics processor and a resource scheduling method and device therefor |
CN109840877B (en) * | 2017-11-24 | 2023-08-22 | 华为技术有限公司 | Graphics processor and resource scheduling method and device thereof |
CN108429783A (en) * | 2018-01-16 | 2018-08-21 | 重庆金融资产交易所有限责任公司 | Electronic device, configuration-file pushing method, and storage medium |
CN109395380A (en) * | 2018-09-14 | 2019-03-01 | 北京智明星通科技股份有限公司 | Game data processing method and system, server and computer readable storage medium |
CN110727507A (en) * | 2019-10-21 | 2020-01-24 | 广州欢聊网络科技有限公司 | Message processing method and device, computer equipment and storage medium |
CN111104218A (en) * | 2019-11-29 | 2020-05-05 | 北京浪潮数据技术有限公司 | Storage system data synchronization method, device, equipment and readable storage medium |
CN111228824A (en) * | 2020-01-10 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Game fighting method, device, computer readable medium and electronic equipment |
CN112090066A (en) * | 2020-09-10 | 2020-12-18 | 腾讯科技(深圳)有限公司 | Scene display method and device based on virtual interactive application |
CN112090066B (en) * | 2020-09-10 | 2022-05-20 | 腾讯科技(深圳)有限公司 | Scene display method and device based on virtual interactive application |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107168794A (en) | Method and apparatus for processing data requests | |
CN104750557B (en) | A memory management method and memory management apparatus | |
CN103207814B (en) | A decentralized cross-cluster resource management and task scheduling system and scheduling method | |
CN104881325B (en) | A resource scheduling method and resource scheduling system | |
CN111246586B (en) | Method and system for allocating smart grid resources based on a genetic algorithm | |
CN109471705A (en) | Task scheduling method, device and system, and computer device | |
CN107193658A (en) | Cloud computing resource scheduling method based on game theory | |
CN105141541A (en) | Task-based dynamic load-balancing scheduling method and device | |
CN107124472A (en) | Load balancing method and device, and computer-readable storage medium | |
CN103747274B (en) | A video data center with a cache cluster and its cache resource scheduling method | |
CN105892996A (en) | Pipelined working method and apparatus for batch data processing | |
CN109936604A (en) | A resource scheduling method, device and system | |
CN107968802A (en) | A resource scheduling method and device, and a filtering scheduler | |
CN110162388A (en) | A task scheduling method, system and terminal device | |
CN103064743B (en) | A resource scheduling method and resource scheduling system for multiple robots | |
CN108667777A (en) | A service chain generation method and network function orchestrator (NFVO) | |
CN109905329A (en) | A task-type-aware adaptive flow-queue management method in a virtualized environment | |
CN105872098A (en) | Data processing method, load balancer, interactive application server and system | |
CN106713375A (en) | Method and device for allocating cloud resources | |
CN109471725A (en) | Resource allocation method, device and server | |
CN106998340B (en) | Load balancing method and device for board resources | |
CN109840139A (en) | Resource management method, apparatus, electronic device and storage medium | |
CN108683557A (en) | Microservice health assessment method, elastic scaling method, and architecture | |
CN110362398A (en) | Virtual machine scheduling method and system | |
CN107665143A (en) | Resource management method, apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2017-09-15 |