CN109286642A - Method for optimizing Push active-push speed - Google Patents
Method for optimizing Push active-push speed
- Publication number
- CN109286642A CN109286642A CN201710595651.3A CN201710595651A CN109286642A CN 109286642 A CN109286642 A CN 109286642A CN 201710595651 A CN201710595651 A CN 201710595651A CN 109286642 A CN109286642 A CN 109286642A
- Authority
- CN
- China
- Prior art keywords
- push
- user
- client
- task
- inquiry
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/55—Push-based network services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/563—Data redirection of data network streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
Abstract
The invention discloses a scheme for optimizing message push from a server side to clients. The server side refers to a Push delivery system; the clients are the mobile-phone or PC clients that receive the push messages. The details of the optimization scheme include: when a large number of users need to be pushed to, replacing the system's internal single-user interface with a batch-user interface; when a broadcast or multicast message needs to be pushed, obtaining user connection information from local shared memory instead of requesting it from the routing service each time, as was done previously; storing dispatched tasks in a shared-memory queue instead of in the database; and optimizing the database queries. With these optimizations, the performance of Push delivery is significantly improved.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a Push delivery system from a server side to clients.
Background technique
In today's flourishing mobile Internet, most mobile apps provide message push functions, such as hot-news recommendations from news clients, chat-message reminders from IM tools, e-commerce promotion information, and the notifications and approval workflows of enterprise applications. Push plays an important role in raising product activity, increasing the utilization of functional modules, and improving user stickiness and retention.
Push was born in e-mail, where it was used to signal new messages; in the mobile-Internet era it is more often used in mobile client programs. There are generally two ways to obtain data from a server: the first is the client-side PULL mode, in which the client polls the server at intervals to ask whether data is available; the second is the server-side PUSH mode, in which the server actively sends data to the client as soon as it is available.

The delivery system discussed in the present invention uses a Push scheme based on TCP long connections. After the client actively establishes a TCP long connection with the server, it periodically sends heartbeat packets to keep the connection alive; when a message arrives, the server notifies the client directly over the established TCP connection. Although long connections incur some overhead, compared with the hard defects of polling and SMS schemes this is currently the best approach.
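The heartbeat-based presence idea underlying this design can be sketched as follows: a client counts as online while its last heartbeat is recent enough. This is a minimal Python illustration; the class and parameter names (and the 30-second timeout) are invented for this sketch, not taken from the patent, whose actual implementation is a long-connection server.

```python
import time


class PresenceTracker:
    """Treat a client as online while its last heartbeat is recent.

    Names and the default timeout are illustrative assumptions."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}  # guid -> timestamp of last heartbeat

    def heartbeat(self, guid, now=None):
        # Called when a heartbeat or long-connection packet arrives.
        self.last_seen[guid] = time.time() if now is None else now

    def is_online(self, guid, now=None):
        now = time.time() if now is None else now
        t = self.last_seen.get(guid)
        return t is not None and now - t <= self.timeout
```

A client that has never sent a heartbeat, or whose last heartbeat is older than the timeout, is treated as offline and would be reached through a third-party channel instead.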
However, during operation of the existing Push delivery system, the inventors found that the prior art has at least the following problems:

(1) The system currently consists of multiple services that communicate with one another over the network, so the more network requests there are, the lower the system's efficiency. When a broadcast or multicast message is to be delivered, the sender does not know whether the target clients are online, yet the delivery strategy depends heavily on client presence. Every delivery must therefore first query ConnectRouterServer to check whether the target client is online; when the user base is very large, this query alone introduces considerable overhead.

(2) Because the services are deployed separately on different machines, the existing system stores intermediate data in the database. The clearest example is task generation and task claiming: when a message is delivered to a group, a task data structure must be generated and the task data written to the database, and the relevant user lists must also be stored there. After the intermediate data lands in the database, the downstream service claims the task from the database, processes it, and maintains its state. The whole procedure requires at least seven database operations of various kinds; this is tolerable when the data volume is small, but becomes a bottleneck when the user base is very large.

(3) When messages are delivered by group ID, a group may contain, say, two million users, who must be paginated and have their user names fetched. At present the user names are fetched with SQL limit pagination; the cost grows as the page offset grows, and in production this cost blocks subsequent message pushes, so that messages are not sent before their deadline.

(4) Each push over the system's long connection reaches only one user, so when one million users must be pushed to, one million calls to the delivery service are needed, an overhead that speaks for itself.

However, in the related art there is as yet no effective solution to the problems in the above scenarios.
Summary of the invention
In view of this, the present invention proposes a set of solutions to the above problems. The specific optimization scheme includes:

(1) For the problem of excessive network traffic, a caching strategy is designed that stores users' online information locally, so that delivery no longer needs to call other services each time; based on the local presence information, the message is sent directly through either the system's own channel or a third-party channel.

(2) For the problem of frequent database reads and writes, the storage of intermediate data is changed: the tasks and user lists are moved out of the MySQL database into shared memory, reducing the time consumed.

(3) For the problem that paginated user queries become slower as the offset grows on large data sets, the SQL is optimized to query by primary-key ID range. Because the IDs are not contiguous, an additional strategy is designed to reduce the impact of empty pages: thresholds select between pagination and query modes so that empty-page queries are minimized.

(4) For the problem that the single-message interface is slow and prone to call timeouts, a batch messaging interface is designed, avoiding the resource waste of per-message handling.
Description of the drawings
Below, the present invention is further described with reference to the accompanying drawings:
Fig. 1 is a schematic diagram of the structure of the Push delivery system in the present invention;
Fig. 2 is the user presence management diagram in the present invention;
Fig. 3 is the task storage and forwarding diagram in the present invention.
Specific embodiment
The overall structure of the Push system is shown in Fig. 1. Label 2 in the figure is the registration service that the system exposes externally; through this service a client binds its Uid and Guid together with its other registration information. For the user, the unique identifier is the user account; for the Push system, the unique identifier of a user is the Guid. Label 3 is PushInterfaceServer, which provides the external Push service: through its interface, business staff push messages to the Push system and tell it the type, content, and other attributes of the message to be pushed, and the Push system then delivers the message to the clients. Label 1 is the ConnectServer service, which mainly handles the long-connection, short-connection, and heartbeat requests sent by the corresponding clients and, when conditions are met, re-delivers to the requester any messages whose earlier delivery failed. The optimizations proposed by the present invention are concentrated in the PushInterface, PushStrategy, and OperationMsg services.
Optimization point 1. To reduce unnecessary query traffic between the system's internal services, a solution based on local caching is designed here. Before the scheme could be adopted, several problems had to be solved:

a) When a service restarts, the new scheme must not affect the services already running;

b) If a machine goes down, the new scheme must restore the previous operating state after the machine restarts;

c) Updating and maintaining the cache must not introduce new request traffic;

d) Since the cache resides on specific machines, cache lookup must be solved: when some business needs a client's connection information, it must quickly determine which machine holds the corresponding cache, and then find the required client information in that cache.

Problem a) can be solved with shared memory: shared memory is not released when a process restarts, so when the service comes back up it continues to use the shared memory left from before.

Problem b) can be solved by periodically migrating data: the cached data is persisted at intervals, and restored from the persisted copy when the machine restarts.

For problem c), the system's existing mechanism of pulling offline messages on heartbeat can be reused: presence information is reported along the same path, sparing the traffic that additional query requests would bring.

For problem d), a hash can be computed over the Guid and the hash value mapped to a machine. All data that needs caching is thus spread evenly across machines, and the machine holding a given entry can be found by computing the hash value.
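A minimal sketch of the hash mapping described for problem d): every node computes the same Guid-to-machine assignment locally, so lookups need no extra network request. The hash choice and function names are illustrative assumptions; the patent does not name a hash function.

```python
import hashlib


def machine_for_guid(guid: str, machines: list) -> str:
    """Deterministically map a Guid to one machine in the cluster.

    Any stable hash works; md5 is used here only because it is in the
    standard library and spreads keys uniformly (an assumption, not
    the patent's choice).
    """
    h = int(hashlib.md5(guid.encode("utf-8")).hexdigest(), 16)
    return machines[h % len(machines)]
```

Because the mapping depends only on the Guid and the machine list, any service can recompute it and go straight to the machine whose local cache holds that client's connection information.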
In conclusion as shown in Fig. 2, as step 1 when ConnectServer receive a long connection from client or
When heartbeat packet, it is clear that this client of this moment be it is online, as shown in step 2, the connection of client can be believed
Breath uploads to OfflineMsgServer, when OfflineMsgServer pulls operation message from OperationMsgServer
When, as shown in step 3, the link information of client can be uploaded to OperationMsgServer again.Due to all
Client online information finally can all focus on operation messaging service in (OperationMsgServer), it is possible to transporting
One shared drive of local maintenance for seeking messaging service, is wherein storing current online user's list, as shown in step 4.When
When thering is the heartbeat packet of user client or long connection packet to come, illustrate this client be it is online, at this moment go to
It runs and updates the online list of user in messaging service.
One service is often distributed on more machines, and online list is to be locally stored, and different user information is deposited
Storage different machines such as problem d) it is described can using hash map by the way of, Guid and corresponding operating service to user
Device does a hash mapping.All broadcast and multicast only needs to find when issuing the machine of Guid mapping, in the machine
It is online on earth that the user for needing to issue is searched in online user's list of the operation messaging service of device, is walked if online certainly
It is issued by channel, is issued if walking third-party channel not online.Due to user list be stored in shared drive so
When server delay machine only can just have no idea to restore, so historical user's list has also been devised here as described in problem b),
All certain times can all not be brushed in historical user's list in the user of update, and historical user's list can be safeguarded 1 month
Inside there is the user of active mistake, while can so work as power-off periodically the data copy of historical user's list to disk file
Or other failures, when cause machine to be died, next time can also import user's online data from local disk file.
Optimization point 2. In the original scheme, to handle deliveries with large user counts, the Push delivery system paginates the users; each page represents one task, and users are delivered to batch by batch per page. Originally both the tasks and the pagination information were written to the database: each time a downstream service wanted to handle a task, it first had to go to the database to claim it, then read the task information, then delete the task, and then fetch the pagination information. This whole procedure involves two database tables, the task table and the user table; to reduce database operations, the data of these two tables is now kept in a different store, here mainly shared memory.

As noted in the previous optimization point, shared memory has two main benefits: memory reads and writes are far faster than database reads and writes; and shared memory is not released while the service restarts, so no data is lost and the service can continue using the data in shared memory after it comes back up.
The replacement, however, brings some new problems. As shown in Fig. 3, when a push request arrives in step 1, the PushInterface service generates a task packet containing the task information and the list of users to deliver to. In step 3, PushInterface calls PushStrategy's interface to push the task into the shared-memory queue, which is created inside the PushStrategy service. In step 4, PushStrategy pops its local tasks and processes them. Throughout this flow PushInterface keeps calling PushStrategy's interface to store tasks, which introduces extra traffic; moreover, PushInterface must return a response packet to the client during the overall handling, so a network timeout while PushInterface is calling PushStrategy would have undesirable effects. Therefore PushStrategy provides a batch write interface to the shared-memory queue, and PushInterface writes the tasks of one request into the queue in batches, grouped by appid. This minimizes network calls and reduces some of the potential timeout risk.
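The batch-write idea above, grouping one request's tasks by appid so that each group needs a single queue call instead of one call per task, might look like this sketch. The names are invented for illustration; in the patent, the queue is a shared-memory structure inside the PushStrategy service, not a Python object.

```python
from collections import defaultdict


def batch_by_appid(tasks: list) -> dict:
    """Group one request's tasks by appid."""
    groups = defaultdict(list)
    for task in tasks:
        groups[task["appid"]].append(task)
    return dict(groups)


def batch_write(queue_write, tasks: list) -> int:
    """Issue one queue_write call per appid group instead of one call
    per task; returns the number of calls made."""
    groups = batch_by_appid(tasks)
    for appid, group in groups.items():
        queue_write(appid, group)
    return len(groups)
```

For a request carrying tasks for two appids, only two network calls to the queue's write interface are needed, however many tasks the request contains.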
Optimization point 3. The goal of this optimization is to make the paginated user query as fast as possible. The original scheme used limit-based queries, which are adequate for small and medium data volumes with an index in place. The current user page size is 1000, and as the data grows the page count grows with it, so that queries like limit 100000, 1000 eventually appear. Such a query scans the 101000 rows matching the condition, discards the first 100000, and returns the last 1000: to obtain 1000 rows of data it must scan 101000 rows. In a high-concurrency application like the Push system, a query that must scan more than 100,000 rows each time naturally degrades performance badly. A new query mode is therefore proposed here: range queries over the indexed ID. Its advantage is that, given an ID range, the index locates the required data immediately. Still, this query mode has a problem in the existing scenario: the user list in the database is dynamically added to and deleted from, so after deleting many users and adding many more, the row IDs become non-contiguous. This hurts pagination badly, because a query may well hit an empty page, wasting resources unnecessarily. The situation analysed above can be summarised as follows:

a) The limit query mode is fast enough when the user count is small;

b) The ID-range query mode is fast and meets the demand at any data volume;

c) Because the IDs in the data table are not contiguous, empty queries are possible.

Based on this, a suitable optimization is proposed here. Two thresholds M and N are set: M is a user-count threshold, and N is a threshold on the ratio of ID range to actual user count. The specific rules are:

1) First check the total user count: if it is below M, always use the limit query mode, with all limit queries using a page size of 1000;

2) If the total user count exceeds the threshold M, use the ID-range query mode;

3) Given that the total user count exceeds M, if the ratio of ID range to total user count is below the threshold N, i.e. there are few gaps in the IDs, keep the page size at 1000; when the ratio exceeds N, enlarge the page size, the purpose being precisely to reduce empty queries;

4) In the early-morning off-peak hours, compact the user table so that the user IDs are as contiguous as possible.

This strategy exploits the advantages of each query mode effectively, and the remedial measures designed alongside it compensate for the defects of each query mode to the greatest extent. It satisfies the needs of the existing push scenarios well.
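Rules 1)-3) can be condensed into a small selection function. The concrete values of M, N, and the base page size below are placeholders, since the patent leaves them open.

```python
def choose_query(total_users: int, id_span: int,
                 m: int = 50_000, n: float = 1.5,
                 base_page: int = 1000):
    """Select a pagination strategy per rules 1)-3) above.

    m (user-count threshold M) and n (ID-span / user-count ratio
    threshold N) are placeholder values. Returns (mode, page_size).
    """
    if total_users <= m:
        return ("limit", base_page)        # rule 1: small table
    ratio = id_span / total_users
    if ratio <= n:
        return ("id_range", base_page)     # rule 3: IDs dense enough
    # rule 3, sparse IDs: widen the page so each range scan still
    # covers roughly base_page real rows, reducing empty queries
    return ("id_range", int(base_page * ratio))
```

The enlarged page for sparse IDs trades a slightly bigger range scan for far fewer empty pages, which is exactly the purpose stated in rule 3).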
Optimization point 4. A set of batch-processing interfaces is added to the Push delivery system: user IDs are fetched in batches; the client Guids for those user IDs are fetched in batches from the DCache cache; the batch of Guids is then passed to the delivery service, which delivers to the users in batches.
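A rough sketch of this batch pipeline, with the Uid-to-Guid lookup and the delivery call replaced by plain Python stand-ins (the real system uses DCache and a delivery service; the chunk size is an assumption):

```python
def push_batch(uids, uid_to_guid, deliver_batch, chunk=500):
    """Resolve Uids to Guids chunk by chunk and hand whole chunks to
    the delivery service, instead of one delivery call per user."""
    delivered = 0
    for i in range(0, len(uids), chunk):
        guids = [uid_to_guid[u] for u in uids[i:i + chunk]
                 if u in uid_to_guid]
        if guids:  # skip chunks where no Uid resolved to a client
            deliver_batch(guids)
            delivered += len(guids)
    return delivered
```

For one million users and a chunk size of 500, this reduces one million per-user delivery calls to two thousand batch calls.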
In summary, all four optimization points target the Push system's particular usage scenarios, and in the actual production environment they do raise message delivery speed, fully meeting the existing demand.
The above is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. A Push active-push speed-optimization scheme for improving the response speed of a push platform, the scheme comprising: providing a batch push interface; caching users' online information locally and pushing according to the local presence; replacing the task reads and writes of the MySQL database with a shared-memory queue; and paginating with range queries over the data table's primary key while minimizing the occurrence of empty queries.

2. The scheme according to claim 1, wherein user information is obtained in batches from the cache, batch tasks are generated, and finally messages are delivered in batches through the own channel or a third-party channel, wherein the own channel is the long connection that the push platform maintains with the client, and a third-party channel is another push platform, such as Apple's or Huawei's.

3. Whenever the push platform receives a heartbeat packet or a long-connection packet from a client, the presence of that client account is added or updated and set to online, and the presence of all users is placed in a shared-memory mapping table.

4. A historical-user active list is maintained, and the contents of the table are periodically persisted to a local disk file, so that recovery is possible after a power failure or crash.

5. Generated tasks are abstracted and then stored into a shared-memory queue; the queue interface is invoked as a service to achieve load balancing, assigning tasks to the queues of different machines for processing.

6. Data-table primary-key range lookup replaces the limit lookup mode; the limit query mode is still used when the user count is below the user threshold, avoiding empty queries when the ID range is much larger than the actual number of IDs; when the user count exceeds the user threshold and the ratio of the table's ID range to the actual number of IDs exceeds the ratio threshold, the page size is increased, to avoid empty pages as far as possible; and the grouping table is updated in the early-morning hours to keep the IDs of the user records within a group contiguous.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595651.3A CN109286642A (en) | 2017-07-20 | 2017-07-20 | Method for optimizing Push active-push speed
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595651.3A CN109286642A (en) | 2017-07-20 | 2017-07-20 | Method for optimizing Push active-push speed
Publications (1)
Publication Number | Publication Date |
---|---|
CN109286642A true CN109286642A (en) | 2019-01-29 |
Family
ID=65184829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710595651.3A Pending CN109286642A (en) | 2017-07-20 | 2017-07-20 | Method for optimizing Push active-push speed
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109286642A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112367359A (en) * | 2020-10-21 | 2021-02-12 | 杭州电魂网络科技股份有限公司 | Game data pushing method and system |
CN112492020A (en) * | 2020-11-24 | 2021-03-12 | 杭州萤石软件有限公司 | Message pushing method, system, device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030217171A1 (en) * | 2002-05-17 | 2003-11-20 | Von Stuermer Wolfgang R. | Self-replicating and self-installing software apparatus |
CN101448004A (en) * | 2008-12-23 | 2009-06-03 | 中国移动通信集团北京有限公司 | Method, server and system based on instant messaging for releasing user state |
US7831693B2 (en) * | 2003-08-18 | 2010-11-09 | Oracle America, Inc. | Structured methodology and design patterns for web services |
US20130073608A1 (en) * | 2010-06-07 | 2013-03-21 | Guangzhoud Sunrise Electronics Development Co., Ltd | User information pushing method, user information presentation method, system, server and client |
CN103530378A (en) * | 2013-10-15 | 2014-01-22 | 福建榕基软件股份有限公司 | Data paging query method and device and data base construction method and device |
CN106131138A (en) * | 2016-06-27 | 2016-11-16 | 浪潮软件股份有限公司 | A kind of display data real time propelling movement system and method based on non-obstruction queue |
CN106161657A (en) * | 2016-09-18 | 2016-11-23 | 深圳震有科技股份有限公司 | A kind of note batch intelligent receive-transmit realization method and system based on smart mobile phone |
- 2017-07-20: application CN201710595651.3A filed in China (CN); published as CN109286642A; status Pending
Non-Patent Citations (1)
Title |
---|
Wu Lin (吴霖): "Application of Redis in a Subscription Push System" (Redis在订阅推送系统中的应用), Computer Knowledge and Technology (电脑知识与技术)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8965954B2 (en) | Always ready client/server data synchronization | |
US9934321B2 (en) | System and method of accelerating response time to inquiries regarding inventory information in a network | |
CN104967861B (en) | Video caching system and method in CDN network | |
CN103858122B (en) | The method and system of the high consistency of the distributed reproducting content in holding client/server system | |
AU2009308480B2 (en) | Search based specification for data synchronization | |
CN101150421A (en) | A distributed content distribution method, edge server and content distribution network | |
EP2227016A1 (en) | A content buffering, querying method and point-to-point media transmitting system | |
JPH1198446A (en) | Video server system, method for dynamically arranging contents, and data transmitter | |
CN106790324A (en) | Content distribution method, virtual server management method, cloud platform and system | |
CN101136911A (en) | Method to download files using P2P technique and P2P download system | |
JPH1196102A (en) | Server decentralized managing method | |
CN102833352A (en) | Distributed cache management system and method for implementing distributed cache management | |
EP3399445B1 (en) | Always-ready client/server data synchronization | |
CN101090371A (en) | Method and system for user information management in at-once communication system | |
CN105159845A (en) | Memory reading method | |
CN102843420A (en) | Fuzzy division based social network data distribution system | |
CN111209364A (en) | Mass data access processing method and system based on crowdsourcing map updating | |
CN102546674A (en) | Directory tree caching system and method based on network storage device | |
CN101465885B (en) | SNS browsing method and equipment for providing SNS browsing | |
CN109635189A (en) | A kind of information search method, device, terminal device and storage medium | |
CN109286642A (en) | A kind of method of Push active push speed-optimization | |
US6622167B1 (en) | Document shadowing intranet server, memory medium and method | |
US10705978B2 (en) | Asynchronous tracking for high-frequency and high-volume storage | |
CN103825922B (en) | A kind of data-updating method and web server | |
EP2695362B1 (en) | Multi-user cache system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2019-01-29 |