CN106685715B - Method for pause-free segmented loading of data in a client-side unlimited information stream - Google Patents

Method for pause-free segmented loading of data in a client-side unlimited information stream

Info

Publication number
CN106685715B
Authority
CN
China
Prior art keywords
data group
data
group
client
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611234698.9A
Other languages
Chinese (zh)
Other versions
CN106685715A (en)
Inventor
罗世龙
陈国新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Kelan Technology Co Ltd
Original Assignee
Chongqing Kelan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Kelan Technology Co Ltd filed Critical Chongqing Kelan Technology Co Ltd
Priority to CN201611234698.9A
Publication of CN106685715A
Application granted
Publication of CN106685715B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 - Configuration management of networks or network elements
    • H04L41/0803 - Configuration setting
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention provides a method for pause-free segmented loading of data in a client-side unlimited information stream, comprising: S1, the user issues a main request; S2, the client and the server establish a network connection; S3, the client downloads from the server the first data group relevant to the main request, together with the total number of data groups; S4, the client retrieves the data group that currently needs to be displayed and begins displaying it; S5, the total number of data groups is used to determine whether a next data group exists; if so, proceed to step S6; if not, download no further data groups; S6, the client downloads the data group corresponding to the secondary request and caches it in the client cache unit; S7, the user sends a secondary request; S8, determine whether there is a next data group to read; if so, return to step S4; if not, proceed to step S9; S9, end. Beneficial effects of the present invention: when the user reads the unlimited information stream, the extended waiting time that would otherwise be caused by having to load the data of the next reading page after finishing the current one is avoided.

Description

Method for pause-free segmented loading of data in a client-side unlimited information stream
Technical field
The present invention relates to client-side information-stream loading methods, and in particular to a method for pause-free segmented loading of data in a client-side unlimited information stream.
Background technique
When a client accesses a server, after the client issues a content request to the server, the number of information items matching that request may be very large. The server therefore first sends to the client only the group of information most relevant to the content request; the client downloads this information group, computes the layout on the client side, and displays the first reading page, thereby satisfying the access demand. However, this approach has the following drawback:
After the user finishes the first reading page, a prompt such as "loading" is shown at the bottom of that page, indicating that the second group of data corresponding to the previous content request is being downloaded. This works as follows: when the user scrolls the first reading page to its bottom, the client sends to the server a command to load the second group of data corresponding to the original content request; the server locates that data by computation and returns it to the client; the client downloads it to a storage unit and, after computation, displays the second reading page. This process wastes a great deal of time, so that reading is interrupted after the first page, which is an annoyance to the user.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a method for pause-free segmented loading of data in a client-side unlimited information stream, which avoids the extended waiting time that would otherwise be caused when the user, while reading the unlimited information stream, has to load the data of the next reading page after finishing the current one.
To achieve the above object, the present invention adopts the following technical solution:
A method for pause-free segmented loading of data in a client-side unlimited information stream, comprising:
S1. The user issues a main request;
S2. The client establishes a network connection with the corresponding server;
S3. The client downloads from the corresponding server the first data group relevant to the main request, together with the total number of data groups, and caches them in the client cache unit;
S4. The client retrieves from the client cache unit the data group that currently needs to be displayed and begins displaying it;
S5. Using the total number of data groups, determine whether a data group follows the one being displayed; if so, proceed to step S6; if not, download no further data groups;
S6. The client downloads from the corresponding server the data group that follows the one being displayed, and caches it in the client cache unit;
S7. The user sends a request to read the next data group;
S8. Determine whether there is a next data group to read; if so, return to step S4; if not, proceed to step S9;
S9. End.
Preferably, step S2 is specifically:
S21. Determine whether the server cache unit holds a data group relevant to the main request; if so, proceed to step S25; if not, proceed to step S22;
S22. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S23. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S24. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S25. The server cache unit sends the first arranged data group and the total number of data groups to the server transmission unit;
S26. The client transceiver unit downloads the first data group and the total number of data groups from the server transmission unit.
Preferably, step S6 is specifically:
S61. The client sends to the corresponding server a secondary request, together with the main request to which this secondary request corresponds; the secondary request asks for the data group that follows the one being displayed; the server receives the secondary request and the main request;
S62. The server determines whether the server cache unit holds a cached data group corresponding to the secondary request; if so, proceed to step S66; if not, proceed to step S63;
S63. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S64. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S65. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S66. The server cache unit sends the data group corresponding to the secondary request to the server transmission unit;
S67. The server transmission unit sends the data group corresponding to the secondary request; the client transceiver unit receives this data group and caches it in the client cache unit.
Compared with the prior art, the present invention has the following beneficial effects:
1) Because step S6 ("the client downloads from the corresponding server the data group that follows the one being displayed, and caches it in the client cache unit") starts while the current data group is still being displayed, the user is spared the waiting time that would otherwise be caused by having to load the data of the next reading page after finishing the current one. Pause-free segmented loading of the unlimited information stream is thus achieved, which improves the user's comfort during use;
2) The total number of data groups is downloaded together with the first data group, and before the next data group is downloaded it is first determined whether the following group of data has already been downloaded, which avoids falling into an endless loop.
Detailed description of the invention
Fig. 1 is a flow chart of the method for pause-free segmented loading of data in a client-side unlimited information stream;
Fig. 2 is a detailed flow chart of step S2 in Fig. 1;
Fig. 3 is a detailed flow chart of step S6 in Fig. 1.
Specific embodiment
As shown in Fig. 1, the present invention proposes a method for pause-free segmented loading of data in a client-side unlimited information stream, comprising the following steps (a minimal client-side sketch follows the list):
S1. The user issues a main request;
S2. The client establishes a network connection with the corresponding server;
S3. The client downloads from the corresponding server the first data group relevant to the main request, together with the total number of data groups, and caches them in the client cache unit;
S4. The client retrieves from the client cache unit the data group that currently needs to be displayed and begins displaying it;
S5. Using the total number of data groups, determine whether a data group follows the one being displayed; if so, proceed to step S6; if not, download no further data groups;
S6. The client downloads from the corresponding server the data group that follows the one being displayed, and caches it in the client cache unit;
S7. The user sends a request to read the next data group;
S8. Determine whether there is a next data group to read; if so, return to step S4; if not, proceed to step S9;
S9. End.
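The client-side flow of steps S1 to S9 can be pictured with the following minimal TypeScript sketch. It is illustrative only: the /stream endpoint, the DataGroup shape, the fetchJson helper and the render / onUserRequestsNext hooks are assumptions made for this example and are not part of the patent.

```typescript
// Illustrative sketch of steps S1-S9; the endpoint, types and UI hooks are assumed.
type DataGroup = { index: number; items: string[] };
type FirstResponse = { group: DataGroup; totalGroups: number };

const cache = new Map<number, DataGroup>();                  // client cache unit

async function fetchJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return (await res.json()) as T;
}

// S1-S3: issue the main request, download the first data group and the group total.
export async function start(mainRequest: string): Promise<void> {
  const first = await fetchJson<FirstResponse>(
    `/stream?main=${encodeURIComponent(mainRequest)}&group=0`);
  cache.set(0, first.group);
  showGroup(mainRequest, 0, first.totalGroups);
}

// S4-S8: display the current group, prefetch the following one, advance on demand.
function showGroup(mainRequest: string, index: number, total: number): void {
  const current = cache.get(index);
  if (!current) return;                                      // not downloaded yet
  render(current);                                           // S4: display from the cache
  if (index + 1 < total && !cache.has(index + 1)) {          // S5: does a next group exist?
    // S6: download the next group in the background while the user is still reading.
    void fetchJson<DataGroup>(
      `/stream?main=${encodeURIComponent(mainRequest)}&group=${index + 1}`
    ).then(g => cache.set(index + 1, g));
  }
  onUserRequestsNext(() => {                                 // S7: the user asks to read on
    if (index + 1 < total) showGroup(mainRequest, index + 1, total);  // S8: back to S4
    // otherwise nothing remains to read (S9: end)
  });
}

declare function render(group: DataGroup): void;             // UI hook (assumed)
declare function onUserRequestsNext(cb: () => void): void;   // scroll/tap hook (assumed)
```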
As shown in Fig. 2, in order to download the first data group and the total number of data groups, while also laying the groundwork for faster downloading of the subsequent data groups, step S2 is specifically as follows (a server-side sketch follows the list):
S21. Determine whether the server cache unit holds a data group relevant to the main request; if so, proceed to step S25; if not, proceed to step S22;
S22. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S23. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S24. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S25. The server cache unit sends the first arranged data group and the total number of data groups to the server transmission unit;
S26. The client transceiver unit downloads the first data group and the total number of data groups from the server transmission unit.
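On the server side, steps S21 to S26 amount to: answer from the server cache unit when possible, otherwise query the database, order the data, segment it into groups, and cache the groups together with their total before sending the first group. The sketch below illustrates this under stated assumptions; the Item shape and the queryDatabase accessor are placeholders, not the patent's actual code.

```typescript
// Illustrative sketch of server-side step S2 (S21-S26); the Item shape and the
// queryDatabase accessor are assumptions made for this example.
type Item = { id: string; text: string; relevance: number; time: number };
type CacheEntry = { groups: Item[][]; total: number };

const GROUP_SIZE = 10;                                       // 10 information items per data group
const serverCache = new Map<string, CacheEntry>();           // server cache unit

declare function queryDatabase(mainRequest: string): Item[]; // assumed database accessor

// S22-S24: query the database, order the data, segment it into groups, cache the result.
function buildGroups(mainRequest: string, byTime: boolean): CacheEntry {
  const data = queryDatabase(mainRequest);                   // S22: all data relevant to the main request
  data.sort((a, b) => (byTime ? b.time - a.time : b.relevance - a.relevance)); // S23: by time or relevance
  const groups: Item[][] = [];
  for (let i = 0; i < data.length; i += GROUP_SIZE) {        // S24: segment into groups;
    groups.push(data.slice(i, i + GROUP_SIZE));              // the last group may hold fewer items
  }
  const entry: CacheEntry = { groups, total: groups.length };
  serverCache.set(mainRequest, entry);
  return entry;
}

// S21 + S25/S26: reuse the cached grouping when present, otherwise build it, then hand
// the first group and the group total over for transmission to the client.
export function firstGroupResponse(mainRequest: string, byTime = false) {
  const entry = serverCache.get(mainRequest) ?? buildGroups(mainRequest, byTime);
  return { group: entry.groups[0] ?? [], totalGroups: entry.total }; // no data -> empty group, total 0
}
```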
As shown in Fig. 3, in order to speed up the downloading of the data groups after the first one, step S6 is specifically as follows (a sketch of the secondary-request handling follows the list):
S61. The client sends to the corresponding server a secondary request, together with the main request to which this secondary request corresponds; the secondary request asks for the data group that follows the one being displayed; the server receives the secondary request and the main request;
S62. The server determines whether the server cache unit holds a cached data group corresponding to the secondary request; if so, proceed to step S66; if not, proceed to step S63;
S63. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S64. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S65. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S66. The server cache unit sends the data group corresponding to the secondary request to the server transmission unit;
S67. The server transmission unit sends the data group corresponding to the secondary request; the client transceiver unit receives this data group and caches it in the client cache unit.
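Handled this way, server-side step S6 (S61 to S67) reduces to looking the requested group up in the same cached structure and rebuilding it only on a cache miss. A sketch reusing serverCache and buildGroups from the previous example follows; the numeric group index stands in for the secondary request that identifies the next data group.

```typescript
// Illustrative sketch of server-side step S6 (S61-S67); reuses serverCache and
// buildGroups from the previous sketch.
export function secondaryGroupResponse(mainRequest: string, groupIndex: number, byTime = false): Item[] {
  // S62: is a grouping for this main request already cached? S63-S65: rebuild it if not.
  const entry = serverCache.get(mainRequest) ?? buildGroups(mainRequest, byTime);
  // S66-S67: return the requested group so the transmission unit can send it to the client
  // and the client transceiver unit can cache it.
  return entry.groups[groupIndex] ?? [];
}
```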
In operation, the client first issues the main request to the server, and the server receives it. The server then checks whether the server cache unit already holds a data group relevant to the main request, which improves download efficiency. If it does not, the server finds all data relevant to the main request in the database; because the database bears the burden of operating on and storing the underlying raw data, the data that is found is sent on to the server cache unit so as to reduce the load on the database. The server cache unit arranges all the data according to its relevance to the main request or according to time, segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups. For a document-type search, the data is ranked by relevance to the keyword and the most similar group becomes the first data group; for news or a friends-circle feed, the data is ordered by time and the most recent group becomes the first data group. Each data group may hold 10 information items, and a final group with fewer than 10 items still counts as one data group. The total number of data groups is computed so that it can later be judged whether all data groups have been downloaded; if no data is found at all, an empty first data group is returned and the total number of data groups is 0, in which case the client simply indicates that there is no related data (see the small example below). The server cache unit then sends the first arranged data group and the total number of data groups to the server transmission unit, and the client transceiver unit downloads them from it, completing the download of the first data group and the group total.
Once the data has been downloaded, the client begins displaying the first data group and the user starts reading; at this point the next data group already begins downloading, so that the user can read the next group without interruption. When the next data group is downloaded, the secondary request for reading it is sent to the server together with the main request, so that the corresponding next data group can be located. The server checks whether the server cache unit already caches a data group corresponding to the secondary request; if it does, that data group can be sent to the client directly, which improves download speed and prevents the user from finishing the data group currently being read before the corresponding next data group is available. If the data for the secondary request is not cached, the lookup is recomputed in the same way as when the first data group was found.
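As a small worked example of the group-count rule just described (an assumed group size of 10, a trailing partial group still counting as one group, and an empty result yielding a total of 0):

```typescript
// Worked example of the group-count rule: 10 items per group, a trailing partial group
// still counts as one group, and no data at all yields a total of 0.
function groupTotal(itemCount: number, groupSize = 10): number {
  return Math.ceil(itemCount / groupSize);
}

console.log(groupTotal(37)); // 4 -> groups of 10, 10, 10 and 7 items
console.log(groupTotal(10)); // 1
console.log(groupTotal(0));  // 0 -> the client simply reports that there is no related data
```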
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the invention, and all such modifications and replacements shall fall within the scope of the claims of the invention.

Claims (2)

1. A method for pause-free segmented loading of data in a client-side unlimited information stream, characterized by comprising:
S1. The user issues a main request;
S2. The client establishes a network connection with the corresponding server;
S3. The client downloads from the corresponding server the first data group relevant to the main request, together with the total number of data groups, and caches them in the client cache unit;
S4. The client retrieves from the client cache unit the data group that currently needs to be displayed and begins displaying it;
S5. Using the total number of data groups, determine whether a data group follows the one being displayed; if so, proceed to step S6; if not, download no further data groups;
S6. The client downloads from the corresponding server the data group that follows the one being displayed, and caches it in the client cache unit;
S7. The user sends a request to read the next data group;
S8. Determine whether there is a next data group to read; if so, return to step S4; if not, proceed to step S9;
S9. End;
wherein step S2 is specifically:
S21. Determine whether the server cache unit holds a data group relevant to the main request; if so, proceed to step S25; if not, proceed to step S22;
S22. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S23. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S24. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S25. The server cache unit sends the first arranged data group and the total number of data groups to the server transmission unit;
S26. The client transceiver unit downloads the first data group and the total number of data groups from the server transmission unit.
2. The method for pause-free segmented loading of data in a client-side unlimited information stream according to claim 1, characterized in that step S6 is specifically:
S61. The client sends to the corresponding server a secondary request, together with the main request to which this secondary request corresponds; the secondary request asks for the data group that follows the one being displayed; the server receives the secondary request and the main request;
S62. The server determines whether the server cache unit holds a cached data group corresponding to the secondary request; if so, proceed to step S66; if not, proceed to step S63;
S63. The server finds all data relevant to the main request in the database and sends it to the server cache unit;
S64. The server cache unit arranges all the data according to its relevance to the main request or according to time;
S65. The server cache unit segments the arranged data into data groups, and caches the ordered data groups together with the total number of data groups;
S66. The server cache unit sends the data group corresponding to the secondary request to the server transmission unit;
S67. The server transmission unit sends the data group corresponding to the secondary request; the client transceiver unit receives this data group and caches it in the client cache unit.
CN201611234698.9A 2016-12-28 2016-12-28 Method for pause-free segmented loading of data in a client-side unlimited information stream Active CN106685715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611234698.9A CN106685715B (en) 2016-12-28 2016-12-28 Method for pause-free segmented loading of data in a client-side unlimited information stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611234698.9A CN106685715B (en) 2016-12-28 2016-12-28 Method for pause-free segmented loading of data in a client-side unlimited information stream

Publications (2)

Publication Number Publication Date
CN106685715A CN106685715A (en) 2017-05-17
CN106685715B true CN106685715B (en) 2019-11-08

Family

ID=58873155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611234698.9A Active CN106685715B (en) Method for pause-free segmented loading of data in a client-side unlimited information stream

Country Status (1)

Country Link
CN (1) CN106685715B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109391936B (en) * 2018-09-19 2021-04-06 四川长虹电器股份有限公司 OTA upgrade package encryption downloading method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446222A (en) * 2011-12-22 2012-05-09 华为技术有限公司 Method, device and system of webpage content preloading
CN103618936A (en) * 2013-12-16 2014-03-05 乐视致新电子科技(天津)有限公司 Smart television, as well as method and device for pre-downloading link pages in browser of smart television
CN103617222A (en) * 2013-11-22 2014-03-05 北京奇虎科技有限公司 Browser and method for preloading in webpages
CN104361071A (en) * 2014-11-12 2015-02-18 沈文策 Page preloading method and device
CN104683329A (en) * 2015-02-06 2015-06-03 成都品果科技有限公司 Data caching method and system for mobile equipment client
CN105634981A (en) * 2014-10-30 2016-06-01 阿里巴巴集团控股有限公司 Content caching and transmitting method and system

Also Published As

Publication number Publication date
CN106685715A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN104202360B (en) The method of accessed web page, device and router
CN104756449B (en) From the method for node and Content owner's transmission packet in content center network
US8392407B2 (en) Method, apparatus and system of searching and downloading mobile telephone file
KR101744656B1 (en) Sequenced transmission of digital content items
KR101177224B1 (en) Method and apparatus for pre-fetching data in a mobile network environment using edge data storage
ES2675126T3 (en) Method, device and data acquisition system
CN106982248B (en) caching method and device for content-centric network
KR101964927B1 (en) Method and apparatus for caching proxy
CN105656997B (en) Temperature cache contents active push method based on mobile subscriber's relationship
CN105450579B (en) Internet resources pre-add support method, client and middleware server
US20180302489A1 (en) Architecture for proactively providing bundled content items to client devices
CN102118376A (en) CDN server and content download method
CN104217019A (en) Content inquiry method and device based on multiple stages of cache modules
WO2012028103A1 (en) Method and system for accessing micro blog, and method and system for sending picture on micro blog website
CN104346345B (en) The storage method and device of data
US20140344525A1 (en) Method and apparatus for managing cache memory in communication system
CN109412972A (en) A kind of data reordering method, device and node server
CN110139123A (en) The broadcasting of files in stream media, transmission, treating method and apparatus
CN107872478A (en) A kind of content buffering method, device and system
CN101068173A (en) Resource sharing method and system
CN106685715B (en) The unlimited information flow of client exempts from the method for being segmented load data of pausing
CN107959667B (en) Media fragment pushing method, server and client
CN104410721B (en) The method and system of caching automatically are supported according to update content
CN104486347A (en) Method and device for pushing multimedia
CN106303581A (en) A kind of video file download process method, device and server

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant