CN107888633A - File distribution method and device - Google Patents

File distribution method and device

Info

Publication number
CN107888633A
Authority
CN
China
Prior art keywords
file
distributed
threshold value
node server
edge node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610864451.9A
Other languages
Chinese (zh)
Other versions
CN107888633B (en)
Inventor
葛明雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Supreme Being Joins Information Technology Share Co Ltd
Original Assignee
Shanghai Supreme Being Joins Information Technology Share Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Supreme Being Joins Information Technology Share Co Ltd
Priority to CN201610864451.9A
Publication of CN107888633A
Application granted
Publication of CN107888633B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

A file distribution method and device are provided. The method includes: judging whether the data volume of a file to be distributed is greater than a preset threshold; when it is determined that the data volume of the file to be distributed is greater than the threshold, slicing the file to be distributed to obtain a plurality of corresponding slice files; and sending the resulting slice files respectively to corresponding edge node servers for storage. The above scheme can reduce the load on edge node servers during file distribution.

Description

File distribution method and device
Technical field
The present invention relates to the technical field of content delivery networks, and in particular to a file distribution method and device.
Background art
A content delivery network (CDN) places cache servers throughout the network to build a layer of intelligent virtual network on top of the existing Internet infrastructure. Website content is published to the network "edge" closest to users, so that users can obtain the required content nearby. This alleviates network congestion and improves website response speed, and technically addresses the slow response of websites caused by limited network bandwidth, heavy user traffic, and the uneven distribution of network nodes.
In the prior art, the files stored at the origin server are distributed, according to a load-balancing policy, to the edge node servers in the corresponding edge cluster for storage. When an access request sent by a client is received, the load balancer sends the client the information of the edge node that stores the corresponding file, so that the client sends a data access request to the edge node server storing that file in order to obtain the file stored on it.
However, the file distribution methods in existing CDNs place a heavy load on the edge node servers, which affects the performance of the system.
Summary of the invention
The problem solved by the embodiments of the present invention is how to reduce the load on edge node servers during file distribution.
To solve the above problem, an embodiment of the present invention provides a file distribution method, which includes: judging whether the data volume of a file to be distributed is greater than a preset threshold; when it is determined that the data volume of the file to be distributed is greater than the threshold, slicing the file to be distributed to obtain a plurality of corresponding slice files; and sending the resulting slice files respectively to corresponding edge node servers for storage.
Optionally, sending the resulting slice files respectively to the corresponding edge node servers for storage includes: generating file distribution tasks corresponding to the slice files and storing them in a preset task database; when a corresponding file distribution task is fetched from the task database, sending a corresponding file distribution request to a load balancer, so that the load balancer returns the information of a corresponding URL; and receiving the URL information returned by the load balancer and sending the slice file to the edge node server corresponding to the URL for storage.
Optionally, the file distribution request is an HTTP request.
Optionally, the threshold is 50 MB.
An embodiment of the present invention further provides a file distribution device, which includes: a judging unit, adapted to judge whether the data volume of a file to be distributed is greater than a preset threshold; a file cutting unit, adapted to slice the file to be distributed when it is determined that the data volume of the file to be distributed is greater than the threshold, obtaining a plurality of corresponding slice files; and a dispatching unit, adapted to send the resulting slice files respectively to corresponding edge node servers for storage.
Optionally, the dispatching unit is adapted to generate file distribution tasks corresponding to the slice files and store them in a preset task database; when a corresponding file distribution task is fetched from the task database, to send a corresponding file distribution request to a load balancer so that the load balancer returns the information of a corresponding URL; and to receive the URL information returned by the load balancer and send the slice file to the edge node server corresponding to the URL for storage.
Optionally, the file distribution request is an HTTP request.
Optionally, the threshold is 50 MB.
Compared with the prior art, the technical solution of the present invention has the following advantages:
In the above scheme, by cutting a file with a large data volume into a plurality of corresponding files with smaller data volumes, the distributed files are prevented from occupying a large amount of outbound bandwidth on an edge node server during file distribution, thereby reducing the load on the edge node server.
Brief description of the drawings
Fig. 1 is a flowchart of a file distribution method in an embodiment of the present invention;
Fig. 2 is a flowchart of another file distribution method in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a file distribution device in an embodiment of the present invention.
Detailed description of the embodiments
To solve the above problems in the prior art, the technical solution adopted by the embodiments of the present invention cuts a file with a large data volume into a plurality of corresponding files with smaller data volumes, so that during file distribution the distributed files do not occupy a large amount of outbound bandwidth on an edge node server, thereby reducing the load on the edge node server.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 shows the flow of a file distribution method in an embodiment of the present invention. Referring to Fig. 1, the file distribution method in this embodiment may include the following steps:
Step S101: Judge whether the data volume of the file to be distributed is greater than a preset threshold; when the judgment result is yes, step S102 may be performed; otherwise, step S104 may be performed directly.
In a specific implementation, the preset threshold can be configured according to actual needs, taking into account, for example, the bandwidth and load of the edge node servers and the actual requirements of users. In an embodiment of the present invention, the threshold is set to 50 MB.
Step S102: Slice the file to be distributed to obtain a plurality of corresponding slice files.
In a specific implementation, the number of resulting slice files is related to the size of the file to be distributed and the preset threshold: when slices are cut at the threshold size, the number of slices is the file size divided by the threshold, rounded up.
Step S103: Send the resulting slice files respectively to corresponding edge node servers for storage.
In a specific implementation, the origin server sends the resulting slice files, via the load balancer, to the corresponding edge node servers for storage; see Fig. 2 for details.
Step S104: Send the file to be distributed to the corresponding edge node server for storage.
In a specific implementation, when the data volume of the file to be distributed is not greater than the preset threshold, sending it to the corresponding edge node server does not occupy the server's outbound bandwidth for a long time, so the file can be sent to the corresponding edge node server without slicing.
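The overall flow of Fig. 1 can be summarized by the following minimal sketch (Python). The function names slice_file and send_to_edge and the 50 MB default are illustrative assumptions, not part of the patent.

```python
SLICE_THRESHOLD = 50 * 1024 * 1024  # preset threshold, assumed here to be 50 MB


def distribute(file_path, file_size, slice_file, send_to_edge):
    """Top-level flow of Fig. 1 (steps S101-S104).

    slice_file and send_to_edge are placeholders for the origin server's
    slicing and dispatch routines.
    """
    if file_size > SLICE_THRESHOLD:                          # S101: compare data volume with threshold
        for part in slice_file(file_path, SLICE_THRESHOLD):  # S102: slice the file
            send_to_edge(part)                               # S103: distribute each slice file
    else:
        send_to_edge(file_path)                              # S104: distribute the whole file unsliced
```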
In the above scheme, by cutting a file with a large data volume into a plurality of corresponding files with smaller data volumes, the distributed files are prevented from occupying a large amount of outbound bandwidth on an edge node server during file distribution, reducing the load on the edge node server.
The file distribution method in the embodiment of the present invention is described in detail below through a specific application scenario.
Fig. 2 shows the flow of a file distribution method in an embodiment of the present invention. Referring to Fig. 2, the file distribution method in this embodiment is suitable for an origin server distributing files to edge node servers, and may specifically include the following steps:
Step S201: Judge whether the data volume of the file to be distributed is greater than a preset threshold; when the judgment result is yes, step S202 may be performed; otherwise, step S204 may be performed directly.
In a specific implementation, when performing file distribution, the origin server may first obtain the corresponding file to be distributed and judge whether its data volume is greater than the preset threshold.
Step S202: Slice the file to be distributed to obtain a plurality of corresponding slice files.
In a specific implementation, when slicing a file whose data volume is greater than the preset threshold, the origin server cuts the file to be distributed in units of the data volume corresponding to the preset threshold.
For example, when the preset threshold is 50 MB and the data volume of the file to be distributed, a.MP4, is 120 MB, the origin server cuts a.MP4 into slices of 50 MB each and obtains three corresponding slice files a.1.MP4, a.2.MP4, and a.3.MP4, whose data volumes are 50 MB, 50 MB, and 20 MB respectively.
As another example, when the preset threshold is 60 MB and the data volume of a.MP4 is 250 MB, the origin server cuts a.MP4 into slices of 60 MB each and obtains five corresponding slice files a.1.MP4, a.2.MP4, a.3.MP4, a.4.MP4, and a.5.MP4, whose data volumes are 60 MB, 60 MB, 60 MB, 60 MB, and 10 MB respectively.
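A minimal slicing sketch matching these examples is given below (Python). The naming pattern <name>.<index>.<ext> follows the a.1.MP4 example; everything else, including reading each slice fully into memory, is an illustrative assumption.

```python
import os


def slice_file(path, threshold_bytes):
    """Cut a file into consecutive chunks of at most threshold_bytes.

    For a 120 MB a.MP4 and a 50 MB threshold this yields a.1.MP4, a.2.MP4
    and a.3.MP4 of 50 MB, 50 MB and 20 MB, as in the example above.
    """
    base, ext = os.path.splitext(path)
    slice_paths = []
    with open(path, "rb") as src:
        index = 1
        while True:
            chunk = src.read(threshold_bytes)  # read at most one threshold-sized chunk
            if not chunk:
                break
            slice_path = f"{base}.{index}{ext}"
            with open(slice_path, "wb") as dst:
                dst.write(chunk)
            slice_paths.append(slice_path)
            index += 1
    return slice_paths
```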
Step S203: Generate file distribution tasks corresponding to the slice files and store them in a preset task database.
In a specific implementation, after obtaining the corresponding slice files by cutting, the origin server generates a file distribution task for each resulting slice file and sends the tasks to the preset task database for storage.
Using the same example, when the data volume of a.MP4 is 120 MB, the origin server cuts a.MP4, with the preset threshold of 50 MB as the slice size, into three slice files a.1.MP4, a.2.MP4, and a.3.MP4, generates a file distribution task for each of a.1.MP4, a.2.MP4, and a.3.MP4, and stores the tasks in the preset task database. When file distribution is subsequently performed, the origin server obtains the corresponding file distribution tasks from the task database and processes them, distributing a.1.MP4, a.2.MP4, and a.3.MP4 to the corresponding edge node servers for storage.
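As a rough illustration of step S203, the sketch below records one distribution task per slice file in a task database. SQLite is used purely as a stand-in; the patent does not specify the database, the table layout, or the column names.

```python
import sqlite3


def create_task_db(db_path="tasks.db"):
    """Open (or create) the task database with a simple task table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS distribution_tasks (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               slice_path TEXT NOT NULL,
               status TEXT NOT NULL DEFAULT 'pending'
           )"""
    )
    conn.commit()
    return conn


def enqueue_slice_tasks(conn, slice_paths):
    """Generate one file distribution task per slice file (e.g. a.1.MP4, a.2.MP4, a.3.MP4)."""
    conn.executemany(
        "INSERT INTO distribution_tasks (slice_path) VALUES (?)",
        [(p,) for p in slice_paths],
    )
    conn.commit()
```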
Step S204: When a corresponding file distribution task is fetched from the task database, send a corresponding file distribution request to the load balancer, so that the load balancer returns the information of a corresponding URL.
In a specific implementation, when performing file distribution, the origin server obtains the corresponding file distribution tasks from the database and stores them in memory, then takes the unprocessed file distribution tasks from memory in order and processes them.
When processing a file distribution task, the origin server may first send a corresponding file distribution request, namely an HTTP (HyperText Transfer Protocol) request, to the load balancer. On receiving the HTTP request sent by the origin server, the load balancer returns to the origin server a Uniform Resource Locator (URL) that begins with an IP address. On receiving the URL returned by the load balancer, the origin server determines from it the IP address of the corresponding edge node server and the information of the corresponding storage path, and then sends the corresponding file to the corresponding edge node server for storage.
Step S205: Receive the URL information returned by the load balancer, and send the slice file to the edge node server corresponding to the URL for storage.
In a specific implementation, when receiving the URL returned by the load balancer, the origin server can determine, from the returned URL, the IP address of the corresponding edge node server and the information of the corresponding storage path, and then send the corresponding file to the corresponding storage location on the corresponding edge node server for storage.
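Steps S204 and S205 might look roughly like the following (Python with the requests library). The load balancer endpoint, the exact shape of the returned URL, and the upload call are all assumptions; the patent only states that the returned URL begins with the edge node server's IP address and implies a storage path.

```python
from urllib.parse import urlparse

import requests

LOAD_BALANCER_URL = "http://load-balancer.example/distribute"  # hypothetical endpoint


def dispatch_slice(slice_path):
    # S204: send a file distribution request (an HTTP request) to the load balancer.
    resp = requests.get(LOAD_BALANCER_URL, params={"file": slice_path})
    resp.raise_for_status()
    target_url = resp.text.strip()  # e.g. "http://10.0.0.12/store/a.1.MP4" (assumed format)

    # S205: derive the edge node server's IP address and storage path from the returned URL.
    parsed = urlparse(target_url)
    edge_ip, store_path = parsed.hostname, parsed.path

    # Push the slice file to the indicated storage location on the edge node server.
    with open(slice_path, "rb") as f:
        requests.put(target_url, data=f).raise_for_status()
    return edge_ip, store_path
```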
When a user accesses the corresponding file through a client, the resource access request sent by the client is first sent to the load balancer. On receiving the resource access request sent by the client, the load balancer can determine the information of the edge node server where the corresponding file is located. Specifically, the load balancer may first perform a hash operation on the corresponding URL to compute the information of the corresponding edge node server, and return the computed edge node server information to the client. On receiving the edge node server information sent by the load balancer, the client sends the corresponding resource access request to that edge node server. On receiving the resource access request sent by the client, the edge node server determines, based on the request, the information of the storage path of the corresponding file, obtains the corresponding file from the determined storage path, and returns it to the client, so that the user obtains the corresponding file.
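The URL-hash lookup performed by the load balancer on the access path can be sketched as follows. The patent only says that a hash operation is performed on the URL, so the concrete hash function and the fixed node table below are assumptions.

```python
import hashlib

EDGE_NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical edge node servers


def pick_edge_node(url):
    """Map a requested URL to an edge node server by hashing the URL."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return EDGE_NODES[int(digest, 16) % len(EDGE_NODES)]
```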
The file distribution method in the embodiment of the present invention has been described in detail above; the corresponding device is described below.
Fig. 3 shows the structure of a file distribution device in an embodiment of the present invention. Referring to Fig. 3, the file distribution device 300 in this embodiment is suitable for an origin server distributing files to edge node servers, and may specifically include a judging unit 301, a file cutting unit 302, and a dispatching unit 303, where:
The judging unit 301 is adapted to judge whether the data volume of the file to be distributed is greater than a preset threshold. The preset threshold can be configured according to actual needs, for example determined by factors such as the bandwidth and load of the edge node servers. In an embodiment of the present invention, the preset threshold is set to 50 MB.
The file cutting unit 302 is adapted to slice the file to be distributed when it is determined that the data volume of the file to be distributed is greater than the threshold, obtaining a plurality of corresponding slice files. The number of resulting slice files is associated with the preset threshold and the data volume of the file to be distributed.
The dispatching unit 303 is adapted to send the resulting slice files respectively to corresponding edge node servers for storage.
In a specific implementation, the dispatching unit 303 is adapted to generate file distribution tasks corresponding to the slice files and store them in a preset task database; when a corresponding file distribution task is fetched from the task database, to send a corresponding file distribution request to the load balancer so that the load balancer returns the information of a corresponding URL; and to receive the URL information returned by the load balancer and send the slice file to the edge node server corresponding to the URL for storage.
In a specific implementation, the file distribution request can be an HTTP request.
With the above scheme of the embodiments of the present invention, by cutting a file with a large data volume into a plurality of corresponding files with smaller data volumes, the distributed files are prevented from occupying a large amount of outbound bandwidth on an edge node server during file distribution, reducing the load on the edge node server.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be performed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium, which may include a ROM, a RAM, a magnetic disk, an optical disc, and the like.
The method and system of the embodiments of the present invention have been described in detail above, but the present invention is not limited thereto. Any person skilled in the art can make various changes or modifications without departing from the spirit and scope of the present invention; therefore, the scope of protection of the present invention shall be defined by the scope of the claims.

Claims (8)

  1. A file distribution method, characterized by comprising:
    judging whether the data volume of a file to be distributed is greater than a preset threshold;
    when it is determined that the data volume of the file to be distributed is greater than the threshold, slicing the file to be distributed to obtain a plurality of corresponding slice files;
    sending the resulting slice files respectively to corresponding edge node servers for storage.
  2. The file distribution method according to claim 1, characterized in that sending the resulting slice files respectively to the corresponding edge node servers for storage comprises:
    generating file distribution tasks corresponding to the slice files and storing them in a preset task database;
    when a corresponding file distribution task is fetched from the task database, sending a corresponding file distribution request to a load balancer, so that the load balancer returns the information of a corresponding URL;
    receiving the URL information returned by the load balancer, and sending the slice file to the edge node server corresponding to the URL for storage.
  3. The file distribution method according to claim 2, characterized in that the file distribution request is an HTTP request.
  4. The file distribution method according to claim 1, characterized in that the threshold is 50 MB.
  5. A file distribution device, characterized by comprising:
    a judging unit, adapted to judge whether the data volume of a file to be distributed is greater than a preset threshold;
    a file cutting unit, adapted to slice the file to be distributed when it is determined that the data volume of the file to be distributed is greater than the threshold, obtaining a plurality of corresponding slice files;
    a dispatching unit, adapted to send the resulting slice files respectively to corresponding edge node servers for storage.
  6. The file distribution device according to claim 5, characterized in that the dispatching unit is adapted to generate file distribution tasks corresponding to the slice files and store them in a preset task database; when a corresponding file distribution task is fetched from the task database, to send a corresponding file distribution request to a load balancer so that the load balancer returns the information of a corresponding URL; and to receive the URL information returned by the load balancer and send the slice file to the edge node server corresponding to the URL for storage.
  7. The file distribution device according to claim 6, characterized in that the file distribution request is an HTTP request.
  8. The file distribution device according to claim 5, characterized in that the threshold is 50 MB.
CN201610864451.9A 2016-09-29 2016-09-29 File distribution method and device Active CN107888633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610864451.9A CN107888633B (en) 2016-09-29 2016-09-29 File distribution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610864451.9A CN107888633B (en) 2016-09-29 2016-09-29 File distribution method and device

Publications (2)

Publication Number Publication Date
CN107888633A true CN107888633A (en) 2018-04-06
CN107888633B CN107888633B (en) 2020-10-20

Family

ID=61769901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610864451.9A Active CN107888633B (en) 2016-09-29 2016-09-29 File distribution method and device

Country Status (1)

Country Link
CN (1) CN107888633B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282281A (en) * 2007-04-03 2008-10-08 华为技术有限公司 Medium distributing system and apparatus as well as flow medium play method
CN102170475A (en) * 2011-04-22 2011-08-31 中兴通讯股份有限公司 File distribution system and fragmentation method based on P2P (peer-to-peer)
US20120110113A1 (en) * 2009-07-02 2012-05-03 Aim To G Co., Ltd. Cooperative Caching Method and Contents Providing Method Using Request Apportioning Device
CN105610823A (en) * 2015-12-28 2016-05-25 武汉鸿瑞达信息技术有限公司 Stream media processing method and processing system architecture based on task vectors
CN105763628A (en) * 2016-04-12 2016-07-13 上海帝联信息科技股份有限公司 Data access request processing method and device, edge node server and edge cluster

Also Published As

Publication number Publication date
CN107888633B (en) 2020-10-20

Similar Documents

Publication Publication Date Title
US10778801B2 (en) Content delivery network architecture with edge proxy
CN104202362B (en) SiteServer LBS and its content distribution method and device, load equalizer
US9450896B2 (en) Methods and systems for providing customized domain messages
CN105763628B (en) Data access request processing method and processing device, edge node server and cluster
US10742552B2 (en) Representational state transfer operations using information centric networking
EP3211857B1 (en) Http scheduling system and method of content delivery network
CN102882939B (en) Load balancing method, load balancing equipment and extensive domain acceleration access system
CN102263828B (en) Load balanced sharing method and equipment
CN109327550B (en) Access request distribution method and device, storage medium and computer equipment
CN107317879B (en) A kind of distribution method and system of user's request
CN103391312B (en) Resource offline method for down loading and device
CN103248645B (en) BT off-line datas download system and method
CN108173937A (en) Access control method and device
US20090248697A1 (en) Cache optimization
CN104427005A (en) Method and system for realizing accurate request scheduling on content delivery network
KR20130070500A (en) Method and apparatus for processing server load balancing with the result of hash function
CN107347015B (en) Method, device and system for identifying content distribution network
CN110430274A (en) A kind of document down loading method and system based on cloud storage
CN107332908A (en) A kind of data transmission method and its system
CN104935653A (en) Bypass cache method for visiting hot spot resource and device
CN103401799A (en) Method and device for realizing load balance
CN110943876B (en) URL state detection method, device, equipment and system
CN109873855A (en) A kind of resource acquiring method and system based on block chain network
US20140068052A1 (en) Advanced notification of workload
CN105025042B (en) A kind of method and system of determining data information, proxy server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant