CN111930710A - Method for distributing big data content - Google Patents

Method for distributing big data content

Info

Publication number
CN111930710A
CN111930710A
Authority
CN
China
Prior art keywords
request
target file
edge nodes
terminal
sends
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010845508.7A
Other languages
Chinese (zh)
Inventor
麦雪楹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qiaorui Shenzhen Technology Co ltd
Original Assignee
Qiaorui Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiaorui Shenzhen Technology Co ltd
Priority to CN202010845508.7A
Publication of CN111930710A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/182 Distributed file systems
    • G06F16/1824 Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/183 Provision of network file services by network file servers, e.g. by using NFS, CIFS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/13 File access structures, e.g. distributed indices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a method for distributing big data content, which comprises the following steps: a cloud center server obtains the access frequency of a target file, where the access frequency is the number of I/O access requests generated per unit time; if the access frequency exceeds a preset threshold, the target file is copied and the copies are distributed to the edge nodes nearest the terminals issuing the I/O access requests, so that these edge nodes store the target file; the cloud center server forwards the I/O access request messages to the edge nodes so that the edge nodes can send the target file to the requesting terminals; and after receiving a second I/O access request, the edge nodes send the stored target file directly to the requesting terminal instead of forwarding the request to the cloud center server.

Description

Method for distributing big data content
Technical Field
The application relates to the technical field of information, in particular to a method for distributing big data content.
Background
With the development and popularization of big data, the demands placed on big data systems keep growing. A big data cloud server typically provides functions such as data acquisition, storage, mining, and analysis, through which it can process big data effectively.
However, the distribution capability of a cloud server is limited by its storage space and network environment. In particular, when responding to a hot event, poorly allocated storage resources and an unfavorable network environment can severely degrade the distribution performance of big data content.
Disclosure of Invention
The embodiment of the application provides a method for distributing big data content, which addresses the inefficient distribution of big data content in the prior art.
The embodiment of the invention provides a big data content distribution method, which comprises the following steps:
a cloud center server obtains the access frequency of a target file, where the access frequency is the number of I/O access requests generated per unit time;
if the access frequency exceeds a preset threshold, the target file is copied and the copies are distributed to the edge nodes nearest the terminals issuing the I/O access requests, so that these edge nodes store the target file;
the cloud center server forwards the I/O access request messages to the edge nodes so that the edge nodes can send the target file to the requesting terminals according to the I/O access requests;
and after receiving a second I/O access request, the edge nodes send the stored target file directly to the requesting terminal instead of forwarding the request to the cloud center server.
Optionally, the target file has a priority parameter, and the method further includes:
the edge nodes adjust the storage level of their own data according to the priority of the target file;
and after an edge node receives I/O requests for several different files, it orders the corresponding I/O responses according to the storage level of its data, responding to the target file first.
Optionally, the method further comprises:
when an edge node receives multiple I/O requests for the target file, it sends the target file to the terminal that issued the first I/O request;
the edge node forwards the second I/O request to the terminal that issued the first I/O request, where the second I/O request includes the MAC address of the terminal that issued it;
based on the second I/O request, the terminal that issued the first I/O request relays the target file to the terminal that issued the second I/O request;
the edge node sends the target file to the terminal that issued the third I/O request;
the edge node forwards the fourth I/O request to the terminal that issued the third I/O request, where the fourth I/O request includes the MAC address of the terminal that issued it;
and based on the fourth I/O request, the terminal that issued the third I/O request relays the target file to the terminal that issued the fourth I/O request.
Optionally, the method further comprises:
if the edge node is in a busy state, it distributes the pending I/O requests for the target file to one or more of its nearest edge nodes, so that those edge nodes respond to the requests.
Optionally, the method further comprises:
the cloud center server splits the target file into a plurality of data messages;
the cloud center server copies the plurality of data messages;
and the cloud center server monitors the operating state of the edge nodes and sends the copied data messages to edge nodes in good operating condition, so that those edge nodes can respond to I/O requests for the target file.
In the method provided by the embodiment of the invention, the cloud center server copies and distributes target files with high access frequency, and the edge nodes take over sending the target file to each terminal, which improves network efficiency and saves cloud storage resources.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
FIG. 1 is an architecture diagram of a big data content distribution system, in one embodiment;
FIG. 2 is a flow diagram of big data content distribution in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [described condition or event]", or "in response to detecting [described condition or event]".
Fig. 1 is an architecture diagram of a big data content distribution system in an embodiment of the present invention. As shown in Fig. 1, the embodiment adopts a three-level cloud-pipe-terminal structure. The cloud is a cloud server cluster comprising a plurality of extensible cloud servers; one server in the cluster is designated the cloud center server, which monitors the storage and operating state of each cloud server and dynamically releases and extends resources based on those states to keep the service running normally. The cloud center server can be one of the ordinary cloud servers, or a dedicated server with control-strategy functions; it can dynamically obtain a target file and respond to I/O requests for it. The pipe layer is the edge layer, composed of a plurality of edge nodes; the edge nodes sit close to the user side, have a certain amount of computing and data-processing capability, and can answer a user's query and data-acquisition requests within a short time. The terminal layer consists of user-controlled terminals that generate I/O requests, send them to the edge nodes and the cloud, and finally obtain the needed data from the cloud or an edge node.
Fig. 2 is a flowchart of a big data content distribution method according to an embodiment of the present invention. As shown in fig. 2, the method includes:
s101, a cloud center server obtains the access frequency of a target file, wherein the access frequency is the number of I/O access requests generated in unit time;
the target file can be a section of audio and video, a news message, a section of characters and the like. The cloud center server acquires the access frequency of the target file, and aims to determine whether the target file is a hot event. The response frequency of a hot event in a unit time is high, for example, a "hot search" event, the access amount in the unit time can reach millions, and such a high access frequency needs to respond in a short time, which is a challenge for cloud storage and the whole network.
S102, if the access frequency exceeds a preset threshold value, copying the target file, and distributing the copied target file to a plurality of edge nodes closest to the I/O access request terminal so that the edge nodes store the target file;
and if the access frequency exceeds a preset threshold, defining the target file as a hot event and requiring special processing.
In the embodiment of the invention, distributed storage is carried out by copying and distributing the hot event, and the target file is sunk to the edge node by utilizing the characteristic of quick response time of the edge node, so that the response time of the hot event is shortened.
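Step S102 can be sketched as below. The patent does not specify how "closest" is measured, so Euclidean distance over node coordinates, the dictionary shapes, and the default parameter values are all illustrative assumptions.

```python
import heapq

def replicate_to_nearest_edges(edge_nodes, terminal_pos, target_file,
                               k=3, threshold=1000, access_frequency=0):
    """If the access frequency exceeds the threshold, copy the target file
    to the k edge nodes closest to the requesting terminal."""
    if access_frequency <= threshold:
        return []  # not a hot event; nothing is replicated

    def dist(node):
        (x1, y1), (x2, y2) = node["pos"], terminal_pos
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    nearest = heapq.nsmallest(k, edge_nodes, key=dist)
    for node in nearest:
        node["store"][target_file["id"]] = target_file["data"]  # store a copy
    return [n["id"] for n in nearest]
```

In a real deployment the distance metric would likely be network latency or hop count rather than geometry; the structure of the selection step stays the same.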
S103, the cloud center server transmits the I/O access request message to the edge nodes so that the edge nodes can transmit the target file to the I/O access request terminal according to the I/O access request;
because the hot event has the characteristic of many I/O requests in unit time, the I/O request queue contains the acquisition of the target file, for the cloud, the I/O response quantity of the hot event is a bottleneck, meanwhile, the hot event has the pulse type sudden characteristic, the hot event tends to be cool after a period of time, and how to carry out quick response in the pulse peak period is a problem which needs to be solved urgently. In the embodiment of the invention, not only the target file is sunk, but also the I/O request is correspondingly sunk, and the I/O request sending terminal is required to communicate with the nearest edge node based on the principle of proximity so as to respond to the I/O request of the terminal in time.
And S104, after receiving the second I/O access request, the edge nodes directly send the stored target file to the second I/O access request terminal and forbid the target file from being forwarded to the cloud center server.
Furthermore, for a new I/O request (the second I/O access request), the cloud center server does not need to respond; the edge node that has already obtained the target file responds instead.
Thus, after receiving the target file and an I/O access request, the edge node responds to subsequent I/O requests directly and does not forward them to the cloud center server, so the load on the cloud center server does not increase.
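Steps S103 and S104 together can be sketched as an edge node that serves later requests from its local copy and never forwards them back to the cloud center. The class shape below is an assumption for illustration.

```python
class EdgeNode:
    """Edge node that answers requests for a stored target file directly,
    forwarding to the cloud only on a local miss."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}              # file_id -> data sunk from the cloud
        self.forwarded_to_cloud = 0  # counts requests the node could not serve

    def receive_file(self, file_id, data):
        """Step S102/S103: the cloud center sinks a copy of the target file."""
        self.store[file_id] = data

    def handle_request(self, file_id):
        """Step S104: serve the stored copy directly; never re-forward hits."""
        if file_id in self.store:
            return self.store[file_id]
        self.forwarded_to_cloud += 1  # only misses ever reach the cloud
        return None
```

The `forwarded_to_cloud` counter makes the point of S104 observable: once the file is sunk, repeat requests add zero load to the cloud center server.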
Optionally, in this embodiment of the present invention, responses may also be ordered according to the priority of the target file. Specifically:
The edge nodes adjust the storage level of their own data according to the priority of the target file; the storage level can be divided into high, medium, and low data priority.
After an edge node receives I/O requests for several different files, it orders the corresponding I/O responses by the storage level of its data: when multiple I/O requests are queued, the edge node answers them in order of priority, responding to I/O requests for the target file first.
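The priority-ordered response above can be sketched with a heap keyed on the stored data's priority level, FIFO within a level. The three-level mapping and all names are illustrative assumptions.

```python
import heapq
from itertools import count

HIGH, MEDIUM, LOW = 0, 1, 2  # lower value = served first

class PriorityResponseQueue:
    """Orders pending I/O responses by the stored data's priority level;
    requests at the same level are answered in arrival (FIFO) order."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # arrival counter keeps same-priority order stable

    def enqueue(self, file_id, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), file_id))

    def next_response(self):
        """Pop the file whose I/O response should be sent next."""
        return heapq.heappop(self._heap)[2]
```

A hot target file stored at `HIGH` level is therefore answered before queued requests for lower-priority files, matching the optional scheme above.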
Optionally, in the embodiment of the present invention, if the number of I/O requests is large and the edge node's response time grows accordingly, the edge node uses a P2P-style scheme: a terminal that has already obtained the target file relays it to the terminal of the next I/O request, which can cut the edge node's I/O request load at least in half. The specific method is as follows:
when an edge node receives multiple I/O requests for the target file, it sends the target file to the terminal that issued the first I/O request;
the edge node forwards the second I/O request to the terminal that issued the first I/O request, where the second I/O request includes the MAC address of the terminal that issued it;
based on the second I/O request, the terminal that issued the first I/O request relays the target file to the terminal that issued the second I/O request;
the edge node sends the target file to the terminal that issued the third I/O request;
the edge node forwards the fourth I/O request to the terminal that issued the third I/O request, where the fourth I/O request includes the MAC address of the terminal that issued it;
and based on the fourth I/O request, the terminal that issued the third I/O request relays the target file to the terminal that issued the fourth I/O request.
Optionally, in this embodiment of the present invention, if the edge node is in a busy state, it distributes the pending I/O requests for the target file to one or more of its nearest edge nodes, so that those edge nodes respond to the requests.
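The busy-state offload can be sketched as a round-robin redistribution of queued requests over the nearest neighbor nodes. The queue-length busy test and the pre-sorted neighbor list are assumptions for illustration; the patent leaves both unspecified.

```python
def offload_if_busy(node_id, neighbors, pending_requests, busy_threshold=100):
    """If this node's request queue exceeds the busy threshold, spread the
    pending requests round-robin over the nearest neighbor nodes; otherwise
    keep them local. `neighbors` is assumed pre-sorted by proximity."""
    if len(pending_requests) <= busy_threshold or not neighbors:
        return {node_id: list(pending_requests)}  # not busy: serve locally
    assignment = {n: [] for n in neighbors}
    for i, req in enumerate(pending_requests):
        assignment[neighbors[i % len(neighbors)]].append(req)
    return assignment
```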
Optionally, the method further comprises:
the cloud center server splits the target file into a plurality of data messages;
the cloud center server copies the plurality of data messages;
and the cloud center server monitors the operating state of the edge nodes and sends the copied data messages to edge nodes in good operating condition, so that those edge nodes can respond to I/O requests for the target file. In the embodiment of the invention, the target file is split into many short data messages with small data volume, which can be stored in the spare storage space of different edge nodes rather than stored whole. When the target file is fetched, all of its data messages must be obtained and spliced together, so a single missing or slow message delays the response; the data messages are therefore copied and distributed to edge nodes in good operating condition, keeping the response time of each message low. A good operating condition can be judged from parameters such as data storage rate and throughput per unit of data.
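The split, distribute, and splice pipeline above can be sketched as follows. The chunk size, the health predicate, and round-robin placement are illustrative choices not fixed by the patent.

```python
def split_into_messages(data, chunk_size):
    """Split the target file into short, sequence-numbered data messages so
    the receiver can splice them back in order."""
    return [(seq, data[i:i + chunk_size])
            for seq, i in enumerate(range(0, len(data), chunk_size))]

def distribute_to_healthy_nodes(messages, nodes, is_healthy):
    """Place each (copied) message on an edge node that passes the health
    check, e.g. one judged on storage rate and throughput."""
    healthy = [n for n in nodes if is_healthy(n)]
    placement = {n: [] for n in healthy}
    for i, msg in enumerate(messages):
        placement[healthy[i % len(healthy)]].append(msg)
    return placement

def reassemble(placement):
    """Gather every message from the nodes and splice by sequence number."""
    msgs = [m for node_msgs in placement.values() for m in node_msgs]
    return b"".join(data for _, data in sorted(msgs))
```

The round trip `reassemble(distribute_to_healthy_nodes(split_into_messages(...)))` recovers the original file, which is the correctness condition the description relies on.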
In the method provided by the embodiment of the invention, the cloud center server copies and distributes target files with high access frequency, and the edge nodes take over sending the target file to each terminal, which improves network efficiency and saves cloud storage resources.
The above is only a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (5)

1. A method for big data content distribution, comprising:
obtaining, by a cloud center server, the access frequency of a target file, where the access frequency is the number of I/O access requests generated per unit time;
if the access frequency exceeds a preset threshold, copying the target file and distributing the copies to the edge nodes nearest the terminals issuing the I/O access requests, so that these edge nodes store the target file;
forwarding, by the cloud center server, the I/O access request messages to the edge nodes so that the edge nodes send the target file to the requesting terminals according to the I/O access requests;
and after receiving a second I/O access request, sending, by the edge nodes, the stored target file directly to the requesting terminal without forwarding the request to the cloud center server.
2. The method of claim 1, wherein the target file has a priority parameter, the method further comprising:
adjusting, by the edge nodes, the storage level of their own data according to the priority of the target file;
and after an edge node receives I/O requests for several different files, ordering the corresponding I/O responses according to the storage level of its data, responding to the target file first.
3. The method of claim 1, further comprising:
when an edge node receives multiple I/O requests for the target file, sending the target file to the terminal that issued the first I/O request;
forwarding, by the edge node, the second I/O request to the terminal that issued the first I/O request, where the second I/O request includes the MAC address of the terminal that issued it;
relaying, by the terminal that issued the first I/O request and based on the second I/O request, the target file to the terminal that issued the second I/O request;
sending, by the edge node, the target file to the terminal that issued the third I/O request;
forwarding, by the edge node, the fourth I/O request to the terminal that issued the third I/O request, where the fourth I/O request includes the MAC address of the terminal that issued it;
and relaying, by the terminal that issued the third I/O request and based on the fourth I/O request, the target file to the terminal that issued the fourth I/O request.
4. The method of claim 3, further comprising:
if the edge node is in a busy state, distributing, by the edge node, the pending I/O requests for the target file to one or more of its nearest edge nodes, so that those edge nodes respond to the requests.
5. The method of claim 1, further comprising:
splitting, by the cloud center server, the target file into a plurality of data messages;
copying, by the cloud center server, the plurality of data messages;
and monitoring, by the cloud center server, the operating state of the edge nodes and sending the copied data messages to edge nodes in good operating condition, so that those edge nodes can respond to I/O requests for the target file.
CN202010845508.7A 2020-08-20 2020-08-20 Method for distributing big data content Withdrawn CN111930710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010845508.7A CN111930710A (en) 2020-08-20 2020-08-20 Method for distributing big data content


Publications (1)

Publication Number Publication Date
CN111930710A true CN111930710A (en) 2020-11-13

Family

ID=73305976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010845508.7A Withdrawn CN111930710A (en) 2020-08-20 2020-08-20 Method for distributing big data content

Country Status (1)

Country Link
CN (1) CN111930710A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596123A (en) * 2021-07-19 2021-11-02 深圳市元征未来汽车技术有限公司 Software downloading method, communication device and storage medium
CN115114008A (en) * 2021-03-17 2022-09-27 中移(上海)信息通信科技有限公司 Edge program processing method and device and server
CN115174689A (en) * 2022-06-17 2022-10-11 宁波义钛工业物联网有限公司 Access processing method and device for edge node



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20201113)