CN110874429A - Distributed web crawler performance optimization method oriented to mass data acquisition - Google Patents


Info

Publication number
CN110874429A
CN110874429A
Authority
CN
China
Prior art keywords
url
module
link
crawling
url address
Prior art date
Legal status
Pending
Application number
CN201911110871.8A
Other languages
Chinese (zh)
Inventor
张凯云
吴志成
陈立忠
吴艳林
张郭秋晨
纪纲
王学勇
郭姣
Current Assignee
Beijing Jinghang Computing Communication Research Institute
Original Assignee
Beijing Jinghang Computing Communication Research Institute
Priority date
Filing date
Publication date
Application filed by Beijing Jinghang Computing Communication Research Institute filed Critical Beijing Jinghang Computing Communication Research Institute
Priority to CN201911110871.8A priority Critical patent/CN110874429A/en
Publication of CN110874429A publication Critical patent/CN110874429A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical field of software engineering, and particularly relates to a distributed web crawler performance optimization method oriented to mass data acquisition. The method comprises the following steps: Step 1: the initialization module creates a deduplication string and a spam link feature string; Step 2: the master node crawler reads an initial URL address, and the crawling module crawls it to generate a URL task queue; Step 3: the crawling module crawls web pages according to the URL task queue to complete the crawling work. Compared with the prior art, the method breaks through the crawling performance bottleneck of distributed web crawlers, improving crawling performance by over 50 percent; it improves the deduplication efficiency of the URL task queue, meeting the efficiency requirements of mass data acquisition; it optimizes the storage space of the URL task queue, greatly saving server memory; and it adds a spam link filtering stage, which saves server memory and noticeably improves crawler efficiency.

Description

Distributed web crawler performance optimization method oriented to mass data acquisition
Technical Field
The invention belongs to the technical field of software engineering, and particularly relates to a distributed web crawler performance optimization method oriented to mass data acquisition.
Background
A web crawler, also known as a web spider, web ant or web robot, automatically acquires data from the network according to set rules. Distributed web crawlers can efficiently acquire large-scale data sets, are widely used in search engines and big data analysis, and have become an important tool for mass data acquisition.
A distributed web crawler typically comprises a master node crawler and multiple slave node crawlers, which persist a URL task queue and a deduplication queue in a Redis in-memory database. The master node crawler crawls web pages starting from the initial URLs (uniform resource locators), acquiring data and extracting new URLs, which are deduplicated and then placed into the URL task queue; each slave node crawler takes a URL address from the URL task queue, crawls the corresponding web page to acquire data, likewise deduplicates any newly extracted URLs before placing them into the URL task queue, and this repeats until the crawl task meets its end condition or the URL task queue is empty.
Redis deduplicates URLs by exploiting the uniqueness of elements in its set data type, which is only suitable when the data volume is small. Once the links to be deduplicated reach the order of tens of millions, the server's memory requirements rise sharply and deduplication efficiency drops greatly. In practice, as the crawler keeps running, the accumulating URL task queue and deduplication queue occupy ever more Redis memory, until the server's entire memory is exhausted and the server goes down.
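For reference, a minimal sketch of the conventional Redis-set deduplication described above, using the redis-py client (the key name is illustrative):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def is_duplicate(url: str) -> bool:
    # SADD returns 0 when the member is already in the set.
    # Every distinct URL is stored in full, so the set's memory
    # footprint grows linearly with the number of links seen.
    return r.sadd("crawler:dupefilter", url) == 0
```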
Therefore, when facing mass data acquisition, the existing Redis-based distributed web crawler has three performance shortcomings: (1) the huge deduplication queue is stored in a Redis set, which is not only inefficient for deduplication but also consumes excessive server memory; (2) spam links proliferate endlessly, and the Redis in-memory database cannot effectively distinguish them, severely affecting normal crawling; (3) the data volume of the URL task queue grows rapidly and occupies excessive server memory.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is: how to overcome the low deduplication efficiency, excessive consumption of server memory, and inability to effectively eliminate spam links exhibited by existing distributed web crawlers based on the Redis in-memory database when collecting mass data.
(II) technical scheme
In order to solve the above technical problems, the present invention provides a distributed web crawler performance optimization method for mass data acquisition, implemented on the basis of a distributed web crawler performance optimization system, the distributed web crawler performance optimization system comprising: an initialization module and a crawling module;
the distributed web crawler performance optimization method comprises the following steps:
Step 1: the initialization module creates a deduplication string and a spam link feature string;
Step 2: the master node crawler reads an initial URL address, and the crawling module crawls it to generate a URL task queue;
Step 3: the crawling module crawls web pages according to the URL task queue to complete the crawling work.
Wherein the initialization module comprises: a deduplication string generating unit and a spam link feature string generating unit;
step 1 comprises the following steps:
Step 11: the deduplication string generating unit creates a new deduplication string in the Redis in-memory database;
Step 12: the spam link feature string generating unit creates a new spam link feature string in the Redis in-memory database according to the features of spam links.
Wherein all bit values in the deduplication string are initially 0.
Wherein typical features of spam links include: links automatically generated by comments and external links posted in bulk.
Wherein the crawling module comprises: a page crawling module, a page parsing module and a link processing module;
step 2 comprises the following steps:
Step 21: a user sets a set of initial URL addresses through the user center according to the data acquisition topic;
Step 22: the master node crawler reads an initial URL address and hands it to the page crawling module;
Step 23: the page crawling module sends a request to the internet web page according to the initial URL address;
Step 24: the internet web page responds to the request of the page crawling module and returns the response content;
Step 25: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL addresses;
Step 26: the new URL addresses are processed by the link processing module and stored into the URL task queue of the Redis in-memory database.
Wherein the link processing module comprises: a spam link filtering module, a link deduplication module, a compression module and a serialization module;
step 26 comprises:
Step 261: the spam link filtering module checks the new URL address against the spam link feature string to identify whether it contains spam link features; if it does, the URL address is judged to be a spam link and is filtered out directly, otherwise processing continues to the next step;
Step 262: the link deduplication module first compresses the URL address that passed spam filtering into a fixed number of bits using a cryptographic hash function, then applies k different hash functions to it to obtain k independent hash values; it then checks whether the bits of the deduplication string in the Redis in-memory database at the positions given by these k hash values are all 1; if they are all 1, the URL address is a duplicate link and is filtered out directly, otherwise the URL address is not a duplicate link;
Step 263: the compression module applies an encrypting compression algorithm to the non-duplicate URL address obtained in step 262;
Step 264: the serialization module serializes the compressed URL address together with its page parsing function in a key-value data format;
Step 265: the serialized URL address is stored into the URL task queue of the Redis in-memory database.
In step 262, for a URL address that is not a duplicate link, every bit of the deduplication string at a position given by one of its k hash values that is not already 1 is set to 1.
Wherein the value of k may be 5, 7, 9 or 11.
Wherein the crawling modules of the master and slave nodes of the distributed web crawler further comprise: a judgment module, a priority determination module and a deserialization module;
step 3 comprises the following steps:
Step 31: the judgment module judges whether the URL addresses to be crawled in the URL task queue meet the crawl end condition; if so, crawling stops, otherwise go to step 32;
Step 32: the priority determination module determines the priority of the URLs to be crawled in the URL task queue according to a breadth-first crawling strategy, and the master node crawler and the slave node crawlers then read the URLs to be crawled from the URL task queue in priority order;
Step 33: the deserialization module deserializes each URL read from the queue to recover the URL address;
Step 34: the master node crawler and the slave node crawlers each independently invoke their own page crawling module and send requests to internet web pages according to the URL addresses;
Step 35: the web pages respond to the requests of the page crawling modules and return the response content;
Step 36: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL links;
Step 37: the new URLs are processed by the link processing module and stored into the URL task queue in Redis;
Step 38: repeat from step 31 until the entire crawling work is complete.
Wherein the crawl end condition is: all URL addresses in the URL task queue have been crawled.
(III) Advantageous effects
Compared with the prior art, the invention has the following beneficial effects:
(1) it breaks through the crawling performance bottleneck of distributed web crawlers, improving crawling performance by over 50 percent;
(2) it improves the deduplication efficiency of the URL task queue, meeting the efficiency requirements of mass data acquisition;
(3) it optimizes the storage space of the URL task queue, greatly saving server memory;
(4) it adds a spam link filtering stage, which saves server memory and noticeably improves crawler efficiency.
Drawings
FIG. 1 is a schematic diagram illustrating the generation of an initial URL task queue for a distributed web crawler according to the present invention.
FIG. 2 is a functional block diagram of a distributed web crawler link processing module according to the present invention.
FIG. 3 is a flow chart of the crawling work flow of the master and slave nodes of the distributed web crawler of the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
In order to solve the problems in the prior art, the invention provides a distributed web crawler performance optimization method oriented to mass data acquisition, implemented on the basis of a distributed web crawler performance optimization system comprising: an initialization module and a crawling module;
the distributed web crawler performance optimization method comprises the following steps:
Step 1: the initialization module creates a deduplication string and a spam link feature string;
Step 2: the master node crawler reads an initial URL address, and the crawling module crawls it to generate a URL task queue;
Step 3: the crawling module crawls web pages according to the URL task queue to complete the crawling work.
Wherein the initialization module comprises: a deduplication string generating unit and a spam link feature string generating unit;
step 1 comprises the following steps:
Step 11: the deduplication string generating unit creates a new deduplication string in the Redis in-memory database;
Step 12: the spam link feature string generating unit creates a new spam link feature string in the Redis in-memory database according to the features of spam links.
Wherein all bit values in the deduplication string are initially 0.
Wherein typical features of spam links include: links automatically generated by comments and external links posted in bulk.
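By way of illustration, a minimal sketch of such an initialization module using the redis-py client; the key names, bitmap size and feature list are assumptions for the sketch, not values prescribed by the method:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

M_BITS = 1 << 30  # assumed bitmap length of the deduplication string

def init_dedup_string():
    # Step 11: SETBIT on the highest position allocates a zero-filled
    # bitmap of M_BITS bits, i.e. a deduplication string of all 0s.
    r.setbit("crawler:dedup", M_BITS - 1, 0)

def init_spam_features():
    # Step 12: illustrative spam-link features (comment-generated links,
    # bulk-posted external links), stored as one feature string.
    features = ["replytocom=", "comment-page", "?share=", "&ref=spam"]
    r.set("crawler:spam_features", "|".join(features))
```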
Wherein the crawling module comprises: a page crawling module, a page parsing module and a link processing module;
As shown in fig. 1, step 2 comprises:
Step 21: a user sets a set of initial URL addresses through the user center according to the data acquisition topic;
Step 22: the master node crawler reads an initial URL address and hands it to the page crawling module;
Step 23: the page crawling module sends a request to the internet web page according to the initial URL address;
Step 24: the internet web page responds to the request of the page crawling module and returns the response content;
Step 25: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL addresses;
Step 26: the new URL addresses are processed by the link processing module and stored into the URL task queue of the Redis in-memory database.
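A sketch of steps 21 to 26 under stated assumptions: requests stands in for the page crawling module, while parse_page() and store_items() are hypothetical stand-ins for the page parsing module and the query database; process_link() is the link processing module sketched after step 265 below:

```python
import requests

def parse_page(html: str):
    # Hypothetical topic-based parser: a real one would use e.g. lxml
    # plus topic rules, returning (extracted_items, new_urls).
    return [], []

def store_items(items):
    # Hypothetical persistence of extracted content for querying.
    pass

def crawl_initial(initial_urls):
    for url in initial_urls:                     # step 22: read an initial URL
        resp = requests.get(url, timeout=10)     # steps 23-24: request and response
        items, new_urls = parse_page(resp.text)  # step 25: parse by topic
        store_items(items)                       # step 25: store content
        for new_url in new_urls:                 # step 26: hand new URLs to the
            process_link(new_url)                # link processing module (below)
```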
As shown in fig. 2, the link processing module comprises: a spam link filtering module, a link deduplication module, a compression module and a serialization module;
step 26 comprises:
Step 261: the spam link filtering module checks the new URL address against the spam link feature string to identify whether it contains spam link features; if it does, the URL address is judged to be a spam link and is filtered out directly, otherwise processing continues to the next step;
Step 262: the link deduplication module first compresses the URL address that passed spam filtering into a fixed number of bits using a cryptographic hash function, then applies k different hash functions to it to obtain k independent hash values; it then checks whether the bits of the deduplication string in the Redis in-memory database at the positions given by these k hash values are all 1; if they are all 1, the URL address is a duplicate link and is filtered out directly, otherwise the URL address is not a duplicate link;
Step 263: the compression module applies an encrypting compression algorithm to the non-duplicate URL address obtained in step 262;
Step 264: the serialization module serializes the compressed URL address together with its page parsing function in a key-value data format;
Step 265: the serialized URL address is stored into the URL task queue of the Redis in-memory database.
In step 262, for a URL address that is not a duplicate link, every bit of the deduplication string at a position given by one of its k hash values that is not already 1 is set to 1.
Wherein the value of k may be 5, 7, 9 or 11.
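A minimal sketch of the link processing module under stated assumptions: redis-py, hashlib, zlib and json are used; SHA-256 plays the role of the cryptographic hash, k salted MD5 digests realize the k hash functions (one possible choice), and zlib stands in for the encrypting compression algorithm, which the method does not pin down:

```python
import hashlib
import json
import zlib
import redis

r = redis.Redis(host="localhost", port=6379)
M_BITS = 1 << 30   # bitmap length of the deduplication string (assumption)
K = 5              # number of hash functions; 5, 7, 9 or 11 per the method

def is_spam(url: str) -> bool:
    # Step 261: match the URL against the stored spam-link feature string.
    features = r.get("crawler:spam_features") or b""
    return any(f and f in url.encode() for f in features.split(b"|"))

def bloom_positions(url: str):
    # Step 262: a cryptographic hash first fixes the input length; K salted
    # hashes then yield K independent bit positions in the bitmap.
    digest = hashlib.sha256(url.encode()).hexdigest()
    for salt in range(K):
        h = hashlib.md5(f"{salt}:{digest}".encode()).hexdigest()
        yield int(h, 16) % M_BITS

def is_duplicate(url: str) -> bool:
    positions = list(bloom_positions(url))
    if all(r.getbit("crawler:dedup", p) for p in positions):
        return True                   # all K bits already 1: duplicate link
    for p in positions:               # otherwise mark every 0 bit as 1
        r.setbit("crawler:dedup", p, 1)
    return False

def process_link(url: str, parser: str = "parse_page"):
    if is_spam(url) or is_duplicate(url):
        return                                        # steps 261-262: filter out
    compressed = zlib.compress(url.encode())          # step 263: compress
    task = json.dumps({"url": compressed.hex(),       # step 264: key-value
                       "parser": parser})             # serialization
    r.rpush("crawler:task_queue", task)               # step 265: enqueue
```

This is the classic Bloom filter over a Redis bitmap: it can produce rare false positives (a fresh URL judged duplicate) but never false negatives, which is how it trades a small, bounded error for a fixed memory footprint.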
Wherein the crawling modules of the master and slave nodes of the distributed web crawler further comprise: a judgment module, a priority determination module and a deserialization module;
As shown in fig. 3, step 3 comprises:
Step 31: the judgment module judges whether the URL addresses to be crawled in the URL task queue meet the crawl end condition; if so, crawling stops, otherwise go to step 32;
Step 32: the priority determination module determines the priority of the URLs to be crawled in the URL task queue according to a breadth-first crawling strategy, and the master node crawler and the slave node crawlers then read the URLs to be crawled from the URL task queue in priority order;
Step 33: the deserialization module deserializes each URL read from the queue to recover the URL address;
Step 34: the master node crawler and the slave node crawlers each independently invoke their own page crawling module and send requests to internet web pages according to the URL addresses;
Step 35: the web pages respond to the requests of the page crawling modules and return the response content;
Step 36: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL links;
Step 37: the new URLs are processed by the link processing module and stored into the URL task queue in Redis;
Step 38: repeat from step 31 until the entire crawling work is complete.
Wherein the crawl end condition is: all URL addresses in the URL task queue have been crawled.
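Completing the picture, a sketch of the master/slave crawl loop of steps 31 to 38 under the same assumptions as above; a blocking FIFO pop from the Redis list approximates the breadth-first priority order, and parse_page(), store_items() and process_link() are the stand-ins sketched earlier:

```python
import json
import zlib
import requests
import redis

r = redis.Redis(host="localhost", port=6379)

def crawl_worker():
    while True:
        popped = r.blpop("crawler:task_queue", timeout=30)  # steps 31-32: FIFO pop
        if popped is None:                                  # empty queue: end condition
            break
        task = json.loads(popped[1])                        # step 33: deserialize
        url = zlib.decompress(bytes.fromhex(task["url"])).decode()
        resp = requests.get(url, timeout=10)                # steps 34-35: request/response
        items, new_urls = parse_page(resp.text)             # step 36: parse and extract
        store_items(items)
        for new_url in new_urls:                            # step 37: new links go back
            process_link(new_url)                           # through link processing
```

Each master or slave node can run one or more such workers independently; Redis serializes the queue operations, so no extra coordination between nodes is needed.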
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A distributed web crawler performance optimization method oriented to mass data acquisition, characterized in that the distributed web crawler performance optimization method is implemented on the basis of a distributed web crawler performance optimization system, the distributed web crawler performance optimization system comprising: an initialization module and a crawling module;
the distributed web crawler performance optimization method comprises the following steps:
Step 1: the initialization module creates a deduplication string and a spam link feature string;
Step 2: the master node crawler reads an initial URL address, and the crawling module crawls it to generate a URL task queue;
Step 3: the crawling module crawls web pages according to the URL task queue to complete the crawling work.
2. The distributed web crawler performance optimization method for mass data acquisition according to claim 1, wherein the initialization module comprises: a deduplication string generating unit and a spam link feature string generating unit;
step 1 comprises the following steps:
Step 11: the deduplication string generating unit creates a new deduplication string in the Redis in-memory database;
Step 12: the spam link feature string generating unit creates a new spam link feature string in the Redis in-memory database according to the features of spam links.
3. The distributed web crawler performance optimization method for mass data acquisition according to claim 2, wherein all bit values in the deduplication string are initially 0.
4. The distributed web crawler performance optimization method for mass data acquisition according to claim 3, wherein typical features of spam links include: links automatically generated by comments and external links posted in bulk.
5. The distributed web crawler performance optimization method for mass data acquisition according to claim 4, wherein the crawling module comprises: a page crawling module, a page parsing module and a link processing module;
step 2 comprises the following steps:
Step 21: a user sets a set of initial URL addresses through the user center according to the data acquisition topic;
Step 22: the master node crawler reads an initial URL address and hands it to the page crawling module;
Step 23: the page crawling module sends a request to the internet web page according to the initial URL address;
Step 24: the internet web page responds to the request of the page crawling module and returns the response content;
Step 25: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL addresses;
Step 26: the new URL addresses are processed by the link processing module and stored into the URL task queue of the Redis in-memory database.
6. The distributed web crawler performance optimization method for mass data acquisition according to claim 5, wherein the link processing module comprises: a spam link filtering module, a link deduplication module, a compression module and a serialization module;
step 26 comprises:
Step 261: the spam link filtering module checks the new URL address against the spam link feature string to identify whether it contains spam link features; if it does, the URL address is judged to be a spam link and is filtered out directly, otherwise processing continues to the next step;
Step 262: the link deduplication module first compresses the URL address that passed spam filtering into a fixed number of bits using a cryptographic hash function, then applies k different hash functions to it to obtain k independent hash values; it then checks whether the bits of the deduplication string in the Redis in-memory database at the positions given by these k hash values are all 1; if they are all 1, the URL address is a duplicate link and is filtered out directly, otherwise the URL address is not a duplicate link;
Step 263: the compression module applies an encrypting compression algorithm to the non-duplicate URL address obtained in step 262;
Step 264: the serialization module serializes the compressed URL address together with its page parsing function in a key-value data format;
Step 265: the serialized URL address is stored into the URL task queue of the Redis in-memory database.
7. The distributed web crawler performance optimization method for mass data acquisition according to claim 6, wherein in step 262, for a URL address that is not a duplicate link, every bit of the deduplication string at a position given by one of its k hash values that is not already 1 is set to 1.
8. The distributed web crawler performance optimization method for mass data acquisition according to claim 6, wherein the value of k may be 5, 7, 9 or 11.
9. The distributed web crawler performance optimization method for mass data acquisition according to claim 7, wherein the crawling modules of the master and slave nodes of the distributed web crawler further comprise: a judgment module, a priority determination module and a deserialization module;
step 3 comprises the following steps:
Step 31: the judgment module judges whether the URL addresses to be crawled in the URL task queue meet the crawl end condition; if so, crawling stops, otherwise go to step 32;
Step 32: the priority determination module determines the priority of the URLs to be crawled in the URL task queue according to a breadth-first crawling strategy, and the master node crawler and the slave node crawlers then read the URLs to be crawled from the URL task queue in priority order;
Step 33: the deserialization module deserializes each URL read from the queue to recover the URL address;
Step 34: the master node crawler and the slave node crawlers each independently invoke their own page crawling module and send requests to internet web pages according to the URL addresses;
Step 35: the web pages respond to the requests of the page crawling modules and return the response content;
Step 36: the page parsing module parses the response content according to the relevant topic, extracts the content and stores it in a database for querying, while also obtaining new URL links;
Step 37: the new URLs are processed by the link processing module and stored into the URL task queue in Redis;
Step 38: repeat from step 31 until the entire crawling work is complete.
10. The distributed web crawler performance optimization method for mass data acquisition according to claim 9, wherein the crawl end condition is: all URL addresses in the URL task queue have been crawled.
CN201911110871.8A 2019-11-14 2019-11-14 Distributed web crawler performance optimization method oriented to mass data acquisition Pending CN110874429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911110871.8A CN110874429A (en) 2019-11-14 2019-11-14 Distributed web crawler performance optimization method oriented to mass data acquisition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911110871.8A CN110874429A (en) 2019-11-14 2019-11-14 Distributed web crawler performance optimization method oriented to mass data acquisition

Publications (1)

Publication Number Publication Date
CN110874429A true CN110874429A (en) 2020-03-10

Family

ID=69718186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911110871.8A Pending CN110874429A (en) 2019-11-14 2019-11-14 Distributed web crawler performance optimization method oriented to mass data acquisition

Country Status (1)

Country Link
CN (1) CN110874429A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650715A (en) * 2008-08-12 2010-02-17 厦门市美亚柏科信息股份有限公司 Method and device for screening links on web pages
CN103970788A (en) * 2013-02-01 2014-08-06 北京英富森信息技术有限公司 Webpage-crawling-based crawler technology
CN109542595A (en) * 2017-09-21 2019-03-29 阿里巴巴集团控股有限公司 A kind of collecting method, device and system
CN108121810A (en) * 2017-12-26 2018-06-05 北京锐安科技有限公司 A kind of data duplicate removal method, system, central server and distributed server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Hailin, "A Spark-Based Social Network Data Analysis Platform", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199175A (en) * 2020-04-02 2021-01-08 支付宝(杭州)信息技术有限公司 Task queue generating method, device and equipment
CN112199175B (en) * 2020-04-02 2024-05-17 支付宝(杭州)信息技术有限公司 Task queue generating method, device and equipment
CN111444412A (en) * 2020-04-03 2020-07-24 北京明朝万达科技股份有限公司 Scheduling method and device for web crawler task
CN111444412B (en) * 2020-04-03 2023-06-16 北京明朝万达科技股份有限公司 Method and device for scheduling web crawler tasks
CN111428115A (en) * 2020-04-16 2020-07-17 行吟信息科技(上海)有限公司 Webpage information processing method and device
CN111522847A (en) * 2020-04-16 2020-08-11 山东贝赛信息科技有限公司 Method for removing duplicate of distributed crawler website
CN112035723A (en) * 2020-08-28 2020-12-04 光大科技有限公司 Resource library determination method and device, storage medium and electronic device
CN112347330A (en) * 2020-11-05 2021-02-09 江苏电力信息技术有限公司 Distributed parallel acquisition method for urban big data

Similar Documents

Publication Publication Date Title
CN110874429A (en) Distributed web crawler performance optimization method oriented to mass data acquisition
CN110866166A (en) Distributed web crawler performance optimization system for mass data acquisition
US8266134B1 (en) Distributed crawling of hyperlinked documents
CN103902593B (en) A kind of method and apparatus of Data Migration
CN108256076B (en) Distributed mass data processing method and device
CN103984753B (en) A kind of web crawlers goes the extracting method and device of multiplex eigenvalue
CN110795257A (en) Method, device and equipment for processing multi-cluster operation records and storage medium
CN102710795B (en) Hotspot collecting method and device
CN105824744A (en) Real-time log collection and analysis method on basis of B2B (Business to Business) platform
CN111046011B (en) Log collection method, system, device, electronic equipment and readable storage medium
CN106407224A (en) Method and device for file compaction in KV (Key-Value)-Store system
CN101478608A (en) Fast operating method for mass data based on two-dimensional hash
CN102024018A (en) On-line recovering method of junk metadata in distributed file system
CN107104820B (en) Dynamic capacity-expansion daily operation and maintenance method based on F5 server node
CN102253991A (en) Uniform resource locator (URL) storage method, web filtering method, device and system
CN107798106A (en) A kind of URL De-weight methods in distributed reptile system
CN111913917A (en) File processing method, device, equipment and medium
CN108090186A (en) A kind of electric power data De-weight method on big data platform
CN104424316A (en) Data storage method, data searching method, related device and system
CN103647774A (en) Web content information filtering method based on cloud computing
CN105426407A (en) Web data acquisition method based on content analysis
CN112650739A (en) Data storage processing method and device for coal mine data middling station
CN111506672A (en) Method, device, equipment and storage medium for analyzing environmental protection monitoring data in real time
CN110019152A (en) A kind of big data cleaning method
CN111611463A (en) Scapy-Redis-based distributed web crawler optimization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200310