CN111008074B - File processing method, device, equipment and medium - Google Patents

File processing method, device, equipment and medium

Info

Publication number
CN111008074B
CN111008074B (application CN201911233894.8A)
Authority
CN
China
Prior art keywords: files, target, processed, available, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911233894.8A
Other languages
Chinese (zh)
Other versions
CN111008074A (en)
Inventor
雷鸣
尤见
张恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp
Priority to CN201911233894.8A
Publication of CN111008074A
Application granted
Publication of CN111008074B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a file processing method, device, equipment and medium. The method comprises the following steps: determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; and determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by those files, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread. The technical scheme realizes distributed processing of files and improves the utilization rate of distributed cluster resources.

Description

File processing method, device, equipment and medium
Technical Field
Embodiments of the present invention relate to the technical field of data processing, and in particular to a file processing method, device, equipment and medium.
Background
At present, very large-scale data files, such as bank transaction flow data, involve a large number of files with widely varying sizes. As business grows, the volume of data files that must be processed every day keeps increasing and now exceeds 1 TB.
In the prior art, a common file processing method is to process files on a single machine, which takes a long time when processing large-scale file data.
Disclosure of Invention
The invention provides a file processing method, device, equipment and medium, which are used for improving file processing efficiency and the resource utilization rate of a file processing system.
In a first aspect, an embodiment of the present invention provides a file processing method, including:
determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; and
determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread.
In a second aspect, an embodiment of the present invention further provides a file processing apparatus, including:
the target file group determining module is used for determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster;
the target thread information determining module is used for determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement a method for processing files according to any one of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processor, implements the file processing method according to any embodiment of the present invention.
According to the method and the device, a target file group is determined from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; target threads are determined from the available threads according to the number of files in the target file group and the cluster resources required by those files; the target file associated with each target thread is determined; and the cluster resources to be allocated to each target thread are determined from the available resources. In this way, the available resources and available threads in the distributed cluster can be allocated reasonably and dynamically according to the cluster resources required by the files to be processed and the number of files, which improves the utilization rate of distributed cluster resources.
Drawings
FIG. 1 is a flowchart of a method for processing a file according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a file processing method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a file transcoding system according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a file processing device according to a fourth embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a device according to a fifth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a file processing method according to a first embodiment of the present invention. The embodiment is applicable to the case of processing files; typically, the method may be used to transcode files. The method may be performed by a file processing device, which may be implemented in software and/or hardware.
Referring to fig. 1, the method specifically includes the steps of:
and 110, determining a target file group from the files to be processed according to cluster resources required by the files to be processed and available resources and available threads of the distributed clusters.
The distributed cluster may be a Spark distributed cluster, which is composed of a plurality of processors and has certain computing capacity and memory resources. When a task to be processed is submitted to the distributed cluster system, a plurality of parallel threads are started in the distributed cluster; the system allocates certain CPU resources and memory resources to each thread executing the task, and the started threads are used to process the submitted task.
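As a rough illustration (not part of the patent text), the per-thread CPU and memory allocation described above corresponds loosely to the executor settings passed when creating a Spark context; the configuration keys below are standard Spark settings, but the values and application name are placeholders.

# Illustrative sketch only: the resource figures are placeholders, not values
# taken from the patent.
from pyspark import SparkConf, SparkContext

conf = (
    SparkConf()
    .setAppName("file-processing")          # hypothetical application name
    .setMaster("yarn")                      # Spark on YARN, as in Example III
    .set("spark.executor.instances", "10")  # number of parallel workers
    .set("spark.executor.cores", "2")       # CPU resources per worker
    .set("spark.executor.memory", "4g")     # memory resources per worker
)
sc = SparkContext(conf=conf)                # tasks run with these resources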
In this embodiment, the cluster resources required by the files to be processed may be greater than, or less than or equal to, the available resources of the distributed cluster. Further, if the cluster resources required by the files to be processed are greater than the available resources of the distributed cluster, files that occupy as much of the available resources and available threads as possible are selected from the files to be processed, and the remaining files enter a waiting state, so that the available resources in the distributed cluster are fully utilized. If the cluster resources required by the files to be processed are less than or equal to the available resources of the distributed cluster, the files with the largest required cluster resources among the files to be processed should be processed, provided the number of available threads allows.
Optionally, each thread completes the processing task of a different file, and the available resources occupied by each thread are the same as the cluster resources required by the file processed on that thread.
Step 120, determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread.
In this embodiment, after the target file group is determined, determining the target threads, the target file associated with each target thread, and the cluster resources to be allocated to each target thread according to the number of files in the target file group and the cluster resources required by the files in the target file group includes:
determining, from the available threads, target threads equal in number to the files in the target file group, and the target file associated with each target thread, according to the number of files in the target file group;
and determining the resources to be allocated to each target thread from the available cluster resources according to the target file associated with each target thread and the cluster resources required by each target file.
For example, if the number of files in the target file group is 10, then the number of target threads to be started in the distributed cluster is also 10.
Further, after the target threads are determined, determining the resources to be allocated to each target thread from the available cluster resources according to the target file associated with each target thread and the cluster resources required by each target file includes:
determining, from the available cluster resources, available cluster resources equal to the cluster resources required by each target file according to the target file associated with each target thread, and using these as the resources to be allocated to each target thread. This scheme maximizes the utilization of the cluster resources of the distributed cluster.
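A minimal Python sketch of this one-file-per-thread pairing, assuming each file is described by a dictionary with hypothetical name and required_resources fields (these names are not from the patent):

def allocate_threads(target_files, available_threads):
    """Pair each file in the target group with one thread and reserve
    cluster resources equal to that file's requirement."""
    allocation = []
    # One thread per file: the number of target threads equals the group size.
    for thread_id, f in zip(available_threads, target_files):
        allocation.append({
            "thread": thread_id,
            "file": f["name"],
            "resources": f["required_resources"],  # reserved == required
        })
    return allocation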
According to the technical scheme of this embodiment, a target file group is determined from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; target threads are determined from the available threads according to the number of files in the target file group and the cluster resources required by those files; the target file associated with each target thread is determined; and the cluster resources to be allocated to each target thread are determined from the available resources. In this way, the available resources and available threads in the distributed cluster can be allocated reasonably and dynamically according to the cluster resources required by the files to be processed and the number of files, which improves the utilization rate of distributed cluster resources.
Example two
Fig. 2 is a flowchart of a file processing method according to a second embodiment of the present invention, in which step 110 is further refined on the basis of the above embodiment. Referring to fig. 2, the embodiment of the present invention may specifically include:
step 210, if the cluster resources required by the file to be processed are smaller than or equal to the available resources of the distributed cluster, determining whether the number of files of the file to be processed is greater than the number of available threads.
In this embodiment, if the cluster resources required by the files to be processed are less than or equal to the available resources of the distributed cluster, the number of target files is determined by the number of available threads in the distributed cluster, so it is necessary to further determine whether the number of files to be processed is greater than the number of available threads.
Step 220, determining a target file group from the files to be processed according to the determination result.
Specifically, according to the determination result, determining the target file group from the files to be processed includes:
if the number of files to be processed is greater than the number of available threads, selecting, from the files to be processed, as many files as there are available threads with the largest total required cluster resources, as the target file group;
and if the number of files to be processed is less than or equal to the number of available threads, taking all the files to be processed as the target file group.
For example, if the number of files to be processed is 20 and the number of available threads is 10, 10 files to be processed with the largest total cluster resources are selected from the files to be processed as the target file group.
If the number of files to be processed is 5 and the number of available threads is 10, the 5 files to be processed are all used as the target file group.
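The selection rule for this case can be sketched in Python as follows; the file representation (a required_resources field) is an assumption for illustration only:

def select_target_group(files, num_available_threads):
    """Case: total required cluster resources <= available cluster resources."""
    if len(files) <= num_available_threads:
        # Fewer files than threads: process all of them.
        return list(files)
    # More files than threads: keep the files with the largest requirements,
    # as many as there are available threads.
    ranked = sorted(files, key=lambda f: f["required_resources"], reverse=True)
    return ranked[:num_available_threads]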
Step 230, determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread.
According to the technical scheme of this embodiment, when the cluster resources required by the files to be processed are less than or equal to the available resources of the distributed cluster, the target file group is determined by judging whether the number of files to be processed is greater than the number of available threads, so that the available resources are used efficiently while as many files as possible are processed.
Further, on the basis of the foregoing embodiment, determining the target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster includes:
if the cluster resources required by the files to be processed are greater than the available resources of the distributed cluster, selecting, from the files to be processed, files whose number is less than or equal to the number of available threads and whose total required cluster resources are close to the available resources of the distributed cluster, as the target file group.
For example, assume that the cluster resources required by the files to be processed are M and the available resources of the distributed cluster are N. If M is greater than N, the cluster resources required by the files to be processed exceed the currently available resources of the distributed cluster. Therefore, files whose number is less than or equal to the number of available threads and whose total required cluster resources are close to the available resources of the distributed cluster are selected from the files to be processed as the target file group, and the remaining files enter a waiting state, which ensures that the available resources of the distributed cluster are used efficiently.
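One plausible way to realize this selection is a greedy pass over the files sorted by resource requirement; the patent does not fix a specific algorithm, so the following Python sketch is only an interpretation, and the field names are assumptions:

def select_group_when_oversubscribed(files, available_resources, num_available_threads):
    """Pick a subset whose total requirement stays within the available
    resources and whose size does not exceed the available threads;
    the remaining files wait for the next processing round."""
    ranked = sorted(files, key=lambda f: f["required_resources"], reverse=True)
    selected, used = [], 0
    for f in ranked:
        if len(selected) >= num_available_threads:
            break
        if used + f["required_resources"] <= available_resources:
            selected.append(f)
            used += f["required_resources"]
    waiting = [f for f in files if f not in selected]
    return selected, waiting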
Example III
Fig. 3 is a schematic diagram of a file transcoding system according to a third embodiment of the present invention.
Referring to fig. 3, the system includes a scanning component 310, a processing component 320, a transcoding component 330 and a sending component 340, and the system may include multiple instances of each of these components. The system is mainly used to convert a large number of EBCDIC-format files into UTF-8 format files, and the system architecture is implemented on the distributed processing framework Spark.
Specifically, the scanning component 310 is configured to scan whether a file to be transcoded exists under a specified directory, and send the scanned file information to the processing component 320.
The transcoding component 330 is configured to request files to be transcoded from the processing component 320 and, after obtaining the file information, complete the format conversion, converting the files from EBCDIC format to UTF-8 format.
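A minimal sketch of the format conversion itself, in Python, assuming the US EBCDIC code page cp037; the actual code page used by the system is not stated in the patent:

import codecs

def transcode_file(src_path, dst_path, ebcdic_codec="cp037"):
    """Read an EBCDIC-encoded file and rewrite it as UTF-8.
    The cp037 code page is an assumption for illustration."""
    with open(src_path, "rb") as src:
        text = codecs.decode(src.read(), ebcdic_codec)
    with open(dst_path, "wb") as dst:
        dst.write(text.encode("utf-8"))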
The sending component 340 is configured to request a file to be sent from the processing component 320, obtain file information, and send the file to a downstream system.
Processing component 320 is operative to process requests from scanning component 310, transcoding component 330, and sending component 340.
The scanning component 310, the processing component 320 and the sending component 340 in the system are functional components developed in Python, while the transcoding component 330 performs multi-machine parallel processing of file conversion tasks based on the Spark API.
Further, the transcoding component 330 is deployed on the transcoding cluster. The transcoding cluster adopts the Spark on YARN scheme, with resource management handled by Hadoop YARN, so the cluster can be expanded very conveniently: by adding hardware resources to the transcoding task, the processing performance of the system is improved without any change at the application level.
Multiple transcoding components 330 can be started simultaneously based on Spark's capability for parallel multi-task operation, and the transcoding components 330 are peers of one another. If a task in one transcoding component 330 is terminated by an abnormal situation, the operation of the other transcoding components 330 is not affected, and subsequent transcoding tasks are taken over by the other transcoding components 330. Each transcoding component 330 internally employs a multi-threaded mechanism, with each thread performing the transcoding task of a different file, as sketched below.
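The thread-per-file mechanism inside a transcoding component could look like the following sketch, using Python's standard thread pool; the transcode function is passed in (for example, the hypothetical transcode_file shown earlier), and per-file failures are isolated so that one abnormal task does not stop the others:

from concurrent.futures import ThreadPoolExecutor

def run_transcoding_component(file_pairs, transcode_fn, max_threads=8):
    """Each thread transcodes a different file; a failure in one task
    does not affect the remaining tasks."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = {pool.submit(transcode_fn, src, dst): src for src, dst in file_pairs}
        for future, src in futures.items():
            try:
                future.result()
            except Exception as exc:  # isolate per-file failures
                print(f"transcoding of {src} failed: {exc}")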
ZooKeeper is responsible for master-standby switching of resource management; if the master machine currently responsible for resource management becomes abnormal, a standby machine is automatically switched to become the master, which realizes high availability of cluster resource management.
The transcoding component 330 realizes multi-machine parallel processing of a single file conversion task based on the Spark API, and the execution performance of the single file transcoding task is remarkably improved.
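To illustrate how a single file conversion task could be spread across machines with the Spark API, the sketch below reads one large file as fixed-length binary records and decodes each record in parallel; the fixed-length record assumption, the record length and the cp037 code page are all assumptions made for illustration, not details from the patent.

from pyspark import SparkContext

def transcode_single_file(sc: SparkContext, src_path, dst_path, record_length=500):
    # Split one file into records distributed across the cluster.
    records = sc.binaryRecords(src_path, record_length)
    # Decode each EBCDIC record (code page assumed) into a UTF-8 text line.
    utf8_lines = records.map(lambda rec: rec.decode("cp037"))
    # Written back as UTF-8 text, one output part per partition.
    utf8_lines.saveAsTextFile(dst_path)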
Example IV
Fig. 4 is a schematic structural diagram of a file processing device according to a fourth embodiment of the present invention. Referring to fig. 4, the device specifically includes:
the target file group determining module 410 is configured to determine a target file group from the files to be processed according to cluster resources required by the files to be processed, and available resources and available threads of the distributed clusters;
the target thread information determining module 420 is configured to determine a target thread from the available threads according to the number of files in the target file group and cluster resources required by the files in the target file group, the target files associated with the target thread, and determine cluster resources to be allocated for the target thread from the total available resources, where the cluster resources to be allocated for processing the target file by the target thread.
Further, the target file group determining module 410 is specifically configured to:
if the cluster resources required by the files to be processed are smaller than or equal to the available resources of the distributed clusters, determining whether the number of the files to be processed is larger than the number of available threads;
and determining a target file group from the files to be processed according to the determination result.
Further, the target file group determining module 410 is specifically further configured to:
if the number of the files to be processed is larger than the number of the available threads, selecting the files to be processed which are the same as the number of the available threads and have the largest total cluster resources from the files to be processed as a target file group;
and if the number of the files to be processed is smaller than or equal to the number of the available threads, taking all the files to be processed as target file groups.
Further, the target file group determining module 410 is specifically further configured to:
and if the cluster resources required by the files to be processed are larger than the available resources of the distributed clusters, selecting the files to be processed, the quantity of which is smaller than or equal to the number of available threads and the total required cluster resources of which are close to the available resources of the distributed clusters, from the files to be processed as a target file group.
Further, the target thread information determining module 420 is specifically configured to:
determining target threads with the same number of files as the target file group and target files associated with the target threads from the available threads according to the number of files of the target file group;
and determining resources to be allocated of each target thread from the available cluster resources according to the target files associated with the target threads and the cluster resources required by each target file.
The file processing device provided by the embodiment of the present invention can execute the file processing method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method; details are not repeated here.
Example five
Fig. 5 is a schematic structural diagram of a device according to a fifth embodiment of the present invention. Fig. 5 shows a block diagram of an exemplary device 12 suitable for implementing embodiments of the present invention. The device 12 shown in fig. 5 is merely an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 5, device 12 is in the form of a general-purpose computing device. Components of device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
Device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with device 12, and/or any devices (e.g., network card, modem, etc.) that enable device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, device 12 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, via network adapter 20. As shown, network adapter 20 communicates with other modules of device 12 over bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing a file processing method provided by an embodiment of the present invention.
Example six
The sixth embodiment of the present invention further provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the file processing method according to any embodiment of the present invention. The method comprises the following steps:
determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; and
determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, where the allocated cluster resources are used to process the target file through the target thread.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A file processing method, comprising:
determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster; and
determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, wherein the cluster resources to be allocated are used for processing the target file through the target thread; and the available resources occupied by each target thread are equal to the cluster resources required by the target file processed by that target thread.
2. The method of claim 1, wherein determining the set of target files from the files to be processed based on cluster resources required by the files to be processed, and available resources and available threads of the distributed cluster, comprises:
if the cluster resources required by the files to be processed are smaller than or equal to the available resources of the distributed clusters, determining whether the number of the files to be processed is larger than the number of available threads; and determining a target file group from the files to be processed according to the determination result.
3. The method according to claim 2, wherein determining the target file group from the files to be processed according to the determination result comprises:
if the number of the files to be processed is larger than the number of the available threads, selecting the files to be processed which are the same as the number of the available threads and have the largest total cluster resources from the files to be processed as a target file group;
and if the number of the files to be processed is smaller than or equal to the number of the available threads, taking all the files to be processed as target file groups.
4. The method of claim 1, wherein determining the set of target files from the files to be processed based on cluster resources required by the files to be processed, and available resources and available threads of the distributed cluster, comprises:
and if the cluster resources required by the files to be processed are larger than the available resources of the distributed clusters, selecting the files to be processed, the quantity of which is smaller than or equal to the number of available threads and the total required cluster resources of which are close to the available resources of the distributed clusters, from the files to be processed as a target file group.
5. The method of claim 1, wherein determining a target thread from the available threads based on the number of files in the target file group and the cluster resources required for the files in the target file group, the target file associated with the target thread, and determining the cluster resources to be allocated for the target thread from the total of available resources, comprises:
determining target threads with the same number of files as the target file group and target files associated with the target threads from the available threads according to the number of files of the target file group;
and determining resources to be allocated of each target thread from the available cluster resources according to the target files associated with the target threads and the cluster resources required by each target file.
6. A file processing apparatus, comprising:
the target file group determining module is used for determining a target file group from the files to be processed according to the cluster resources required by the files to be processed and the available resources and available threads of the distributed cluster;
the target thread information determining module is used for determining target threads from the available threads according to the number of files in the target file group and the cluster resources required by the files in the target file group, determining the target file associated with each target thread, and determining, from the total available resources, the cluster resources to be allocated to each target thread, wherein the cluster resources to be allocated are used for processing the target file through the target thread; and the available resources occupied by each target thread are equal to the cluster resources required by the target file processed by that target thread.
7. The apparatus of claim 6, wherein the target file group determination module is specifically configured to:
if the cluster resources required by the files to be processed are smaller than or equal to the available resources of the distributed clusters, determining whether the number of the files to be processed is larger than the number of available threads;
and determining a target file group from the files to be processed according to the determination result.
8. The apparatus of claim 7, wherein the target file group determination module is specifically configured to:
if the number of the files to be processed is larger than the number of the available threads, selecting the files to be processed which are the same as the number of the available threads and have the largest total cluster resources from the files to be processed as a target file group;
and if the number of the files to be processed is smaller than or equal to the number of the available threads, taking all the files to be processed as target file groups.
9. An apparatus, the apparatus comprising:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the file processing method according to any one of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a file processing method as claimed in any one of claims 1-5.
CN201911233894.8A 2019-12-05 2019-12-05 File processing method, device, equipment and medium Active CN111008074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911233894.8A CN111008074B (en) 2019-12-05 2019-12-05 File processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911233894.8A CN111008074B (en) 2019-12-05 2019-12-05 File processing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111008074A CN111008074A (en) 2020-04-14
CN111008074B 2023-08-22

Family

ID=70115518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911233894.8A Active CN111008074B (en) 2019-12-05 2019-12-05 File processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111008074B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8214839B1 (en) * 2008-03-31 2012-07-03 Symantec Corporation Streaming distribution of file data based on predicted need
CN109376137A (en) * 2018-12-17 2019-02-22 中国人民解放军战略支援部队信息工程大学 A kind of document handling method and device
CN109582445A (en) * 2018-09-29 2019-04-05 阿里巴巴集团控股有限公司 Message treatment method, device, electronic equipment and computer readable storage medium
CN110222016A (en) * 2019-05-20 2019-09-10 平安银行股份有限公司 A kind of document handling method and device
CN110290189A (en) * 2019-06-17 2019-09-27 深圳前海微众银行股份有限公司 A kind of container cluster management method, apparatus and system

Also Published As

Publication number Publication date
CN111008074A (en) 2020-04-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220920

Address after: 25 Financial Street, Xicheng District, Beijing 100033

Applicant after: CHINA CONSTRUCTION BANK Corp.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.

GR01 Patent grant