CN115098255A - Design method and system of distributed file asynchronous processing service and electronic equipment - Google Patents

Design method and system of distributed file asynchronous processing service and electronic equipment

Info

Publication number
CN115098255A
CN115098255A (application CN202210689844.6A)
Authority
CN
China
Prior art keywords
subtask
execution
subtasks
command
sequence object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210689844.6A
Other languages
Chinese (zh)
Inventor
范凌
王喆
李佳楠
赵珂飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tezign Shanghai Information Technology Co Ltd
Original Assignee
Tezign Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tezign Shanghai Information Technology Co Ltd filed Critical Tezign Shanghai Information Technology Co Ltd
Priority to CN202210689844.6A priority Critical patent/CN115098255A/en
Publication of CN115098255A publication Critical patent/CN115098255A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Abstract

The invention discloses a design method, a system and electronic equipment for a distributed file asynchronous processing service. The method comprises the following steps: a command end receives a request initiated by a calling end and determines subtasks according to the request; the command end arranges a task link based on the subtasks and constructs an executable sequence object of a distributed asynchronous message queue according to the task link; the command end commands an execution end to trigger execution of the executable sequence object; and the execution end executes the subtasks according to the executable sequence object to obtain subtask products, which are fed back to the calling end through the command end. The invention realizes file processing capabilities through subtasks: the multiple subtasks in a task link can provide a variety of file processing capabilities, and the corresponding subtasks are executed according to different file processing requirements, achieving the effect of providing a variety of different file processing capabilities. In addition, a large volume of file processing tasks can be handled through the task link and the distributed asynchronous message queue.

Description

Design method and system of distributed file asynchronous processing service and electronic equipment
Technical Field
The invention relates to the technical field of data asset management, in particular to a design method and a system of distributed file asynchronous processing service and electronic equipment.
Background
In a Data Asset Management (DAM) service scenario, offline processing of a user's multimedia files such as videos and pictures is a capability that a DAM tool needs to have. As traffic grows, the number of files to be processed becomes large, with the various file processing tasks to be handled each day on the order of 100,000 (10w level). In addition, there are various processing requirements for video, picture and other files, such as video OCR recognition and picture intelligent marking, and a variety of file processing capabilities are required to meet these requirements.
However, when a conventional DAM service scenario performs file processing tasks, it cannot handle large volumes of file data and cannot provide a variety of different file processing capabilities for diverse file processing requirements.
No effective solution has yet been proposed for the problems that the related art cannot process larger volumes of file data and cannot provide a variety of file processing capabilities.
Disclosure of Invention
The invention mainly aims to provide a design method and a system for a distributed file asynchronous processing service, so as to solve the problems that the related art cannot process larger volumes of file data and cannot provide a variety of different file processing capabilities.
In order to achieve the above object, a first aspect of the present invention provides a method for designing a distributed file asynchronous processing service, including:
a command terminal receives a request initiated by a calling terminal and determines a subtask according to the request;
the command end arranges a task link based on the subtasks and constructs an executable sequence object of a distributed asynchronous message queue according to the task link;
the command end commands the execution end to trigger and execute the executable sequence object;
and the execution end executes the subtasks according to the executable sequence object to obtain subtask products, and the subtask products are fed back to the calling end through the command end.
Optionally, the command end receiving a request initiated by a calling end and determining a subtask according to the request includes:
the command terminal receives a request initiated by the calling terminal, and determines the file processing capacity according to the request;
independently packaging each file processing capacity into a subtask in advance;
and determining the subtask corresponding to the request.
Optionally, the command end arranging a task link based on the subtasks and constructing an executable sequence object of a distributed asynchronous message queue according to the task link includes:
the command end acquiring, based on all the subtasks, the subtask data corresponding to each subtask;
periodically constructing a subtask link diagram according to the subtask data;
constructing an execution link of each subtask according to the subtask link diagram;
and constructing an executable sequence object of the distributed asynchronous message queue based on the execution link.
Further, the constructing an executable sequence object of a distributed asynchronous message queue based on the execution link includes:
constructing a subtask class corresponding to an execution link based on the execution link;
determining an initial subtask node which is initially triggered to execute in an execution link, and configuring initial node parameters of the initial subtask node;
determining a final subtask node which is finally triggered to be executed in an execution link, and dynamically transmitting a final node parameter of the final subtask node;
and constructing a subtask object through the subtask class, and constructing an executable sequence object of the distributed asynchronous message queue celery based on the subtask object, the initial subtask node, the initial node parameter, the final subtask node and the final node parameter.
Optionally, the command end commanding the execution end to trigger execution of the executable sequence object includes:
the command end commanding the execution end to execute the executable sequence object of the distributed asynchronous message queue celery;
and triggering the execution end to execute the specified subtask.
Optionally, before the execution end executes the subtask according to the executable sequence object, the method further includes:
the execution end periodically updates the configuration according to the open source configuration management center apollo;
after the configuration is updated, registering the subtasks carried by the execution end to complete initialization of the execution end;
after the initialization of the execution end is completed, sending heartbeat data to a message queue, wherein the heartbeat data comprises meta-information of subtasks carried by the execution end, and the meta-information comprises: capability name, capability version, pre-dependencies, and supporting file types.
Further, the executing end executes the subtask according to the executable sequence object to obtain a subtask product, and feeds the subtask product back to the calling end through the command end, including:
each subtask uniquely corresponds to an exclusive queue according to the capability name and the capability version in the meta-information;
the execution end takes out the subtask from the exclusive queue uniquely corresponding to each subtask through the process of each carried subtask;
the execution end executes the subtasks serially or in parallel to obtain subtask products, and the subtask products are uniformly stored under the same directory;
and feeding back the subtask product to a calling end through a command end.
A second aspect of the present invention provides a system for designing a distributed file asynchronous processing service, including:
a determining unit, used for the command end to receive a request initiated by the calling end and determine a subtask according to the request;
an arranging unit, used for the command end to arrange a task link based on the subtasks and construct an executable sequence object of the distributed asynchronous message queue according to the task link;
a trigger execution unit, used for the command end to command the execution end to trigger execution of the executable sequence object;
and an execution unit, used for the execution end to execute the subtasks according to the executable sequence object to obtain subtask products and feed the subtask products back to the calling end through the command end.
A third aspect of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to execute the method for designing a distributed file asynchronous processing service provided in any one of the first aspects.
A fourth aspect of the present invention provides an electronic apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method of designing a distributed file asynchronous processing service as provided in any of the first aspects.
In the design method of the distributed file asynchronous processing service provided by the embodiment of the invention, a command end receives a request initiated by a calling end, and subtasks are determined according to the request; the file processing capability is realized through the subtasks;
the command end arranges a task link based on the subtasks and constructs an executable sequence object of a distributed asynchronous message queue according to the task link; the multiple subtasks in a task link can provide a variety of file processing capabilities, and a large volume of file processing tasks can be handled through the task link and the distributed asynchronous message queue;
the command end commands the execution end to trigger execution of the executable sequence object; and the execution end executes the subtasks according to the executable sequence object to obtain subtask products, which are fed back to the calling end through the command end. The method executes the corresponding subtasks according to different file processing requirements, achieving the effect of providing a variety of different file processing capabilities and solving the problems that the related art cannot process larger volumes of file data and cannot provide a variety of different file processing capabilities.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is an architecture diagram of a design system for a distributed file asynchronous processing service provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for designing a distributed file asynchronous processing service according to an embodiment of the present invention;
FIG. 3 is a timing diagram of a serial task according to an embodiment of the present invention;
FIG. 4 is a timing diagram of parallel tasks according to an embodiment of the present invention;
FIG. 5 is a block diagram of a system for designing a distributed file asynchronous processing service according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances in order to facilitate the description of the embodiments of the invention herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present invention, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "center", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate an orientation or positional relationship based on the orientation or positional relationship shown in the drawings. These terms are used primarily to better describe the invention and its embodiments and are not intended to limit the indicated systems, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meanings of these terms in the present invention can be understood by those skilled in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in communication between two systems, components or parts. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific situations.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In a Data Asset Management (DAM) service scenario, offline processing of a user's multimedia files such as videos and pictures is a capability that a DAM tool needs to have. As traffic grows, the number of files to be processed becomes large, with the various file processing tasks to be handled each day on the order of 100,000 (10w level). In addition, there are various processing requirements for video, picture and other files, such as video OCR recognition and picture intelligent marking, and a variety of file processing capabilities are required to meet these requirements.
However, when a conventional DAM service scenario performs file processing tasks, it cannot handle large volumes of file data and cannot provide a variety of file processing capabilities for diverse file processing requirements.
In order to solve the above problems, an architecture diagram of a design system for a distributed file asynchronous processing service is shown in FIG. 1. The caller, i.e. the user, serves as the calling end, the conductor is the command end, and the player is the execution end. The system includes one command end and multiple execution ends, and each execution end carries and executes one or more subtasks.
The calling end initiates a file processing request to the command end, the command end commands an execution end to perform the file processing service, the execution end executes the subtasks corresponding to the file processing request to obtain subtask products, and the subtask products are fed back to the calling end through the command end to complete the file processing service.
In addition, during the file processing service, a monitor is used to monitor status and performance; when an abnormal state is detected, an exception alarm is raised.
FastAPI is a Python backend framework used to provide the external interface; common lib is used to encapsulate common modules; Mongo is a non-relational database used to store processing data; redis is an in-memory non-relational database used for message communication, for example the message communication between the command end and the execution end is implemented through redis; the Message Broker is message middleware used for queue message communication; the File system is the underlying file system used to store the subtask products.
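As a minimal illustrative sketch of how the command end's external interface and its redis-based message communication could look in Python (the endpoint path, request fields, queue key and function names are assumptions for illustration, not the disclosed implementation):

```python
# Sketch of a command-end interface built with FastAPI, pushing accepted
# requests onto redis for later orchestration. Endpoint path, model fields
# and the queue key are illustrative assumptions.
import json

import redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
broker = redis.Redis(host="localhost", port=6379, db=0)  # message communication via redis


class FileProcessRequest(BaseModel):
    tenant_id: str
    file_url: str
    capability: str          # e.g. "video_ocr", "picture_tagging"
    params: dict = {}


@app.post("/file-tasks")
def create_file_task(req: FileProcessRequest):
    # The command end determines the subtask(s) for the requested capability
    # and later orchestrates them into a task link (steps S101 to S103).
    broker.rpush("conductor:pending_requests", json.dumps(req.dict()))
    return {"status": "accepted", "capability": req.capability}
```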
The design system of the distributed file asynchronous processing service provided by the embodiment of the invention can process a large number of files, meeting a daily load on the order of 100,000 (10w) file processing tasks of various kinds; it can handle diverse file types, including mp4, jpeg, png, psd, mpeg, heic, gif, etc.; through the multiple executed subtasks it supports multiple file processing capabilities, including video frame extraction, video face recognition, video OCR recognition, video strip removal, picture thumbnail extraction, picture dominant color information extraction, picture intelligent marking and the like. The design system provided by the embodiment of the invention is highly available, ensuring a high level of operational performance over a given time period and minimizing downtime and service interruption; the system supports concurrent execution of multiple file processing tasks; and the system uses resources reasonably and completes file processing tasks quickly.
An embodiment of the present invention further provides a method for designing a distributed file asynchronous processing service, as shown in fig. 2, the method includes the following steps S101 to S104:
step S101: a command terminal receives a request initiated by a calling terminal and determines a subtask according to the request;
each subtask has a cache identifier composed of a tenant id, a file sha1, a task name, a version number and a parameter hash, where sha1 is SHA-1 (Secure Hash Algorithm 1), a cryptographic hash algorithm; intermediate node products can thus be reused, and a cache control switch is also provided in the configuration of the subtask, allowing flexible configuration;
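A minimal sketch of how such a cache identifier could be derived in Python; the concatenation format and helper names are assumptions for illustration only:

```python
# Sketch: deriving a subtask cache identifier from tenant id, file sha1,
# task name, version number and a hash of the parameters.
import hashlib
import json


def file_sha1(path: str) -> str:
    """SHA-1 digest of the file contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def cache_key(tenant_id: str, sha1: str, task_name: str, version: str, params: dict) -> str:
    # Hash the parameters so that identical requests hit the same cache entry.
    param_hash = hashlib.sha1(json.dumps(params, sort_keys=True).encode()).hexdigest()
    return f"{tenant_id}:{sha1}:{task_name}:{version}:{param_hash}"
```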
moreover, each subtask supports horizontal and vertical scaling. Horizontal scaling: multiple execution-end instances are supported at the same time. Vertical scaling: each subtask supports multiple processes; dynamic vertical scaling is the default, and the number of processes grows dynamically with the task volume, from at least 2 processes up to at most 10 processes, so that the executed task volume matches the occupied resources and low execution efficiency or wasted resources are avoided.
Specifically, the step S101 includes:
the command terminal receives a request initiated by the calling terminal and determines the file processing capacity according to the request; the file processing capability comprises video frame extraction, video face recognition, video OCR recognition, video strip removal, picture thumbnail extraction, picture dominant color information extraction, picture intelligent marking and the like;
independently encapsulating each file processing capability into a subtask in advance; by independently encapsulating each file processing capability in advance, the subtasks are kept independent of one another and do not interfere with each other when processing files, avoiding confusion when processing large volumes of file data.
And determining the subtask corresponding to the request. For the subtask corresponding to the request, the subtask is executed to perform the file processing service; each time a subtask is executed, a unique global play_id is generated to identify the current execution record, and execution records across the whole link can be chained together through the global play_id, making it convenient to trace upstream and downstream problems.
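A small illustrative sketch of generating such a per-execution play_id; the record fields are assumptions, not taken from the disclosure:

```python
# Sketch: each execution of a subtask gets a unique global play_id so that
# execution records along the whole link can be chained together for tracing.
import time
import uuid
from typing import Optional


def new_execution_record(task_name: str, parent_play_id: Optional[str] = None) -> dict:
    return {
        "play_id": uuid.uuid4().hex,        # unique identifier of this execution
        "parent_play_id": parent_play_id,   # links the record to its upstream execution
        "task_name": task_name,
        "started_at": time.time(),
    }
```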
Step S102: the command end arranges a task link based on the subtasks and constructs an executable sequence object of a distributed asynchronous message queue according to the task link;
specifically, the step S102 includes:
The command end acquires, based on all the subtasks, the subtask data corresponding to each subtask; the command end receives, through the message queue redis, the heartbeat data corresponding to the subtasks sent by the execution ends, where the heartbeat data comprises the meta-information of all subtasks carried by each execution end.
The meta-information comprises a capability name, a capability version, pre-dependencies and supported file types; when the file processing effect corresponding to a subtask changes, the version number is upgraded and the capability version is updated; each subtask may have one or more pre-dependencies, and only after all of its pre-dependencies have been triggered and executed can the subtask itself be triggered normally.
Periodically constructing a subtask link diagram according to the subtask data: based on the pre-dependencies of all subtasks, a complete subtask link diagram can be constructed, from which the specific position of each subtask in the complete task link and its upstream and downstream subtasks can be obtained, preventing subtasks from being lost; since the subtask data keeps changing, the subtask link diagram is rebuilt every 5 seconds.
Constructing an execution link of each subtask according to the subtask link diagram; and extracting the tree-like dependency relationship of each subtask according to the subtask link diagram, and constructing an execution link of each subtask based on the tree-like dependency relationship.
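A minimal sketch of building the subtask link diagram from the pre-dependencies carried in the heartbeat meta-information and extracting one subtask's execution link by walking its dependency tree (in the described system the diagram would be rebuilt every 5 seconds); the data shapes and names are assumptions:

```python
# Sketch: build a subtask link diagram from each subtask's pre-dependencies
# and extract the execution link of a target subtask, dependencies first.
from typing import Dict, List


def build_link_diagram(meta_infos: List[dict]) -> Dict[str, List[str]]:
    """Map each capability name to the list of capabilities it depends on."""
    return {m["capability_name"]: m.get("pre_dependencies", []) for m in meta_infos}


def execution_link(diagram: Dict[str, List[str]], target: str) -> List[str]:
    """Depth-first resolution: every pre-dependency appears before its dependent."""
    ordered, seen = [], set()

    def visit(node: str) -> None:
        if node in seen:
            return
        seen.add(node)
        for dep in diagram.get(node, []):
            visit(dep)
        ordered.append(node)

    visit(target)
    return ordered


# Example: fetch -> frame_extraction -> video_ocr
diagram = {"fetch": [], "frame_extraction": ["fetch"], "video_ocr": ["frame_extraction"]}
assert execution_link(diagram, "video_ocr") == ["fetch", "frame_extraction", "video_ocr"]
```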
And constructing an executable sequence object of the distributed asynchronous message queue based on the execution link.
Further, the constructing an executable sequence object of a distributed asynchronous message queue based on the execution link includes:
constructing a subtask class corresponding to an execution link based on the execution link;
determining an initial subtask node which is initially triggered to execute in an execution link, and configuring initial node parameters of the initial subtask node; an initial subtask node, which is the first subtask to be executed, is usually a fetch node;
determining the final subtask node that is triggered last in the execution link, and dynamically passing in the final node parameters of the final subtask node; the final subtask node is the target subtask to be executed last; dynamic parameter passing is supported and the parameters are injected at runtime.
And constructing a subtask object through the subtask class, and constructing an executable sequence object of the distributed asynchronous message queue celery based on the subtask object, the initial subtask node, the initial node parameters, the final subtask node and the final node parameters. The distributed asynchronous message task queue celery is developed in Python; it operates on message queues and handles their scheduling and execution, thereby realizing asynchronous processing of tasks.
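As a rough illustration of how an execution link could be turned into an executable celery sequence object through task signatures and chain; the broker URL, task names and parameters are assumptions, not the disclosed implementation:

```python
# Sketch: turning an execution link into a celery executable sequence object.
from celery import Celery, chain, signature

celery_app = Celery("player", broker="redis://localhost:6379/1")


def build_sequence(link, initial_params, final_params):
    """link: ordered capability names, e.g. ["fetch", "frame_extraction", "video_ocr"]."""
    sigs = []
    for i, name in enumerate(link):
        if i == 0:
            # initial subtask node (usually a fetch node) with configured parameters
            sigs.append(signature(name, kwargs=initial_params, app=celery_app))
        elif i == len(link) - 1:
            # final subtask node with dynamically injected parameters
            sigs.append(signature(name, kwargs=final_params, app=celery_app))
        else:
            sigs.append(signature(name, app=celery_app))
    return chain(*sigs)  # the executable sequence object


# The command end would later trigger execution, e.g.:
# build_sequence(["fetch", "video_ocr"], {"file_url": "..."}, {"lang": "zh"}).apply_async()
```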
Step S103: the command end commands the execution end to trigger and execute the executable sequence object;
specifically, the step S103 includes:
the command end commands the execution end to execute the executable sequence object of the distributed asynchronous message queue celery;
and triggering the execution end to execute the specified subtask.
And the command end commands the execution end to execute the executable sequence object of the distributed asynchronous message queue, and triggers the execution end to execute the subtask sequence.
Step S104: and the execution end executes the subtasks according to the executable sequence object to obtain subtask products, and the subtask products are fed back to the calling end through the command end.
One execution end carries one or more file processing capabilities. Each file processing capability is independently encapsulated into a subtask, and an execution end then carries one or more subtasks; because the computational cost of many file processing capabilities is low, opening a separate container or allocating a separate execution end for each of them would waste resources, so an execution end is allowed to carry multiple file processing capabilities simultaneously.
Specifically, before the execution end in step S104 executes the subtask according to the executable sequence object, the method further includes:
the execution end periodically updates its configuration according to the open source configuration management center apollo; the configuration in apollo supports hot updating, so the execution end periodically obtains the latest configuration from apollo in the background and applies it whenever the configuration changes.
After the configuration is updated, the subtasks carried by the execution end are registered, completing the initialization of the execution end;
after the initialization of the execution end is completed, sending heartbeat data to a message queue, wherein the heartbeat data comprises meta-information of subtasks carried by the execution end, and the meta-information comprises: capability name, capability version, pre-dependencies, and supporting file types. The execution end does not open any http/rpc interface, and only carries out message communication through the message queue redis.
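A minimal sketch of the execution end's background loop: the configuration refresh is represented by a hypothetical placeholder (the real apollo client call is not shown), and heartbeat meta-information is published over redis; channel name, field names and intervals are assumptions:

```python
# Sketch of the execution end's background loop: periodically refresh the
# configuration and publish heartbeat meta-information of carried subtasks.
import json
import time

import redis

broker = redis.Redis(host="localhost", port=6379, db=0)

CARRIED_SUBTASKS = [
    {
        "capability_name": "video_ocr",
        "capability_version": "1.2.0",
        "pre_dependencies": ["frame_extraction"],
        "supported_file_types": ["mp4", "mpeg"],
    },
]


def fetch_latest_config() -> dict:
    """Hypothetical stand-in for pulling the latest configuration from apollo."""
    return {}


def heartbeat_loop(interval: float = 5.0) -> None:
    while True:
        config = fetch_latest_config()   # hot-update the configuration
        _ = config                       # (re)register carried subtasks here if it changed
        broker.publish("player:heartbeat", json.dumps(CARRIED_SUBTASKS))
        time.sleep(interval)
```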
Further, the executing end executes the subtask according to the executable sequence object to obtain a subtask product, and feeds the subtask product back to the calling end through the command end, including:
each subtask uniquely corresponds to an exclusive queue according to the capability name and the capability version in the meta-information;
the execution end, through the processes of each subtask it carries, takes subtasks out of the exclusive queue uniquely corresponding to that subtask; the processes consume the subtasks in the queue. For example, if 10 subtasks are backlogged in the queue and the number of processes is 2, the execution end processes 2 subtasks from the queue at a time;
the execution end executes the subtasks serially or in parallel to obtain subtask products, and the subtask products are uniformly stored under the same directory;
and feeding back the subtask product to a calling end through a command end.
A serial task means that the subtasks are executed in order from the root node to the target subtask node; a timing diagram of a serial task provided by the embodiment of the invention is shown in FIG. 3. A parallel task means that within the same file processing task there are multiple subtasks that can be executed simultaneously: these subtasks have no sequential dependency on one another and can be executed at the same time, i.e. in parallel; a timing diagram of parallel tasks provided by the embodiment of the invention is shown in FIG. 4.
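A rough sketch of the two execution modes using celery chain for serial execution and group for parallel execution, with each subtask routed to an exclusive queue named after its capability name and version; the task names, queue naming scheme and worker command are assumptions:

```python
# Sketch: serial vs. parallel execution of subtasks with celery, each subtask
# routed to an exclusive queue "<capability_name>:<capability_version>".
from celery import Celery, chain, group, signature

celery_app = Celery("player", broker="redis://localhost:6379/1")


def sig(name: str, version: str, **kwargs):
    return signature(name, kwargs=kwargs, app=celery_app).set(queue=f"{name}:{version}")


# Serial task: subtasks execute in order from the root node to the target node.
serial = chain(
    sig("fetch", "1.0.0", file_url="https://example.com/a.mp4"),
    sig("frame_extraction", "1.1.0"),
    sig("video_ocr", "1.2.0"),
)

# Parallel task: subtasks without mutual dependencies execute at the same time.
parallel = chain(
    sig("fetch", "1.0.0", file_url="https://example.com/b.jpg"),
    group(sig("picture_thumbnail", "1.0.0"), sig("dominant_color", "1.0.0")),
)

# serial.apply_async(); parallel.apply_async()
# A worker carrying a subtask might be started with dynamic vertical scaling
# (at least 2, at most 10 processes), e.g.:
#   celery -A player worker -Q video_ocr:1.2.0 --autoscale=10,2
```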
From the above description, it can be seen that the present invention achieves the following technical effects:
according to the design method provided by the embodiment of the invention, the file processing capacity is realized through the subtasks, a plurality of subtasks in the task link can provide various file processing capacities, and the corresponding subtasks are executed according to different file processing requirements, so that the effect of providing various different file processing capacities is achieved; moreover, a large amount of file processing task quantity can be processed through a task link and a distributed asynchronous message queue;
the design system provided by the embodiment of the invention is highly available, can ensure that high-level operation performance is realized in a given time period, and reduces the shutdown time and service interruption to the maximum extent; the system supports the concurrent execution of a plurality of file processing tasks; the system reasonably utilizes resources and quickly completes the file processing task.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
An embodiment of the present invention provides a system for designing a distributed file asynchronous processing service, which is used for implementing the method for designing a distributed file asynchronous processing service, and as shown in fig. 5, the system includes:
a determining unit 51, configured for the command end to receive a request initiated by the calling end and determine a subtask according to the request;
an arranging unit 52, configured for the command end to arrange a task link based on the subtasks and construct an executable sequence object of the distributed asynchronous message queue according to the task link;
a trigger execution unit 53, configured for the command end to command the execution end to trigger execution of the executable sequence object;
and an execution unit 54, configured for the execution end to execute the subtasks according to the executable sequence object to obtain subtask products and feed the subtask products back to the calling end through the command end.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, the electronic device includes one or more processors 61 and a memory 62, where one processor 61 is taken as an example in fig. 6.
The electronic device may further include: an input device 63 and an output device 64.
The processor 61, the memory 62, the input device 63 and the output device 64 may be connected by a bus or other means, as exemplified by the bus connection in fig. 6.
The processor 61 may be a central processing unit (CPU). The processor 61 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof; a general-purpose processor may be a microprocessor or any conventional processor.
The memory 62, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the control methods in the embodiments of the present invention. The processor 61 executes various functional applications of the server and data processing, namely, a design method of the distributed file asynchronous processing service implementing the above-described method embodiments, by running non-transitory software programs, instructions and modules stored in the memory 62.
The memory 62 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing device operated by the server, and the like. Further, the memory 62 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 63 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 64 may include a display device such as a display screen.
One or more modules are stored in the memory 62, which when executed by the one or more processors 61 perform the method as shown in fig. 2.
Those skilled in the art will appreciate that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed, the processes of the above-described method embodiments may be included. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method for designing a distributed file asynchronous processing service is characterized by comprising the following steps:
a command terminal receives a request initiated by a calling terminal and determines a subtask according to the request;
the command end arranges a task link based on the subtasks and constructs an executable sequence object of a distributed asynchronous message queue according to the task link;
the command end commands the execution end to trigger and execute the executable sequence object;
and the execution end executes the subtasks according to the executable sequence object to obtain subtask products, and the subtask products are fed back to the calling end through the command end.
2. The method according to claim 1, wherein the command end receiving a request initiated by a calling end and determining a subtask according to the request comprises:
the command terminal receives a request initiated by the calling terminal and determines the file processing capacity according to the request;
independently packaging each file processing capacity into a subtask in advance;
and determining the subtask corresponding to the request.
3. The method of claim 1, wherein the command end orchestrates task links based on the subtasks and constructs executable sequence objects of a distributed asynchronous message queue according to the task links, comprising:
the command end acquiring, based on all the subtasks, the subtask data corresponding to each subtask;
periodically constructing a subtask link diagram according to the subtask data;
constructing an execution link of each subtask according to the subtask link diagram;
and constructing an executable sequence object of the distributed asynchronous message queue based on the execution link.
4. The method of claim 3, wherein constructing an executable sequence object of a distributed asynchronous message queue based on the execution chain comprises:
constructing a subtask class corresponding to an execution link based on the execution link;
determining an initial subtask node which is initially triggered to execute in an execution link, and configuring an initial node parameter of the initial subtask node;
determining a final subtask node which is finally triggered to be executed in an execution link, and dynamically transmitting a final node parameter of the final subtask node;
and constructing a subtask object through the subtask class, and constructing an executable sequence object of the distributed asynchronous message queue celery based on the subtask object, the initial subtask node, the initial node parameter, the final subtask node and the final node parameter.
5. The method of claim 1, wherein the command end commanding the execution end to trigger execution of the executable sequence object comprises:
the command end commands the execution end to execute the executable sequence object of the distributed asynchronous message queue celery;
and triggering the execution end to execute the specified subtask.
6. The method of claim 1, wherein prior to the execution of the subtasks by the execution end according to the executable sequence object, the method further comprises:
the execution end periodically updates the configuration according to the open source configuration management center apollo;
after the configuration is updated, registering the subtasks carried by the execution end to complete initialization of the execution end;
after the initialization of the execution end is completed, sending heartbeat data to a message queue, wherein the heartbeat data comprises meta-information of subtasks carried by the execution end, and the meta-information comprises: capability name, capability version, pre-dependencies, and supporting file types.
7. The method according to claim 6, wherein the execution end executes the subtasks according to the executable sequence object to obtain a subtask product, and feeds back the subtask product to the calling end through the command end, and the method comprises:
each subtask uniquely corresponds to an exclusive queue according to the capability name and the capability version in the meta-information;
the execution end takes out the subtask from the exclusive queue uniquely corresponding to each subtask through the process of each carried subtask;
the execution end executes the subtasks serially or in parallel to obtain subtask products, and the subtask products are uniformly stored under the same directory;
and feeding back the subtask product to a calling end through a command end.
8. A system for designing a distributed file asynchronous processing service, comprising:
a determining unit, used for the command end to receive a request initiated by the calling end and determine a subtask according to the request;
an arranging unit, used for the command end to arrange a task link based on the subtasks and construct an executable sequence object of the distributed asynchronous message queue according to the task link;
a trigger execution unit, used for the command end to command the execution end to trigger execution of the executable sequence object;
and an execution unit, used for the execution end to execute the subtasks according to the executable sequence object to obtain subtask products and feed the subtask products back to the calling end through the command end.
9. A computer-readable storage medium storing computer instructions for causing a computer to perform the method of designing a distributed file asynchronous processing service of any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the method of designing a distributed file asynchronous processing service of any of claims 1-7.
CN202210689844.6A 2022-06-17 2022-06-17 Design method and system of distributed file asynchronous processing service and electronic equipment Pending CN115098255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210689844.6A CN115098255A (en) 2022-06-17 2022-06-17 Design method and system of distributed file asynchronous processing service and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210689844.6A CN115098255A (en) 2022-06-17 2022-06-17 Design method and system of distributed file asynchronous processing service and electronic equipment

Publications (1)

Publication Number Publication Date
CN115098255A true CN115098255A (en) 2022-09-23

Family

ID=83290914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210689844.6A Pending CN115098255A (en) 2022-06-17 2022-06-17 Design method and system of distributed file asynchronous processing service and electronic equipment

Country Status (1)

Country Link
CN (1) CN115098255A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116521400A (en) * 2023-07-04 2023-08-01 京东科技信息技术有限公司 Article information processing method and device, storage medium and electronic equipment
CN116521400B (en) * 2023-07-04 2023-11-03 京东科技信息技术有限公司 Article information processing method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110019240B (en) Service data interaction method, device and system
CN111400008B (en) Computing resource scheduling method and device and electronic equipment
US11206316B2 (en) Multiple model injection for a deployment cluster
CN105653425A (en) Complicated event processing engine based monitoring system
US9843528B2 (en) Client selection in a distributed strict queue
US9571414B2 (en) Multi-tiered processing using a distributed strict queue
CN111897633A (en) Task processing method and device
CN109783255B (en) Data analysis and distribution device and high-concurrency data processing method
US20240111549A1 (en) Method and apparatus for constructing android running environment
US10305817B1 (en) Provisioning system and method for a distributed computing environment using a map reduce process
CN106331783B (en) A kind of resource allocation methods, device and intelligent television system
CN115098255A (en) Design method and system of distributed file asynchronous processing service and electronic equipment
CN109388501B (en) Communication matching method, device, equipment and medium based on face recognition request
CN104052677A (en) Soft load balancing method and apparatus of single data source
US20170171307A1 (en) Method and electronic apparatus for processing picture
CN116069493A (en) Data processing method, device, equipment and readable storage medium
CN112600842A (en) Cluster shell method and device, electronic equipment and computer readable storage medium
CN115098254A (en) Method and system for triggering execution of subtask sequence and electronic equipment
CN115061796A (en) Execution method and system for calling between subtasks and electronic equipment
CN111190731A (en) Cluster task scheduling system based on weight
US11606422B2 (en) Server for controlling data transmission through data pipeline and operation method thereof
CN115426361A (en) Distributed client packaging method and device, main server and storage medium
CN106331774A (en) Equipment connection method and device and intelligent television system
CN114969199A (en) Method, device and system for processing remote sensing data and storage medium
CN111338775B (en) Method and equipment for executing timing task

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination