CN116204289A - Process data processing method, terminal equipment and storage medium - Google Patents

Info

Publication number
CN116204289A
CN116204289A (application CN202310099786.6A)
Authority
CN
China
Prior art keywords
tasks
task
processing method
data processing
relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310099786.6A
Other languages
Chinese (zh)
Inventor
Sun Xing (孙兴)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Lutes Robotics Co ltd
Original Assignee
Wuhan Lotus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Lotus Technology Co Ltd
Priority to CN202310099786.6A
Publication of CN116204289A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The process data processing method comprises: disassembling the implementation process of a process; instantiating a plurality of step modules as a plurality of tasks and instantiating their dependency relationships as a plurality of pipes; binding the tasks to the pipes; performing topological ordering; packaging each task into a corresponding thread object; and scheduling the thread objects with multithreading technology according to the established synchronization relationships and trigger modes. The process data processing method, terminal device and storage medium support communication and interaction among threads, flexible trigger modes for each thread, and data flow over a directed cyclic graph structure during intra-process communication.

Description

Process data processing method, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of software architecture, and particularly relates to a process data processing method, a terminal device and a storage medium.
Background
Modern computer systems provide multiple CPU cores to handle complex processing tasks within a single process. Much software is therefore designed to be multi-threaded, so that the CPU cores can be fully utilized and data processing accelerated; however, data races in multi-threaded architectures remain a persistent headache for many software development engineers.
The widely adopted ROS communication framework is mainly intended for inter-process communication: it does not support communication among multiple tasks within a process, and the serialization and deserialization of messages during communication consume considerable resources. While designing the present application, the applicant found that when a single process contains several complex subtasks that require data interaction and data synchronization among them, different tasks may require different scheduling modes (event-triggered scheduling versus periodic scheduling), and the data flow relationships among the tasks need to be flexibly configurable.
Disclosure of Invention
In view of the above problems, the present application provides a process data processing method, including:
disassembling a process implementation process, and acquiring a plurality of step modules of the implementation process and a dependency relationship among the plurality of step modules;
instantiating the plurality of step modules into a plurality of tasks and instantiating the dependency relationship into a plurality of pipelines, wherein the tasks are used for representing functions to be realized in one thread, and the pipelines are used for representing data exchange among the tasks;
binding the plurality of tasks and the plurality of pipes;
performing topological ordering on the tasks according to the dependency relationship, and establishing a synchronous relationship and a triggering mode among the tasks;
and packaging each task into a corresponding thread object, and scheduling the plurality of thread objects with multithreading technology according to the synchronization relationship and the trigger mode.
Optionally, the step of obtaining the plurality of step modules of the implementation process and the dependency relationships among the plurality of step modules includes:
in response to obtaining the service task of the process, functionally decomposing the service task and classifying the step module corresponding to each function;
acquiring data to be transferred among the plurality of step modules, and establishing a data type pointer;
determining the direction and the sequence of the data transfer, and establishing the dependency relationship among the plurality of step modules.
Optionally, before the step of instantiating the plurality of step modules as a plurality of tasks and instantiating the dependency relationships as a plurality of pipes, the method includes:
defining each task and pipeline related class and inheritance relationship thereof, wherein the class comprises a base class and an implementation class;
template parameters are obtained, and inheritance is carried out on the base class;
and implementing the base class through the implementation class by using the template parameters.
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing the synchronization relationship and the triggering manner between the plurality of tasks includes:
and binding mutexes and condition variables of the plurality of tasks.
Optionally, the step of binding the mutexes and the condition variables of the plurality of tasks includes:
encapsulating a mutex corresponding to each task and implementing an automatic mutex corresponding to the shared resource;
performing the enter-mutex operation in the constructor of the automatic mutex object and the exit-mutex operation in the destructor;
and querying the parameters of the shared resource and setting the condition variables required for accessing it.
Optionally, the step of binding the plurality of tasks and the plurality of pipes includes:
carrying out text description on each task and a corresponding pipeline according to the dependency relationship among the tasks;
the text description is entered into a dataflow framework along with the plurality of tasks.
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing a synchronization relationship and a triggering manner between the plurality of tasks includes at least one of the following:
performing topological ordering on the tasks according to the text description;
screening the synchronous relation among the tasks according to the source task of the input information of each task and the destination task of the output information;
and establishing a triggering mode of each task according to the dependency weight of the input information on each task.
Optionally, after the step of packaging each task into a corresponding thread object and scheduling the plurality of thread objects with multithreading technology according to the synchronization relationship and the trigger mode, the method further includes:
and acquiring pointer information corresponding to the pipelines, and establishing a cache queue according to the synchronization relationship between the pointer information and the tasks.
The application also provides a terminal device, which comprises a processor and a memory;
the memory stores a computer program which, when executed by the processor, implements the steps of the process data processing method as described above.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a process data processing method as described above.
By adopting multithreading technology, the process data processing method, terminal device and storage medium can run multiple tasks on multiple cores simultaneously, and the data flow among tasks is configurable. Each task only needs to concern itself with its inputs and outputs; no multithreading details are involved, i.e. the data races that multithreading may cause are already handled inside the framework. Meanwhile, the framework is non-invasive by design, based on C++ template metaprogramming: during multi-threaded communication within a process, the processing tasks, the content of the data flowing between them, and their dependency relationships can all be freely configured. The framework supports single-threaded scheduling after topological ordering of all tasks as well as multi-threaded scheduling according to topological dependencies, and each task supports several trigger modes, such as periodic triggering, triggering by any one dependent event, triggering by all dependent events, or triggering by specified events. Data sharing and circulation are extremely efficient, since both are realized through pointers. The method supports communication and interaction among threads, flexible trigger modes for each thread, and data flow over a directed cyclic graph structure during intra-process communication.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a process data processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of pipeline class inheritance relationships according to an embodiment of the present application;
FIG. 3 is a schematic diagram of task class inheritance relationships according to an embodiment of the present application;
fig. 4 is a schematic diagram of a process data processing flow of a terminal device according to an embodiment of the present application;
FIG. 5 is a task disassembly diagram I of an embodiment of the present application;
FIG. 6 is a task disassembly diagram II according to an embodiment of the present application;
fig. 7 is a task disassembly diagram III according to an embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments will be described below with reference to the drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application; the detailed description is not intended to limit the scope of the application as claimed, but merely represents selected embodiments. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
In the description of the present application, it should be understood that the terms indicating orientation or positional relationship are based on the orientation or positional relationship shown in the drawings, and are merely for convenience of description and to simplify the description, rather than to indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operate in a particular orientation, and therefore should not be construed as limiting the present application.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art based on the specification specifics.
First embodiment
First, a process data processing method is provided, and fig. 1 is a flowchart of a process data processing method according to an embodiment of the present application.
As shown in fig. 1, in an embodiment, the process data processing method includes:
s10: and disassembling the process implementation process, and acquiring the dependency relationship between a plurality of step modules and a plurality of step modules of the implementation process.
A process is a running activity of a program over a certain data set in a computer; it is the basic unit of resource allocation and scheduling by the system and the foundation of the operating system's structure. In early process-oriented computer architectures, the process was the basic execution entity of a program; in contemporary thread-oriented architectures, a process is a container for threads. A program is a description of instructions, data and their organization, while a process is the running entity of a program. Optionally, when the service to be implemented in a single process is complex and time-consuming, the complex problem can be analyzed and disassembled to obtain several sub-modules and the dependency relationships among them.
S20: the method comprises the steps of instantiating a plurality of step modules into a plurality of tasks and instantiating the dependency relationships into a plurality of pipelines, wherein the tasks are used for representing functions to be realized in one thread, and the pipelines are used for representing data exchange among the tasks.
Instantiation refers to the process of creating an object from a class in object-oriented programming, i.e. turning an abstract conceptual class into a concrete object of that class. Illustratively, through analysis of the actual problem, several concepts are abstracted: a task represents the work to be done in one thread, and a pipe represents an exchange of data between tasks, supporting writing on one side and reading on the other side only. A FlowGraph class can combine the pipes and tasks, perform topological ordering according to the description information, and detect directed-cycle data flows in combination with each task's trigger type.
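The one-writer/one-reader pipe concept can be sketched as a small C++ class template. This is an illustrative assumption of what such a pipe might look like, not the patent's actual implementation; the name FlowPipe and its interface are invented for illustration.

```cpp
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// Hypothetical sketch of a typed pipe: one side writes, the other reads.
template <typename T>
class FlowPipe {
public:
    // Writing side: push one value into the pipe.
    void write(T value) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(value));
    }
    // Reading side: pop the oldest value, or nullopt when the pipe is empty.
    std::optional<T> read() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return std::nullopt;
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::queue<T> queue_;
};
```

Because the template parameter T carries the externally defined data type, payloads cross the pipe without serialization.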
S30: binding a plurality of tasks and a plurality of pipes.
Each task stage is triggered by specific input information, or produces specific task results that are output as input information to other task stages. Accordingly, two tasks are connected through a specific data pipe to transfer data information.
S40: and performing topological sequencing on the tasks according to the dependency relationship, and establishing a synchronous relationship and a triggering mode among the tasks.
A dependency relationship is also known as a "logical relationship". In process management it denotes a relationship in which a change in one of two threads (the leading or the trailing one) affects the other. Illustratively, a dependency exists between two classes when one class uses an object of the other as an operation parameter, uses an object of the other as a data member, or sends messages to the other.
S50: and respectively and correspondingly packaging the tasks into a plurality of thread objects, and scheduling the plurality of thread objects by using a multithreading technology according to the synchronous relation and the triggering mode.
Each task is individually packaged into an executable thread so that the scheduler can schedule it independently. Illustratively, the application program sets environment variables that cause the operating system's dynamic loader to preload a function library; the library contains thread-creation and process-creation wrapper functions that wrap the threads.
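Packaging a task into its own schedulable thread object can be sketched, under the assumption of a plain std::thread wrapper (the name TaskThread is invented for illustration):

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <utility>

// Illustrative sketch: each task callable is packaged as its own thread
// object, so the OS scheduler can dispatch tasks independently.
class TaskThread {
public:
    explicit TaskThread(std::function<void()> task)
        : thread_(std::move(task)) {}  // the task starts running immediately
    void join() {
        if (thread_.joinable()) thread_.join();
    }
private:
    std::thread thread_;
};
```

A scheduler built on this would own one TaskThread per task and coordinate them through the synchronization relationships established in S40.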
According to this embodiment, multiple tasks can run on multiple cores simultaneously with configurable data flow among them; communication and interaction among threads are supported, each thread's trigger mode is flexible, and data flow over a directed cyclic graph structure is supported during intra-process communication.
Optionally, the step of disassembling the implementation process to obtain the plurality of step modules and the dependency relationships among them includes:
in response to obtaining the service task of the process, functionally decomposing the service task and classifying the step module corresponding to each function; obtaining the data to be transferred among the step modules and establishing data type pointers; and determining the direction and order of the data transfer and establishing the dependency relationships among the step modules.
In actual business requirements, the data flow framework can be used when the service to be implemented within a single process is complex and time-consuming. Optionally, the complex problem is first analyzed and disassembled into several sub-modules and the dependencies among them. The data passed between tasks may be pointers to custom data types. For example, suppose the decomposition yields a first, second, third, fourth and fifth task module, where the result processed by the first task module is output to the second, third and fourth task modules, and the results of the second, third and fourth task modules are output to the fifth task module. After an upstream module finishes processing, it triggers an event notification and passes a data pointer to the downstream module; while the current module has not received the event, it remains blocked and waiting, consuming no CPU resources in the blocked state.
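The event-notification hand-off described above — the downstream task blocking without busy-waiting until the upstream task delivers a data pointer — is the classic condition-variable pattern. A minimal sketch, with all names assumed for illustration:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Sketch of the blocking hand-off: the downstream task waits on a condition
// variable (consuming no CPU) until the upstream task pushes a value and
// notifies it.
template <typename T>
class EventQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();  // event notification to the downstream task
    }
    // Blocks (without spinning) until data arrives.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
```

In the A→{B,C,D}→E decomposition above, each arrow would be backed by one such queue carrying a data pointer.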
Optionally, before the step of instantiating the plurality of step modules as a plurality of tasks and instantiating the dependency relationships as a plurality of pipes, the method includes:
defining the classes related to each task and pipe and their inheritance relationships, the classes including base classes and implementation classes; obtaining template parameters and inheriting from the base classes; and implementing the base classes through the implementation classes using the template parameters.
Optionally, the transmission data types actually defined externally are passed into the data flow framework by means of template parameters. FIG. 2 is a schematic diagram of pipe class inheritance relationships according to an embodiment of the present application. FIG. 3 is a schematic diagram of task class inheritance relationships according to an embodiment of the present application.
As shown in fig. 2, in one embodiment, FlowPipeInput is the base class of the pipe's input side, FlowPipeInput<T> is an interface class instantiated according to a specific template parameter type T, and the implementation class inherits from this interface class. The output side of the pipe follows the same pattern and is not described further.
As shown in fig. 3, in an embodiment, FlowTask<Args...> mainly handles the definition and implementation of tasks in the static case, where the data types transmitted in the pipes differ, while FlowTaskDynamic<Args...> handles the definition and implementation when the data types in the pipes are identical.
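The three-level pattern of Figs. 2-3 — a non-template base class, a template interface class parameterized on the transported type T, and a concrete implementation class — can be sketched as follows. The class names echo the description, but the member layout is an assumption for illustration:

```cpp
// Non-template base class: lets the framework hold pipes of any type
// through a common pointer.
class FlowPipeInputBase {
public:
    virtual ~FlowPipeInputBase() = default;
};

// Interface class instantiated from the template parameter type T.
template <typename T>
class FlowPipeInput : public FlowPipeInputBase {
public:
    virtual void write(const T& value) = 0;  // typed input interface
};

// Implementation class inheriting the typed interface.
template <typename T>
class FlowPipeInputImpl : public FlowPipeInput<T> {
public:
    void write(const T& value) override { last_ = value; }
    T last_{};  // illustrative stand-in for real buffering
};
```

The externally defined payload type thus reaches the framework's internals purely through T, which is what makes the design non-invasive.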
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing the synchronization relationship and the triggering manner between the plurality of tasks includes:
mutexes and condition variables of a plurality of tasks are bound.
In multi-threaded software development, many threads share resources, and access to these resources requires locking, i.e. mutex operations. This leads to heavy use of mutexes in the code, and conditional branches create multiple exit points, so releasing a mutex at some program exit is easily forgotten. Such an omission is fatal: in a large multi-threaded server program it hides well, causes deadlocks at runtime, and costs system engineers a great deal of effort to eliminate. Whether a mutex operation is performed on a shared resource that a thread may need corresponds to the state of that resource's condition variable. Therefore, before topologically ordering the tasks to establish the synchronization relationship and trigger mode among them, the tasks must first be bound so as to define the content and timing of the mutex operations.
Optionally, the step of binding the mutex and the condition variable of the plurality of tasks includes:
encapsulating a mutex corresponding to each task and implementing an automatic mutex corresponding to the shared resource; performing the enter-mutex operation in the constructor of the automatic mutex object and the exit-mutex operation in its destructor; and querying the parameters of the shared resource and setting the condition variables required for accessing it.
Alternatively, the enter-mutex operation can be performed in the constructor of the automatic mutex object and the exit-mutex operation in its destructor. The lifetime of a temporary variable is then exploited: an automatic mutex object is declared at the beginning of a code section that must be operated on atomically, so that when the object's lifetime ends it is destroyed and the destructor invokes the operation that releases the mutex.
Illustratively, when encapsulating a mutex for a task, assume the encapsulating object is CcoreMutex, for which the functional interfaces lock and unlock are implemented: lock is the interface for entering the mutex and unlock releases it. An automatic mutex CAutoCoreMutex is then implemented: a CcoreMutex pointer is passed in as a parameter, its validity is checked, the lock operation is performed, and the unlock operation is performed on the pointer in the destructor. Alternatively, the mutex encapsulation can be done per operating system, with macros distinguishing the platforms. This effectively improves the platform independence of the code: a developer only needs to create an automatic mutex object to operate atomically on the code in that region, and the release action executes automatically when the automatic mutex object is destroyed.
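The CcoreMutex / CAutoCoreMutex scheme described above is the RAII scope-guard idiom. A minimal sketch — the class names follow the description, but the internals (a plain flag instead of a real OS mutex, and the is_locked helper) are assumptions for illustration:

```cpp
// Illustrative mutex wrapper; a real version would wrap the platform
// mutex behind macros that distinguish the operating systems.
class CcoreMutex {
public:
    void lock()   { locked_ = true; }
    void unlock() { locked_ = false; }
    bool is_locked() const { return locked_; }  // test aid, assumed
private:
    bool locked_ = false;
};

// Automatic mutex: enter-mutex in the constructor, exit-mutex in the
// destructor, so every exit path of the scoped code releases the lock.
class CAutoCoreMutex {
public:
    explicit CAutoCoreMutex(CcoreMutex* m) : mutex_(m) {
        if (mutex_) mutex_->lock();    // validity check, then lock
    }
    ~CAutoCoreMutex() {
        if (mutex_) mutex_->unlock();  // released on any scope exit
    }
    CAutoCoreMutex(const CAutoCoreMutex&) = delete;
    CAutoCoreMutex& operator=(const CAutoCoreMutex&) = delete;
private:
    CcoreMutex* mutex_;
};
```

Declaring a CAutoCoreMutex at the top of a critical section makes the multiple-exit-point omission described above impossible, because the destructor runs on every path out of the scope.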
To better protect the data, while one thread accesses the shared resource, other threads cannot access it until that thread finishes reading and writing; thus data inconsistency and data pollution cannot occur, and the mutex is guaranteed to lock any access to the protected data. Illustratively, when a thread wants to access the shared resource, it first checks whether the condition variable indicates an allowed access state; only then may it enter the shared resource, operate on it, and perform the locking and unlocking actions, so that the data is better protected.
Optionally, the step of binding the plurality of tasks and the plurality of pipes comprises:
carrying out text description on each task and a corresponding pipeline according to the dependency relationship among the tasks;
the text description is fed into the dataflow framework along with a number of tasks.
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing the synchronization relationship and the triggering manner between the plurality of tasks may include:
the plurality of tasks are topologically ordered according to the text description.
Illustratively, threads consume no CPU resources in the blocked state. The thread in which each sub-module runs is called a task, and the data flow between two modules is called a pipe. The dependency relationships among tasks can therefore be converted into a plain-text description, which is fed into the data flow framework together with the tasks; the framework topologically orders the tasks according to this information and then establishes cache queues along the pipes for data exchange between tasks.
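Topological ordering of tasks from such a dependency description can be sketched with Kahn's algorithm. The edge-list input format below is an assumption standing in for the patent's plain-text description:

```cpp
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Kahn's algorithm: order tasks so every task comes after the tasks it
// depends on. If the returned order is shorter than the task list, the
// description contains a directed cycle.
std::vector<std::string> topo_sort(
    const std::vector<std::pair<std::string, std::string>>& edges,
    const std::vector<std::string>& tasks) {
    std::map<std::string, int> indegree;
    std::map<std::string, std::vector<std::string>> adj;
    for (const auto& t : tasks) indegree[t] = 0;
    for (const auto& [from, to] : edges) {
        adj[from].push_back(to);
        ++indegree[to];
    }
    std::queue<std::string> ready;
    for (const auto& [t, d] : indegree)
        if (d == 0) ready.push(t);  // tasks with no pending dependencies
    std::vector<std::string> order;
    while (!ready.empty()) {
        auto t = ready.front();
        ready.pop();
        order.push_back(t);
        for (const auto& next : adj[t])
            if (--indegree[next] == 0) ready.push(next);
    }
    return order;
}
```

This also explains how the framework can detect the directed-cycle case mentioned earlier: a cycle leaves some tasks with nonzero in-degree, so they never enter the result.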
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing the synchronization relationship and the triggering manner between the plurality of tasks may include:
and screening the synchronous relation among a plurality of tasks according to the source task of the input information of each task and the destination task of the output information.
Illustratively, a synchronization relationship and a trigger mode among the tasks are established according to the associations between each task's input and output information in the description, and all tasks are scheduled with multithreading. For example, a thread may need to be triggered periodically, triggered by the arrival of any one of its dependency messages, triggered by the arrival of all of its dependency messages, or triggered by the arrival of the several messages specified in the configuration items.
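The four trigger modes just listed can be sketched as an enum plus a readiness check; the names and the set-based check are assumptions for illustration:

```cpp
#include <algorithm>
#include <set>
#include <string>

// The four trigger modes described above.
enum class TriggerMode { Periodic, AnyEvent, AllEvents, SpecifiedEvents };

// May a task with the given mode fire, given which dependency messages
// have arrived so far?
bool ready_to_fire(TriggerMode mode,
                   const std::set<std::string>& arrived,
                   const std::set<std::string>& all_deps,
                   const std::set<std::string>& required) {
    switch (mode) {
        case TriggerMode::Periodic:
            return true;  // actually fired by a timer, not by messages
        case TriggerMode::AnyEvent:
            return !arrived.empty();
        case TriggerMode::AllEvents:
            return arrived == all_deps;
        case TriggerMode::SpecifiedEvents:
            // every message named in the configuration items has arrived
            return std::includes(arrived.begin(), arrived.end(),
                                 required.begin(), required.end());
    }
    return false;
}
```

The scheduler would evaluate this check whenever a pipe delivers a message to the task and, if it returns true, dispatch the task's thread.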
Optionally, the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing the synchronization relationship and the triggering manner between the plurality of tasks may include:
and establishing a triggering mode of each task according to the dependency weight of the input information on each task.
For example, when a first task is periodically triggered and the other tasks are event-triggered, the first task can be set to the periodic trigger mode, thereby supporting data flow over a directed cyclic graph structure during intra-process communication.
Illustratively, suppose the fifth task depends on the outputs of the second, third and fourth tasks, but only weakly on the fourth: the fifth task works better if it receives the fourth task's message, yet still works normally when it receives only the messages of the second and third tasks. The messages of the second and third tasks are indispensable to the fifth task, so they are strong dependencies with a higher dependency weight, while the fourth task's message is a weak dependency with a lower weight for the fifth task; the fifth task can then be configured with the specified-events trigger mode.
Optionally, the step of packaging the plurality of tasks into a plurality of thread objects correspondingly, and performing scheduling processing on the plurality of thread objects by using a multithreading technology according to the synchronization relationship and the triggering mode further includes:
pointer information corresponding to the plurality of pipes is obtained, and a cache queue is established according to the pointer information and the synchronization relationship among the plurality of tasks.
Alternatively, both the sharing and the circulation of data may be accomplished through pointers: the data passed between tasks may be pointers to custom data types. The dependency relationships between the tasks can be converted into a plain-text description, which is fed into the data flow framework together with the tasks; the framework topologically sorts the tasks according to this information and then establishes a cache queue, based on the pointer parameters of each pipe, for data exchange between the tasks.
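As a minimal sketch of the pointer-passing cache queue described here (hypothetical names, not the patent's actual implementation), a single-writer/single-reader pipe can hold shared pointers in a queue guarded by a mutex and condition variable, so payloads are never copied and the downstream task blocks without consuming CPU:

```cpp
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>

// Hypothetical sketch of a pipe between two tasks: data is exchanged as
// shared pointers, and a mutex plus condition variable guard the cache
// queue, so the reader blocks without spinning until data arrives.
template <typename T>
class FlowPipe {
public:
    void write(std::shared_ptr<T> msg) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push_back(std::move(msg));
        }
        cv_.notify_one();  // wake the blocked downstream task
    }

    // Blocks (without consuming CPU) until a message is available.
    std::shared_ptr<T> read() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        auto msg = std::move(queue_.front());
        queue_.pop_front();
        return msg;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::shared_ptr<T>> queue_;
};
```

Because only pointers move through the queue, one upstream result can be fanned out to several downstream pipes without duplicating the payload.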
Second embodiment
The application also provides a terminal device, which comprises a processor and a memory;
the memory stores a computer program which, when executed by the processor, implements the steps of the process data processing method as described above.
The terminal device adopts a multithreading technology, can run a plurality of tasks on a plurality of cores simultaneously, and the data flow among the tasks is configurable. Each task only needs to concern itself with its inputs and outputs and does not involve any multithreading details; that is, the data race problems that multithreading may cause have already been addressed inside the framework. Meanwhile, based on C++ template metaprogramming, the framework is non-invasive: the processing tasks, the content of the data flowing between them, and their dependency relationships can all be freely configured for intra-process multithreaded communication without modifying the framework. The framework supports single-threaded scheduling after topologically sorting all tasks, as well as multithreaded scheduling according to the topological dependencies, and each task supports various triggering modes, such as periodic triggering, triggering by any one of the dependent events, triggering by all of the dependent events, and triggering by specified events. Data sharing and circulation are extremely efficient because they are realized through pointers.
Fig. 4 is a schematic diagram of a process data processing flow of a terminal device according to an embodiment of the present application.
As shown in fig. 4, the specific implementation procedure of the terminal device is as follows. By analyzing the actual problem, two concepts are abstracted: a task represents the work that needs to be done in one thread, and a pipe represents an exchange of data between tasks, supporting writing on one side and reading on the other side only.
In an embodiment, the terminal device first instantiates all the pipes and all the tasks, binds the pipes to the corresponding tasks, and binds the mutexes and condition variables to ensure multithreading safety. The processing framework then performs topological sorting according to the dependency relationships among the tasks, so that each task realizes its corresponding triggering mode. Finally, the tasks are packaged into callable thread objects and handed to a scheduling module for scheduling and execution in order.
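The final packaging step can be sketched as follows (an assumption-laden illustration, not the patent's code): each sorted task becomes a plain callable, and a scheduler launches one thread per callable and joins them all.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical sketch of the scheduling module: topologically sorted
// tasks are packaged as callable thread objects, launched each on its
// own thread, and joined when all have finished.
class Scheduler {
public:
    void submit(std::function<void()> task) {
        tasks_.push_back(std::move(task));
    }

    // Launch every packaged task on its own thread, then wait for all.
    void run() {
        std::vector<std::thread> threads;
        for (auto& t : tasks_) threads.emplace_back(t);
        for (auto& th : threads) th.join();
    }

private:
    std::vector<std::function<void()>> tasks_;
};
```

Each task body stays single-threaded; synchronization between tasks is the responsibility of the pipes, not of the task code itself.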
Alternatively, the transmission data types actually defined externally may be passed into the data flow framework through template parameters.
Referring to FIG. 2, the pipe-related classes defined within the framework and their inheritance relationships are generally as follows (angle brackets denote a template class, with the template parameters inside):
Here, FlowPipeInputBase is the base class of the pipe's input end, and FlowPipeInput<T> is an interface class instantiated for a specific template parameter type T; the implementation class inherits from this interface class. The output end of the pipe is analogous and is not described in detail.
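A minimal reconstruction of this three-level hierarchy might look like the following (the member names are assumptions; the patent only names the classes):

```cpp
#include <string>
#include <utility>

// Hypothetical reconstruction of the pipe input-side hierarchy: a
// non-template base class, a template interface class instantiated per
// payload type T, and a concrete implementation class.
class FlowPipeInputBase {
public:
    virtual ~FlowPipeInputBase() = default;
    virtual std::string name() const = 0;  // type-erased handle for the graph
};

template <typename T>
class FlowPipeInput : public FlowPipeInputBase {
public:
    virtual void write(const T& value) = 0;  // typed write interface
};

template <typename T>
class FlowPipeInputImpl : public FlowPipeInput<T> {
public:
    explicit FlowPipeInputImpl(std::string name) : name_(std::move(name)) {}
    std::string name() const override { return name_; }
    void write(const T& value) override { last_ = value; }
    const T& last() const { return last_; }

private:
    std::string name_;
    T last_{};
};
```

The non-template base class is what lets the graph hold pipes of heterogeneous payload types in one container while each concrete pipe stays fully typed.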
Referring to fig. 3, the internal task classes and their inheritance relationships are shown in the diagram (angle brackets denote a template class, with the template parameters inside):
Here, FlowTask<Args...> mainly defines and implements tasks in the case where the data types carried by the pipes differ, while FlowTaskDynamic<Args...> defines and implements tasks whose pipes all carry exactly the same data type.
The FlowGraph class can be used to combine the pipes and the tasks, perform topological sorting according to the description information, and detect directed-ring data flows in combination with the trigger types of the tasks. For a ring link, when it is detected that the ring will not cause a deadlock (for example, because of a periodic trigger or a weak dependency), a bidirectional queue pair is automatically established for the upstream and downstream links; each task is abstracted into a callable object, and the callable objects are finally passed to the back-end scheduling sub-module for actual execution.
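The ordering-plus-ring-detection step can be sketched with Kahn's algorithm (an illustrative implementation; the patent does not specify the sort): any tasks left unordered after the algorithm finishes lie on a directed ring, which is only legal when broken by a periodic or weakly dependent trigger.

```cpp
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical sketch of the FlowGraph ordering step: Kahn's algorithm
// topologically sorts the tasks; any leftover tasks imply a directed
// ring in the dependency graph.
struct TopoResult {
    std::vector<std::size_t> order;
    bool hasCycle;
};

TopoResult topoSort(std::size_t n,
                    const std::vector<std::pair<std::size_t, std::size_t>>& edges) {
    std::vector<std::vector<std::size_t>> adj(n);
    std::vector<std::size_t> indegree(n, 0);
    for (auto& [from, to] : edges) {
        adj[from].push_back(to);
        ++indegree[to];
    }
    std::queue<std::size_t> ready;
    for (std::size_t i = 0; i < n; ++i)
        if (indegree[i] == 0) ready.push(i);

    TopoResult result{{}, false};
    while (!ready.empty()) {
        std::size_t u = ready.front();
        ready.pop();
        result.order.push_back(u);
        for (std::size_t v : adj[u])
            if (--indegree[v] == 0) ready.push(v);
    }
    result.hasCycle = result.order.size() != n;  // leftovers imply a ring
    return result;
}
```

A full implementation would then check the trigger type of each task on a detected ring before deciding whether to accept the graph or report a potential deadlock.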
In actual business requirements, the data flow framework may be used when the business to be implemented within a single process is complex and time-consuming. First, the complex problem is analyzed and decomposed, yielding a number of sub-modules and the dependency relationships among them.
Fig. 5 is a task disassembly diagram of an embodiment of the present application.
In one embodiment, as shown in fig. 5, assume that five sub-modules are obtained by decomposition: A (first task), B (second task), C (third task), D (fourth task) and E (fifth task), and that the data transferred between the tasks are pointers to custom data types. All tasks are expected to run in different threads, with the result of A's processing output simultaneously to B, C and D, and the results of B, C and D output to E. After an upstream module finishes processing, it triggers an event notification and passes a data pointer to the downstream module; until that event arrives, the downstream module stays in a blocking wait state, consuming no CPU resources while blocked. The thread in which each sub-module runs is called a task, and the data flow between two modules is called a pipe. The dependency relationships between the tasks can thus be converted into a plain-text description, which is fed into the data flow framework together with the tasks; the framework topologically sorts the tasks according to this information and then establishes cache queues on the pipes for data exchange between the tasks. Synchronization relationships and triggering modes between tasks are then established according to the description information, and all tasks are scheduled using multithreading. This example behaves like a three-stage pipeline: when tasks are dense, E may be processing the data of time T1 and C the data of time T2 while A is already processing the data of time T3. Each sub-task is written as ordinary single-threaded code and only needs to concern itself with the inputs and outputs of its sub-module.
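The fig. 5 topology can be sketched end to end as follows (illustrative only, with assumed payloads and arithmetic): each arrow is a blocking pointer-passing queue, A fans out one shared pointer to B, C and D, and E fans their results back in.

```cpp
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>
#include <thread>

// Hypothetical end-to-end sketch of the fig. 5 topology: A fans out to
// B, C, D, whose results fan in to E. Each arrow is a blocking
// pointer-passing queue, so a downstream task waits without spinning.
template <typename T>
class Pipe {
public:
    void write(std::shared_ptr<T> v) {
        { std::lock_guard<std::mutex> l(m_); q_.push_back(std::move(v)); }
        cv_.notify_one();
    }
    std::shared_ptr<T> read() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return !q_.empty(); });
        auto v = std::move(q_.front());
        q_.pop_front();
        return v;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::shared_ptr<T>> q_;
};

int runGraph(int seed) {
    Pipe<int> ab, ac, ad, be, ce, de;
    std::thread a([&] {            // A: produce once, fan out the pointer
        auto v = std::make_shared<int>(seed);
        ab.write(v); ac.write(v); ad.write(v);
    });
    std::thread b([&] { be.write(std::make_shared<int>(*ab.read() + 1)); });
    std::thread c([&] { ce.write(std::make_shared<int>(*ac.read() + 2)); });
    std::thread d([&] { de.write(std::make_shared<int>(*ad.read() + 3)); });
    int result = 0;
    std::thread e([&] {            // E: fan in from B, C, D
        result = *be.read() + *ce.read() + *de.read();
    });
    a.join(); b.join(); c.join(); d.join(); e.join();
    return result;
}
```

Note that A writes the same shared pointer three times: the payload is produced once and shared, not copied, which is the pointer-circulation efficiency the text claims.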
All modules work normally under the scheduling of the data flow framework, and the framework itself needs no change for different numbers of tasks, different relationships among them, or different transmitted types (this is realized using template metaprogramming and generic programming in the C++ language).
Fig. 6 is a task disassembly diagram of a second embodiment of the present application.
The fig. 5 embodiment describes one of the simplest and most common usage scenarios. In fact, the data flow framework has many more capabilities. Referring to fig. 6, a task may be triggered periodically rather than by an event, and the above process and advantages remain valid; moreover, when a ring-shaped dependency task list contains a periodically triggered task (for example, A in fig. 6 is triggered periodically while the other tasks are event-triggered), a feedback structure is also possible.
Illustratively, the processing framework of the terminal device of the present application also supports the scenario, shown in the figure above, in which E depends on the outputs of B, C and D but only weakly on D; that is, E works better if it receives D's message, but still works normally if it receives only the messages of B and C. The messages of B and C are indispensable to E, so B and C are strong dependencies, and E can be configured with a specified trigger mode.
Fig. 7 is a task disassembly diagram III according to an embodiment of the present application.
Referring to fig. 7, the processing framework of the terminal device of the present application can instead be embedded as a simple multithreaded module in existing code. In the embodiment of fig. 7, the A module may feed data at a certain moment, activating three background threads B, C and D; it then immediately blocks and waits for B, C and D to finish, after which the thread in which the A module is located continues running.
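This embedded fan-out-and-wait usage can be sketched with standard futures (a minimal illustration with assumed workloads, not the patent's actual API):

```cpp
#include <future>

// Hypothetical sketch of the "embedded" usage in fig. 7: the A module
// feeds data, kicks off B, C and D on background threads, immediately
// blocks, and resumes its own thread once all three have returned.
int fanOutAndWait(int input) {
    auto b = std::async(std::launch::async, [input] { return input + 1; });
    auto c = std::async(std::launch::async, [input] { return input + 2; });
    auto d = std::async(std::launch::async, [input] { return input + 3; });
    // A blocks here until B, C and D have all finished.
    return b.get() + c.get() + d.get();
}
```

The calling code sees only an ordinary function call; the multithreading is entirely contained inside the module, which is what makes this embedding non-invasive.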
The processing framework of the terminal equipment can solve the problem of data competition of a multithreading architecture in a single process, flexibly configure data interaction among a plurality of task nodes and support a plurality of trigger configuration methods.
Third embodiment
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a process data processing method as described above.
The present embodiments also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the various possible implementations as above.
The embodiments also provide a chip including a memory for storing a computer program and a processor for calling and running the computer program from the memory, so that a device on which the chip is mounted performs the method in the above possible embodiments.
In the embodiments provided in the present application, all technical features of any one of the foregoing method embodiments may be included, and the extension and explanation of the description are substantially the same as those of each embodiment of the foregoing method, which is not repeated herein.
By adopting a multithreading technology, the process data processing method, terminal device and storage medium can run a plurality of tasks on a plurality of cores simultaneously, and the data flow among the tasks is configurable. Each task only needs to concern itself with its inputs and outputs and does not involve any multithreading details; that is, the data race problems that multithreading may cause have already been addressed inside the framework. Meanwhile, based on C++ template metaprogramming, the design is non-invasive, and during intra-process multithreaded communication the processing tasks, the content of the data flowing between them, and their dependency relationships can all be freely configured. The framework supports single-threaded scheduling after topologically sorting all tasks, as well as multithreaded scheduling according to the topological dependencies, and each task supports various triggering modes, such as periodic triggering, triggering by any one of the dependent events, triggering by all of the dependent events, and triggering by specified events. Data sharing and circulation are extremely efficient because they can be realized through pointers. Communication and interaction between threads, flexible triggering modes for every thread, and data flows with a directed ring graph structure during intra-process communication are all supported.
In this application, step numbers such as S10 and S20 are used for the purpose of more clearly and briefly describing the corresponding content, and are not to constitute a substantial limitation on the sequence, and those skilled in the art may execute S20 first and then S10 when implementing the present invention, but these are all within the scope of protection of the present application.
It can be understood that the above scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiments of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, the same or similar term concept, technical solution, and/or application scenario description will generally be described in detail only when first appearing, and when repeated later, for brevity, will not generally be repeated, and when understanding the content of the technical solution of the present application, etc., reference may be made to the previous related detailed description thereof for the same or similar term concept, technical solution, and/or application scenario description, etc., which are not described in detail later.
In this application, the descriptions of the embodiments are focused on, and the details or descriptions of one embodiment may be found in the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (10)

1. A process data processing method, comprising:
disassembling a process implementation process, and acquiring a plurality of step modules of the implementation process and a dependency relationship among the plurality of step modules;
instantiating the plurality of step modules into a plurality of tasks and instantiating the dependency relationship into a plurality of pipelines, wherein the tasks are used for representing functions to be realized in one thread, and the pipelines are used for representing data exchange among the tasks;
binding the plurality of tasks and the plurality of pipes;
performing topological ordering on the tasks according to the dependency relationship, and establishing a synchronous relationship and a triggering mode among the tasks;
and respectively and correspondingly packaging the tasks into a plurality of thread objects, and performing scheduling processing on the plurality of thread objects by using a multithreading technology according to the synchronization relation and the triggering mode.
2. The process data processing method according to claim 1, wherein the step of disassembling the process implementation procedure, and the step of acquiring the dependency relationship between the plurality of step modules of the implementation procedure comprises:
responding to the service task of the process, performing functional decomposition on the service task, and classifying the step module corresponding to each function;
acquiring data to be transferred among the plurality of step modules, and establishing a data type pointer;
determining the direction and the sequence of the data transfer, and establishing the dependency relationship among the plurality of step modules.
3. The process data processing method according to claim 1, wherein, before the step of instantiating said plurality of step modules as a plurality of tasks and instantiating said dependency relationship as a plurality of pipelines, the method comprises:
defining each task and pipeline related class and inheritance relationship thereof, wherein the class comprises a base class and an implementation class;
template parameters are obtained, and inheritance is carried out on the base class;
and implementing the base class through the implementation class by using the template parameters.
4. The process data processing method according to claim 1, wherein, before the step of topologically ordering the plurality of tasks according to the dependency relationship and establishing the synchronization relationship and the triggering manner between the plurality of tasks, the method comprises:
and binding mutexes and condition variables of the plurality of tasks.
5. The process data processing method according to claim 4, wherein the step of binding the mutexes and the condition variables of the plurality of tasks comprises:
carrying out mutex encapsulation corresponding to each task and realizing an automatic mutex corresponding to the shared resource;
performing a mutex entering operation in a constructor of the automatic mutex object, and performing a mutex exiting operation in a destructor;
and inquiring the parameters of the shared resource, and setting the condition variables required by accessing the shared resource.
6. The process data processing method according to claim 1, wherein the step of binding the plurality of tasks and the plurality of pipes comprises:
carrying out text description on each task and a corresponding pipeline according to the dependency relationship among the tasks;
the text description is entered into a dataflow framework along with the plurality of tasks.
7. The process data processing method according to claim 6, wherein the step of topologically ordering the plurality of tasks according to the dependency relationship, and establishing a synchronization relationship and a trigger manner between the plurality of tasks comprises at least one of:
performing topological ordering on the tasks according to the text description;
screening the synchronous relation among the tasks according to the source task of the input information of each task and the destination task of the output information;
and establishing a triggering mode of each task according to the dependency weight of the input information on each task.
8. The process data processing method according to any one of claims 1 to 7, wherein the step of packaging the plurality of tasks into a plurality of thread objects respectively, and performing scheduling processing on the plurality of thread objects by using a multithreading technique according to the synchronization relationship and the triggering manner further comprises:
and acquiring pointer information corresponding to the pipelines, and establishing a cache queue according to the synchronization relationship between the pointer information and the tasks.
9. A terminal device comprising a processor and a memory;
the memory stores a computer program which, when executed by the processor, implements the steps of the process data processing method according to any one of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the process data processing method according to any of claims 1 to 8.
CN202310099786.6A 2023-01-31 2023-01-31 Process data processing method, terminal equipment and storage medium Pending CN116204289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310099786.6A CN116204289A (en) 2023-01-31 2023-01-31 Process data processing method, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310099786.6A CN116204289A (en) 2023-01-31 2023-01-31 Process data processing method, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116204289A true CN116204289A (en) 2023-06-02

Family

ID=86518597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310099786.6A Pending CN116204289A (en) 2023-01-31 2023-01-31 Process data processing method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116204289A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117707654A (en) * 2024-02-06 2024-03-15 芯瑞微(上海)电子科技有限公司 Signal channel inheritance method for multi-physical-field core industrial simulation processing software
CN117707654B (en) * 2024-02-06 2024-05-03 芯瑞微(上海)电子科技有限公司 Signal channel inheritance method for multi-physical-field core industrial simulation processing software

Similar Documents

Publication Publication Date Title
AU2018203641B2 (en) Controlling tasks performed by a computing system
JP2829078B2 (en) Process distribution method
Imam et al. Integrating task parallelism with actors
Chrysanthakopoulos et al. An asynchronous messaging library for c
CN116204289A (en) Process data processing method, terminal equipment and storage medium
Pagano et al. A model based safety critical flow for the aurix multi-core platform
Dearle et al. A component-based model and language for wireless sensor network applications
Grimshaw et al. Real-time Mentat programming language and architecture
Tan et al. StateOS: A memory-efficient hybrid operating system for IoT devices
Abdullah et al. Schedulability analysis and software synthesis for graph-based task models with resource sharing
Reichardt et al. Design principles in robot control frameworks
Krzikalla et al. Synchronization debugging of hybrid parallel programs
Schuele Efficient parallel execution of streaming applications on multi-core processors
Holmes et al. Towards Reusable Synchronisation for Object-Oriented Languages
Li et al. Gdarts: A gpu-based runtime system for dataflow task programming on dependency applications
Poggi et al. An efficient and flexible C++ library for concurrent programming
Do et al. Self-timed periodic scheduling of data-dependent tasks in embedded streaming applications
Reichardt et al. One Fits More—On the Relevance of Highly Modular Framework and Middleware Design for Quality Characteristics of Robotics Software
Miomandre et al. Embedded runtime for reconfigurable dataflow graphs on manycore architectures
Sreenivas Response Time Analysis of Tasking Framework Task Chains
Azadbakht Asynchronous Programming in the Abstract Behavioural Specification Language
Baba et al. Programming and debugging for massive parallelism: The case for a parallel object-oriented language A-NETL
Welch et al. Design principles for the (System) CSP software framework
Marusarz Developer Products Division
Alrahmawy et al. An RTSJ-based reconfigurable server component

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230924

Address after: Room A101, Building I, No. 7 Zhongchuang Second Road, Hangzhou Bay New Area, Ningbo City, Zhejiang Province, 315335

Applicant after: Ningbo Lutes Robotics Co.,Ltd.

Address before: 430056 A504, Building 3, No. 28, Chuanjiangchi Second Road, Wuhan Economic and Technological Development Zone, Wuhan, Hubei Province

Applicant before: Wuhan Lotus Technology Co.,Ltd.

TA01 Transfer of patent application right