CN112231097A - Capacitive pressure transmitter edge calculation work system and work method - Google Patents

Capacitive pressure transmitter edge calculation work system and work method

Info

Publication number
CN112231097A
CN112231097A (application CN202011032940.0A)
Authority
CN
China
Prior art keywords
task
module
resource
resources
pressure transmitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011032940.0A
Other languages
Chinese (zh)
Other versions
CN112231097B (en)
Inventor
王其朝
杨祖业
金光淑
王雪冰
宁德魁
彭帅
李思言
于占茹
王萌
于啸航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Overview Micro Technology Co ltd
Original Assignee
Shenyang Overview Micro Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Overview Micro Technology Co ltd filed Critical Shenyang Overview Micro Technology Co ltd
Priority to CN202011032940.0A priority Critical patent/CN112231097B/en
Publication of CN112231097A publication Critical patent/CN112231097A/en
Application granted granted Critical
Publication of CN112231097B publication Critical patent/CN112231097B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a capacitive pressure transmitter edge calculation work system and work method, wherein the capacitive pressure transmitter edge calculation work system comprises: a task start-stop module, which decides which pressure transmitter work task currently needs to be started or stopped according to the working condition of the pressure transmitter, and sends a pressure transmitter work task start request to a task quantification module; the task quantification module, which models the amount of computing resources the task requires on different computing devices according to the delay requirement of the computing task, and sends the result to a resource management module; and the resource management module, which is responsible for allocating computing resources to tasks and managing the computing resources of the device. The invention facilitates the deployment of algorithms of different complexity on resource-constrained devices, can reduce the hardware cost of the pressure transmitter, and facilitates the deployment of the pressure transmitter's self-diagnosis, self-learning and self-decision tasks.

Description

Capacitive pressure transmitter edge calculation work system and work method
Technical Field
The invention relates to industrial edge computing equipment, and in particular discloses a software working system and working method for the edge computing function of a capacitive pressure transmitter, belonging to the technical field of industrial process control.
Background
Traditional fault diagnosis of industrial equipment is mostly carried out with mechanism modeling, but because of the complexity of industrial processes a mechanism model is difficult to adapt to different industrial sites. In recent years, with the development of electronic technology and the growth of computer performance, data-driven technologies represented by deep learning have become important tools for industrial data analysis. They provide a new way for intelligent operation and maintenance of pressure transmitters: by deploying machine learning models in the pressure transmitter, the instrument can perform self-learning, self-diagnosis, self-decision and other functions in response to changes in working conditions.
Machine learning algorithms require high-performance computing equipment, while meters in industrial fields tend to be limited in power and computing resources due to field environment constraints. How to deploy such algorithms in resource-constrained environments is therefore a focus of current research. Cloud computing can provide high-performance computing equipment, but because of problems such as network bandwidth, delay and security, cloud computing cannot acquire the massive field data; a pressure transmitter instrument with an edge computing function is therefore a good solution.
Edge computing not only sinks part of the computation to the edge side but also provides computing services there, so that industrial equipment can dynamically use edge-side computing resources. For example, if the pressure transmitter detects an abnormal change in the pressure signal that needs further analysis, or the equipment has run for a long time and its parameters need to be re-learned, and the instrument's own computing resources cannot meet these tasks, computing resources can be requested from other edge devices to process the tasks.
Existing pressure transmitters mainly use a traditional MCU as the main computing unit, and their software architecture cannot meet the requirements of edge computing. How to construct a pressure transmitter software scheme that uses the computing resources of edge devices, so that the instrument's computing tasks can run on other edge devices while real-time performance and reliability are guaranteed, and complex algorithms can thereby run on a resource-constrained instrument, is a problem to be solved urgently.
From the perspective of function research, the problems to be solved are that a single existing pressure transmitter diagnostic device can hardly support edge intelligence functions, and that intelligent devices lack remote management and operation and maintenance; how to build edge computing capability into the pressure transmitter diagnostic circuit, so as to overcome the low degree of management, control and intelligence of the traditional instrument body, must be addressed. On the basis of the limited resources and the software and hardware carrier of the traditional instrument, three-level intelligent cooperation is constructed by expanding resources and software and hardware, in order to realize edge computing functions such as self-diagnosis of the pressure transmitter body, self-learning based on the trend change rate of the working condition, and self-decision based on diagnosis and learning. Those skilled in the art urgently need to solve the corresponding technical problems.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly provides an edge calculation working system and a working method of a capacitive pressure transmitter.
In order to achieve the above object, the present invention provides a capacitive pressure transmitter edge calculation working system, comprising:
the task starting and stopping module is used for deciding the working task of the pressure transmitter which needs to be started or stopped at present according to the working condition of the pressure transmitter; sending a pressure transmitter work task starting request to a task quantification module; after the task is successfully started, the task starting and stopping module monitors the task heartbeat and ensures that the task is restarted after the task is accidentally invalid; when the task is normally finished, the task starting and stopping module sends a task finishing message to the resource management module;
the task quantification module is used for modeling the calculation resource quantity required by the task under different calculation equipment according to the time delay requirement of the calculation task; when a task starting request is received, calculating resources required by a task according to task delay constraints and equipment details, and sending the resources to a resource management module;
the resource management module is responsible for distributing computing resources for the tasks and managing the computing resources of the equipment by one or any combination of the following:
when a task requests resources, computing resources are allocated to the task, the task image is obtained, and the task is started;
when the task is finished, recovering the computing resources of the task;
when the local computing resources cannot meet the task requirements, the resource management module is responsible for sending computation offloading requests to other edge devices such as edge servers.
In a preferred embodiment of the present invention: the task start-stop function in the task start-stop module is implemented by a task start-stop program that queries and sets a task start-stop table in real time; the start-stop mode is one or any combination of time triggering, signal triggering and manual triggering; the task start-stop table stores the start and stop conditions of all tasks, and the task start-stop program queries these conditions in real time and sends a start or stop request once a condition is met.
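For illustration only, a minimal sketch of such a start-stop table and its polling program is given below (in Python); the field names, trigger predicates and polling logic are assumptions made for the example, not the actual data layout of the invention.

```python
import time

# Hypothetical start-stop table: one entry per task image in the repository.
# Field names are illustrative; the invention only requires that start/stop
# conditions, delay constraints and memory/external-memory needs be stored.
TASK_TABLE = {
    "diag-task-001": {
        "trigger": "signal",         # time / signal / manual
        "delay_constraint_ms": 200,  # task delay constraint
        "mem_mb": 64,                # memory requirement
        "ext_mem_mb": 16,            # external-memory requirement
        "running": False,
    },
    "relearn-task-002": {
        "trigger": "time",
        "period_s": 3600,
        "delay_constraint_ms": 5000,
        "mem_mb": 256,
        "ext_mem_mb": 128,
        "running": False,
        "last_start": 0.0,
    },
}

def condition_met(task_id, entry, pressure_abnormal, manual_requests):
    """Check the start condition of one task (time / signal / manual trigger)."""
    if entry["trigger"] == "time":
        return time.time() - entry.get("last_start", 0.0) >= entry["period_s"]
    if entry["trigger"] == "signal":
        return pressure_abnormal            # e.g. abnormal pressure change detected
    return task_id in manual_requests       # manual trigger (key press, command)

def poll_start_stop_table(send_start_request, pressure_abnormal=False, manual_requests=()):
    """One polling pass of the start-stop program: query conditions, issue requests."""
    for task_id, entry in TASK_TABLE.items():
        if not entry["running"] and condition_met(task_id, entry, pressure_abnormal, manual_requests):
            # The start request goes to the task quantification module
            # (task ID plus delay constraint), as described above.
            send_start_request(task_id, entry["delay_constraint_ms"])
```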
In a preferred embodiment of the present invention: the task start-stop module determines that a task has started successfully by monitoring the task heartbeat signal; after the task starts successfully, the task sends a heartbeat signal to the task start-stop module at regular intervals, and when no task heartbeat signal is received within a certain time, the task monitoring module sends a task restart request to the resource management module; when the task ends normally, a task end signal is sent to the task monitoring module, and the task monitoring module resets the task start-stop table after receiving the stop signal.
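A heartbeat watchdog in the spirit of this paragraph could be sketched as follows; the timeout value and the queue-based signalling are assumptions made for the example.

```python
import queue
import time

HEARTBEAT_TIMEOUT_S = 10.0   # assumed timeout; the text only says "a certain time"

def monitor_heartbeat(task_id, heartbeat_queue, request_restart, reset_table_entry):
    """Watch one running task: request a restart if its heartbeat stops, and
    reset the start-stop table entry when it reports a normal end."""
    last_beat = time.time()
    while True:
        try:
            msg = heartbeat_queue.get(timeout=1.0)
        except queue.Empty:
            msg = None
        if msg == "heartbeat":
            last_beat = time.time()
        elif msg == "finished":              # normal end signal from the task
            reset_table_entry(task_id)       # reset the task start-stop table
            return
        if time.time() - last_beat > HEARTBEAT_TIMEOUT_S:
            request_restart(task_id)         # ask the resource management module to restart
            last_beat = time.time()
```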
In a preferred embodiment of the present invention: after receiving the task request ID and the delay constraint, the task quantification module returns the resource requirements of the task on different computing devices according to the stored mathematical model.
In a preferred embodiment of the present invention, the task mathematical model in the task quantification module is obtained on each device by polynomial fitting at deployment time; the main fitting form is two univariate second-order polynomial fits, the task delay being decomposed into computation delay or/and communication delay, which are fitted against processor percentage and bandwidth speed, respectively.
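The two univariate second-order fits can be sketched with numpy's polyfit; the sample measurements below are placeholder values chosen for illustration, not data from the invention.

```python
import numpy as np

# Measured during task deployment on one device (illustrative numbers only):
cpu_share   = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])    # fraction of a processor
t_compute_s = np.array([4.1, 2.2, 1.2, 0.9, 0.7, 0.6])    # measured computation delay

bandwidth_mbps = np.array([1, 2, 5, 10, 20, 50])
t_comm_s       = np.array([2.0, 1.1, 0.5, 0.3, 0.2, 0.1])  # measured communication delay

# Two univariate second-order polynomial fits, one per delay component.
compute_model = np.polyfit(cpu_share, t_compute_s, deg=2)       # coefficients [w2, w1, w0]
comm_model    = np.polyfit(bandwidth_mbps, t_comm_s, deg=2)

def predict_total_delay(cpu, bw):
    """Total task delay = computation delay + communication delay."""
    return np.polyval(compute_model, cpu) + np.polyval(comm_model, bw)
```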
In a preferred embodiment of the present invention: resource allocation by the resource management module is completed through a containerd interface; when the resource management module obtains a task ID that needs to be started and its computing resource requirement, if the current device can meet the computing resource requirement, it obtains the containerd image from the image repository by the task ID, allocates computing resources for it and starts it; the processor and network resources allocated to a task by the resource management module are given by the task quantification module, the required external-memory resources are not limited, and the required memory resources are provided dynamically by the resource management module.
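A simplified sketch of image pull, task start and resource recovery through containerd's ctr command-line front end is shown below; the exact ctr subcommands and flags should be verified against the installed containerd version, and the CPU/bandwidth limits returned by the task quantification module are only indicated in comments because the concrete runtime options depend on the runtime in use.

```python
import subprocess

def start_task(image_ref, task_id):
    """Fetch the task image and start it through containerd's ctr frontend.

    In the system described here the resource management module would also
    apply the processor share and bandwidth given by the task quantification
    module (e.g. via cgroup/runtime options) and track memory use; those
    options are omitted here because they vary with the containerd version.
    """
    # Pull the task image from the image repository by its unique reference.
    subprocess.run(["ctr", "image", "pull", image_ref], check=True)
    # Create and start a detached container for the task.
    subprocess.run(["ctr", "run", "-d", image_ref, task_id], check=True)

def stop_task(task_id):
    """Reclaim the task's resources after it finishes (simplified)."""
    # Stop the running task, then remove its container; a full implementation
    # would also delete the stopped task object and handle errors.
    subprocess.run(["ctr", "task", "kill", task_id], check=True)
    subprocess.run(["ctr", "container", "delete", task_id], check=True)
```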
In a preferred embodiment of the present invention, tasks are packaged as task images by containerd and stored in a containerd repository, the stored content including the unique task ID, the task image body, the task memory requirement and the task external-memory requirement; after the task start-stop module starts a task, it sends the task ID and delay constraint to the task quantification module to obtain the computing resources required by the task; the task quantification module builds the task model during task development, quantifying by second-order polynomial fitting the relation between CPU and computation delay and between bandwidth and communication delay, the total task delay being the sum of computation delay and communication delay; after the task quantification module completes task quantification, it sends the task resource requirement and constraint to the resource management module, which allocates computing resources for the task through the containerd API and starts the task; after the task finishes, a task end identifier is sent to the task start-stop module, or the task start-stop module actively ends the task, and after the resource management module receives the task stop signal it ends the task and recovers the task resources.
In a preferred embodiment of the present invention, the method further comprises:
and the image storage warehouse is responsible for storing container images of all dynamically executable tasks, and comprises a plurality of versions of images under different computing architectures, wherein each image has a unique task ID.
The invention also discloses a capacitive pressure transmitter edge computing working method, which comprises the following steps:
S1, in the task development process, package tasks as container images, package multiple container images according to the processor architecture, and set the task delay constraint, task trigger condition, and required memory and external-memory resource information (an illustrative task metadata record is sketched after this list);
S2, model the relation between task delay and computing resources during task development; the task delay is mainly divided into computation delay and communication delay, the computation delay and communication delay of a task are measured in several groups under different computing and communication resources, and the relation between computing resources and computation delay and between communication resources and communication delay is then modeled with two second-order polynomials;
S3, the task start-stop module checks the task start-stop table in real time, and when it detects that a task needs to be started it sends the task ID and delay constraint to the task modeling module;
S4, after receiving the task start request, the task modeling module calculates the resource requirement according to the task delay constraint and sends it to the resource management module;
S5, after the resource management module receives the task start request, if the local resources can meet the task requirement it allocates computing resources for the task through the containerd interface; if the local computing resources cannot meet the task requirement, it sends a computation offloading request to other edge computing nodes;
S6, after the task is deployed successfully, it sends a heartbeat signal to the task start-stop module in real time; the task start-stop module monitors the task heartbeat and restarts the task when it ends unexpectedly;
S7, after the task is deployed successfully, the resource management module monitors the task's memory usage in real time and allocates memory for it dynamically;
S8, after the task has finished executing, the task itself or the task start-stop module decides to stop the task, and the resource management module recovers the task resources after the task stops.
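As noted in step S1, each packaged task carries a small metadata record; the sketch below shows what such a record could hold, with all field names and values assumed for illustration.

```python
# Illustrative metadata for one packaged task. The method requires a unique
# task ID, per-architecture image bodies, a delay constraint, a trigger
# condition and memory / external-memory requirements; field names are assumed.
TASK_MANIFEST = {
    "task_id": "pressure-selfdiag-v1",
    "images": {                        # one containerd image per processor architecture
        "arm64": "registry.local/selfdiag:v1-arm64",
        "amd64": "registry.local/selfdiag:v1-amd64",
    },
    "delay_constraint_ms": 500,        # task delay constraint
    "trigger": {"type": "signal", "condition": "pressure_anomaly"},
    "mem_mb": 128,                     # memory requirement
    "ext_mem_mb": 32,                  # external-memory requirement
}
```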
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. The invention changes the design mode of the traditional capacitive pressure transmitter by decoupling the computing hardware from the acquisition hardware; the hardware design no longer needs to consider the performance of resources such as the CPU, RAM and FLASH, but only the related power supply, acquisition and similar hardware, which benefits the standardization and modularization of the hardware design.
2. The invention provides dynamic computing resources for the computing tasks of the capacitive pressure transmitter, which greatly improves the utilization of the device's computing resources and allows complex algorithms to be deployed on resource-constrained devices.
3. The invention deploys computing tasks by means of resource virtualization, which benefits software development and deployment and facilitates subsequent upgrades.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic diagram of the overall system structure of the present invention.
FIG. 2 is a diagram of the hardware configuration of the present invention.
FIG. 3 is a schematic diagram of the software architecture of the present invention.
FIG. 4 is a schematic diagram of a hardware development process according to the present invention.
FIG. 5 is a schematic diagram of the software development process of the present invention.
FIG. 6 is a schematic diagram of the task start-up process of the present invention.
FIG. 7 is a schematic diagram of the task modeling module workflow of the present invention.
FIG. 8 is a schematic diagram of the resource management module according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The software scheme comprises a task start-stop module; the task start-stop module contains a task start-stop table, which stores the task start conditions, task delay constraints and the task memory and external-memory requirements. The task start-stop module contains a start-stop-table detection program and controls the start of a task according to manual triggering, signal triggering or periodic triggering. The task start-stop module communicates with the task quantification module and sends the task ID, task constraint, task memory requirement and task external-memory requirement to the task quantification module. The task ID is the unique identifier of the task image in the task repository. The task quantification module completes task quantification modeling when the task is deployed.
According to the preferred technical scheme, tasks are packaged as task images by containerd and stored in a containerd repository, and the stored content comprises the unique task ID, the task image body, the task memory requirement and the task external-memory requirement.
As a preferred technical scheme of the invention, after the task start-stop module is started, the task start-stop module sends the task ID and the time delay constraint to the task quantification module so as to obtain the computing resources required by the task.
As a preferred technical scheme of the invention, the task quantification module builds the task model during task development, quantifying by second-order polynomial fitting the relation between CPU and computation delay and between bandwidth and communication delay, the total task delay being the sum of computation delay and communication delay.
As a preferred technical scheme of the invention, after the task quantification module finishes task quantification, the task resource requirement and constraint are sent to the resource management module, and the resource management module allocates computing resources for the task through the containerd API and starts the task.
As a preferred technical solution of the present invention, after the task is finished, a task end identifier is sent to the task start/stop module, or the task start/stop module actively ends the task, and after the resource management module receives the task stop signal, the resource management module ends the task and recovers the task resources.
As shown in figs. 1 to 8, the present invention uses process-level resource virtualization technology to provide resource management, allocation and task scheduling functions for the computing tasks of the pressure transmitter, and comprises:
and the task starting and stopping module is used for deciding the task which needs to be started or stopped at present according to the working condition of the pressure transmitter. And sends a task start request to the task quantization module. After the task is successfully started, the task starting and stopping module monitors the task heartbeat and ensures that the task is restarted after the task is accidentally invalid. When the task is normally finished, the task supervision model sends a task finishing message to the resource management module.
And the task quantification module, which models the amount of computing resources the task requires on different computing devices according to the delay requirement of the computing task. The modeled tasks are all tasks that need to run on the meter, such as the meter's diagnostic, display and communication tasks. The different computing devices mainly refer to edge servers, edge gateways and the meter itself. When a task start request is received, the resources required by the task are calculated from the task delay constraint and the device details and sent to the resource management module.
And the resource management module, which is responsible for allocating computing resources to tasks and managing the computing resources of the device. When a task requests resources, it allocates computing resources for the task, obtains the task image and starts the task. When the task ends, it recovers the task's computing resources. When the local computing resources cannot meet the task requirements, the resource management module is responsible for sending computation offloading requests to other edge devices such as edge servers. It includes:
S61, taking the local server (i.e. the system composed of the task start-stop module, task quantification module and resource management module) as the center, delineate at least one edge server;
if M edge servers exist in the delineated area at the same time, respectively define the 1st-circle edge server D1, the 2nd-circle edge server D2, the 3rd-circle edge server D3, ..., and the M-th-circle edge server DM, where M is a positive integer greater than or equal to 2, i.e. d1 = d2 = d3 = ... = dM, and dm denotes the distance from the local server to the m-th-circle edge server, m = 1, 2, 3, ..., M;
the service distance from the local server to each circle of edge servers is calculated as
dm = sqrt((x0 - xm)^2 + (y0 - ym)^2 + (z0 - zm)^2),
where (x0, y0, z0) denotes the position coordinates of the local server and (xm, ym, zm) denotes the position coordinates of the m-th-circle edge server;
S62, take each edge server in the delineated area as the center and its distance to the local server as the radius to form the edge computing cooperation area, and count the set S of other local servers inside the edge computing cooperation area (a small numerical sketch of this distance and cooperation-area selection is given after step S64);
S63, judge, from the computing-cache task resource requirement or/and the computing task resource requirement of the client (namely the pressure transmitter), whether the local server meets that requirement;
the method for calculating the resource demand of the cache task at the client comprises the following steps:
Figure BDA0002704307700000082
wherein epsilonηDenotes the harmonic first parameter, εη∈(0,1];
νφA latency value representing the s-th edge server;
Figure BDA0002704307700000083
a latency value representing a local server;
x represents the cache residual resource of the edge server;
Qwr,srepresenting the first calling buffer amount of the ith client to the ith edge server;
r represents a set of clients, R ═ R1,r2,r3,...,rp},r∈{r1,r2,r3,...,rp},rp′Denotes the pth client within the delineating region, p' ═ 1,2,3, ·, p;
s represents an edge server set, S ═ S1,s2,s3,...,sg},sg′Indicating that the g 'th within the delineation area circumscribes the edge server, g' ═ 1,2,31,s2,s3,...,sg};
The computing task resource requirement is calculated by the formula given as an image in the original [formula image BDA0002704307700000091], wherein:
[formula image BDA0002704307700000092] denotes the second harmonic parameter, with range [formula image BDA0002704307700000093];
H denotes the remaining computing resources of the edge server;
Q^u_{r,s} denotes the first-call computation amount of the r-th client to the s-th edge server;
the computing method for determining whether the local server meets the computing cache task resource requirement of the client comprises the following steps:
Figure BDA0002704307700000094
wherein, muβIndicating cache resource utilization, muβ∈(0,1];
Ar,sIndicating whether the r client calls the cache resource from the edge server;
if Ar,s1 represents that the r-th client calls a cache resource from the edge server;
if Ar,s0 means that the r-th client does not call a cache resource from the edge server;
if yes, the local server meets the computing cache task resource requirement of the client;
if not, the local server does not meet the computing cache task resource requirement of the client;
the computing method for determining whether the local server meets the computing task resource requirement of the client comprises the following steps:
Figure BDA0002704307700000095
wherein, χβRepresenting the computational resource utilization, χβ∈(0,1];
Or,sIndicating whether the r client calls the computing resource from the s edge server;
if O isr,s1 denotes that the r-th client calls a computing resource from the s-th edge server;
if O isr,s0 means that the r-th client has not called a computing resource from the s-th edge server;
if yes, the local server meets the computing task resource requirement of the client;
if not, the local server does not meet the computing task resource requirement of the client;
If the local server meets the client's computing-cache task resource requirement or/and task resource requirement, i.e. if the corresponding condition [formula image BDA0002704307700000101] or/and [formula image BDA0002704307700000102] holds, the local server provides the service for the client;
if the local server cannot meet the client's computing-cache task resource requirement or/and task resource requirement, the local server sends a resource-call command request to the delineated edge server, and the delineated edge server is used to provide the service for the client.
The method further comprises step S64:
S64, use the client's computing-cache task resource requirement or/and task resource requirement to judge whether the delineated edge server meets them:
if the delineated edge server meets the client's computing-cache task resource requirement or/and task resource requirement, the edge server provides the service for the client;
if the delineated edge server does not meet the client's computing-cache task resource requirement or/and task resource requirement, the edge server sends a resource-call command request to the cloud server, and the cloud server is used to provide the service for the client.
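The geometric part of steps S61-S62 (service distance and cooperation-area membership) can be illustrated with a short numerical sketch; the coordinates below are made up, and the resource-requirement checks of S63-S64, whose formulas appear only as images in the original, are deliberately not reproduced.

```python
import math

def service_distance(p, q):
    """Euclidean service distance between two position coordinates (step S61)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

local_server = (0.0, 0.0, 0.0)                       # (x0, y0, z0)
edge_servers = {                                     # delineated edge servers (coordinates assumed)
    "D1": (30.0, 40.0, 0.0),
    "D2": (0.0, 50.0, 0.0),
}
other_locals = {"r1": (25.0, 45.0, 0.0), "r2": (80.0, 10.0, 0.0)}

for name, pos in edge_servers.items():
    d_m = service_distance(local_server, pos)        # distance from local server to m-th edge server
    # Step S62: the cooperation area is centred on the edge server with radius d_m;
    # count the other local servers that fall inside it.
    S = [r for r, rp in other_locals.items() if service_distance(pos, rp) <= d_m]
    print(name, round(d_m, 1), S)
```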
And the image storage warehouse is responsible for storing container images of all dynamically executable tasks, and comprises a plurality of versions of images under different computing architectures, wherein each image has a unique task ID.
The task start-stop function in the task start-stop module is implemented by a task start-stop program that queries and sets a task start-stop table in real time; there are three start-stop modes: time triggering, signal triggering and manual triggering. The task start-stop table stores the start and stop conditions of all tasks, and the task start-stop program queries these conditions in real time and sends a start or stop request once a condition is met. The task start condition depends on the trigger mode: for a time-triggered task started periodically, the condition is met when the timer value arrives; for a signal-triggered task, the task is started when an abnormal pressure signal is detected; and a manually triggered task is started by a key press or other manual method.
The task start-stop table stores information such as the unique task ID and the task delay constraint.
In the task start-stop request, a task start signal is sent to the task quantification module, and a task stop signal is sent to the resource management module.
The task start-stop module determines that a task has started successfully by monitoring the task heartbeat signal; after the task starts successfully, the task sends a heartbeat signal to the task start-stop module at regular intervals, and when no task heartbeat signal is received within a certain time, the task monitoring module sends a task restart request to the resource management module. When the task ends normally, a task end signal is sent to the task monitoring module, and the task monitoring module resets the task start-stop table after receiving the stop signal.
After receiving the task request ID and the delay constraint, the task quantification module returns the resource requirements of the task from the stored mathematical model, without communicating with the computing devices.
The task mathematical model in the task quantification module is obtained on each device by polynomial fitting at deployment time; the main fitting form is two univariate second-order polynomial fits, the task delay being decomposed into computation delay and communication delay, which are fitted against the processor percentage and the bandwidth speed, respectively.
The content that the task quantification module sends to the resource management module for the computing resources required by the task comprises the task ID and the processor requirement and network bandwidth requirement on each computing device.
When the resource management module obtains a task ID to be started and its resource requirement, if the current device can meet the resource requirement, the resource management module obtains the containerd image from the image repository by the task ID, allocates computing resources for it and starts it.
The processor and network resources that the resource management module allocates to a task are given by the task quantification module; the required external-memory resources are not limited, and the required memory resources are provided dynamically by the resource management module.
The mathematical model consists of two second-order polynomials used to fit the execution time of the task, comprising the computation time and the communication time: the computation time is t = w0 + w1*cpu + w2*cpu^2, and the communication time is t = w0 + w1*network + w2*network^2.
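Given this quadratic model, the task quantification module can solve for the smallest processor share that still satisfies a delay constraint; the sketch below inverts the computation-time polynomial, with coefficients chosen only as placeholders.

```python
import math

def required_cpu_share(w0, w1, w2, delay_constraint):
    """Smallest cpu share in (0, 1] with w0 + w1*cpu + w2*cpu**2 <= delay_constraint.

    Coefficients come from the second-order fit of computation delay against
    processor share; a nonzero quadratic coefficient w2 is assumed.
    """
    # Solve w2*cpu^2 + w1*cpu + (w0 - delay_constraint) = 0
    a, b, c = w2, w1, w0 - delay_constraint
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # constraint cannot be met on this device
    roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (+1, -1)]
    feasible = [r for r in roots if 0 < r <= 1.0]
    return min(feasible) if feasible else None

# Example with placeholder coefficients: delay falls as the cpu share grows.
print(required_cpu_share(w0=4.0, w1=-5.0, w2=1.8, delay_constraint=1.5))
```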
Referring to fig. 3, the present invention provides a capacitive pressure transmitter edge computing software solution, which comprises a task start-stop module, a task quantification module and a resource management module. The task start-stop module contains a task start-stop table, which stores task start conditions, task delay constraints and the task memory and external-memory requirements. The task start-stop module contains a start-stop-table detection program and controls the start of a task according to manual triggering, signal triggering or periodic triggering. The task start-stop module communicates with the task quantification module and sends the task ID, task constraint, task memory requirement and task external-memory requirement to the task quantification module. The task ID is the unique identifier of the task image in the task repository. The task quantification module completes task quantification modeling when the task is deployed.
Tasks are packaged as task images by containerd and stored in a containerd repository; the stored content includes the unique task ID, the task image body, the task memory requirement and the task external-memory requirement. After the task start-stop module starts a task, it sends the task ID and delay constraint to the task quantification module to obtain the computing resources required by the task. The task quantification module builds the task model during task development, quantifying by second-order polynomial fitting the relation between CPU and computation delay and between bandwidth and communication delay, the total task delay being the sum of computation delay and communication delay. After the task quantification module completes task quantification, it sends the task resource requirement and constraint to the resource management module, which allocates computing resources for the task through the containerd API and starts the task. After the task finishes, a task end identifier is sent to the task start-stop module, or the task start-stop module actively ends the task; after the resource management module receives the task stop signal, it ends the task and recovers the task resources.
The invention discloses a capacitive pressure transmitter edge calculation working method which mainly comprises the following steps:
S1, in the task development process, package tasks as container images, package multiple container images according to the possible processor architectures, and set resource information such as task delay constraints, task trigger conditions and the required memory and external memory; multiple task images means task images for different processor architectures: for example, if the meter uses an ARM architecture and the edge server an x86 architecture, a task that should be able to run on both computing devices needs to be packaged into two kinds of containerd images during development. This is not mandatory; if the task is developed in a cross-platform language and does not depend on the processor architecture, it can be deployed as a single kind of containerd image.
S2, model the relation between task delay and computing resources during task development. The task delay is mainly divided into computation delay and communication delay; the computation delay and communication delay of a task are measured in several groups under different computing and communication resources, and the relation between computing resources and computation delay and between communication resources and communication delay is then modeled with two second-order polynomials;
S3, the task start-stop module checks the task start-stop table in real time, and when it detects that a task needs to be started it sends the task ID and delay constraint to the task modeling module;
S4, after receiving the task start request, the task modeling module calculates the resource requirement according to the task delay constraint and sends it to the resource management module;
S5, after the resource management module receives the task start request, if the local resources can meet the task requirement it allocates computing resources for the task through the containerd interface. If the local computing resources cannot meet the task requirement, it sends a computation offloading request to other edge computing nodes;
S6, after the task is deployed successfully, it sends a heartbeat signal to the task start-stop module in real time; the task start-stop module monitors the task heartbeat and restarts the task when it ends unexpectedly;
S7, after the task is deployed successfully, the resource management module monitors the task's memory usage in real time and allocates memory for it dynamically;
S8, after the task has finished executing, the task itself or the task start-stop module decides to stop the task, and the resource management module recovers the task resources after the task stops.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (9)

1. A capacitive pressure transmitter edge calculation work system, comprising:
the task starting and stopping module is used for deciding the working task of the pressure transmitter which needs to be started or stopped at present according to the working condition of the pressure transmitter; sending a pressure transmitter work task starting request to a task quantification module;
after the task is successfully started, the task starting and stopping module monitors the task heartbeat and ensures that the task is restarted after the task is accidentally invalid; when the task is normally finished, the task starting and stopping module sends a task finishing message to the resource management module;
the task quantification module is used for modeling the calculation resource quantity required by the task under different calculation equipment according to the time delay requirement of the calculation task; when a task starting request is received, calculating resources required by a task according to task delay constraints and equipment details, and sending the resources to a resource management module;
the resource management module is responsible for distributing computing resources for the tasks and managing the computing resources of the equipment by one or any combination of the following:
when a task requests resources, computing resources are allocated to the task, the task image is obtained, and the task is started;
when the task is finished, recovering the computing resources of the task;
when the local computing resources cannot meet the task requirements, the resource management module is responsible for sending computation offloading requests to other edge devices such as edge servers.
2. The capacitive pressure transmitter edge computing system of claim 1, wherein: the task start-stop function in the task start-stop module is implemented by a task start-stop program that queries and sets a task start-stop table in real time; the start-stop mode is one or any combination of time triggering, signal triggering and manual triggering; the task start-stop table stores the start and stop conditions of all tasks, and the task start-stop program queries these conditions in real time and sends a start or stop request once a condition is met.
3. The capacitive pressure transmitter edge computing system of claim 1, wherein: the task start-stop module determines that a task has started successfully by monitoring the task heartbeat signal; after the task starts successfully, the task sends a heartbeat signal to the task start-stop module at regular intervals, and when no task heartbeat signal is received within a certain time, the task monitoring module sends a task restart request to the resource management module; when the task ends normally, a task end signal is sent to the task monitoring module, and the task monitoring module resets the task start-stop table after receiving the stop signal.
4. The capacitive pressure transmitter edge computing system of claim 3, further comprising: and after receiving the task request ID and the time delay constraint, the task quantification module returns the resource requirements of the task under different computing devices according to the stored mathematical model.
5. The capacitive pressure transmitter edge calculation work system of claim 1, wherein the task mathematical model in the task quantification module is obtained on each device by polynomial fitting at deployment time; the main fitting form is two univariate second-order polynomial fits, the task delay being decomposed into computation delay or/and communication delay, which are fitted against processor percentage and bandwidth speed, respectively.
6. The capacitive pressure transmitter edge computing system of claim 1, wherein: resource allocation by the resource management module is completed through a containerd interface; when the resource management module obtains a task ID that needs to be started and its computing resource requirement, if the current device can meet the computing resource requirement, it obtains the containerd image from the image repository by the task ID, allocates computing resources for it and starts it; the processor and network resources allocated to a task by the resource management module are given by the task quantification module, the required external-memory resources are not limited, and the required memory resources are provided dynamically by the resource management module.
7. The capacitive pressure transmitter edge computing work system according to claim 1, wherein tasks are packaged as task images by containerd and stored in a containerd repository, the stored content including the unique task ID, the task image body, the task memory requirement and the task external-memory requirement; after the task start-stop module starts a task, it sends the task ID and delay constraint to the task quantification module to obtain the computing resources required by the task; the task quantification module builds the task model during task development, quantifying by second-order polynomial fitting the relation between CPU and computation delay and between bandwidth and communication delay, the total task delay being the sum of computation delay and communication delay; after the task quantification module completes task quantification, it sends the task resource requirement and constraint to the resource management module, which allocates computing resources for the task through the containerd API and starts the task; after the task finishes, a task end identifier is sent to the task start-stop module, or the task start-stop module actively ends the task, and after the resource management module receives the task stop signal it ends the task and recovers the task resources.
8. The capacitive pressure transmitter edge computing system of claim 1, further comprising:
and the image storage warehouse is responsible for storing container images of all dynamically executable tasks, and comprises a plurality of versions of images under different computing architectures, wherein each image has a unique task ID.
9. A working method for calculating the edge of a capacitive pressure transmitter is characterized by comprising the following steps:
S1, in the task development process, package tasks as container images, package multiple container images according to the processor architecture, and set the task delay constraint, task trigger condition, and required memory and external-memory resource information;
S2, model the relation between task delay and computing resources during task development; the task delay is mainly divided into computation delay and communication delay, the computation delay and communication delay of a task are measured in several groups under different computing and communication resources, and the relation between computing resources and computation delay and between communication resources and communication delay is then modeled with two second-order polynomials;
S3, the task start-stop module checks the task start-stop table in real time, and when it detects that a task needs to be started it sends the task ID and delay constraint to the task modeling module;
S4, after receiving the task start request, the task modeling module calculates the resource requirement according to the task delay constraint and sends it to the resource management module;
S5, after the resource management module receives the task start request, if the local resources can meet the task requirement it allocates computing resources for the task through the containerd interface; if the local computing resources cannot meet the task requirement, it sends a computation offloading request to other edge computing nodes;
S6, after the task is deployed successfully, it sends a heartbeat signal to the task start-stop module in real time; the task start-stop module monitors the task heartbeat and restarts the task when it ends unexpectedly;
S7, after the task is deployed successfully, the resource management module monitors the task's memory usage in real time and allocates memory for it dynamically;
S8, after the task has finished executing, the task itself or the task start-stop module decides to stop the task, and the resource management module recovers the task resources after the task stops.
CN202011032940.0A 2020-09-27 2020-09-27 Capacitive pressure transmitter edge computing working system and working method Active CN112231097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011032940.0A CN112231097B (en) 2020-09-27 2020-09-27 Capacitive pressure transmitter edge computing working system and working method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011032940.0A CN112231097B (en) 2020-09-27 2020-09-27 Capacitive pressure transmitter edge computing working system and working method

Publications (2)

Publication Number Publication Date
CN112231097A true CN112231097A (en) 2021-01-15
CN112231097B CN112231097B (en) 2024-05-24

Family

ID=74107827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011032940.0A Active CN112231097B (en) 2020-09-27 2020-09-27 Capacitive pressure transmitter edge computing working system and working method

Country Status (1)

Country Link
CN (1) CN112231097B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113032120A (en) * 2021-03-26 2021-06-25 重庆大学 Industrial field big data task coordination degree method based on edge calculation
CN113886094A (en) * 2021-12-07 2022-01-04 浙江大云物联科技有限公司 Resource scheduling method and device based on edge calculation
CN114048040A (en) * 2021-11-29 2022-02-15 中南大学 Task scheduling method based on time delay relation between memory and image classification model

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002344504A (en) * 2001-05-14 2002-11-29 Nippon Telegr & Teleph Corp <Ntt> Computer network resource allocation method, resource control type computer network system, resource control server, edge switch, computer network resource allocation program, and storage medium storing computer network resource allocation program
WO2015003420A1 (en) * 2013-07-09 2015-01-15 国云科技股份有限公司 Resource deployment method for cloud computing environment
CN105373431A (en) * 2015-10-29 2016-03-02 武汉联影医疗科技有限公司 Computer system resource management method and computer resource management system
CN105704458A (en) * 2016-03-22 2016-06-22 北京邮电大学 Container-technology-based video monitoring cloud service platform realization method and system
US20170155595A1 (en) * 2015-11-29 2017-06-01 International Business Machines Corporation Reuse of computing resources for cloud managed services
US20190116128A1 (en) * 2017-10-18 2019-04-18 Futurewei Technologies, Inc. Dynamic allocation of edge computing resources in edge computing centers
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration
US20190196875A1 (en) * 2017-10-27 2019-06-27 EMC IP Holding Company LLC Method, system and computer program product for processing computing task
US20190354413A1 (en) * 2018-05-17 2019-11-21 International Business Machines Corporation Optimizing dynamic resource allocations for memory-dependent workloads in disaggregated data centers
CN110928691A (en) * 2019-12-26 2020-03-27 广东工业大学 Traffic data-oriented edge collaborative computing unloading method
US20200145337A1 (en) * 2019-12-20 2020-05-07 Brian Andrew Keating Automated platform resource management in edge computing environments

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002344504A (en) * 2001-05-14 2002-11-29 Nippon Telegr & Teleph Corp <Ntt> Computer network resource allocation method, resource control type computer network system, resource control server, edge switch, computer network resource allocation program, and storage medium storing computer network resource allocation program
WO2015003420A1 (en) * 2013-07-09 2015-01-15 国云科技股份有限公司 Resource deployment method for cloud computing environment
CN105373431A (en) * 2015-10-29 2016-03-02 武汉联影医疗科技有限公司 Computer system resource management method and computer resource management system
US20170155595A1 (en) * 2015-11-29 2017-06-01 International Business Machines Corporation Reuse of computing resources for cloud managed services
CN105704458A (en) * 2016-03-22 2016-06-22 北京邮电大学 Container-technology-based video monitoring cloud service platform realization method and system
US20190116128A1 (en) * 2017-10-18 2019-04-18 Futurewei Technologies, Inc. Dynamic allocation of edge computing resources in edge computing centers
US20190196875A1 (en) * 2017-10-27 2019-06-27 EMC IP Holding Company LLC Method, system and computer program product for processing computing task
US20190354413A1 (en) * 2018-05-17 2019-11-21 International Business Machines Corporation Optimizing dynamic resource allocations for memory-dependent workloads in disaggregated data centers
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration
US20200145337A1 (en) * 2019-12-20 2020-05-07 Brian Andrew Keating Automated platform resource management in edge computing environments
CN110928691A (en) * 2019-12-26 2020-03-27 广东工业大学 Traffic data-oriented edge collaborative computing unloading method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAZOUZI H et al.: "Elastic offloading of multitasking applications to mobile edge computing", 22nd International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, page 307 *
杨祖业 et al.: "Reference architecture of an industrial internet platform for intelligent equipment" (面向智能装备的工业互联网平台参考架构), 《中国仪器仪表》, no. 6, pages 31-36 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113032120A (en) * 2021-03-26 2021-06-25 重庆大学 Industrial field big data task coordination degree method based on edge calculation
CN114048040A (en) * 2021-11-29 2022-02-15 中南大学 Task scheduling method based on time delay relation between memory and image classification model
CN113886094A (en) * 2021-12-07 2022-01-04 浙江大云物联科技有限公司 Resource scheduling method and device based on edge calculation

Also Published As

Publication number Publication date
CN112231097B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
CN112231097B (en) Capacitive pressure transmitter edge computing working system and working method
CN105357038B (en) Monitor the method and system of cluster virtual machine
US7761487B2 (en) Predicting out of memory conditions using soft references
CN109885389A (en) A kind of parallel deep learning scheduling training method and system based on container
US20120072579A1 (en) Monitoring cloud-runtime operations
EP1361513A2 (en) Systems and methods for providing dynamic quality of service for a distributed system
CN109117252B (en) Method and system for task processing based on container and container cluster management system
CN108874549B (en) Resource multiplexing method, device, terminal and computer readable storage medium
CN111913818A (en) Method for determining dependency relationship between services and related device
WO2018014812A1 (en) Risk identification method, risk identification apparatus, and cloud risk identification apparatus and system
Liu et al. Availability prediction and modeling of high mobility oscar cluster
CN112631725A (en) Cloud-edge-cooperation-based smart city management system and method
AU2015328574B2 (en) Real-time reporting based on instrumentation of software
CN115373835A (en) Task resource adjusting method and device for Flink cluster and electronic equipment
CN111459610A (en) Model deployment method and device
CN112162852A (en) Multi-architecture CPU node management method, device and related components
CN114138501B (en) Processing method and device for edge intelligent service for field safety monitoring
CN111796933A (en) Resource scheduling method, device, storage medium and electronic equipment
US20150301877A1 (en) Naming of nodes in net framework
CN115686813A (en) Resource scheduling method and device, electronic equipment and storage medium
CN108958840B (en) Dynamic detection, merging and loading method for cluster configuration
CN110147265A (en) A method of the integrated virtualization system based on microcontroller platform
Peng Gscheduler: Reducing mobile device energy consumption
CN112114972B (en) Data inclination prediction method and device
CN111399983B (en) Scheduling method and device based on container scheduling service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant