CN118093125A - Task processing method of edge computing device and edge computing device system - Google Patents

Task processing method of edge computing device and edge computing device system

Info

Publication number
CN118093125A
Authority
CN
China
Prior art keywords
edge computing
computing device
equipment
list
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410194364.1A
Other languages
Chinese (zh)
Inventor
黄建斌
张晓辉
庞殊杨
刘晓松
冉星明
周青松
向发川
冯远航
李强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Chongqing Information Technology Co Ltd
Original Assignee
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Chongqing Information Technology Co Ltd filed Critical CISDI Chongqing Information Technology Co Ltd
Priority to CN202410194364.1A priority Critical patent/CN118093125A/en
Publication of CN118093125A publication Critical patent/CN118093125A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Factory Administration (AREA)

Abstract

The application provides a task processing method for an edge computing device and an edge computing device system, and relates to the technical field of data transmission. The method makes reasonable use of the computing power of each edge computing device, thereby improving the resource utilization of the edge computing devices. In addition, because the cloud platform determines the scheduling list, data processing efficiency can be improved.

Description

Task processing method of edge computing device and edge computing device system
Technical Field
The present application relates to the field of data transmission technologies, and in particular, to a task processing method of an edge computing device and an edge computing device system.
Background
Edge computing devices are widely used in scenarios such as industrial parks, finance, and transportation. By deploying edge computing devices with a certain amount of computing power locally, some algorithm or data processing capabilities can be provided on site: local processing significantly reduces the response time of some AI algorithms, and local processing and preliminary screening of the data mean that only meaningful data needs to be sent to the cloud platform. AI vision (i.e., computer vision) is one of the important fields of artificial intelligence. Video analysis systems built on it train computers to replicate the human visual system, so that digital devices (e.g., face detectors, QR code scanners) can recognize and process objects in images and videos as humans do; such systems are widely used across industries.
In the industrial field, anomaly detection, OCR recognition, defect detection, and the like on production lines have gradually shifted from traditional manual visual inspection to AI image detection, and it is currently common for multiple edge computing devices to process and inspect images captured at multiple points along a production line. Because the number of image capture and detection tasks differs across scenarios, and the computing power of different edge computing devices also differs, the task loads of different edge computing devices within the same time period are unbalanced. This wastes the processing resources of the edge computing devices and also reduces the efficiency of image detection on the production line.
Therefore, how to improve the resource utilization of edge computing devices is an urgent problem to be solved.
Disclosure of Invention
In view of the above-mentioned drawbacks of the related art, the present application provides a task processing method of an edge computing device and an edge computing device system, so as to solve the above-mentioned technical problems.
The application provides a task processing method for an edge computing device, which includes the following steps:
a first edge computing device collects a first detection area image, generates a first task according to the first detection area image, and compares a current device load parameter of the first edge computing device with a first preset threshold, where the first edge computing device is any one of a plurality of edge computing devices capable of data interaction and each edge computing device corresponds to one detection area;
if the current device load parameter of the first edge computing device is greater than the first preset threshold, the first edge computing device sends list request information to a cloud platform;
in response to receiving the list request information sent by the first edge computing device, the cloud platform obtains current device load parameters of all second edge computing devices, generates a local scheduling list for the first edge computing device, and sends the local scheduling list to the first edge computing device, where the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction;
the first edge computing device obtains the local scheduling list sent by the cloud platform and sends the first task to the target edge computing device with the highest scheduling priority in the scheduling list;
and the target edge computing device receives and processes the first task to obtain a processing result and sends the processing result to the cloud platform and/or the first edge computing device.
In an embodiment of the present application, the cloud platform obtaining the current device load parameters of all second edge computing devices and generating the local scheduling list of the first edge computing device includes:
sending load request information to the second edge computing devices;
obtaining the current device load parameters sent by all second edge computing devices according to the load request information;
and determining the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, where the local scheduling list of the first edge computing device includes the scheduling priorities of all the second edge computing devices.
In an embodiment of the present application, after sending the load request information to the second edge computing devices, the method further includes:
sending load request information to the first edge computing device;
and generating a local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and sending the local scheduling list to each second edge computing device.
In an embodiment of the present application, after sending the local scheduling list to each second edge computing device, the method further includes:
each second edge computing device determining whether a local scheduling list already exists;
if a local scheduling list already exists, updating the scheduling list according to the local scheduling list sent by the cloud platform;
and if no local scheduling list exists, storing the local scheduling list sent by the cloud platform.
In an embodiment of the present application, the method further includes:
a third edge computing device collects a third detection area image, generates a third task according to the third detection area image, and compares a current device load parameter of the third edge computing device with a third preset threshold, where the third edge computing device is any one of the plurality of edge computing devices capable of data interaction;
if the current device load parameter of the third edge computing device is greater than the third preset threshold, determining whether a local scheduling list exists, and if so, determining the list update time; determining an update interval according to the update time and the current time, and if the update interval is greater than a preset interval, sending list request information to the cloud platform;
and the third edge computing device obtains the local scheduling list sent by the cloud platform and sends the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
In an embodiment of the present application, after determining the update interval according to the update time and the current time, the method further includes:
if the update interval is less than or equal to the preset interval, sending the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
In an embodiment of the present application, after comparing the current device load parameter of the first edge computing device with the first preset threshold, the method further includes:
if the current device load parameter is less than or equal to the first preset threshold, processing the first task on the first edge computing device.
To achieve the above and other related objects, the present application provides an edge computing device system, comprising:
a plurality of edge computing devices, each configured to capture the image of its corresponding detection area on the production line;
a cloud platform, configured to, in response to receiving list request information sent by a first edge computing device, obtain current device load parameters of all second edge computing devices, generate a local scheduling list for the first edge computing device, and send the local scheduling list to the first edge computing device, where the first edge computing device is any one of the plurality of edge computing devices and the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction;
Each edge computing device includes:
an acquisition module, configured to acquire the detection area image corresponding to the local device;
a task generating module, configured to generate a detection task according to the detection area image;
a comparison module, configured to compare the current device load parameter of the local device with the preset threshold corresponding to the local device;
an information sending module, configured to send list request information to the cloud platform if the current device load parameter of the local device is greater than the preset threshold corresponding to the local device;
a task sending module, configured to obtain the local scheduling list sent by the cloud platform and send the local detection task to the target edge computing device with the highest scheduling priority in the scheduling list;
and a task processing module, configured to process the local detection task, or to receive and process detection tasks sent by other edge computing devices to obtain a processing result, and to send the processing result to the cloud platform and/or other edge computing devices.
In an embodiment of the present application, the cloud platform is further configured to:
sending load request information to the second edge computing devices;
obtaining the current device load parameters sent by all second edge computing devices according to the load request information;
and determining the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, where the local scheduling list of the first edge computing device includes the scheduling priorities of all the second edge computing devices.
In an embodiment of the present application, the cloud platform is further configured to:
sending load request information to the first edge computing device;
and generating a local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and sending the local scheduling list to each edge computing device.
As described above, the task processing method of the edge computing device and the edge computing device system provided by the application have the following beneficial effects:
A first edge computing device collects a first detection area image, generates a first task according to the first detection area image, and compares its current device load parameter with a first preset threshold. If the current device load parameter of the first edge computing device is greater than the first preset threshold, the first edge computing device sends list request information to a cloud platform. In response to receiving the list request information sent by the first edge computing device, the cloud platform obtains the current device load parameters of all second edge computing devices, generates a local scheduling list for the first edge computing device, and sends it to the first edge computing device. The first edge computing device obtains the local scheduling list sent by the cloud platform and sends the first task to the target edge computing device with the highest scheduling priority in the scheduling list; the target edge computing device receives and processes the first task to obtain a processing result and sends the processing result to the cloud platform and/or the first edge computing device. By comparing the current device load parameter of the first edge computing device with its first preset threshold, when the first edge computing device is heavily loaded and cannot process the first task quickly, the first task can be sent to the target edge computing device with the highest scheduling priority in the scheduling list for processing. The computing power of each edge computing device is thus used reasonably, improving the resource utilization of the edge computing devices. In addition, because the cloud platform determines the scheduling list, data processing efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without inventive effort. In the drawings:
FIG. 1 is a flow chart of a method of task processing for an edge computing device, as illustrated by an exemplary embodiment of the application;
FIG. 2 is a block diagram of an edge computing device system shown in accordance with an exemplary embodiment of the present application.
Detailed Description
Further advantages and effects of the present application will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The application may also be practiced or carried out in other embodiments, and the details of this description may be modified or varied in various respects without departing from the spirit and scope of the present application. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present application; the drawings show only the components related to the present application and are not drawn according to the number, shape, and size of the components in an actual implementation. In an actual implementation, the form, number, and proportion of the components may vary, and the layout of the components may be more complicated.
In the following description, numerous details are set forth in order to provide a more thorough explanation of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the embodiments of the present application may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail, in order to avoid obscuring the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a task processing method of an edge computing device according to an exemplary embodiment of the present application. As can be seen with reference to fig. 1, the task processing method of the edge computing device may include:
In step S110, a first edge computing device collects a first detection area image, generates a first task according to the first detection area image, and compares a current device load parameter of the first edge computing device with a first preset threshold, where the first edge computing device is any one of a plurality of edge computing devices capable of data interaction and each edge computing device corresponds to one detection area.
In one possible implementation, the plurality of edge computing devices capable of data interaction may be connected by network communication to enable data transmission between the edge computing devices. In addition, each edge computing device may be connected to the cloud platform to enable data transmission with the cloud platform. Each edge computing device may correspond to one detection area and may acquire the detection area image of its corresponding detection area.
For example, the edge computing devices may be AI (Artificial Intelligence) edge computing devices, each of which may include an image acquisition device to implement the function of acquiring the detection area image.
It should be noted that edge computing refers to an open platform that integrates core capabilities of networking, computing, storage, and applications at the network edge, close to the object or data source, and provides intelligent edge services nearby, so as to meet the key requirements of industry digitalization for agile connection, real-time services, data optimization, application intelligence, and security and privacy protection. An edge computing device is the carrier that implements edge computing functions.
In one possible implementation, the first edge computing device may compare its current device load parameter with the first preset threshold. The first preset threshold may be the maximum device load at which the operation of the first edge computing device is not affected. Once the current device load parameter exceeds the first preset threshold, the first edge computing device cannot process the first task immediately; it must either wait until its current device load parameter falls below the first preset threshold before processing the first task, or send the first task to another edge computing device for processing.
It should be noted that the first edge computing device may be any one of the plurality of edge computing devices capable of data interaction. Each edge computing device may correspond to a preset threshold; the preset thresholds of different edge computing devices may be the same or different, and each preset threshold may be set according to the actual performance of the corresponding edge computing device.
For example, the current device load parameter may be a weighted sum of a first metric factor and a second metric factor, where the first metric factor may be a normalized value of the number of currently active tasks on the edge computing device, and the second metric factor may be a normalized value of the current load rate, or of the average load rate over the most recent period, of the edge computing device. The weights can be set by the operator according to the actual situation.
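A minimal sketch, not taken from the patent text, of how such a weighted load parameter might be computed; the normalization bound, the default weights, and the function name are assumptions used only for illustration:

```python
def device_load_parameter(active_tasks: int,
                          load_rate: float,
                          max_tasks: int = 16,
                          w_tasks: float = 0.5,
                          w_load: float = 0.5) -> float:
    """Weighted sum of two normalized metric factors (illustrative only).

    active_tasks: number of tasks currently being processed on the device.
    load_rate:    current (or recent average) load rate in [0, 1].
    max_tasks:    assumed upper bound used to normalize the task count.
    w_tasks, w_load: weights chosen by the operator.
    """
    task_factor = min(active_tasks / max_tasks, 1.0)   # first metric factor, normalized
    load_factor = min(max(load_rate, 0.0), 1.0)        # second metric factor, clamped to [0, 1]
    return w_tasks * task_factor + w_load * load_factor
```

The resulting value is what the device compares against its preset threshold in step S110.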
In step S120, if the current device load parameter of the first edge computing device is greater than the first preset threshold, the first edge computing device sends list request information to the cloud platform.
In one possible implementation, if the current device load parameter of the first edge computing device is greater than the first preset threshold, the first edge computing device sends list request information to the cloud platform. A current device load parameter above the first preset threshold means that the first edge computing device is heavily loaded and would process the first task slowly, so list request information needs to be sent to the cloud platform.
It should be noted that, if the current device load parameter is less than or equal to the first preset threshold, the first task is processed by the first edge computing device.
In step S130, in response to receiving the list request information sent by the first edge computing device, the cloud platform obtains the current device load parameters of all second edge computing devices, generates a local scheduling list for the first edge computing device, and sends the local scheduling list to the first edge computing device, where the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction.
In one possible implementation, after the first edge computing device sends the list request to the cloud platform, the cloud platform, in response to receiving the list request information, may obtain the current device load parameters of all second edge computing devices, generate the scheduling list of the first edge computing device, and then send that scheduling list to the first edge computing device.
It should be noted that, compared with having the first edge computing device obtain the current device load parameters of the other edge computing devices and generate its own local scheduling list, having the cloud platform obtain the current device load parameters of all second edge computing devices and generate the local scheduling list of the first edge computing device frees up computing and storage resources on the edge computing device and avoids having this additional computation affect the generation of the local scheduling list.
In an exemplary embodiment, the process in step S130 in which the cloud platform obtains the current device load parameters of all second edge computing devices and generates the local scheduling list of the first edge computing device may include steps S131 to S137.
Step S131: send load request information to the second edge computing devices.
In one possible implementation, the cloud platform may send load request information to the second edge computing devices. After receiving the list request information sent by the first edge computing device, the cloud platform needs to obtain the current device load parameter of each second edge computing device, and at this point it may send load request information to each second edge computing device.
Step S132: obtain the current device load parameters sent by all second edge computing devices according to the load request information.
In one possible implementation, the cloud platform may obtain the current device load parameters sent by all second edge computing devices in response to the load request information. After receiving the load request information sent by the cloud platform, each second edge computing device may compute its own current device load parameter and send it to the cloud platform.
Step S133: determine the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, where the local scheduling list of the first edge computing device includes the scheduling priorities of all second edge computing devices.
In one possible implementation, a scheduling list covering all second edge computing devices, that is, all edge computing devices except the first edge computing device, may be determined.
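A minimal sketch of how the cloud platform might turn the collected load parameters into a priority-ordered scheduling list, assuming (as described in step S140 below) that a lower load parameter corresponds to a higher scheduling priority; the function name and data structures are illustrative assumptions:

```python
from typing import Dict, List

def build_schedule_list(load_params: Dict[str, float],
                        requesting_device: str) -> List[str]:
    """Return device IDs ordered from highest to lowest scheduling priority.

    load_params:       current device load parameter per device ID.
    requesting_device: the first edge computing device; it is excluded because
                       the list only covers the second edge computing devices.
    """
    candidates = {dev: load for dev, load in load_params.items()
                  if dev != requesting_device}
    # Lower load parameter -> higher scheduling priority -> earlier in the list.
    return sorted(candidates, key=candidates.get)
```

The first entry of the returned list is the target edge computing device to which the first task would be forwarded.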
Step S134: send load request information to the first edge computing device.
In one possible implementation, load request information may also be sent to the first edge computing device to obtain its current device load parameter. Since the current device load parameters of all second edge computing devices have already been obtained in step S132, once this load request has been sent to the first edge computing device, the current device load parameters of all edge computing devices are available, and the local scheduling list of each edge computing device can therefore be determined.
Step S134: generate the local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and send the local scheduling list to each second edge computing device.
In one possible implementation, the local scheduling list corresponding to each edge computing device may be generated according to the current device load parameters of the edge computing devices, and the local scheduling list may be sent to each second edge computing device.
In step S135, each second edge computing device determines whether a local scheduling list already exists.
In one possible implementation, upon receiving its local scheduling list, each second edge computing device may determine whether a local scheduling list already exists on the device.
Step S136: if a local scheduling list already exists, update the scheduling list according to the local scheduling list sent by the cloud platform.
In one possible implementation, if a local scheduling list already exists, the second edge computing device has previously received a scheduling list from the cloud, and the stored scheduling list may be updated with the local scheduling list sent by the cloud platform.
Step S137: if no local scheduling list exists, store the local scheduling list sent by the cloud platform.
In one possible implementation, if no local scheduling list exists, the second edge computing device has not previously received a scheduling list from the cloud, and the local scheduling list sent by the cloud platform may be stored.
In step S140, the first edge computing device obtains the local scheduling list sent by the cloud platform and sends the first task to the target edge computing device with the highest scheduling priority in the scheduling list.
In one possible implementation, the first edge computing device may obtain the local scheduling list sent by the cloud platform and send the first task to the target edge computing device with the highest scheduling priority in the scheduling list. The local scheduling list may show the other edge computing devices, each with a corresponding scheduling priority. The scheduling priority may be computed from the current device load parameter of each edge computing device: the lower the current device load parameter, the higher the corresponding scheduling priority. After receiving the local scheduling list sent by the cloud platform, the first edge computing device may send the first task to the target edge computing device with the highest scheduling priority in the scheduling list; because that device has the lowest current device load parameter, it has sufficient processing capacity to handle the first task.
It should be noted that, after the first task is sent to the target edge computing device with the highest scheduling priority in the scheduling list, the target edge computing device may call an internal task processing module to process the first task.
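A minimal end-to-end sketch of the decision made on the first edge computing device across steps S110 to S140; the object interfaces, the transport layer, and the message format are assumptions and not part of the patent:

```python
def handle_new_task(device, task, preset_threshold, cloud):
    """Process a task locally or offload it to the highest-priority device (illustrative only).

    device: the first edge computing device; load_parameter(), process(), and
            send_task(target_id, task) are assumed interfaces.
    cloud:  client for the cloud platform; request_schedule_list() is assumed to
            return a priority-ordered list of second edge computing device IDs.
    """
    if device.load_parameter() <= preset_threshold:
        # Load is acceptable: the first edge computing device processes the first task itself.
        return device.process(task)

    # Steps S120-S140: request a local scheduling list from the cloud platform and
    # forward the task to the device with the highest scheduling priority.
    schedule_list = cloud.request_schedule_list(requester=device.device_id)
    target_id = schedule_list[0]          # highest priority = lowest current load
    device.send_task(target_id, task)
```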
In step S150, the target edge computing device receives and processes the first task to obtain a processing result and sends the processing result to the cloud platform and/or the first edge computing device.
In one possible implementation, the target edge computing device may receive and process the first task to obtain a processing result and may send the processing result to the cloud platform and/or the first edge computing device.
It should be noted that the task processing method of an edge computing device provided by the embodiments of the application can balance the load of image detection tasks across edge computing devices located in the same network segment within the same time period, avoid wasting the processing resources of the edge computing devices, and significantly improve the efficiency of image detection on the production line.
In an exemplary embodiment, the task processing method of the edge computing device may further include steps S210 to S240.
Step S210, the third edge computing device collects a third detection area image, generates a third task according to the third detection area image, and compares the current device load parameter of the third edge computing device with a third preset threshold.
It should be noted that the third edge computing device is any one of a plurality of edge computing devices capable of data interaction.
In one possible implementation, the third edge computing device acquires a third detection area image, generates a third task according to the third detection area image, and compares the magnitude between the current device load parameter of the third edge computing device and a third preset threshold.
In step S220, if the current device load parameter of the third edge computing device is greater than the third preset threshold, it is determined whether a local scheduling list exists, and if so, the list update time is determined; an update interval is determined according to the update time and the current time, and if the update interval is greater than a preset interval, list request information is sent to the cloud platform.
In one possible implementation, if the current device load parameter of the third edge computing device is greater than the third preset threshold, it may be determined whether a local scheduling list exists; if it exists, the list update time is determined, and the update interval is then determined from the update time and the current time. If the update interval is greater than the preset interval, the device load of each edge computing device may have changed enough within that interval to affect the scheduling priorities, and list request information may therefore be sent to the cloud platform.
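A minimal sketch of this staleness check across steps S220 to S240; the preset interval value, the clock source, and the cached attributes are assumptions for illustration:

```python
import time

def dispatch_with_cached_list(device, task, cloud, preset_interval_s: float = 30.0):
    """Use the cached local scheduling list if it is still fresh (illustrative only).

    device.schedule_list       -- cached priority-ordered device IDs, or None
    device.schedule_updated_at -- timestamp of the last list update, in seconds
    """
    if device.schedule_list is not None:
        update_interval = time.time() - device.schedule_updated_at
        if update_interval <= preset_interval_s:
            # The list is recent: priorities are unlikely to have shifted much,
            # so send the task to the highest-priority target directly (step S240).
            device.send_task(device.schedule_list[0], task)
            return

    # No cached list, or it is stale: request a fresh one from the cloud platform (step S220).
    device.schedule_list = cloud.request_schedule_list(requester=device.device_id)
    device.schedule_updated_at = time.time()
    device.send_task(device.schedule_list[0], task)  # step S230
```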
In step S230, the third edge computing device obtains the local scheduling list sent by the cloud platform, and sends the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
In one possible implementation manner, the third edge computing device may acquire a local scheduling list sent by the cloud platform, and send the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
In step S240, if the update interval is less than or equal to the preset interval, the third task is sent to the target edge computing device with the highest scheduling priority in the scheduling list.
In one possible implementation, if the update interval is less than or equal to the preset interval, the scheduling priorities of the edge computing devices in the scheduling list are unlikely to have changed significantly, and the third task may be sent to the target edge computing device with the highest scheduling priority in the scheduling list.
In summary, by comparing the current device load parameter of the first edge computing device with its first preset threshold, the scheme of the embodiments of the application can, when the first edge computing device is heavily loaded and cannot process the first task quickly, send the first task to the target edge computing device with the highest scheduling priority in the scheduling list for processing. The computing power of each edge computing device is thus used reasonably, improving the resource utilization of the edge computing devices. In addition, because the cloud platform determines the scheduling list, data processing efficiency can be improved.
FIG. 2 is a block diagram of an edge computing device system shown in accordance with an exemplary embodiment of the present application. As shown in fig. 2, the exemplary edge computing device system may include:
a plurality of edge computing devices 210, each configured to capture the detection area image of its corresponding detection area on the production line;
a cloud platform 220, connected to each edge computing device and configured to, in response to receiving list request information sent by a first edge computing device, obtain current device load parameters of all second edge computing devices, generate a scheduling list for the first edge computing device, and send that scheduling list to the first edge computing device, where the first edge computing device is any one of the plurality of edge computing devices and the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction;
each edge computing device 210 may include:
an acquisition module 211, configured to acquire the detection area image corresponding to the local device;
a task generating module 212, configured to generate a detection task according to the detection area image;
a comparison module 213, configured to compare the current device load parameter of the local device with the preset threshold corresponding to the local device;
an information sending module 214, configured to send list request information to the cloud platform if the current device load parameter of the local device is greater than the preset threshold corresponding to the local device;
a task sending module 215, configured to obtain the local scheduling list sent by the cloud platform and send the local detection task to the target edge computing device with the highest scheduling priority in the scheduling list;
a task processing module 216, configured to process the local detection task, or to receive and process detection tasks sent by other edge computing devices to obtain a processing result, and to send the processing result to the cloud platform and/or other edge computing devices.
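A minimal structural sketch grouping the modules 211 to 216 described above into one class; the class name, method names, and types are assumptions used only for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeComputingDevice:
    """Illustrative grouping of the per-device modules 211-216."""
    device_id: str
    preset_threshold: float
    schedule_list: Optional[List[str]] = None    # cached local scheduling list

    def acquire_image(self): ...                 # acquisition module (211)

    def generate_task(self, image): ...          # task generating module (212)

    def over_threshold(self) -> bool:            # comparison module (213)
        return self.load_parameter() > self.preset_threshold

    def request_schedule_list(self): ...         # information sending module (214)

    def send_task(self, target_id: str, task): ...   # task sending module (215)

    def process_task(self, task): ...            # task processing module (216)

    def load_parameter(self) -> float:
        return 0.0   # placeholder; see the weighted-sum sketch in step S110 above
```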
In one possible implementation, the cloud platform is further configured to:
sending load request information to the second edge computing devices;
obtaining the current device load parameters sent by all second edge computing devices according to the load request information;
and determining the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, where the local scheduling list of the first edge computing device includes the scheduling priorities of all the second edge computing devices.
In one possible implementation, the cloud platform is further configured to:
sending load request information to the first edge computing device;
and generating a local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and sending the local scheduling list to each edge computing device.
It should be noted that the edge computing device system provided in the foregoing embodiment and the task processing method of the edge computing device provided in the foregoing embodiments belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiments and is not repeated here. In practical applications, the edge computing device system provided in the foregoing embodiment may allocate the above functions to different functional modules as needed, that is, the internal structure of the system may be divided into different functional modules to perform all or part of the functions described above, which is not limited herein.
The embodiments of the application also provide an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the task processing method of the edge computing device provided in the above embodiments.
The units involved in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the task processing method of the edge computing device provided in the above embodiments. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the task processing method of the edge computing device provided in the above embodiments.
In the present disclosure, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to.
In the present application, the term "and/or" is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front-rear association object is an "or" relationship.
The above embodiments merely illustrate the principles of the present application and its effects and are not intended to limit the application. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and changes made by those skilled in the art without departing from the spirit and technical ideas disclosed in the present application shall still be covered by the appended claims.

Claims (10)

1. A method for processing tasks of an edge computing device, comprising:
a first edge computing device collects a first detection area image, generates a first task according to the first detection area image, and compares a current device load parameter of the first edge computing device with a first preset threshold, wherein the first edge computing device is any one of a plurality of edge computing devices capable of data interaction and each edge computing device corresponds to one detection area;
if the current device load parameter of the first edge computing device is greater than the first preset threshold, the first edge computing device sends list request information to a cloud platform;
in response to receiving the list request information sent by the first edge computing device, the cloud platform obtains current device load parameters of all second edge computing devices, generates a local scheduling list for the first edge computing device, and sends the local scheduling list to the first edge computing device, wherein the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction;
the first edge computing device obtains the local scheduling list sent by the cloud platform and sends the first task to the target edge computing device with the highest scheduling priority in the scheduling list;
and the target edge computing device receives and processes the first task to obtain a processing result and sends the processing result to the cloud platform and/or the first edge computing device.
2. The method for processing tasks of an edge computing device according to claim 1, wherein the cloud platform obtaining the current device load parameters of all second edge computing devices and generating the local scheduling list of the first edge computing device comprises:
sending load request information to the second edge computing devices;
obtaining the current device load parameters sent by all second edge computing devices according to the load request information;
and determining the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, wherein the local scheduling list of the first edge computing device comprises the scheduling priorities of all the second edge computing devices.
3. The method for processing tasks of an edge computing device according to claim 2, wherein after sending the load request information to the second edge computing devices, the method further comprises:
sending load request information to the first edge computing device;
and generating a local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and sending the local scheduling list to each second edge computing device.
4. The method for processing tasks of an edge computing device according to claim 3, wherein after sending the local scheduling list to each second edge computing device, the method further comprises:
each second edge computing device determining whether a local scheduling list already exists;
if a local scheduling list already exists, updating the scheduling list according to the local scheduling list sent by the cloud platform;
and if no local scheduling list exists, storing the local scheduling list sent by the cloud platform.
5. The method for processing tasks of an edge computing device according to claim 3, further comprising:
a third edge computing device collects a third detection area image, generates a third task according to the third detection area image, and compares a current device load parameter of the third edge computing device with a third preset threshold, wherein the third edge computing device is any one of the plurality of edge computing devices capable of data interaction;
if the current device load parameter of the third edge computing device is greater than the third preset threshold, determining whether a local scheduling list exists, and if so, determining the list update time; determining an update interval according to the update time and the current time, and if the update interval is greater than a preset interval, sending list request information to the cloud platform;
and the third edge computing device obtains the local scheduling list sent by the cloud platform and sends the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
6. The method for processing tasks of an edge computing device according to claim 5, wherein after determining the update interval according to the update time and the current time, the method further comprises:
if the update interval is less than or equal to the preset interval, sending the third task to the target edge computing device with the highest scheduling priority in the scheduling list.
7. The method for processing tasks of an edge computing device according to claim 1, wherein after comparing the current device load parameter of the first edge computing device with the first preset threshold, the method further comprises:
if the current device load parameter is less than or equal to the first preset threshold, processing the first task by the first edge computing device.
8. An edge computing device system, comprising:
a plurality of edge computing devices, each configured to capture the image of its corresponding detection area on the production line;
a cloud platform, configured to, in response to receiving list request information sent by a first edge computing device, obtain current device load parameters of all second edge computing devices, generate a local scheduling list for the first edge computing device, and send the local scheduling list to the first edge computing device, wherein the first edge computing device is any one of the plurality of edge computing devices and the second edge computing devices are all edge computing devices other than the first edge computing device among the plurality of edge computing devices capable of data interaction;
each edge computing device including:
an acquisition module, configured to acquire the detection area image corresponding to the local device;
a task generating module, configured to generate a detection task according to the detection area image;
a comparison module, configured to compare the current device load parameter of the local device with the preset threshold corresponding to the local device;
an information sending module, configured to send list request information to the cloud platform if the current device load parameter of the local device is greater than the preset threshold corresponding to the local device;
a task sending module, configured to obtain the local scheduling list sent by the cloud platform and send the local detection task to the target edge computing device with the highest scheduling priority in the scheduling list;
and a task processing module, configured to process the local detection task, or to receive and process detection tasks sent by other edge computing devices to obtain a processing result, and to send the processing result to the cloud platform and/or other edge computing devices.
9. The edge computing device system of claim 8, wherein the cloud platform is further configured to:
send load request information to the second edge computing devices;
obtain the current device load parameters sent by all second edge computing devices according to the load request information;
and determine the local scheduling list of the first edge computing device according to the current device load parameter of each second edge computing device, wherein the local scheduling list of the first edge computing device comprises the scheduling priorities of all the second edge computing devices.
10. The edge computing device system of claim 9, wherein the cloud platform is further configured to:
send load request information to the first edge computing device;
and generate a local scheduling list corresponding to each edge computing device according to the current device load parameter of each edge computing device, and send the local scheduling list to each edge computing device.
CN202410194364.1A 2024-02-21 2024-02-21 Task processing method of edge computing device and edge computing device system Pending CN118093125A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410194364.1A CN118093125A (en) 2024-02-21 2024-02-21 Task processing method of edge computing device and edge computing device system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410194364.1A CN118093125A (en) 2024-02-21 2024-02-21 Task processing method of edge computing device and edge computing device system

Publications (1)

Publication Number Publication Date
CN118093125A true CN118093125A (en) 2024-05-28

Family

ID=91160902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410194364.1A Pending CN118093125A (en) 2024-02-21 2024-02-21 Task processing method of edge computing device and edge computing device system

Country Status (1)

Country Link
CN (1) CN118093125A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118382106A (en) * 2024-06-24 2024-07-23 互丰科技(北京)有限公司 Wireless data transmission processing method and system

Similar Documents

Publication Publication Date Title
US11983909B2 (en) Responding to machine learning requests from multiple clients
CN118093125A (en) Task processing method of edge computing device and edge computing device system
JP2017537357A (en) Alarm method and device
CN104041016A (en) Camera device, server device, image monitoring system, control method of image monitoring system, and control program of image monitoring system
CN112162863B (en) Edge unloading decision method, terminal and readable storage medium
CN114285847A (en) Data processing method and device, model training method and device, electronic equipment and storage medium
CN111402297A (en) Target tracking detection method, system, electronic device and storage medium
CN113850285A (en) Power transmission line defect identification method and system based on edge calculation
CN111050027B (en) Lens distortion compensation method, device, equipment and storage medium
CN113835876A (en) Artificial intelligent accelerator card scheduling method and device based on domestic CPU and OS
CN111866159A (en) Method, system, device and storage medium for calling artificial intelligence service
CN115617532B (en) Target tracking processing method, system and related device
CN114363579B (en) Method and device for sharing monitoring video and electronic equipment
CN116089043A (en) Heterogeneous application system video analysis task scheduling method, device, terminal and medium
CN115952866A (en) Inference method, computer equipment and medium for artificial intelligence inference framework
CN111107530A (en) Agricultural disease and pest control system based on LoRa technology
CN118037997B (en) Cloud rendering method and device and related equipment
CN117424936B (en) Video edge gateway autonomous scheduling monitoring method, device, equipment and medium
CN114531603B (en) Image processing method and system for video stream and electronic equipment
CN118485896B (en) Algorithm testing method and device, electronic device and storage medium
CN112637312B (en) Edge node task coordination method, device and storage medium
CN118034913A (en) Cloud cooperative control method, electronic equipment and integrated large model deployment architecture
CN112738199B (en) Scheduling method and scheduling system
CN118138801B (en) Video data processing method and device, electronic equipment and storage medium
Dong et al. EdgeCam: A Distributed Camera Operating System for Inference Scheduling and Continuous Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination