CN113992941B - Cloud edge collaborative video analysis system and method based on server-free function computing


Info

Publication number
CN113992941B
Authority
CN
China
Prior art keywords
module
state
edge
state machine
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111184613.1A
Other languages
Chinese (zh)
Other versions
CN113992941A (en)
Inventor
周知
刘魏鑫宁
陈旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202111184613.1A priority Critical patent/CN113992941B/en
Publication of CN113992941A publication Critical patent/CN113992941A/en
Application granted granted Critical
Publication of CN113992941B publication Critical patent/CN113992941B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/74Address processing for routing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

The invention discloses a cloud-edge collaborative video analysis system based on serverless function computing, which comprises an edge state machine module, a cloud-edge collaborator module and a cloud application module. The edge state machine module supplements the Amazon Step Function with edge capability and is a system that can run independently at the edge or run together with the Amazon Step Function in the cloud. The cloud-edge collaborator module converts between, and coordinates, the edge state machine and the Amazon Step Function. The cloud application module defines the model pipeline of a video-stream analysis task and implements a video-stream analysis task that performs different analyses depending on whether the video content contains people.

Description

Cloud edge collaborative video analysis system and method based on server-free function computing
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a cloud-edge collaborative video analysis system and method based on serverless function computing.
Background Art
The edge computing architecture effectively reduces network latency and improves application response speed by placing part of the tasks on edge devices that are physically close to the user terminal, while the remaining tasks are placed in the cloud, which offers elastic resource scaling. Because user-facing online video processing tasks rely on various types of deep learning models for inference, they impose strict requirements on response time. In recent years, the Amazon platform has developed a serverless service named Amazon Lambda, which lets a user execute a function while a third party governs the allocation and scaling of computing resources; Amazon Lambda therefore responds quickly and relieves developers of the burden of managing computing resources. However, Lambda functions are stateless, event-driven and expensive, so Lambda functions and edge computing are combined to achieve a better balance between latency and cost.
To achieve this balance, a task must be divided between the cloud and the edge, which then execute it cooperatively. How tasks are divided in a cloud-edge collaborative scenario has a great influence on user experience. Existing cloud-edge collaborative models usually leave task division to developers, who divide according to the functional composition of the application rather than according to the models involved. Because different models differ greatly in memory demand and response time, such coarse-grained task scheduling may prevent cloud-edge collaboration from reaching the optimal balance point, and slight differences in the division scheme may cause severe jitter in response time.
To divide a task, many works split it into multiple subtasks, combine the subtasks into a directed acyclic graph in pipeline form, and then partition the graph. However, the directed acyclic graphs in most of these designs are organized as a purely sequential structure and do not consider choice, loop and branch relationships among subtasks. When a task with complex internal logic is encountered, a directed acyclic graph that accurately describes the task cannot be obtained.
To address this problem, some prior efforts use a framework provided by the cloud provider to construct, in the cloud, directed acyclic graphs with complex graph logic; for example, Amazon has developed a pipeline service named the Amazon Step Function that organizes Amazon Lambda functions into a state machine instance to coordinate the execution of multiple functions.
However, current work does not provide an automated framework at the edge that partitions a task into a directed acyclic graph with complex graph logic. Task partitioning at the edge therefore depends entirely on manually written programs, which leads to non-uniform function interfaces and makes subtask modules difficult to reuse. Yet identical subtask modules often appear repeatedly in a cloud-edge collaborative workflow, so non-uniform function interfaces lead to code redundancy and a bloated workflow.
To ensure satisfactory online service performance while reducing resource consumption, existing cloud-edge collaboration frameworks are not flexible enough when deciding whether a subtask executes in the cloud or at the edge. Some methods decide cloud or edge execution function by function; although this allows fine-grained resource scheduling, it ignores the integrity of the directed acyclic graph and the dependencies between subtasks, and supports only the concurrent case, that is, a purely sequential logical relationship between functions. Other methods partition the directed acyclic graph in advance and then use an algorithm to select the execution environment of each subgraph; although this preserves the completeness of the graph, it is a coarse-grained partitioning method. It is therefore difficult to satisfy both important properties, graph integrity and partition sensitivity, when dividing tasks.
Therefore, the prior art has the following drawbacks:
1. To realize fine-grained resource scheduling of cloud-edge collaborative tasks, many works divide a task into multiple subtasks, combine the subtasks into a directed acyclic graph in pipeline form, and then partition the graph. However, the directed acyclic graphs in most of these designs are organized as a purely sequential structure and do not consider choice, loop and branch relationships among subtasks; when a task with complex internal logic is encountered, a directed acyclic graph that accurately describes the task cannot be obtained. The Amazon Step Function proposed by Amazon already uses a cloud-provider framework to construct directed acyclic graphs with complex graph logic in the cloud, but these efforts have not studied providing an automated framework at the edge to partition tasks into directed acyclic graphs with complex graph logic.
2. To balance satisfactory online service performance against resource consumption, existing cloud-edge collaboration frameworks are generally not flexible enough when deciding whether a subtask executes in the cloud or at the edge. Although an existing method such as Amazon IoT Greengrass can extend Amazon Lambda functions to the edge, it does not organize the Lambda functions into a directed acyclic graph at the edge: the function remains a complete task executed at the edge, with no fine-grained division inside the task. Other methods implement function organization with semantics like the Amazon States Language, such as the open-source framework Flogo, but the implementation is incomplete; it realizes only the most basic sequence and choice branches, without the redefined-set branch or interfacing with the Amazon Step Function, so Flogo is functionally incomplete and can only run as a stand-alone system at the edge.
3. Existing methods usually decide cloud or edge execution function by function; although this allows fine-grained resource scheduling, it ignores the integrity of the directed acyclic graph, the dependencies between subtasks and high concurrency, and can realize only a sequential logical relationship between functions. Other methods partition the directed acyclic graph at several different cut points in advance and then use an algorithm to select the execution environment of each subgraph; this static cloud-edge collaboration method preserves the integrity of the graph but is not sensitive enough in partitioning it. It is therefore difficult for the above methods to satisfy both the completeness of the graph and the sensitivity of the partition when dividing tasks.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a cloud-edge collaborative video analysis system and method based on serverless function computing, specifically a highly concurrent cloud-edge collaborative video-stream processing framework with finite-state-machine semantics based on a model pipeline, composed of a cloud model pipeline built on the Amazon Step Function state machine and an edge model pipeline built on the edge state machine, a highly concurrent edge state machine developed independently in this invention. A model pipeline is a task in finite-state-machine form composed of multiple deep learning models with different memory requirements or different functions. The framework can realize fine-grained resource scheduling in the cloud and at the edge while still forming a model pipeline with complex graph logic.
First, after the user's request data and task mode are obtained, the task is mapped into a directed acyclic graph composed of multiple states and having complex graph logic. The edge state machine deployed on the edge device encapsulates each state of the directed acyclic graph as an independent function template, and then combines the states into a model pipeline state machine with complex graph logic according to the mapping rules and the logic between the states in the graph. To overcome the inflexibility of existing cloud-edge collaboration frameworks, after the complete model pipeline state machine is deployed at the edge, a functionally identical model pipeline state machine is built in the cloud using the Amazon Step Function. The model pipeline state machines in the cloud and at the edge introduce a routing template that automatically determines the start point and the end point, and adaptive cloud-edge combination is carried out according to a user-defined cloud-edge split point. Because the directed acyclic graph has a single start point and multiple end points, the routing template can carve out any subgraph of the directed acyclic graph by controlling its start and end points, enabling flexible cloud-edge collaboration, while the deployment and operation of the routing module hardly affect system performance. Compared with traditional resource scheduling in units of functions, this approach keeps the graph intact; and because the routing module can dynamically organize the cloud-edge collaborative directed acyclic graph according to the user's task request, it is more flexible than static cloud-edge collaboration methods and avoids the computation cost and memory consumption of combining all possible cloud-edge collaborative directed acyclic graphs in advance.
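For illustration only, the following Go sketch shows how a user-defined split point might carve the state-machine directed acyclic graph into an edge subgraph and a cloud subgraph by controlling start and end states; the type and function names (State, Pipeline, SplitAt and so on) are assumptions made for this sketch, not definitions from the patent.

package pipeline

// State is one node of the model-pipeline directed acyclic graph.
type State struct {
    Name string
    Next []string // names of the states this state may transfer to
}

// Pipeline is a directed acyclic graph with a single start state.
type Pipeline struct {
    Start  string
    States map[string]State
}

// subgraph collects every state reachable from start without expanding past stop.
func (p Pipeline) subgraph(start, stop string) Pipeline {
    out := Pipeline{Start: start, States: map[string]State{}}
    visited := map[string]bool{}
    var walk func(name string)
    walk = func(name string) {
        if visited[name] || name == stop {
            return
        }
        visited[name] = true
        out.States[name] = p.States[name]
        for _, next := range p.States[name].Next {
            walk(next)
        }
    }
    walk(start)
    return out
}

// SplitAt carves the pipeline at a user-defined split point: the edge half runs
// from the original start up to (but not including) cut, and the cloud half runs
// from cut to the original end states.
func (p Pipeline) SplitAt(cut string) (edge, cloud Pipeline) {
    return p.subgraph(p.Start, cut), p.subgraph(cut, "")
}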
To achieve the above purpose, the technical solution adopted by the invention is as follows:
The cloud-edge collaborative video analysis system based on serverless function computing comprises an edge state machine module, a cloud-edge collaborator module and a cloud application module; wherein
the edge state machine module is used to supplement the Amazon Step Function with edge capability, and is a system that can run independently at the edge or run in cooperation with the Amazon Step Function in the cloud;
the cloud-edge collaborator module is used to convert between, and coordinate, the edge state machine and the Amazon Step Function;
the cloud application module is used to define the model pipeline of a video-stream analysis task and to implement a video-stream analysis task that performs different analyses depending on whether the video content contains people.
It should be noted that the edge state machine module includes:
a state pool module: used to represent the set of all available states, where a state is a user-defined function that implements application logic;
a redefined state language state machine module: the redefined state language provides, at the edge, functionality similar to the Amazon States Language for defining the transition relationships between states;
a parser module: used to parse the redefined state language, manage the state pool and dynamically determine the transition relationships between states;
an executor module: used to define the cloud-edge collaboration policy of the executing state machine; the user can define the calling proportion of cloud resources and edge resources;
a message module: used for the information passed between templated states; it defines the underlying data structure of the information and uses a collection structure to store the type and value of the data (an illustrative sketch follows this list).
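Purely as an illustration of the message module described above, the following Go sketch shows one way such a typed key-value message, compatible with the JSON input and output of an Amazon Step Function state, could be represented; the type and field names are assumptions for this sketch, not definitions from the patent.

package edgesm

import "encoding/json"

// Message is the unit of information passed between templated states.
// A collection structure (a map) stores the type and the value of each datum,
// so the same payload can be marshalled to the JSON a cloud Step Function
// state expects without any extra type conversion.
type Message struct {
    Fields map[string]Field `json:"fields"`
}

// Field records the declared type and the value of one datum.
type Field struct {
    Type  string      `json:"type"` // e.g. "string", "int", "bytes"
    Value interface{} `json:"value"`
}

// ToJSON renders the message as the JSON document handed to the next state.
func (m Message) ToJSON() ([]byte, error) {
    return json.Marshal(m)
}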
It should be noted that the cloud-edge collaborator module includes:
a preprocessing module: used to receive the user's request data and to call the state machine to execute the pipeline task;
a state module: used to implement the main application logic of a user-defined state; a specific function is realized by implementing the state's handler function;
a function factory module: used to generate the state resources required by the edge state machine module and the Amazon Step Function;
an edge state machine module: used to implement the edge state machine; it generates the edge state machine from the state resources produced by the edge state machine module and the function factory module, carries the user-defined state-transition logic of the pipeline task, and is the concrete implementation of the directed acyclic graph; the user can decide in the edge state machine module whether to call cloud resources: if cloud resources are called, cloud-edge collaboration is realized; otherwise the system calls only edge computing resources;
a Lambda function module: used to define the function body logic of the Lambda functions; it is encapsulated using the Lambda software development kit together with the state module;
an Amazon States Language module: used to generate the Amazon States Language file required by the Amazon Step Function; it generates the file automatically by parsing the edge state machine defined by the edge state machine module and checks that the file conforms to the Amazon States Language grammar;
a SAM module: used to generate the YAML script file that automatically deploys the Amazon Step Function; the module indicates the locations of the Amazon States Language file and the Lambda functions that make up the Amazon Step Function, packs the corresponding resources, and automatically deploys the Amazon Step Function in the cloud;
an execution module: used to implement the program logic with which the user invokes the state machine and data is sent automatically; the execution module uses Golang goroutines to manage the sending of user requests, each goroutine corresponds to one user request, and each user request invokes one state machine instance (an illustrative sketch follows this list).
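As a minimal sketch only (the endpoint URL, field names and helper code are assumptions for this sketch, not part of the patent), the execution module described above could drive the pipeline like this: one goroutine per user request, each request carrying the frame data plus the start state and end state of the state machine.

package main

import (
    "bytes"
    "encoding/json"
    "net/http"
    "sync"
    "time"
)

// request mirrors the POST body sent to the preprocessing module: the data to
// process plus the start-state and end-state information of the state machine.
type request struct {
    Frame      []byte `json:"frame"`
    StartState string `json:"startState"`
    EndState   string `json:"endState"`
}

func main() {
    frames := [][]byte{ /* video frames produced by the video segmentation module */ }
    var wg sync.WaitGroup
    ticker := time.NewTicker(40 * time.Millisecond) // fixed sending rate, e.g. 25 frames per second
    defer ticker.Stop()

    for _, f := range frames {
        <-ticker.C
        wg.Add(1)
        go func(frame []byte) { // one goroutine per user request
            defer wg.Done()
            body, _ := json.Marshal(request{Frame: frame, StartState: "Start", EndState: "End"})
            // hypothetical route exposed by the preprocessing module
            http.Post("http://edge-gateway:8080/statemachine", "application/json", bytes.NewReader(body))
        }(f)
    }
    wg.Wait()
}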
It should be noted that the cloud application module includes:
a video segmentation module: used to divide the video into video frames at a fixed frequency;
a person detection module: used to detect whether a video frame contains a person;
a face detection module: used to recognize faces in video frames;
a skeleton detection module: used to detect the skeleton shape of a person in a video frame;
a target detection module: used to identify and classify objects in video frames;
a scene conversion module: used to convert the scene style type of a video frame;
a routing module: used to realize seamless cloud-edge hand-over in cloud-edge collaboration. Cloud-edge collaboration is based on splitting the directed acyclic graph of the model pipeline task so that the first half is executed at the edge and the second half in the cloud; the routing module can dynamically limit the start point and the end point of the edge state machine and of the cloud state machine, realizing seamless cloud-edge hand-over at different split points (an illustrative sketch follows this list).
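A minimal Go sketch, under assumed names (Router, invokeEdge, invokeCloud), of the routing behaviour described above: the router receives the output of the previous state together with the state to execute next, terminates when the user-defined end state is reached, and otherwise hands execution to the edge state machine or to the cloud Step Function depending on which side of the split point the state lies.

package edgesm

// Router realizes the seamless cloud-edge hand-over of the model pipeline.
type Router struct {
    EndState    string          // user-defined end point of this run
    CloudStates map[string]bool // states past the split point, executed in the cloud
    invokeEdge  func(state string, input []byte) (output []byte, next string, err error)
    invokeCloud func(state string, input []byte) (output []byte, next string, err error)
}

// Route takes the output of the previous state and the name of the state to
// execute next. It stops the state machine when the user-defined end state has
// been reached; otherwise it forwards the data to the edge state machine or to
// the cloud Amazon Step Function and reports the next hop.
func (r *Router) Route(prevOutput []byte, current string) (output []byte, next string, done bool, err error) {
    if current == r.EndState {
        return prevOutput, "", true, nil // legal end point: terminate the run
    }
    if r.CloudStates[current] {
        output, next, err = r.invokeCloud(current, prevOutput)
    } else {
        output, next, err = r.invokeEdge(current, prevOutput)
    }
    return output, next, false, err
}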
It should be noted that the redefined state language state machine module has 5 state attributes (illustrated in the sketch after this list), including
Task: used to implement the user-defined application logic; it is the main body of state execution;
Straight line: used to implement a sequential structure between states; generally two states are connected so that the output of the state executed first is the input of the state executed next;
Redefined set: used to execute the same state concurrently with several different inputs; the invention stores the multiple inputs in an array sequence and calls the states nested in the set state in turn;
Selecting a branch: used to implement branching logic; the invention uses a collection data structure to store the mapping between conditions and states, and the selecting-a-branch state chooses the corresponding state according to the input condition;
Success: used to represent the end point of a state machine instance, which can be reached only when the state machine instance ends successfully;
Failure: used to indicate the illegal end point of a state machine instance; the state machine instance stops running because of an exception or error.
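For illustration only, and reusing the Message type from the earlier sketch, the state attributes above could be modelled in Go roughly as follows (all names are assumptions); the sketch mirrors how the Amazon States Language distinguishes task, map, choice, succeed and fail states, with the straight-line transition expressed as a Next field.

package edgesm

// StateType enumerates the state attributes of the redefined state language.
type StateType int

const (
    Task         StateType = iota // user-defined application logic
    RedefinedSet                  // run the nested states once per input in an array
    ChoiceBranch                  // pick the next state from a condition-to-state mapping
    Succeed                       // legal end point of the state machine instance
    Fail                          // illegal end point reached on exception or error
)

// StateDef is one state of the redefined state language state machine.
type StateDef struct {
    Type    StateType
    Handler func(input Message) (Message, error) // body of a Task state
    Next    string                               // "straight line": the state executed after this one
    Items   []string                             // nested states run concurrently by a RedefinedSet
    Choices map[string]string                    // condition value -> next state for ChoiceBranch
}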
It should be noted that the function factory is divided into (see the sketch after this list):
a factory: used to produce the state resources required by the edge state machine;
an auxiliary factory: used to call the interface exposed by the Amazon Step Function; to reduce economic expenditure, the cloud Amazon Step Function is used only as a supplement to the computing resources of the edge state machine; the auxiliary factory produces resources only when the factory cannot generate the edge state resources, and the user may also customize the production logic of the auxiliary factory.
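A minimal sketch, with assumed names (FunctionFactory, buildEdgeFunction, invokeStepFunction) and reusing the illustrative Message type from earlier, of the fallback behaviour described above: the factory is asked first, and only when it cannot produce an edge resource does the auxiliary factory return a function that calls the cloud Step Function.

package edgesm

import "errors"

// StateFn is the executable resource produced for one state.
type StateFn func(input Message) (Message, error)

// FunctionFactory produces edge state resources and, as a fallback, functions
// that invoke the cloud Amazon Step Function.
type FunctionFactory struct {
    buildEdgeFunction  func(state string) (StateFn, error) // the "factory"
    invokeStepFunction func(state string) StateFn          // the "auxiliary factory"
}

// Get returns the resource for a state, preferring the edge so that the cloud
// Step Function is used only as a supplement to edge computing resources.
func (f *FunctionFactory) Get(state string) (StateFn, error) {
    if fn, err := f.buildEdgeFunction(state); err == nil {
        return fn, nil
    }
    if f.invokeStepFunction == nil {
        return nil, errors.New("no cloud fallback configured for state " + state)
    }
    return f.invokeStepFunction(state), nil
}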
The invention also provides a cloud-edge collaborative video analysis method using the above cloud-edge collaborative video analysis system based on serverless function computing, the method comprising the following steps:
S1, the Amazon States Language module generates the Amazon States Language file and returns the directory address where the file is located to the SAM module;
S2, the SAM module receives the directory address of the Amazon States Language file, generates the YAML file, automatically deploys the Amazon Step Function in the cloud, and returns the calling interface of the Amazon Step Function to the auxiliary factory of the function factory;
S3, the execution module reads video frames and sends POST requests at a fixed rate to the route defined in the preprocessing module, each POST request containing the data to be processed and the start-state and end-state information of the state machine (an illustrative request handler is sketched after these steps);
S4, after receiving a request, the preprocessing module starts a goroutine to call the state machine defined by the edge state machine module and forwards the request data and the start-state and end-state information of the state machine to the edge state machine module for execution;
S5, after the edge state machine module receives the request, the state machine defined by the edge state machine template inside the module begins executing from the start state named in the request; whenever a state is executed, the current state calls the function factory module to obtain the function resources defined for that state; according to the cloud-edge collaboration policy formulated by the user, if the state needs edge resources it calls the factory of the function factory module to generate the edge function, and if it needs cloud resources it calls the auxiliary factory of the function factory module to invoke the cloud Amazon Step Function; after obtaining the function resources and processing the data, the current state passes the information of the next connected state to the routing module to continue execution, or exits the state machine successfully; when a runtime error occurs, the state stops running and exits the state machine early;
S6, when the function factory module is called, the type of the called factory is checked; if it is the factory type, the function factory module generates edge computing resources and returns a function executed at the edge; if it is the auxiliary factory type, the function factory module returns a function that calls the Amazon Step Function;
S7, after receiving the output of the previous state and the information of the state to execute next, the routing module judges whether the current state is the user-defined end state; if so, it terminates the operation of the state machine and exits the program; if not, it passes the output of the previous state to the edge state machine module and the current state is executed.
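Continuing the illustrative request struct from the execution-module sketch above (all route and type names are assumptions, not taken from the patent), the preprocessing module of steps S3 and S4 could look roughly like this: it decodes the POST body and starts one goroutine per request to drive the edge state machine from the requested start state to the requested end state.

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

type request struct {
    Frame      []byte `json:"frame"`
    StartState string `json:"startState"`
    EndState   string `json:"endState"`
}

// runStateMachine stands in for the edge state machine module; it would walk the
// states from start to end, calling the function factory for each state (S5, S6).
func runStateMachine(start, end string, frame []byte) {
    log.Printf("executing pipeline from %q to %q on %d bytes", start, end, len(frame))
}

func main() {
    // hypothetical route exposed by the preprocessing module
    http.HandleFunc("/statemachine", func(w http.ResponseWriter, r *http.Request) {
        var req request
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        go runStateMachine(req.StartState, req.EndState, req.Frame) // one goroutine per request (S4)
        w.WriteHeader(http.StatusAccepted)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}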
The invention has the beneficial effects that:
1. The invention discloses a cloud-edge collaborative video analysis system based on serverless function computing whose aim is to realize cloud-edge collaboration based on the Amazon Step Function, that is, the Amazon Step Function can operate across the cloud and the edge, and part or all of the state machine can be extended to execute at the edge. The invention builds the edge state machine, an edge state machine system compatible with the Amazon Step Function, at the edge. Compared with Amazon IoT Greengrass, which cannot provide state machine capability, and the open-source framework Flogo, which realizes only a small part of the state machine capability, the edge state machine is more completely compatible with the state machine execution capability of the Amazon Step Function. The edge state machine also defines a message type compatible with the Amazon Step Function and an edge state machine definition language similar to the Amazon States Language; compared with Flogo, whose custom message type and state machine definition differ greatly from the Amazon Step Function, the edge state machine avoids type conversion when messages are passed between the cloud and the edge, and because the state machine definition languages of the edge and the cloud are similar, a state machine generated by the edge state machine is compatible with both the cloud and the edge without extra modification to adapt to the cloud Amazon Step Function.
2. The invention discloses a new cloud-edge collaborative video analysis system based on serverless function computing and provides an edge state machine interpreter, the cloud-edge collaborator. The cloud-edge collaborator performs translation and coordination between the edge state machine and the Amazon Step Function. Compared with the traditional approach in which the cloud and the edge each implement a separate system, the cloud-edge collaborator can reuse the code shared by the cloud and the edge, giving the advantages of high cohesion and low coupling. The cloud-edge collaborator can automatically generate the Amazon States Language file and the YAML file required to deploy the Amazon Step Function from the state machine defined by the edge state machine template; compared with writing these files by hand, it deploys resources automatically, avoids the cost of manual authoring and reduces time overhead.
3. The invention provides a routing module that realizes flexible cloud-edge collaboration. In traditional methods, when cloud-edge collaboration is realized, the partition unit in the cloud is a single Lambda function, and the pipeline task is composed Lambda by Lambda without relying on the graph organization of the Amazon Step Function. Task organization in traditional methods therefore usually has only a simple sequential relationship and cannot realize the complex transfer relationships between the states of a state machine, so its generality is low. The invention, viewing the task as a directed acyclic graph, uses the routing module to select the partition point between cloud and edge arbitrarily, realizes seamless cloud-edge hand-over, does not destroy the dependency relationships between the states in the graph, and has high generality. Moreover, a state machine generated with the Amazon Step Function cannot be changed dynamically once deployed; the routing module makes multiple cloud-edge collaboration schemes possible with one and the same Amazon Step Function state machine, greatly saving the time and economic cost of producing multiple sets of Amazon Step Functions.
Drawings
FIG. 1 is a schematic flow diagram of the method disclosed in the present invention;
FIG. 2 is a directed acyclic graph in model pipeline form for the video stream analysis task;
FIG. 3 is a schematic diagram of the state machine, with routing module, for the video stream analysis task.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The present invention will be further described with reference to the accompanying drawings. It should be noted that this embodiment is based on the technical solution described above and gives a detailed implementation and specific operation process, but the protection scope of the present invention is not limited to this embodiment.
Example 1
This embodiment discloses a cloud-edge collaborative video analysis method based on serverless function computing; the complete flow is shown in FIG. 1, and the specific implementation steps are as follows:
1. The cloud application module forms the directed acyclic graph of the video stream analysis task.
2. The edge state machine module generates a state machine from the directed acyclic graph.
3. The Amazon States Language module generates the Amazon States Language file and returns the directory address of the file to the SAM module.
4. The SAM module receives the directory address of the Amazon States Language file, generates the YAML file, automatically deploys the Amazon Step Function in the cloud, and returns the calling interface of the Amazon Step Function to the auxiliary factory of the function factory (an illustrative deployment sketch follows these steps).
5. The execution module reads the video frames and sends POST requests at a fixed rate to the route defined in the preprocessing module, each POST request containing the data to be processed and the start-state and end-state information of the state machine.
6. After receiving a request, the preprocessing module calls the state machine defined by the edge state machine module and forwards the request data and the start-state and end-state information of the state machine to the edge state machine module for execution.
7. When the edge state machine module receives a request, the state machine defined by the edge state machine template begins executing from the start state named in the request. Whenever a state is executed, the current state calls the function factory module to obtain the function resources defined for that state. According to the cloud-edge collaboration policy formulated by the user, if the state needs edge resources it calls the factory of the function factory module to generate the edge function, and if it needs cloud resources it calls the auxiliary factory of the function factory module to invoke the cloud Amazon Step Function. After obtaining the function resources and processing the data, the current state passes the information of the next connected state to the routing module to continue execution, or exits the state machine successfully; when a runtime error is encountered, the state stops running and exits the state machine early.
8. When the function factory module is called, it checks the type of factory being called. If it is the factory type, the function factory module generates edge computing resources and returns the function executed at the edge. If it is the auxiliary factory type, the function factory module returns a function that calls the Amazon Step Function.
9. After receiving the output of the previous state and the information of the state to execute next, the routing module judges whether the current state is the user-defined end state; if so, it terminates the operation of the state machine and exits the program.
If not, it passes the output of the previous state to the edge state machine module, and the current state is executed.
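Purely as an illustration of what the SAM module of step 4 might emit (the YAML below is a generic AWS SAM skeleton assumed for this sketch, not the patent's actual file), a Go program that fills in the locations of the Amazon States Language file and the Lambda code and writes the deployment template:

package main

import (
    "os"
    "text/template"
)

// samTemplate is a minimal AWS SAM skeleton: it points at the Amazon States
// Language file and at the Lambda code that together form the Step Function.
const samTemplate = `AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  VideoPipeline:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: {{.ASLPath}}
  PipelineFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: {{.LambdaDir}}
      Handler: main
      Runtime: go1.x
`

func main() {
    t := template.Must(template.New("sam").Parse(samTemplate))
    out, _ := os.Create("template.yaml")
    defer out.Close()
    // the directory addresses returned by the Amazon States Language module
    t.Execute(out, struct{ ASLPath, LambdaDir string }{
        ASLPath:   "statemachine/pipeline.asl.json",
        LambdaDir: "functions/pipeline/",
    })
}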
This embodiment discloses a directed acyclic graph in model pipeline form for the video stream analysis task, as shown in FIG. 2 (and sketched in code after this list); its composition is as follows:
1. A video segmentation module: used to divide the video into video frames at a fixed frequency and pass them to the person detection module.
2. A person detection module: used to detect whether a video frame contains a person; if so, control passes to the face detection module, otherwise to the target detection module.
3. A face detection module: used to recognize the face in the video frame; control then passes to the skeleton detection module.
4. A skeleton detection module: used to detect the skeleton shape of the person in the video frame; this branch then ends.
5. A target detection module: used to identify and classify the objects in the video frame; control then passes to the scene conversion module.
6. A scene conversion module: converts the scene style type of the video frame; this branch then ends.
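Reusing the illustrative Pipeline and State types from the earlier sketch (names assumed, not from the patent), the directed acyclic graph of FIG. 2 could be wired up as follows; the person detection state branches to face detection when a person is found and to target detection otherwise.

package pipeline

// VideoPipeline builds the model-pipeline DAG of FIG. 2: a single start state
// (video segmentation) and two end branches (skeleton detection, scene conversion).
func VideoPipeline() Pipeline {
    return Pipeline{
        Start: "VideoSegmentation",
        States: map[string]State{
            "VideoSegmentation": {Name: "VideoSegmentation", Next: []string{"PersonDetection"}},
            // branches to face detection if a person is present, otherwise to target detection
            "PersonDetection":   {Name: "PersonDetection", Next: []string{"FaceDetection", "TargetDetection"}},
            "FaceDetection":     {Name: "FaceDetection", Next: []string{"SkeletonDetection"}},
            "SkeletonDetection": {Name: "SkeletonDetection"}, // end of the person branch
            "TargetDetection":   {Name: "TargetDetection", Next: []string{"SceneConversion"}},
            "SceneConversion":   {Name: "SceneConversion"}, // end of the object branch
        },
    }
}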
This embodiment discloses a schematic diagram of the edge state machine for the video stream analysis task with the routing module added, as shown in FIG. 3 (the routing decisions are also sketched in code after this list):
1. The Start state initiates the operation of the state machine and forwards the received input to the route; the input includes a selection branch value that defines the starting point of the state machine.
2. The routing module performs route direction and parses the input:
1) If the selection branch value is continue person detection, it transfers to the continue-person-detection state.
2) If the selection branch value is continue target detection, it transfers to the continue-target-detection state.
3) If the selection branch value is continue scene conversion, it transfers to the continue-scene-conversion state.
4) If the selection branch value is continue face recognition, it transfers to the continue-face-recognition state.
5) If the selection branch value matches none of the above conditions, it transfers to the default state.
3. The continue-person-detection state sets the selection branch value to person detection and transfers to the selection branch route.
4. The continue-target-detection state sets the selection branch value to target detection and transfers to the selection branch route.
5. The continue-scene-conversion state sets the selection branch value to scene conversion and transfers to the selection branch route.
6. The continue-face-recognition state sets the selection branch value to face recognition and transfers to the selection branch route.
7. The selection branch route performs route direction and parses the input:
1) If the selection branch value is person detection, it transfers to the person detection state.
2) If the selection branch value is target detection, it transfers to the target detection state.
3) If the selection branch value is scene conversion, it transfers to the scene conversion state.
4) If the selection branch value is face recognition, it transfers to the face recognition state.
5) If the selection branch value matches none of the above conditions, it transfers to the default state.
8. The person detection state performs person recognition; if a person is present in the frame, it sets the selection branch value to continue face recognition, and if not, it sets the selection branch value to continue target detection. It then transfers to the selection state route.
9. The selection state route performs route direction and parses the input:
1) If the selection branch value is continue face recognition, it transfers to the continue-face-recognition state.
2) If the selection branch value is continue target detection, it transfers to the continue-target-detection state.
10. The face recognition state performs face recognition and transfers to the skeleton detection state.
11. The skeleton detection state performs skeleton recognition and transfers to end state 1.
12. End state 1 prints the output, and the system goes to the completed state.
13. The target detection state performs target detection and transfers to the scene conversion state.
14. The scene conversion state performs scene conversion and transfers to the end state.
15. The end state prints the execution success information and transfers to the End state.
16. The default state prints the execution failure information and transfers to the End state.
17. The End state stops the execution of the state machine, and the flow ends.
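As a purely illustrative Go sketch of the routing decisions listed above (the state names follow the walkthrough; the function names are assumptions):

package edgesm

// routeStart mirrors step 2: the entry route picks the first state to run
// from the selection branch value carried in the request input.
func routeStart(branch string) string {
    switch branch {
    case "continue person detection":
        return "ContinuePersonDetection"
    case "continue target detection":
        return "ContinueTargetDetection"
    case "continue scene conversion":
        return "ContinueSceneConversion"
    case "continue face recognition":
        return "ContinueFaceRecognition"
    default:
        return "Default" // prints execution failure information, then End
    }
}

// routeSelectionBranch mirrors step 7: it maps the branch value set by one of
// the "continue" states onto the concrete analysis state to execute.
func routeSelectionBranch(branch string) string {
    switch branch {
    case "person detection":
        return "PersonDetection"
    case "target detection":
        return "TargetDetection"
    case "scene conversion":
        return "SceneConversion"
    case "face recognition":
        return "FaceRecognition"
    default:
        return "Default"
    }
}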
Various modifications may be made by those skilled in the art based on the above teachings and concepts, and all such modifications are intended to be included within the scope of the present invention as defined in the appended claims.

Claims (6)

1. A cloud-edge collaborative video analysis system based on serverless function computing, characterized by comprising an edge state machine module, a cloud-edge collaborator module and a cloud application module; wherein
the edge state machine module is used to supplement the Amazon Step Function with edge capability, and is a system that can run independently at the edge or run in cooperation with the Amazon Step Function in the cloud;
the cloud-edge collaborator module is used to convert between, and coordinate, the edge state machine and the Amazon Step Function;
the cloud application module is used to define the model pipeline of a video-stream analysis task and to implement a video-stream analysis task that performs different analyses depending on whether the video content contains people;
wherein the cloud-edge collaborator module comprises:
a preprocessing module: used to receive the user's request data and to call the state machine to execute the pipeline task;
a state module: used to implement the main application logic of a user-defined state, a specific function being realized by implementing the state's handler function;
a function factory module: used to generate the state resources required by the edge state machine module and the Amazon Step Function;
an edge state machine module: used to implement the edge state machine; it generates the edge state machine from the state resources produced by the edge state machine module and the function factory module, carries the user-defined state-transition logic of the pipeline task, and is the concrete implementation of the directed acyclic graph; the user can decide in the edge state machine module whether to call cloud resources: if cloud resources are called, cloud-edge collaboration is realized; otherwise the system calls only edge computing resources;
a Lambda function module: used to define the function body logic of the Lambda functions, encapsulated using the Lambda software development kit together with the state module;
an Amazon States Language module: used to generate the Amazon States Language file required by the Amazon Step Function, generating the file automatically by parsing the edge state machine defined by the edge state machine module and checking that the file conforms to the Amazon States Language grammar;
a SAM module: used to generate the YAML script file that automatically deploys the Amazon Step Function, the module indicating the locations of the Amazon States Language file and the Lambda functions that make up the Amazon Step Function, packing the corresponding resources and automatically deploying the Amazon Step Function in the cloud;
an execution module: used to implement the program logic with which the user invokes the state machine and data is sent automatically; the execution module uses Golang goroutines to manage the sending of user requests, each goroutine corresponding to one user request and each user request invoking one state machine instance.
2. The system according to claim 1, wherein the edge state machine module comprises:
a state pool module: used to represent the set of all available states, where a state is a user-defined function that implements application logic;
a redefined state language state machine module: the redefined state language state machine provides, at the edge, functionality similar to the Amazon States Language for defining the transition relationships between states;
a parser module: used to parse the redefined state language, manage the state pool and dynamically determine the transition relationships between states;
an executor module: used to define the cloud-edge collaboration policy of the executing state machine; the user can define the calling proportion of cloud resources and edge resources;
a message module: used for the information passed between templated states; the underlying data structure of the information is defined, and a collection structure is used to store the type and value of the data.
3. The cloud-edge collaborative video analysis system based on serverless function computing according to claim 1, wherein the cloud application module includes:
a video segmentation module: used to divide the video into video frames at a fixed frequency;
a person detection module: used to detect whether a video frame contains a person;
a face detection module: used to recognize faces in video frames;
a skeleton detection module: used to detect the skeleton shape of a person in a video frame;
a target detection module: used to identify and classify objects in video frames;
a scene conversion module: used to convert the scene style type of a video frame;
a routing module: used to realize seamless cloud-edge hand-over during cloud-edge collaboration; cloud-edge collaboration is based on splitting the directed acyclic graph of the model pipeline task so that the first half is executed at the edge and the second half in the cloud; the routing module can dynamically limit the start point and the end point of the edge state machine and of the cloud state machine, realizing seamless cloud-edge hand-over at different split points.
4. The cloud-edge collaborative video analysis system based on serverless function computing according to claim 2, wherein the redefined state language state machine module has 5 state attributes, comprising
Task: used to implement the user-defined application logic; it is the main body of state execution;
Straight line: used to implement a sequential structure between states; usually two states are interconnected, the output of the state executed first being the input of the state executed next;
Redefined set: used to execute the same state concurrently with several different inputs, storing the multiple inputs in an array sequence and calling the states nested in the set state in turn;
Selecting a branch: used to implement branching logic; a collection data structure stores the mapping between conditions and states, and the selecting-a-branch state selects the corresponding state according to the input condition;
Success: used to represent the end point of a state machine instance, which can be reached only when the state machine instance ends successfully;
Failure: used to indicate the illegal end point of a state machine instance; the state machine instance stops running because of an exception or error.
5. The cloud-edge collaborative video analysis system based on serverless function computing according to claim 1, wherein the function factory is divided into:
a factory: used to produce the state resources required by the edge state machine;
an auxiliary factory: used to call the interface exposed by the Amazon Step Function; to reduce economic expenditure, the cloud Amazon Step Function is used only as a supplement to the computing resources of the edge state machine; the auxiliary factory produces resources only when the factory cannot generate the edge state resources; the user may also customize the production logic of the auxiliary factory.
6. A cloud-edge collaborative video analysis method using the cloud-edge collaborative video analysis system based on serverless function computing according to any one of claims 1 to 4, the method comprising the following steps:
S1, the Amazon States Language module generates the Amazon States Language file and returns the directory address where the file is located to the SAM module;
S2, the SAM module receives the directory address of the Amazon States Language file, generates the YAML file, automatically deploys the Amazon Step Function in the cloud, and returns the calling interface of the Amazon Step Function to the auxiliary factory of the function factory;
S3, the execution module reads video frames and sends POST requests at a fixed rate to the route defined in the preprocessing module, each POST request containing the data to be processed and the start-state and end-state information of the state machine;
S4, after receiving a request, the preprocessing module starts a goroutine to call the state machine defined by the edge state machine module and forwards the request data and the start-state and end-state information of the state machine to the edge state machine module for execution;
S5, after the edge state machine module receives the request, the state machine defined by the edge state machine template inside the module begins executing from the start state named in the request; whenever a state is executed, the current state calls the function factory module to obtain the function resources defined for that state; according to the cloud-edge collaboration policy formulated by the user, if the state needs edge resources it calls the factory of the function factory module to generate the edge function, and if it needs cloud resources it calls the auxiliary factory of the function factory module to invoke the cloud Amazon Step Function; after obtaining the function resources and processing the data, the current state passes its output and the information of the next connected state to the routing module to continue execution, or exits the state machine successfully; when a runtime error occurs, the state stops running and exits the state machine early;
S6, when the function factory module is called, the type of the called factory is checked; if it is the factory type, the function factory module generates edge computing resources and returns a function executed at the edge; if it is the auxiliary factory type, the function factory module returns a function that calls the Amazon Step Function;
S7, after receiving the output of the previous state and the information of the state to execute next, the routing module judges whether the current state is the user-defined end state; if so, it terminates the operation of the state machine and exits the program; if not, it passes the output of the previous state to the edge state machine module and the current state is executed.
CN202111184613.1A 2021-10-12 2021-10-12 Cloud edge collaborative video analysis system and method based on server-free function computing Active CN113992941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111184613.1A CN113992941B (en) 2021-10-12 2021-10-12 Cloud edge collaborative video analysis system and method based on server-free function computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111184613.1A CN113992941B (en) 2021-10-12 2021-10-12 Cloud edge collaborative video analysis system and method based on server-free function computing

Publications (2)

Publication Number Publication Date
CN113992941A CN113992941A (en) 2022-01-28
CN113992941B true CN113992941B (en) 2023-01-24

Family

ID=79738195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111184613.1A Active CN113992941B (en) 2021-10-12 2021-10-12 Cloud edge collaborative video analysis system and method based on server-free function computing

Country Status (1)

Country Link
CN (1) CN113992941B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10719311B2 (en) * 2017-09-08 2020-07-21 Accenture Global Solutions Limited Function library build architecture for serverless execution frameworks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11038933B1 (en) * 2019-06-25 2021-06-15 Amazon Technologies, Inc. Hybrid videoconferencing architecture for telemedicine
CN110442041A (en) * 2019-08-05 2019-11-12 西藏宁算科技集团有限公司 A kind of emulation platform construction method and analogue system based on isomery cloud computing framework
CN111866450A (en) * 2020-06-18 2020-10-30 昇辉控股有限公司 Intelligent video monitoring device based on cloud edge cooperation
CN112003924A (en) * 2020-08-20 2020-11-27 浪潮云信息技术股份公司 Industrial internet-oriented edge cloud platform building method and system
CN112788142A (en) * 2021-01-18 2021-05-11 四川中英智慧质量工程技术研究院有限公司 Intelligent edge Internet of things gateway supporting multi-sensor access
CN113114758A (en) * 2021-04-09 2021-07-13 北京邮电大学 Method and device for scheduling tasks for server-free edge computing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhi Zhou, Xu Chen, En Li et al.; "Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing"; Proceedings of the IEEE; Vol. 107, No. 8, August 2019; pp. 1738-1762 *

Also Published As

Publication number Publication date
CN113992941A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US10579349B2 (en) Verification of a dataflow representation of a program through static type-checking
CN110088737A (en) Concurrent program is converted to the integration schedules for the hardware that can be deployed in the cloud infrastructure based on FPGA
CN109491777A (en) Task executing method, device, equipment and storage medium
US8495593B2 (en) Method and system for state machine translation
CN112329945A (en) Model deployment and reasoning method and device
CN105975261B (en) A kind of runtime system and operation method called towards unified interface
CN103678135A (en) System and method for achieving cross-process and cross-thread debugging in large-data environment
CN109740765A (en) A kind of machine learning system building method based on Amazon server
CN112214289A (en) Task scheduling method and device, server and storage medium
CN115600676A (en) Deep learning model reasoning method, device, equipment and storage medium
CN112395736A (en) Parallel simulation job scheduling method of distributed interactive simulation system
US9229980B2 (en) Composition model for cloud-hosted serving applications
CN108985459A (en) The method and apparatus of training pattern
CN114610404A (en) Component calling method and device based on application framework and computer equipment
CN113992941B (en) Cloud edge collaborative video analysis system and method based on server-free function computing
Petriu et al. Software performance models from system scenarios
CN116521181A (en) Script data processing method, device, equipment and medium based on game system
CN111274018A (en) Distributed training method based on DL framework
CN115114022A (en) Method, system, device and medium for using GPU resources
CN116301826A (en) Method for calling software command to conduct data batch processing in CAA development
CN109739666A (en) Striding course call method, device, equipment and the storage medium of singleton method
CN113051173B (en) Method, device, computer equipment and storage medium for arranging and executing test flow
CN114816357A (en) Service arrangement system for serving process bank
CN109426529A (en) Method, apparatus and terminal based on X window system graphic plotting
Cavalheiro et al. Anahy: A programming environment for cluster computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant