CN115866417A - Video service method and system based on edge calculation - Google Patents

Info

Publication number
CN115866417A
Authority
CN
China
Prior art keywords
video
algorithm
network camera
edge
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310174755.2A
Other languages
Chinese (zh)
Other versions
CN115866417B (en)
Inventor
曹江
高原
王平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Research Institute of War of PLA Academy of Military Science
Original Assignee
Research Institute of War of PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Institute of War of PLA Academy of Military Science
Priority to CN202310174755.2A
Publication of CN115866417A
Application granted
Publication of CN115866417B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a video service method and system based on edge computing, in the technical field of image communication. The system comprises a network camera module, a task management module, an algorithm selection module, a data preprocessing module and an analysis processing module. The network camera module acquires real-time video and transmits it to the task management module, and the network cameras in a region respond jointly to the analysis result obtained from any one of them. The task management module manages the received video streams and allocates edge computing and storage resources according to video quality and user requirements. The algorithm selection module compares the video to be analyzed with the training videos in a database by analyzing the video summary, selects an adapted algorithm, obtains an analysis result through the algorithm model, feeds the result back to the regional network camera management unit, and manages the shooting mode of the network cameras according to the result.

Description

Video service method and system based on edge computing
Technical Field
The present invention relates to the field of image communication technologies, and in particular to a video service method and system based on edge computing.
Background
Edge computing means adopting an open platform that integrates network, computing, storage and application capabilities on the side close to the object or data source, providing services at the nearest end. Edge computing extends functions such as data storage, computing and management, and relieves the transmission pressure that cloud computing places on the network boundary. Because edge computing is closer to the data source, it can process data more promptly, respond to the working requirements of equipment, and reduce latency; within its processing capacity, data can be handled directly at the source, which also safeguards the security of data storage.
As video and live network broadcasts become mainstream transmission modes, edge computing performed at the network edge, closer to the user, can provide standard computing capacity and IT services; nearby local deployment reduces service response latency, effectively improves service capacity, and delivers a high-definition, smooth broadcasting and viewing experience. In the field of video monitoring, providing video services nearby through edge computing helps reduce the labor intensity of monitoring personnel. However, existing edge-computing-based video service methods and systems are not intelligent enough and have the following problems: 1. they cannot reasonably allocate computing resources according to video quality and user requirements; 2. they adopt a single algorithm rather than selecting an adapted algorithm for different videos, so video service quality is poor; 3. they lack intelligent management of the network cameras, the cameras in a region are not associated, and the video sampling strategy cannot be updated according to the video service results.
Disclosure of Invention
In order to overcome the above defects in the prior art, embodiments of the present invention provide a video service method based on edge computing. It establishes a computing resource allocation model for edge computing through a task management module to maximize the utilization rate of video services, selects an optimal algorithm through an algorithm selection module to improve video service quality, and realizes intelligent control over the network cameras in a region through a network camera module, so as to solve the problems raised in the background art.
In order to achieve this purpose, the invention provides the following technical scheme. The video service method based on edge computing comprises the following steps:
step S01, management of the network camera: installing a network camera in the area for monitoring, establishing a network camera topological graph and a network camera distribution graph in the monitored area, and acquiring real-time video data by the network camera and transmitting the real-time video data to an edge computing server;
step S02, task management: the video data transmitted in step S01 are regarded as task requests; whether edge cloud computing can meet the delay requirement is judged; tasks that cannot be processed are transmitted to the cloud computing system, and tasks whose delay requirement can be met proceed to the next step;
step S03, algorithm selection: after receiving the service request, the edge computing server first obtains a sampling set F′ from the video data through differential sampling and arranges the picture frames to be detected in time order; then the sampling set is divided into n sampling subsets F1′, F2′, F3′, …, Fn′ according to environmental change time nodes; an applicable algorithm is selected for each sampling subset to obtain the algorithm set S′ corresponding to the sampling set F′;
step S04, model calling and data preprocessing: the required algorithm model is called from a database, and the data to be detected are processed in different ways according to the type of the algorithm model; in particular, the sampling set F′ is processed by the feature enhancement and identification unit to obtain test data suited to the algorithm;
step S05, video service: performing video service by using an algorithm, wherein the video service comprises target identification and anomaly detection to obtain a video service result;
step S06, result feedback: a response mode is set for the network cameras; when a target or an abnormality appears at one network camera, the shooting mode of the other network cameras in the area is updated to realize dynamic monitoring of the target.
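The feedback loop of step S06 can be sketched as follows; the camera names, the region topology map, and the two-level shooting-mode scheme are illustrative assumptions rather than details given in the patent:

```python
# Hypothetical region topology: each camera maps to the other cameras in its
# region (the camera topology analysis unit described later would supply this).
TOPOLOGY = {"cam1": ["cam2", "cam3"], "cam2": ["cam1"], "cam3": ["cam1"]}

def feedback(event_camera, detection, modes):
    """Step S06 sketch: when `event_camera` reports a target or abnormality,
    switch the other cameras in its region to a high-quality shooting mode
    so the target can be monitored dynamically."""
    if detection:
        for cam in TOPOLOGY.get(event_camera, []):
            modes[cam] = "high"  # raise the video quality near the event
    return modes
```

The event camera itself keeps its current mode; only its neighbours in the region respond, matching the "one camera detects, the others react" behaviour described above.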
In a preferred embodiment, in step S03, the sampling set F′ is obtained by extracting picture frames from the video stream using a joint inter-frame differential and background differential sampling method, comprising the following steps:
s11, acquiring an initial set F of video frames to be sampled, initializing an inter-frame differential sampling set F1, and initializing a background differential sampling set F2;
s12, acquiring an interframe differential sampling threshold value H1 and a background differential threshold value H2, processing video frames in an initial set F of video frames to be sampled by adopting a binarization decision formula, marking the frames exceeding the threshold value as '1', marking the frames less than or equal to the threshold value as '0', and adding the frames marked as '1' into the set F1 or F2 to be sampled;
S13, obtain the final sampling set by taking the union of the inter-frame differential sampling set F1 and the background differential sampling set F2, yielding the combined sampling set F′.
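Steps S11 to S13 can be sketched as below. The patent does not give the exact binarization decision formula, so a mean absolute difference against each threshold is assumed here, and the threshold values are arbitrary:

```python
import numpy as np

def joint_differential_sample(frames, background, h1=12.0, h2=25.0):
    """Sketch of S11-S13: a frame index enters F1 when its mean absolute
    inter-frame difference exceeds H1, and F2 when its mean absolute
    difference from the background model exceeds H2; the final sampling
    set F' is the union F1 ∪ F2, returned in time order."""
    f1, f2 = set(), set()  # S11: initialise the two sampling sets
    prev = frames[0]
    for i, frame in enumerate(frames):
        # S12: binarised decision, mark "1" when the difference exceeds the threshold
        if i > 0 and np.mean(np.abs(frame.astype(float) - prev.astype(float))) > h1:
            f1.add(i)
        if np.mean(np.abs(frame.astype(float) - background.astype(float))) > h2:
            f2.add(i)
        prev = frame
    return sorted(f1 | f2)  # S13: F' = F1 ∪ F2
```

A production version would use a maintained background model (e.g. a running average) rather than a single static background frame; the static frame keeps the sketch minimal.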
In a preferred embodiment, in step S04, the applicable algorithm for the test data is selected by comparing the video summary with the algorithm training sets in a database to find the optimal processing algorithm, comprising the following steps:
step S21, data storage: obtaining all available algorithm model sets S of video service and training video sets X corresponding to the algorithm model sets S, and acquiring and processing performance score sets B;
step S22, data comparison: the similarity between each subset of the combined sampling set F′ and the training video sets X is compared as follows: the training video and the video to be detected are projected onto the Grassmann manifold, and their distribution similarity on the Grassmann manifold is obtained;
step S23, selection of the processing algorithm: a processing algorithm is selected according to the similarity between the training video and the video to be detected; if more than one detection algorithm fits a subset, the algorithm that occupies fewer edge computing resources while meeting the accuracy requirement is chosen; the detection algorithm corresponding to each subset is put into the video's algorithm set S′ to be used.
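A minimal sketch of steps S21 to S23 follows. The catalogue entries, feature vectors, accuracy scores and resource costs are invented for illustration, and a cosine similarity between mean feature vectors stands in for the patent's Grassmann-manifold distribution similarity, which is out of scope here:

```python
import numpy as np

# Hypothetical catalogue (S21): each entry pairs an algorithm with the mean
# feature vector of its training video set X, its performance score (set B),
# and its edge-resource cost.
CATALOGUE = [
    {"name": "hog", "centroid": np.array([1.0, 0.0]), "accuracy": 0.90, "cost": 2},
    {"name": "acf", "centroid": np.array([0.9, 0.1]), "accuracy": 0.92, "cost": 5},
    {"name": "dpm", "centroid": np.array([0.0, 1.0]), "accuracy": 0.95, "cost": 9},
]

def select_algorithm(subset_feature, min_accuracy=0.85, sim_threshold=0.8):
    """S22: score every catalogued algorithm by similarity to the subset;
    S23: among sufficiently similar and sufficiently accurate candidates,
    pick the one occupying the least edge computing resources."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = [m for m in CATALOGUE
                  if cos(subset_feature, m["centroid"]) >= sim_threshold
                  and m["accuracy"] >= min_accuracy]
    return min(candidates, key=lambda m: m["cost"])["name"] if candidates else None
```

Running the selector once per subset of F′ and collecting the returned names yields the algorithm set S′ described above.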
In a preferred embodiment, the data preprocessing in step S04 consists of data enhancement and data identification, the data enhancement being one or more of flipping, scaling, random cropping, shifting, adding Gaussian noise, and Mixup.
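The enhancement options above (except Mixup, which blends two labelled samples and is omitted for brevity) can be sketched with plain NumPy; the concrete parameters, a 2x scale, a one-pixel shift, and sigma-5 noise, are arbitrary choices for illustration:

```python
import numpy as np

def augment(image, rng=None):
    """Sketch of the data enhancement unit: return several augmented
    variants of a single-channel image array."""
    rng = rng or np.random.default_rng(0)
    return {
        "flip":  image[:, ::-1],                             # horizontal flip
        "scale": np.kron(image, np.ones((2, 2))),            # 2x nearest-neighbour upscale
        "shift": np.roll(image, shift=1, axis=1),            # translate one pixel right
        "noise": image + rng.normal(0.0, 5.0, image.shape),  # additive Gaussian noise
    }
```

In the pipeline described here, such variants would enlarge the test data handed to the selected algorithm model rather than the training data, per step S04.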
In a preferred embodiment, the video service in step S05 uses one or more of the histogram of oriented gradients (HOG) algorithm, the aggregate channel features (ACF) algorithm, and the deformable part model (DPM) algorithm.
In order to achieve this purpose, the invention also provides the following technical scheme. The system comprises a network camera module, a task management module, an algorithm selection module, a data preprocessing module and an analysis processing module. The network camera module acquires real-time video and transmits the acquired videos of different qualities to the task management module; the task management module manages the received video streams and allocates edge computing and storage resources according to video quality and user requirements; the algorithm selection module compares the video to be analyzed with the training videos in a database by analyzing the video summary and selects an adapted algorithm; the data preprocessing module comprises a data enhancement unit and a data identification unit; the analysis processing module analyzes the video stream to obtain an analysis result, feeds it back to the regional network camera management unit, and manages the shooting mode of the network cameras according to the result.
In a preferred embodiment, the network camera module comprises a camera control unit, a camera topology analysis unit and a video summary extraction unit. The camera control unit receives the data transmitted by the result feedback unit and adopts a coping strategy to control the video quality of the network cameras accordingly; the camera topology analysis unit obtains the distribution of the network cameras in the region; the video summary extraction unit extracts image frames from the video data and transmits the video data and image frames to the task management module; the sampling set F′ is obtained through the video summary extraction unit.
In a preferred embodiment, the task management module comprises a delay time collection unit, a video quality analysis unit and a task allocation unit. The delay time collection unit collects the delay time of each task; the video quality analysis unit assesses video quality and analyzes how well it matches the accuracy requirement; the task allocation unit transmits tasks that edge computing cannot process to the cloud computing system, comprising the following steps:
step S31, after the edge cloud obtains the set of video service requests, the requests are ordered according to delay requirement and video quality; the first request task q is taken, and the edge computing resources Cq needed to complete task q are predicted;
step S32, the current remaining edge cloud resources C′ are compared with Cq; if C′ ≥ Cq, the edge computing resources are allocated to task q, and if C′ is smaller than the predicted resources Cq needed to complete task q, task q is transmitted to the cloud server for computing;
step S33, the remaining resources of the edge cloud and the task set are updated, and steps S31 and S32 are repeated until all tasks are allocated.
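Steps S31 to S33 amount to a greedy allocation loop, sketched below; the request tuples and the single scalar priority key are assumptions about how delay requirement and video quality would be encoded:

```python
def allocate_tasks(requests, edge_capacity):
    """Greedy sketch of S31-S33: requests are (name, predicted edge cost Cq,
    priority key) tuples, ordered by the priority key (delay requirement,
    then video quality). Each task runs at the edge while C' >= Cq and is
    otherwise offloaded to the cloud. Returns the two assignment lists."""
    edge, cloud = [], []
    remaining = edge_capacity  # C': remaining edge cloud resources
    for name, cost, _ in sorted(requests, key=lambda r: r[2]):  # S31: order the set
        if remaining >= cost:       # S32: C' >= Cq, allocate to the edge
            edge.append(name)
            remaining -= cost       # S33: update C'
        else:
            cloud.append(name)      # otherwise transmit to the cloud server
    return edge, cloud
```

Note this is first-fit by priority, not an optimal packing; the patent's allocation model only requires that delay-critical, high-quality streams are considered first.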
In a preferred embodiment, the algorithm selection module comprises a data storage unit, a comparison unit and an algorithm model calling unit. The data storage unit stores all the algorithm models required by the video service and the corresponding training videos; the comparison unit compares the similarity between the video to be detected and the training videos and selects an algorithm according to the similarity; the algorithm model calling unit calls the selected algorithm model from the cloud server.
In a preferred embodiment, the system connects cloud computing with edge computing and comprises a cloud server side, an edge computing side and a video acquisition side. The cloud server side stores the algorithm models and training sets, analyzes the similarity between the video summary to be detected and the training sets, and selects the optimal algorithm in the cloud server; the edge computing server obtains the corresponding algorithm models from the cloud server for different scenes and deploys them on an edge development board to compute and store the video data.
The invention has the technical effects and advantages that:
the invention adopts task management to reasonably distribute computing resources, selects corresponding algorithms for different types of videos, transmits the obtained result to the network camera module, and reduces or improves the video quality when one network camera has targets and abnormity and other network cameras in the area take reaction.
Drawings
Fig. 1 is a flow chart of the video service method based on edge computing according to the present invention.
Fig. 2 is a block diagram of the video service system based on edge computing according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used in this application, the terms "module," "system," and the like are intended to include a computer-related entity, such as but not limited to hardware, firmware, a combination of hardware and software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a module. One or more modules may reside within a process and/or thread of execution and a module may be localized on one computer and/or distributed between two or more computers.
Example 1
This embodiment provides a video service method based on edge computing, as shown in fig. 1, comprising the following steps:
step S01, management of the network camera: installing a network camera in the area for monitoring, establishing a network camera topological graph and a network camera distribution graph in the monitored area, and acquiring real-time video data by the network camera and transmitting the real-time video data to an edge computing server;
step S02, task management: the video data transmitted in the step S01 are regarded as task requests, whether the edge cloud computing can meet the delay requirements or not is judged, the tasks which cannot be processed are transmitted to a cloud computing system, and the tasks which can meet the delay requirements are transmitted to the next step;
step S03, algorithm selection: after receiving the service request, the edge computing server first obtains a sampling set F′ from the video data through differential sampling and arranges the picture frames to be detected in time order; then the sampling set is divided into n sampling subsets F1′, F2′, F3′, …, Fn′ according to environmental change time nodes; an applicable algorithm is selected for each sampling subset to obtain the algorithm set S′ corresponding to the sampling set F′;
step S04, model calling and data preprocessing: the required algorithm model is called from a database, and the data to be detected are processed in different ways according to the type of the algorithm model; in particular, the sampling set F′ is processed by the feature enhancement and identification unit to obtain test data suited to the algorithm;
step S05, video service: performing video service by using an algorithm, wherein the video service comprises target identification and anomaly detection to obtain a video service result;
step S06, result feedback: and setting a response mode of the network cameras, and updating the shooting mode of other network cameras in the area when a target appears or an abnormality appears in one network camera so as to realize dynamic monitoring of the target.
Further, in step S03, the sampling set F′ is obtained by extracting picture frames from the video stream using joint inter-frame differential and background differential sampling, comprising the following steps:
s11, acquiring an initial set F of video frames to be sampled, initializing an inter-frame differential sampling set F1, and initializing a background differential sampling set F2;
s12, acquiring an inter-frame differential sampling threshold value H1 and a background differential threshold value H2, processing video frames in an initial set F of video frames to be sampled by adopting a binary decision formula, marking frames exceeding the threshold value as '1', marking frames less than or equal to the threshold value as '0', and adding the frames marked as '1' into the set F1 or F2 to be sampled;
S13, obtain the final sampling set by taking the union of the inter-frame differential sampling set F1 and the background differential sampling set F2, yielding the combined sampling set F′.
Further, in step S04, the applicable algorithm for the test data is selected by comparing the video summary with the algorithm training sets in a database to find the optimal processing algorithm, comprising the following steps:
step S21, data storage: obtaining all available algorithm model sets S of video service and training video sets X corresponding to the algorithm model sets S, and acquiring and processing performance score sets B;
step S22, data comparison: the similarity between each subset of the combined sampling set F′ and the training video sets X is compared as follows: the training video and the video to be detected are projected onto the Grassmann manifold, and their distribution similarity on the Grassmann manifold is obtained;
step S23, selection of the processing algorithm: a processing algorithm is selected according to the similarity between the training video and the video to be detected; if more than one detection algorithm fits a subset, the algorithm that occupies fewer edge computing resources while meeting the accuracy requirement is chosen; the detection algorithm corresponding to each subset is put into the video's algorithm set S′ to be used.
Further, in step S04, the data preprocessing consists of data enhancement and data identification, the data enhancement being one or more of flipping, scaling, random cropping, shifting, adding Gaussian noise, and Mixup.
Further, the video service in step S05 uses one or more of the histogram of oriented gradients (HOG) algorithm, the aggregate channel features (ACF) algorithm, and the deformable part model (DPM) algorithm.
In order to achieve the above purpose, the present invention provides the following technical solution, as shown in fig. 2. The system comprises a network camera module, a task management module, an algorithm selection module, a data preprocessing module and an analysis processing module. The network camera module acquires real-time video and transmits the acquired videos of different qualities to the task management module; the task management module manages the received video streams and allocates edge computing and storage resources according to video quality and user requirements; the algorithm selection module compares the video to be analyzed with the training videos in a database by analyzing the video summary and selects an adapted algorithm; the data preprocessing module comprises a data enhancement unit and a data identification unit; the analysis processing module analyzes the video stream to obtain an analysis result, feeds it back to the regional network camera management unit, and manages the shooting mode of the network cameras according to the result.
Further, the network camera module comprises a camera control unit, a camera topology analysis unit and a video summary extraction unit. The camera control unit receives the data transmitted by the result feedback unit and adopts a coping strategy to control the video quality of the network cameras accordingly; the camera topology analysis unit obtains the distribution of the network cameras in the region; the video summary extraction unit extracts image frames from the video data and transmits the video data and image frames to the task management module; the sampling set F′ is obtained through the video summary extraction unit.
Further, the task management module comprises a delay time collection unit, a video quality analysis unit and a task allocation unit. The delay time collection unit collects the delay time of each task; the video quality analysis unit assesses video quality and analyzes how well it matches the accuracy requirement; the task allocation unit transmits tasks that edge computing cannot process to the cloud computing system, comprising the following steps:
step S31, after the edge cloud obtains the set of video service requests, the requests are ordered according to delay requirement and video quality; the first request task q is taken, and the edge computing resources Cq needed to complete task q are predicted;
step S32, the current remaining edge cloud resources C′ are compared with Cq; if C′ ≥ Cq, the edge computing resources are allocated to task q, and if C′ is smaller than the predicted resources Cq needed to complete task q, task q is transmitted to the cloud server for computing;
step S33, the remaining resources of the edge cloud and the task set are updated, and steps S31 and S32 are repeated until all tasks are allocated.
Further, the algorithm selection module comprises a data storage unit, a comparison unit and an algorithm model calling unit. The data storage unit stores all the algorithm models required by the video service and the corresponding training videos; the comparison unit compares the similarity between the video to be detected and the training videos and selects an algorithm according to the similarity; the algorithm model calling unit calls the selected algorithm model from the cloud server.
Further, the system connects cloud computing with edge computing and comprises a cloud server side, an edge computing side and a video acquisition side. The cloud server side stores the algorithm models and training sets, analyzes the similarity between the video summary to be detected and the training sets, and selects the optimal algorithm in the cloud server; the edge computing server obtains the corresponding algorithm models from the cloud server for different scenes and deploys them on an edge development board to compute and store the video data.
In summary, the invention establishes a computing resource allocation model for edge computing through the task management module to maximize the utilization rate of video services, selects an optimal algorithm through the algorithm selection module to improve video service quality, and realizes intelligent control over the network cameras in the region through the network camera module. This solves the problems that traditional systems and methods cannot reasonably allocate computing resources according to video quality and user requirements, do not select adapted algorithms for different videos, lack intelligent management of the network cameras, and lack association among the cameras in a region.
And finally: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included in the scope of the present invention.

Claims (9)

1. A video service method based on edge computing, characterized by comprising the following steps:
step S01, management of the network camera: installing a network camera in the area for monitoring, establishing a network camera topological graph and a network camera distribution graph in the monitored area, and acquiring real-time video data by the network camera and transmitting the real-time video data to an edge computing server;
step S02, task management: the video data transmitted in the step S01 are regarded as task requests, whether edge cloud computing can meet delay requirements or not is judged, tasks which cannot be processed are transmitted to a cloud computing system, and the tasks which can meet the delay requirements are transmitted to the next step;
step S03, algorithm selection: after receiving the service request, the edge computing server first obtains a sampling set F′ from the video data through differential sampling and arranges the picture frames to be detected in time order; then the sampling set is divided into n sampling subsets F1′, F2′, F3′, …, Fn′ according to environmental change time nodes, and an applicable algorithm is selected for each sampling subset to obtain the algorithm set S′ corresponding to the sampling set F′;
step S04, model calling and data preprocessing: calling a required algorithm model from a database, and processing data to be detected in different modes according to the type of the algorithm model, wherein the processing of the sampling set F' by using a characteristic enhancement and identification unit is carried out to obtain test data applicable to the algorithm;
step S05, video service: performing video service by using an algorithm, wherein the video service comprises target identification and anomaly detection to obtain a video service result;
step S06, result feedback: and setting a network camera response mode, and when a target appears or an abnormality appears in one network camera, updating the shooting mode of other network cameras in the area to realize dynamic monitoring on the target.
2. The edge-computing-based video service method of claim 1, wherein: in step S03, the sampling set F' is obtained by extracting a picture frame from the video stream in an inter-frame differential and background differential joint sampling manner, including the following steps:
s11, acquiring an initial set F of video frames to be sampled, initializing an inter-frame differential sampling set F1, and initializing a background differential sampling set F2;
s12, acquiring an interframe differential sampling threshold value H1 and a background differential threshold value H2, processing video frames in an initial set F of video frames to be sampled by adopting a binarization decision formula, marking the frames exceeding the threshold value as '1', marking the frames less than or equal to the threshold value as '0', and adding the frames marked as '1' into the set F1 or F2 to be sampled;
and S13, obtaining a final sampling set, and taking a union set of the inter-frame differential sampling set F1 and the background differential sampling set F2 to obtain a combined sampling set F'.
3. The edge-computing-based video service method according to claim 1, wherein in step S04 the test data for the applicable algorithm is selected by comparing the video abstract with the algorithm training sets in the database to find the optimal processing algorithm, comprising the following steps:
step S21, data storage: obtaining the set S of all algorithm models available for the video service, the training video sets X corresponding to the algorithm models, and the processing performance score set B;
step S22, data comparison: comparing the similarity between each subset of the combined sampling set F' and the training video set X as follows: projecting the training video and the video to be detected onto the Grassmann manifold to obtain their distribution similarity on the Grassmann manifold;
step S23, processing algorithm selection: selecting a processing algorithm according to the similarity between the training video and the video to be detected; if more than one detection algorithm is available for a subset, selecting the algorithm that occupies fewer edge computing resources while meeting the accuracy requirement; and placing the detection algorithm corresponding to each subset into the set S' of algorithms to be used for the video.
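Steps S22 and S23 can be sketched as follows. One common way to compare two data sets on the Grassmann manifold is to represent each by an orthonormal basis of a k-dimensional subspace and measure the cosines of the principal angles between the subspaces; the claim does not fix a particular metric, so this choice, along with the candidate tuple layout in `select_algorithm`, is an assumption.

```python
import numpy as np

def grassmann_similarity(a, b, k=2):
    """S22 sketch: similarity of two sample matrices (rows = samples,
    columns = features) via their k-dim subspaces on the Grassmann
    manifold.  Returns the mean squared cosine of the principal
    angles, a value in [0, 1] (1 = identical subspaces)."""
    ua = np.linalg.svd(a.T, full_matrices=False)[0][:, :k]
    ub = np.linalg.svd(b.T, full_matrices=False)[0][:, :k]
    cos = np.linalg.svd(ua.T @ ub, compute_uv=False)  # principal angles
    return float(np.mean(np.clip(cos, 0.0, 1.0) ** 2))

def select_algorithm(candidates, accuracy_req):
    """S23 sketch: among candidate (name, accuracy, edge_cost) tuples
    that meet the accuracy requirement, pick the one occupying the
    least edge computing resources (None if nothing qualifies)."""
    ok = [c for c in candidates if c[1] >= accuracy_req]
    return min(ok, key=lambda c: c[2])[0] if ok else None
```

Comparing a matrix with itself yields similarity 1, and the selector breaks accuracy ties in favor of the cheaper algorithm, matching the claim's "fewer edge computing resources while meeting the accuracy requirement".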
4. The edge-computing-based video service method according to claim 1, wherein the video service in step S05 uses one or more of a histogram of oriented gradients (HOG) algorithm, an aggregated channel features (ACF) algorithm, and a deformable part model (DPM) algorithm.
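The core of the first named algorithm, HOG, can be illustrated for a single cell: compute per-pixel gradients, then accumulate gradient magnitudes into unsigned-orientation bins. This is a minimal sketch of the idea only, without the block grouping and normalization a full HOG descriptor uses.

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Minimal histogram-of-oriented-gradients for one cell.

    cell: 2-D grayscale array.  Gradient magnitudes are accumulated
    into n_bins unsigned-orientation bins over [0, 180) degrees,
    using hard binning (no interpolation, no block normalization)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    hist = np.zeros(n_bins)
    width = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // width) % n_bins] += m         # vote by magnitude
    return hist
```

A purely horizontal intensity ramp puts all of its gradient energy into the first (0-degree) bin, which is a quick sanity check on the binning.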
5. A system for the edge-computing-based video service method according to any one of claims 1 to 4, wherein the system comprises a network camera module, a task management module, an algorithm selection module, a data preprocessing module, and an analysis processing module; the network camera module is used for acquiring real-time video and transmitting the acquired videos of different qualities to the task management module; the task management module is used for managing the received video streams and allocating edge computing resources and storage resources according to the video quality and user requirements; the algorithm selection module is used for comparing the video to be analyzed with the training videos in the database by analyzing the video abstract and selecting an adapted algorithm; the data preprocessing module comprises a data enhancement unit and a data identification unit; and the analysis processing module is used for analyzing the video stream to obtain an analysis result, feeding the analysis result back to the network camera management unit in the region, and managing the shooting modes of the network cameras according to the analysis result.
6. The system of claim 5, wherein the network camera module comprises a camera control unit, a camera topology analysis unit, and a video abstract extraction unit; the camera control unit is used for receiving the data transmitted by the result feedback unit and adopting a coping strategy to control the video quality of the network cameras according to the data; the camera topology analysis unit is used for obtaining the distribution of the network cameras in the region; and the video abstract extraction unit is used for extracting image frames from the video data and transmitting the video data and the image frames to the task management module.
7. The system of claim 5, wherein the task management module comprises a delay time acquisition unit, a video quality analysis unit, and a task allocation unit; the delay time acquisition unit is used for acquiring the delay time of a task; the video quality analysis unit is used for judging the video quality and analyzing the degree of matching between the video quality and the accuracy requirement; and the task allocation unit is used for transmitting tasks that cannot be processed by edge computing to the cloud computing system, comprising the following steps:
step S31, after the edge cloud obtains the set of video service requests, sorting the requests according to delay requirement and video quality, taking the first request task q, and predicting the edge computing resources Cq needed to complete task q;
step S32, comparing the current remaining edge cloud resources C' with Cq; if C' is not smaller than Cq, allocating the edge computing resources to task q; if C' is smaller than the edge computing resources predicted to complete task q, transmitting task q to the cloud server for computation;
step S33, updating the remaining resources of the edge cloud and the task set, and repeating steps S31 and S32 until all tasks are allocated.
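Steps S31 to S33 amount to a greedy allocation loop. A sketch under stated assumptions: the request tuple layout is illustrative, and the sort key (tighter delay first, then higher video quality) is one plausible reading of "arranged according to the delay requirement and the video quality", which the claim does not pin down.

```python
def allocate_tasks(requests, edge_capacity):
    """Greedy allocation sketch of steps S31-S33.

    requests: list of (task_id, delay_requirement, video_quality,
    predicted_cost) tuples, where predicted_cost plays the role of Cq.
    edge_capacity: the remaining edge cloud resources C'.
    Returns (edge_tasks, cloud_tasks) as lists of task ids."""
    # S31: order the request set; assumed key is (delay asc, quality desc).
    queue = sorted(requests, key=lambda r: (r[1], -r[2]))
    edge, cloud, remaining = [], [], edge_capacity
    for task_id, _, _, cost in queue:
        if cost <= remaining:          # S32: C' >= Cq, run on the edge
            edge.append(task_id)
            remaining -= cost          # S33: update remaining resources
        else:                          # S32: offload to the cloud server
            cloud.append(task_id)
    return edge, cloud
```

Once the edge budget is exhausted, every remaining task is offloaded, which matches "transmitting tasks that cannot be processed by edge computing to the cloud computing system".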
8. The system of claim 5, wherein the algorithm selection module comprises a data storage unit, a comparison unit, and an algorithm model calling unit; the data storage unit is used for storing all algorithm models required by the video service and the corresponding training videos; the comparison unit is used for comparing the similarity between the video to be detected and the training videos and selecting an algorithm according to the similarity; and the algorithm model calling unit is used for calling the algorithm model selected for the detected video from the cloud server.
9. The system of claim 5, wherein the system connects cloud computing with edge computing and comprises a cloud server side, an edge computing side, and a video acquisition side; the cloud server side is used for storing the algorithm models and training sets, analyzing in the cloud server the similarity between the video abstract to be detected and the training sets, and selecting the optimal algorithm; and the edge computing server obtains the corresponding algorithm models from the cloud server for different scenes and deploys them on an edge development board to compute and store the video data.
CN202310174755.2A 2023-02-28 2023-02-28 Video service method and system based on edge calculation Active CN115866417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310174755.2A CN115866417B (en) 2023-02-28 2023-02-28 Video service method and system based on edge calculation

Publications (2)

Publication Number Publication Date
CN115866417A true CN115866417A (en) 2023-03-28
CN115866417B CN115866417B (en) 2023-05-05

Family

ID=85659308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310174755.2A Active CN115866417B (en) 2023-02-28 2023-02-28 Video service method and system based on edge calculation

Country Status (1)

Country Link
CN (1) CN115866417B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618693A (en) * 2015-02-09 2015-05-13 北京邮电大学 Cloud computing based online processing task management method and system for monitoring video
CN111462167A (en) * 2020-04-21 2020-07-28 济南浪潮高新科技投资发展有限公司 Intelligent terminal video analysis algorithm combining edge calculation and deep learning
US20210096911A1 (en) * 2020-08-17 2021-04-01 Essence Information Technology Co., Ltd Fine granularity real-time supervision system based on edge computing
CN113259451A (en) * 2021-05-31 2021-08-13 长沙鹏阳信息技术有限公司 Cluster processing architecture and method for intelligent analysis of large-scale monitoring nodes
CN114697324A (en) * 2022-03-07 2022-07-01 南京理工大学 Real-time video analysis and processing method based on edge cloud cooperation
CN114900656A (en) * 2022-04-20 2022-08-12 鹏城实验室 Traffic monitoring video stream processing method, device, system and storage medium
CN115082845A (en) * 2022-04-26 2022-09-20 北京理工大学 Monitoring video target detection task scheduling method based on deep reinforcement learning
CN115357379A (en) * 2022-07-28 2022-11-18 华中科技大学 Construction method and application of video transmission configuration model

Also Published As

Publication number Publication date
CN115866417B (en) 2023-05-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant