CN112580481B - Edge node and cloud collaborative video processing method, device and server - Google Patents
- Publication number
- CN112580481B (application CN202011466467.7A)
- Authority
- CN
- China
- Prior art keywords
- video image
- pixel data
- data
- image pixel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/439—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using cascaded computational arrangements for performing a single operation, e.g. filtering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Abstract
The invention discloses a method, a device and a server for edge-node and cloud collaborative video processing. The method comprises the following steps: acquiring video image pixel data that has undergone compression and encoding at an edge node; decoding the video image pixel data to obtain image decoding data; and performing visual feature analysis on the image decoding data with a convolutional neural network model to obtain video image visual feature analysis data. In this embodiment, the edge node and the cloud server cooperate to process the video data, so that tasks with low computing-power requirements run on the edge node while tasks with high computing-power requirements run on the cloud server. This combines the high performance of the cloud server with the low latency and privacy of edge-node computing, and a pipeline mechanism further raises the throughput of the computing tasks, improving computational efficiency.
Description
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a method, a device and a server for edge-node and cloud collaborative video processing.
Background
In recent years, cloud computing, Internet of Things and artificial intelligence technologies have developed rapidly, more and more infrastructure is being upgraded to be intelligent, and widely deployed camera sensors generate large amounts of picture and video data. However, the bandwidth to a cloud data center is limited and the link from sensor to data center adds latency, so processing massive video streams with cloud computing servers alone cannot meet low-latency, high-throughput requirements. Because end consumers connect to the cloud platform over the Internet, network latency is far greater than for locally deployed applications, and the available bandwidth is smaller than local-area-network bandwidth. Public clouds also have complex billing models, and Internet of Things applications that upload large amounts of data incur substantial cost. With a small probability, software, hardware or network faults may interrupt cloud services; the consumer can neither avoid such faults nor determine the recovery time or carry out maintenance. Finally, security holes or misconfigured firewalls and permission controls on the cloud platform may accidentally leak data, causing security and privacy problems.
Relying solely on edge computing, which places computation near the data, solves the throughput and bandwidth problems, but edge computing nodes generally have low performance and cannot run large-scale deep neural networks, and their serial processing flow yields low throughput. Building high-performance edge computing nodes requires substantial funds, and such duplicated construction wastes computing power. Existing methods for processing image data therefore cannot simultaneously satisfy the requirements on bandwidth, latency, throughput and computing capability.
Accordingly, there is a need for improvement and development in the art.
Disclosure of Invention
In view of the above defects in the prior art, the invention provides an edge-node and cloud collaborative video processing method. It aims to solve the problems that processing massive video streams with cloud computing servers alone cannot meet low-latency, high-throughput requirements, while relying solely on edge computing provides too little computing power to run large-scale deep neural network models and its serial processing flow yields low throughput.
The technical solution adopted by the invention to solve these problems is as follows:
In a first aspect, an embodiment of the present invention provides a method for processing collaborative video based on an edge node and a cloud, where the method includes:
acquiring video image pixel data processed by an edge node;
performing video image decoding on the video image pixel data to obtain image decoding data;
and performing convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain video image visual feature analysis data.
In one implementation, the video image pixel data processed by the edge node is generated as follows:
acquiring video image pixel data shot by a camera;
filtering out, from the video image pixel data, image pixel data in which no object has changed, to obtain effective video image pixel data;
and performing image scaling and image compression encoding on the effective video image pixel data to obtain the video image pixel data.
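The three edge-node steps above can be sketched end to end. The code below is a minimal illustration, not the patent's implementation: it assumes grayscale frames as NumPy arrays, a hypothetical mean-absolute-difference threshold for the "no change" filter, 2×2 adjacent-pixel combination for scaling, and zlib as a stand-in for the image compression encoder.

```python
import zlib
import numpy as np

def edge_preprocess(frames, diff_threshold=5.0):
    """Sketch of the edge-node steps: drop frames in which nothing has
    changed, downscale by combining 2x2 adjacent pixels, then compress."""
    kept, prev = [], None
    for f in frames:
        # Keep a frame only if it differs enough from the previous one.
        if prev is None or np.abs(f - prev).mean() > diff_threshold:
            kept.append(f)
        prev = f
    out = []
    for f in kept:
        h, w = f.shape
        # "Adjacent pixel combination": average each 2x2 block.
        scaled = f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # zlib stands in for the real image compression encoder.
        out.append(zlib.compress(scaled.astype(np.uint8).tobytes()))
    return out

# Three 8x8 frames; the second is identical to the first and is filtered out.
frames = [np.zeros((8, 8)), np.zeros((8, 8)), np.full((8, 8), 100.0)]
packets = edge_preprocess(frames)
print(len(packets))  # 2 frames survive the filter
```

A real edge node would use a video codec (e.g., H.264) rather than zlib, and possibly a learned filter rather than a fixed threshold.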
In one implementation, after the filtering out of the unchanged image pixel data from the video image pixel data to obtain the effective video image pixel data, the method further includes:
performing gamma correction, sharpening and fisheye correction on the effective video image pixel data.
In one implementation, performing image scaling and image compression encoding on the effective video image pixel data to obtain the video image pixel data includes:
performing adjacent-pixel combination on the effective video image pixel data to obtain scaled video image pixel data;
and performing binary coding on the scaled video image pixel data according to the coding redundancy and pixel redundancy within the scaled video image pixel data, to obtain the video image pixel data.
In one implementation, performing image scaling and image compression encoding on the effective video image pixel data to obtain the video image pixel data further includes:
when performing the image scaling and image compression encoding on the effective video image pixel data, adding an image processing hardware unit or an image processing software thread to increase the speed of the image scaling and image compression encoding.
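The effect of adding software threads can be illustrated with a thread pool. The sketch below is a hedged assumption (the patent does not specify an implementation), using zlib compression as a stand-in for the scaling-and-encoding stage:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_frame(raw: bytes) -> bytes:
    # Stand-in for the image scaling + compression encoding of one frame.
    return zlib.compress(raw)

frames = [bytes([i % 256]) * 4096 for i in range(16)]

# Serial processing: one frame at a time.
serial = [compress_frame(f) for f in frames]

# Adding "software threads": several frames are encoded concurrently,
# which raises throughput without reducing single-frame latency.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(compress_frame, frames))

assert serial == parallel  # same results, higher throughput
```

In CPython, zlib releases the GIL during compression, so threads give real parallelism here; a CPU-bound pure-Python stage would need processes or dedicated hardware units instead.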
In one implementation, performing the convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain the video image visual feature analysis data includes:
inputting the image decoding data into the convolutional neural network model to obtain the video image visual feature analysis data.
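As an illustration of what feeding decoded image data through a convolutional neural network involves, the sketch below runs one convolution layer, a ReLU, and global average pooling in plain NumPy. The kernel, layer shape and pooling are illustrative assumptions, not the patent's model:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid' cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def visual_features(decoded_img):
    """Hypothetical single conv layer + ReLU + global average pool,
    standing in for the cloud-side CNN inference."""
    edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # vertical-edge filter
    fmap = np.maximum(conv2d_valid(decoded_img, edge_kernel), 0.0)  # ReLU
    return fmap.mean()  # one pooled feature value

img = np.zeros((8, 8))
img[:, :4] = 1.0  # left half bright: contains a strong vertical edge
print(visual_features(img) > 0)  # True
```

A production system would of course use a deep network in a framework with GPU support; this only shows the data flow from decoded pixels to a feature value.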
In one implementation, performing the convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain the video image visual feature analysis data further includes:
when performing the visual feature analysis training on the image decoding data, adding an image processing hardware unit or an image processing software thread to increase the rate of the visual feature analysis training.
In one implementation, the convolutional neural network model is generated as follows:
acquiring input sample data;
inputting the input sample data into a modeling model to obtain modeling model output data;
re-inputting the modeling model output data into the modeling model for a training iteration;
and repeating the re-inputting step until the modeling model output data meets a preset requirement, then stopping the training iterations to obtain the convolutional neural network model.
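The iterate-until-the-output-meets-a-preset-requirement loop can be sketched with a deliberately tiny one-parameter model in place of a real CNN. Everything below — the data, the loss threshold standing in for the "preset requirement", and the learning rate — is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x  # target relation to learn: w = 3

w = 0.0
loss = float("inf")
for step in range(1000):
    pred = w * x                      # model output for this iteration
    loss = ((pred - y) ** 2).mean()   # distance from the preset requirement
    if loss < 1e-6:                   # requirement met: stop iterating
        break
    grad = 2 * ((pred - y) * x).mean()
    w -= 0.1 * grad                   # one training iteration

print(abs(w - 3.0) < 0.01)  # True: the model converged
```

The same stop-when-good-enough structure applies to a real CNN, with the scalar `w` replaced by the network weights and the loop body by a framework's backpropagation step.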
In a second aspect, an embodiment of the present invention further provides an apparatus for collaborative video processing based on an edge node and a cloud, where the apparatus includes:
a video image pixel data acquisition unit, configured to acquire the video image pixel data processed by the edge node;
an image decoding data acquisition unit, configured to perform video image decoding on the video image pixel data to obtain image decoding data;
and a video image visual feature analysis data acquisition unit, configured to perform convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain the video image visual feature analysis data.
In a third aspect, an embodiment of the present invention further provides a server, including a memory, one or more processors, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing the edge-node and cloud collaborative video processing method according to any one of the above.
The beneficial effects of the invention are as follows. In the embodiment of the invention, the edge node first obtains the video image pixel data shot by a camera, performs video image compression and encoding on it to obtain compression-encoded video image pixel data, and sends that data to the cloud server. Because the edge node has encoded the video image data, the cloud server decodes the compression-encoded video image pixel data to obtain image decoding data, and finally performs convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain video image visual feature analysis data. By cooperating the edge node and the cloud server to process video data, tasks with low computing-power requirements run on the edge node and tasks with high computing-power requirements run on the cloud server, combining the high performance of cloud computing with the low latency of edge computing. Lossy image compression and image filtering at the edge node reduce the communication bandwidth, and a pipeline mechanism together with additional hardware units or software threads raises the throughput of the computing tasks, improving operational efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a collaborative video processing method based on edge nodes and the cloud according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of an apparatus based on edge node and cloud collaborative video processing according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of an internal structure of a server according to an embodiment of the present invention.
Detailed Description
To make the purposes, technical solutions and effects of the invention clearer and more definite, the invention, a method, a device and a server for edge-node and cloud collaborative video processing, is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. As used herein, "connected" or "coupled" may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the prior art, the bandwidth to the cloud data center is limited and the sensor-to-data-center link adds latency, so processing massive video streams by relying solely on cloud computing servers cannot meet low-latency, high-throughput requirements. Relying solely on edge computing, which places computation near the data, solves the throughput and bandwidth problems, but edge computing nodes generally have low performance and cannot run large-scale deep neural networks; building high-performance edge nodes requires substantial funds, and duplicated construction wastes computing power. Existing methods for processing image data therefore cannot satisfy the requirements on bandwidth, latency, throughput and computing capability.
To solve the problems in the prior art, this embodiment provides a collaborative video processing method based on edge nodes and the cloud. In the embodiment of the invention, an edge node is a node close to the network edge near where a request originates; it may be a server or an intelligent driving vehicle, and is not specifically limited. After the edge node acquires the video image pixel data shot by the camera, it performs video image compression and encoding on the data to obtain compression-encoded video image pixel data, and sends that data to the cloud server through RPC (remote procedure call, which resembles a local procedure call but operates over the network). Because the edge node has encoded the video image data, the cloud server decodes the compression-encoded video image pixel data to obtain image decoding data, then performs convolutional-neural-network-based visual feature analysis training on the image decoding data to obtain video image visual feature analysis data, and sends this data back to the edge node, which receives it. By cooperating the edge node and the cloud server to process video data, tasks with low computing-power requirements run on the edge node and tasks with high computing-power requirements run on the cloud server, combining the high performance of cloud computing with the low latency of edge computing; lossy image compression and image filtering at the edge node reduce the communication bandwidth; and a pipeline mechanism together with additional hardware units or software threads raises the throughput of the computing tasks, improving operational efficiency.
Illustrative examples
Cloud computing refers to a computing mode in which a service provider supplies shared software and hardware resources to users on demand. According to the NIST definition (Peter Mell; Timothy Grance (September 2011). The NIST Definition of Cloud Computing (Technical report). National Institute of Standards and Technology: U.S. Department of Commerce), cloud computing has three service models. (1) Infrastructure as a Service (IaaS): cloud providers offer basic computing resources, such as compute, storage and network resources, on which consumers can deploy operating systems, applications, firewalls and other programs, but cannot control the underlying infrastructure. Typically IaaS delivers computing resources to consumers as virtual machines (e.g., Xen or KVM) or containers (e.g., Docker containers), with cloud orchestration technologies (e.g., OpenStack or Kubernetes) controlling the virtual machines or containers programmatically. (2) Platform as a Service (PaaS): cloud providers supply the running environment for applications. A consumer may deploy an application (e.g., a blog or forum) but cannot control the operating system, network or hardware environment. Compared with IaaS, PaaS removes the need to configure complex software such as an operating system, firewall and database; consumers only deploy final applications written in a high-level language (such as Python or Java), and the cloud provider runs the submitted programs on the cloud platform, although the consumer loses control of the underlying software. (3) Software as a Service (SaaS): cloud vendors rent software to consumers as a service; consumers use the programs over the Internet but cannot control the operating system or the running environment.
SaaS vendors provide office software, instant messaging, content management systems, enterprise resource planning and other software as network services; users access the cloud services through clients such as a browser, or through the application programming interfaces (APIs) published by the SaaS vendor. SaaS speeds up software delivery: because the software resides on the cloud rather than on the consumer's computer, the vendor only needs to update the cloud copy to deliver the latest version. According to how the cloud service is deployed, cloud computing is divided into the following deployment models:
(1) Public clouds are typically provided by public cloud computing vendors, who are also responsible for the hardware resources and software architecture on the cloud; consumers purchase computing services from them. Because data from different consumers is stored together at a third-party cloud vendor, public clouds present privacy risks (although they typically do not reveal confidential data), as well as limitations such as uncontrolled availability and limited network bandwidth. (2) A private cloud is a cloud platform erected by the organization (an enterprise, school or government agency) to which the cloud users belong; its hardware resources and data belong to that organization. Private clouds retain the advantages of cloud computing while offering better privacy. (3) A hybrid cloud uses both public and private clouds: key data usually resides in the private cloud and non-key data in the public cloud, reducing the cost of building private-cloud hardware.
Edge computing is a form of computing that places computation close to where requests originate in order to reduce latency. In Internet of Things applications, sensor devices generate large amounts of real-time data; submitting all of it to the cloud would cause great bandwidth overhead and unavoidable delay, failing both economic and application-effect requirements.
Pipeline: like a production line in engineering, a computer pipeline is a set of connected computation stages in which the output of each stage is the input of the next, and the stages execute in parallel.
The throughput of a pipeline is bounded by its slowest stage. Since the stages are unlikely to take exactly the same time, a pipeline slightly increases the latency of a single task; introducing a pipeline is worthwhile as long as the throughput gain outweighs the latency increase.
If one stage is slower than the others, running multiple instances of it simultaneously, by adding hardware units or software threads, prevents that stage from limiting throughput. Increasing the number of execution units, however, does not reduce the latency of a single task.
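A minimal sketch of such a pipeline with a duplicated slow stage, using Python threads and queues (the stage functions and worker counts are illustrative assumptions, not the patent's design):

```python
import queue
import threading

# Two-stage pipeline: stage B is the slow step, so it gets two worker
# threads; throughput is no longer limited to one B computation at a time.
q_ab = queue.Queue()
results = queue.Queue()

def stage_a(items):
    for item in items:
        q_ab.put(item * 2)        # fast stage
    q_ab.put(None)                # one shutdown sentinel per B worker
    q_ab.put(None)

def stage_b():
    while True:
        item = q_ab.get()
        if item is None:
            break
        results.put(item + 1)     # slow stage (imagine heavy work here)

a = threading.Thread(target=stage_a, args=(range(10),))
b_workers = [threading.Thread(target=stage_b) for _ in range(2)]
a.start()
for b in b_workers:
    b.start()
a.join()
for b in b_workers:
    b.join()

out = sorted(results.get() for _ in range(10))
print(out)  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```

Note that each item still passes through both stages sequentially, so its individual latency is unchanged; only the aggregate throughput improves.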
Many computations are involved while an intelligent driving vehicle runs. Some, such as road-condition recognition, should run directly on the vehicle's edge node; but processing the camera's video image pixel data for face recognition, target detection, target classification and the like is inefficient on the edge node, because these workloads demand very high computing power. Placing the computations with high performance requirements on the cloud therefore improves operational efficiency. In this embodiment, the edge node first obtains the video image pixel data shot by the camera, preprocesses it (for example by filtering out invalid data, scaling and compression encoding), and then sends the compression-encoded video image pixel data to the cloud server, which decodes it to obtain image decoding data. The cooperation of edge node and cloud server thus combines the high performance of cloud computing with the low latency and privacy of edge computing, reduces communication bandwidth through lossy image compression and image filtering at the edge, and raises the throughput of the computing tasks through the pipeline mechanism and additional hardware units or software threads, improving operational efficiency.
Exemplary method
This embodiment provides a collaborative video processing method based on edge nodes and the cloud, which can be applied to a server for image processing. As shown in Fig. 1, the method includes:
Step S100: obtaining the compression-encoded video image pixel data produced by the edge node's video image compression and encoding processing;
Specifically, the edge node calls the camera API to obtain video image pixel data, then preprocesses the data, for example by filtering out invalid data, scaling and compression encoding, to obtain the compression-encoded video image pixel data, in preparation for the image computation to be carried out on the cloud server.
The compression-encoded video image pixel data is generated by the edge node as follows: acquiring video image pixel data shot by a camera; filtering out image pixel data in which no object has changed to obtain effective video image pixel data; and performing image scaling and image compression encoding on the effective video image pixel data.
In this embodiment, a camera on an intelligent driving vehicle, for example, shoots video image pixel data in real time, and the edge node acquires it. Since some of this data is invalid for the computation, uploading it to the cloud server would only consume bandwidth. The edge node therefore performs image processing locally: traditional machine learning or a small-scale CNN filters out image data that contains no effective content or in which no object has changed, leaving the effective video image pixel data, so that the images the edge node sends to the cloud server contain only effective images and the cooperation bandwidth between the edge node and the cloud server is reduced. In addition, the edge node preprocesses the effective video image pixel data, for example by image scaling and image compression encoding, to obtain the compression-encoded video image pixel data. Image scaling changes the resolution of the image data; image compression further compresses the image data into a file suitable for network transmission; and video compression reduces the network traffic further still by encoding only the differences between frames, again lowering the cooperation bandwidth between the edge node and the cloud server.
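The "no object change" filtering above can be sketched as keeping only frames whose mean absolute pixel difference from the previous kept frame exceeds a threshold. The function name and threshold value are illustrative assumptions, not from the patent, which leaves the filter (traditional machine learning or a small CNN) unspecified:

```python
import numpy as np

def filter_static_frames(frames, threshold=2.0):
    """Keep only frames in which the scene has visibly changed.

    frames: iterable of equal-sized uint8 arrays, (H, W) or (H, W, C).
    threshold: mean absolute per-pixel difference below which a frame
    is treated as "no object change" and dropped.
    """
    effective = []
    prev = None
    for frame in frames:
        f = frame.astype(np.float32)
        if prev is None or np.mean(np.abs(f - prev)) > threshold:
            effective.append(frame)  # effective video image pixel data
            prev = f
    return effective

# Example: three identical frames followed by one changed frame.
static = np.zeros((4, 4), dtype=np.uint8)
changed = np.full((4, 4), 50, dtype=np.uint8)
kept = filter_static_frames([static, static, static, changed])
```

Only the first frame and the changed frame survive, so only they would be scaled, encoded, and uploaded.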
Further, filtering out the image pixel data in which no object has changed may be accompanied by operations whose image input and image output are video image pixel data of the same size, for example gamma correction, sharpening, and fisheye correction. Gamma correction, also called gamma nonlinearity or gamma encoding, performs a nonlinear operation, or its inverse, on the luminance or tristimulus values of light in a film or imaging system. Sharpening brings blurred edges into focus, improving the definition of part of the image and making the colors of a specific area clearer. Fisheye correction corrects photographs taken with a fisheye lens.
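Gamma correction of the kind just mentioned is a per-pixel power-law mapping whose input and output images have the same size. The lookup-table form below is a common way to implement it; the function name and default gamma are illustrative, not the patent's:

```python
import numpy as np

def gamma_correct(image, gamma=2.2):
    """Gamma-encode an 8-bit image: out = 255 * (in / 255) ** (1 / gamma).

    The input and output arrays have the same shape, matching the
    same-size image-in/image-out preprocessing operations described.
    """
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[image]  # vectorized lookup, one table entry per pixel value

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = gamma_correct(img)
```

Mid-tones are lifted (128 maps to roughly 186 at gamma 2.2) while black and white are preserved.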
The image scaling and image compression encoding of the effective video image pixel data to obtain video image pixel data comprise the following steps: merging adjacent pixels of the effective video image pixel data to obtain scaled video image pixel data; and binary-encoding the scaled video image pixel data according to its coding redundancy and pixel redundancy to obtain the video image pixel data.
Specifically, the edge node merges adjacent pixels of the effective video image pixel data to obtain scaled video image pixel data. For example, directly merging adjacent pixels reduces the data size, lowering the image resolution from the output resolution of the camera sensor to the input resolution of the CNN (convolutional neural network); because the CNN input resolution is much lower, the amount of transmitted data drops greatly. The scaled video image pixel data is then binary-encoded according to its coding redundancy and pixel redundancy to obtain the video image pixel data. Lossy coding exploits the characteristics of human vision: even when some information is discarded, the loss is difficult for the human eye to perceive. Common lossless encodings include PNG and WebP; common lossy encodings include JPEG, WebP, and HEIF. Video coding encodes the pixel data of an image sequence into binary data and exploits the relationships between adjacent images to achieve higher compression ratios, at the cost of higher encoding complexity.
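The adjacent-pixel merging step can be illustrated as block binning: each 2x2 block of pixels is averaged into one output pixel, halving each dimension. The factor of 2 and the function name are assumptions for the sketch; the patent does not fix a merging factor:

```python
import numpy as np

def bin_pixels(image, factor=2):
    """Merge adjacent factor x factor pixel blocks by averaging,
    reducing the image from sensor resolution toward the CNN input
    resolution. Height and width must be divisible by factor."""
    h, w = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor)
    # Average over the two intra-block axes, keep the block grid.
    return blocks.mean(axis=(1, 3)).astype(image.dtype)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
small = bin_pixels(img)  # 4x4 -> 2x2, a 4x reduction in pixel count
```

A factor-2 merge already cuts the transmitted pixel count by 4x before any compression encoding is applied.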
In addition, in order to improve the efficiency of image scaling and image compression encoding, the image scaling and image compression encoding of the effective video image pixel data further include the following operation: when performing the image scaling and image compression encoding, adding an image processing hardware unit or an image processing software thread to increase the speed of image scaling and image compression encoding.
In this embodiment, when the edge node performs image scaling and image compression encoding on the effective video image pixel data, the computation can be accelerated by adding hardware or software, that is, an image processing hardware unit or an image processing software thread. In practice, many SoCs have dedicated components for image processing tasks, such as ISPs (image signal processors), VPUs (video processing units), NPUs (neural network processing units), and hardware codecs. These image processing hardware units avoid implementing the corresponding subtasks in CPU software, reducing both processing time and CPU occupancy. Combining hardware acceleration and offloading with the pipeline ensures that the hardware units process tasks at higher throughput. If an image processing hardware unit cannot handle part of the tasks because of parameter or quantity limits, for example the input resolution is too high for the SoC (system-on-chip) to support, or the hardware unit supports only two video channels and cannot process three, it can be combined with an image processing software thread, such as CPU software, so that the software thread and the hardware unit process the tasks jointly.
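A minimal sketch of the pipeline mechanism, using software threads and a queue in place of dedicated hardware units (an assumption for illustration): one stage scales frames while the next stage encodes them, so the stages overlap in time and throughput rises. All names and the toy stage functions are illustrative:

```python
import queue
import threading

def run_pipeline(frames, scale, encode):
    """Two-stage software pipeline: scale in one thread, encode in another."""
    q = queue.Queue(maxsize=4)  # bounded buffer between the stages
    results = []

    def stage_scale():
        for f in frames:
            q.put(scale(f))
        q.put(None)  # sentinel: no more frames

    def stage_encode():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(encode(item))

    t1 = threading.Thread(target=stage_scale)
    t2 = threading.Thread(target=stage_encode)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

# Toy stand-ins for image scaling and compression encoding.
out = run_pipeline([1, 2, 3], scale=lambda f: f * 10, encode=lambda f: f + 1)
```

In a real system each stage would instead be an ISP, VPU, or codec hardware unit, or a CPU software thread covering the cases the hardware cannot.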
Step S200, performing video image decoding on the video image pixel data to obtain image decoding data;
Specifically, video image decoding of the video image pixel data is the inverse process of encoding it; it mainly recovers the video image pixel data as it was before encoding, in preparation for the subsequent visual feature analysis training based on the convolutional neural network model.
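That decoding inverts encoding can be illustrated with a toy run-length codec standing in for the JPEG or video codecs above (which the patent does not pin down; note this toy codec is lossless, whereas the codecs discussed earlier may be lossy):

```python
def rle_encode(pixels):
    """Toy run-length 'compression encoding' of a pixel sequence."""
    out = []
    for p in pixels:
        if out and out[-1][0] == p:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([p, 1])  # start a new (value, count) run
    return out

def rle_decode(encoded):
    """Inverse process: recover the pixel data exactly as before encoding."""
    pixels = []
    for value, count in encoded:
        pixels.extend([value] * count)
    return pixels

data = [7, 7, 7, 0, 0, 255]
encoded = rle_encode(data)   # [[7, 3], [0, 2], [255, 1]]
decoded = rle_decode(encoded)
```

The cloud server's decoder plays the role of `rle_decode` here: it reconstructs the pixel data the CNN stage consumes.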
Step S300, performing visual feature analysis training based on a convolutional neural network model on the image decoding data to obtain video image visual feature analysis data.
Specifically, the convolutional neural network (CNN) is a type of artificial neural network; its weight-sharing network structure markedly reduces the complexity of the model and the number of weights. A convolutional neural network can take the picture directly as the network input and extract features automatically, and it is highly invariant to deformations of the picture such as translation, scaling, and tilting. It is the neural network commonly used for visual analysis and is widely applied in face recognition, target detection, target classification, natural language processing, medicine, and other fields. In this embodiment, in order to process the image decoding data efficiently, visual feature analysis training based on a convolutional neural network model is performed on it to obtain the video image visual feature analysis data.
In order to obtain video image visual feature analysis data, the image decoding data is subjected to visual feature analysis training based on a convolutional neural network model, and the video image visual feature analysis data is obtained, which comprises the following steps:
Step S301, inputting the image decoding data into a convolutional neural network model to obtain video image visual feature analysis data.
Specifically, the cloud server inputs the image decoding data into the convolutional neural network model. The weight-sharing network structure of the model reduces its complexity and the number of weights; in addition, the convolutional neural network can take the picture directly as the network input, extract features automatically, and remain highly invariant to deformations of the picture such as translation, scaling, and tilting. Running the convolutional neural network on the cloud server yields high-quality image data and improves the computation efficiency.
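A toy forward pass sketching how a convolutional layer takes the picture directly as input and extracts features through a single shared-weight kernel. This pure-NumPy loop is for illustration only; the kernel and sizes are assumptions, not the patent's model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) with one shared kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same kernel weights are reused at every position:
            # this weight sharing is what keeps the parameter count low.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel applied to a half-dark, half-bright image.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
edge = np.array([[-1.0, 1.0]])
feat = conv2d(img, edge)  # responds only where brightness changes
```

The feature map fires on the dark-to-bright boundary and is zero elsewhere, a miniature of the automatic feature extraction described above.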
In addition, if the cloud server needs a higher computation rate during the visual feature analysis training based on the convolutional neural network model, the number of image processing hardware units or image processing software threads can be increased so that multiple instances execute simultaneously, preventing this stage of the pipeline from dragging down throughput. That is, the number of execution units for the stage can be increased without changing the latency of a single task, and the throughput bottleneck imposed by the CPU can be eliminated by adding an image processing hardware unit, an image processing software thread, or both, to execute the visual feature analysis training.
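The throughput argument can be illustrated by running several analysis instances concurrently; here a thread pool stands in for the extra hardware units or software threads, and the `analyze` function is a toy stand-in for one CNN inference task (all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(frame):
    """Stand-in for one visual-feature-analysis task on one frame."""
    return sum(frame)  # toy 'feature'

frames = [[1, 2], [3, 4], [5, 6], [7, 8]]

# More workers means more simultaneously executing instances: the
# latency of any single task is unchanged, but overall throughput rises.
with ThreadPoolExecutor(max_workers=4) as pool:
    features = list(pool.map(analyze, frames))
```

`pool.map` preserves input order, so the results line up with the incoming frame sequence even though the instances ran concurrently.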
In order to generate the convolutional neural network model, the convolutional neural network model is generated by the following steps: acquiring input sample data; inputting the input sample data into a modeling model to obtain modeling model output data; re-inputting the modeling model output data to the modeling model for training iteration; repeating the step of inputting the modeling model output data into the modeling model again for training iteration until the modeling model output data meets the preset requirement, stopping training iteration, and obtaining the convolutional neural network model.
Specifically, real-world image input sample data is obtained and input into a modeling model, which outputs modeling model output data; the modeling model output data is input into the modeling model again for a further training iteration, and these steps are repeated until the modeling model output data meets the preset requirement, that is, until the mean squared error between the modeling model output data and the actual sample output data is smaller than a preset value. The modeling model has then been trained successfully, the training iteration stops, and the convolutional neural network model is obtained; the generated model can perform visual feature analysis training on image decoding data.
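The training iteration can be sketched as a loop that stops once the mean squared error against the sample outputs falls below the preset value. The one-weight linear "model" below is a deliberately tiny stand-in for the CNN, and every name and constant is illustrative:

```python
import numpy as np

def train_until_converged(x, y, lr=0.1, mse_limit=1e-4, max_iters=10000):
    """Run gradient-descent iterations until MSE(model(x), y) < mse_limit."""
    w = 0.0  # single weight standing in for the CNN's parameters
    mse = float("inf")
    for _ in range(max_iters):
        pred = w * x                           # modeling model output data
        mse = float(np.mean((pred - y) ** 2))  # compare with sample outputs
        if mse < mse_limit:                    # preset requirement met
            break
        w -= lr * float(np.mean(2 * (pred - y) * x))  # training iteration
    return w, mse

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x  # sample outputs generated by a 'true' weight of 2.0
w, mse = train_until_converged(x, y)
```

The loop recovers the generating weight and stops as soon as the MSE drops below the preset value, mirroring the stopping condition in the paragraph above.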
Exemplary apparatus
As shown in fig. 2, an embodiment of the present invention provides a collaborative video processing device based on an edge node and a cloud, the device includes a video image pixel data obtaining unit 401, an image decoding data obtaining unit 402, and a video image visual feature analysis data obtaining unit 403, wherein:
A video image pixel data obtaining unit 401, configured to obtain video image pixel data after video image processing by the edge node;
an image decoding data obtaining unit 402, configured to perform video image decoding on the video image pixel data to obtain image decoding data;
The video image visual feature analysis data obtaining unit 403 is configured to perform visual feature analysis training based on a convolutional neural network model on the image decoding data, so as to obtain video image visual feature analysis data.
Based on the above embodiment, the present invention also provides a server, whose functional block diagram may be as shown in fig. 3. The server comprises a processor, a memory, a network interface, a display screen, and a temperature sensor connected through a system bus. The processor of the server provides computing and control capabilities. The memory of the server includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The network interface of the server communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the edge node and cloud collaborative video processing method. The display screen of the server may be a liquid crystal display or an electronic ink display, and the temperature sensor is preset in the server to detect the operating temperature of the internal equipment.
It will be appreciated by those skilled in the art that the schematic diagram of fig. 3 is merely a block diagram of some of the structures associated with the present invention and is not limiting of the servers to which the present invention may be applied, and that a particular server may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a server is provided that includes a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
acquiring video image pixel data after video image processing by an edge node;
performing video image decoding on the video image pixel data to obtain image decoding data;
And performing visual feature analysis training based on a convolutional neural network model on the image decoding data to obtain video image visual feature analysis data.
In summary, the invention discloses a method, a device and a server for processing collaborative video based on edge nodes and cloud, wherein the method comprises the following steps:
according to the embodiment of the invention, the edge node first obtains the video image pixel data shot by the camera and performs video image compression encoding on it to obtain compression-encoded video image pixel data, which the edge node sends to the cloud server; the cloud server performs video image decoding on the compression-encoded video image pixel data to obtain image decoding data, and finally performs visual feature analysis training on the image decoding data based on a convolutional neural network model to obtain video image visual feature analysis data. In this method, the edge node and the cloud server cooperate to process video data: tasks with low computing-power requirements run on the edge node and tasks with high computing-power requirements run on the cloud server, combining the high performance of cloud computation with the low latency of edge computation; communication bandwidth is reduced through lossy image compression and image filtering at the edge node, and a pipeline mechanism, together with added hardware units or software threads, raises the throughput of computing tasks, thereby improving operation efficiency.
It should be understood that the application of the present invention is not limited to the above examples; those skilled in the art can make improvements or changes in light of the above description, and all such improvements and changes should fall within the protection scope of the appended claims.
Claims (7)
1. An edge node and cloud collaborative video processing method is characterized by comprising the following steps:
acquiring video image pixel data after video image processing by an edge node;
the video image pixel data generation mode of the edge node after video image processing is as follows:
Acquiring video image pixel data shot by a camera;
filtering image pixel data without change of an object in the video image pixel data to obtain effective video image pixel data;
performing image scaling and image compression coding on the effective video image pixel data to obtain video image pixel data;
the filtering of the image pixel data without change of the object in the video image pixel data further comprises the following step:
Gamma correction, sharpening and fisheye correction are carried out on the effective video image pixel data;
The image scaling and image compression coding are carried out on the effective video image pixel data, and the video image pixel data obtaining comprises the following steps:
Adjacent pixel combination is carried out on the effective video image pixel data to obtain scaled video image pixel data;
Binary encoding is carried out on the scaled video image pixel data according to the encoding redundancy and the pixel redundancy among the scaled video image pixel data to obtain video image pixel data;
performing video image decoding on the video image pixel data to obtain image decoding data;
Performing visual feature analysis training based on a convolutional neural network model on the image decoding data to obtain video image visual feature analysis data; the convolutional neural network directly takes an image as the input of the network, automatically extracts features, and is highly invariant to deformations of the image.
2. The method for collaborative video processing based on an edge node and a cloud as claimed in claim 1, wherein said performing image scaling and image compression encoding on the effective video image pixel data to obtain video image pixel data further comprises:
and when the effective video image pixel data is subjected to image scaling and image compression coding, an image processing hardware unit or an image processing software thread is added to improve the speed of image scaling and image compression coding.
3. The edge node and cloud collaborative video processing method according to claim 2, wherein the performing visual feature analysis training on the image decoding data based on a convolutional neural network model to obtain video image visual feature analysis data includes:
And inputting the image decoding data into a convolutional neural network model to obtain video image visual characteristic analysis data.
4. The method for collaborative video processing based on edge nodes and cloud computing according to claim 3, wherein the training the image decoding data for visual feature analysis based on a convolutional neural network model to obtain video image visual feature analysis data further comprises:
and when the image decoding data is subjected to visual feature analysis training based on a convolutional neural network model, an image processing hardware unit or an image processing software thread is added to increase the rate of the visual feature analysis training.
5. The method for processing the video based on the edge node and the cloud cooperation as claimed in claim 4, wherein the convolutional neural network model is generated by:
Acquiring input sample data;
inputting the input sample data into a modeling model to obtain modeling model output data;
Re-inputting the modeling model output data to the modeling model for training iteration;
Repeating the step of inputting the modeling model output data into the modeling model again for training iteration until the modeling model output data meets the preset requirement, stopping training iteration, and obtaining the convolutional neural network model.
6. An edge node and cloud-based collaborative video processing device, wherein the device comprises:
the video image pixel data acquisition unit is used for acquiring video image pixel data after video image processing by the edge node;
the video image pixel data generation mode of the edge node after video image processing is as follows:
Acquiring video image pixel data shot by a camera;
filtering image pixel data without change of an object in the video image pixel data to obtain effective video image pixel data;
performing image scaling and image compression coding on the effective video image pixel data to obtain video image pixel data;
the filtering of the image pixel data without change of the object in the video image pixel data further comprises the following step:
Gamma correction, sharpening and fisheye correction are carried out on the effective video image pixel data;
The image scaling and image compression coding are carried out on the effective video image pixel data, and the video image pixel data obtaining comprises the following steps:
Adjacent pixel combination is carried out on the effective video image pixel data to obtain scaled video image pixel data;
Binary encoding is carried out on the scaled video image pixel data according to the encoding redundancy and the pixel redundancy among the scaled video image pixel data to obtain video image pixel data;
an image decoding data obtaining unit, configured to perform video image decoding on the video image pixel data to obtain image decoding data;
the video image visual feature analysis data acquisition unit is used for performing visual feature analysis training based on a convolutional neural network model on the image decoding data to obtain video image visual feature analysis data; the convolutional neural network directly takes an image as the input of the network, automatically extracts features, and is highly invariant to deformations of the image.
7. A server comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured for execution by one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011466467.7A CN112580481B (en) | 2020-12-14 | 2020-12-14 | Edge node and cloud collaborative video processing method, device and server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011466467.7A CN112580481B (en) | 2020-12-14 | 2020-12-14 | Edge node and cloud collaborative video processing method, device and server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112580481A CN112580481A (en) | 2021-03-30 |
CN112580481B true CN112580481B (en) | 2024-05-28 |
Family
ID=75134775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011466467.7A Active CN112580481B (en) | 2020-12-14 | 2020-12-14 | Edge node and cloud collaborative video processing method, device and server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112580481B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113104046B (en) * | 2021-04-28 | 2022-11-11 | 中国第一汽车股份有限公司 | Door opening early warning method and device based on cloud server |
WO2022243735A1 (en) * | 2021-05-21 | 2022-11-24 | Sensetime International Pte. Ltd. | Edge computing-based control method and apparatus, edge device and storage medium |
CN114201289A (en) * | 2021-10-27 | 2022-03-18 | 山东师范大学 | Target detection method and system based on edge computing node and cloud server |
CN114741198B (en) * | 2022-04-19 | 2023-12-15 | 中国电信股份有限公司 | Video stream processing method and device, electronic equipment and computer readable medium |
CN117173711A (en) * | 2023-08-18 | 2023-12-05 | 安徽工程大学产业创新技术研究有限公司 | Automobile tire parameter identification and detection method and service platform |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106982359A (en) * | 2017-04-26 | 2017-07-25 | 深圳先进技术研究院 | A kind of binocular video monitoring method, system and computer-readable recording medium |
CN108012121A (en) * | 2017-12-14 | 2018-05-08 | 安徽大学 | A kind of edge calculations and the real-time video monitoring method and system of cloud computing fusion |
CN108900801A (en) * | 2018-06-29 | 2018-11-27 | 深圳市九洲电器有限公司 | A kind of video monitoring method based on artificial intelligence, system and Cloud Server |
CN109543829A (en) * | 2018-10-15 | 2019-03-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Method and system for hybrid deployment of deep learning neural network on terminal and cloud |
CN109614882A (en) * | 2018-11-19 | 2019-04-12 | 浙江大学 | A kind of act of violence detection system and method based on human body attitude estimation |
CN111047818A (en) * | 2019-11-01 | 2020-04-21 | 浙江省林业技术推广总站(浙江省林业信息宣传中心) | Forest fire early warning system based on video image |
CN111182263A (en) * | 2019-12-06 | 2020-05-19 | 杭州乔戈里科技有限公司 | Robot system with cloud analysis platform and visual analysis method |
CN111510774A (en) * | 2020-04-21 | 2020-08-07 | 济南浪潮高新科技投资发展有限公司 | Intelligent terminal image compression algorithm combining edge calculation and deep learning |
CN111669587A (en) * | 2020-04-17 | 2020-09-15 | 北京大学 | Mimic compression method and device of video image, storage medium and terminal |
CN111901573A (en) * | 2020-08-17 | 2020-11-06 | 泽达易盛(天津)科技股份有限公司 | Fine granularity real-time supervision system based on edge calculation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138963A (en) * | 2015-07-31 | 2015-12-09 | 小米科技有限责任公司 | Picture scene judging method, picture scene judging device and server |
- 2020-12-14 CN CN202011466467.7A patent/CN112580481B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106982359A (en) * | 2017-04-26 | 2017-07-25 | 深圳先进技术研究院 | A kind of binocular video monitoring method, system and computer-readable recording medium |
CN108012121A (en) * | 2017-12-14 | 2018-05-08 | 安徽大学 | A kind of edge calculations and the real-time video monitoring method and system of cloud computing fusion |
CN108900801A (en) * | 2018-06-29 | 2018-11-27 | 深圳市九洲电器有限公司 | A kind of video monitoring method based on artificial intelligence, system and Cloud Server |
CN109543829A (en) * | 2018-10-15 | 2019-03-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Method and system for hybrid deployment of deep learning neural network on terminal and cloud |
CN109614882A (en) * | 2018-11-19 | 2019-04-12 | 浙江大学 | A kind of act of violence detection system and method based on human body attitude estimation |
CN111047818A (en) * | 2019-11-01 | 2020-04-21 | 浙江省林业技术推广总站(浙江省林业信息宣传中心) | Forest fire early warning system based on video image |
CN111182263A (en) * | 2019-12-06 | 2020-05-19 | 杭州乔戈里科技有限公司 | Robot system with cloud analysis platform and visual analysis method |
CN111669587A (en) * | 2020-04-17 | 2020-09-15 | 北京大学 | Mimic compression method and device of video image, storage medium and terminal |
CN111510774A (en) * | 2020-04-21 | 2020-08-07 | 济南浪潮高新科技投资发展有限公司 | Intelligent terminal image compression algorithm combining edge calculation and deep learning |
CN111901573A (en) * | 2020-08-17 | 2020-11-06 | 泽达易盛(天津)科技股份有限公司 | Fine granularity real-time supervision system based on edge calculation |
Non-Patent Citations (4)
Title |
---|
Design of an intelligent assisted driving system based on deep learning; Lin Fuchun et al.; Journal of Guizhou University (Natural Science Edition); Vol. 35 (No. 01); pp. 73-77 *
A video surveillance framework based on edge computing; Ge Chang et al.; Computer Engineering and Design; Vol. 40 (No. 1); pp. 32-39 *
A video surveillance system based on edge computing and its applications; Pan Sanming et al.; Telecommunications Science (No. 06); pp. 64-69 *
Intelligent edge computing technology based on federated learning for video surveillance; Zhao Yu et al.; Journal on Communications; Vol. 41 (No. 10); pp. 109-115 *
Also Published As
Publication number | Publication date |
---|---|
CN112580481A (en) | 2021-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112580481B (en) | Edge node and cloud collaborative video processing method, device and server | |
Emmons et al. | Cracking open the dnn black-box: Video analytics with dnns across the camera-cloud boundary | |
WO2020185234A1 (en) | Preprocessing sensor data for machine learning | |
CN110612538A (en) | Generating discrete potential representations of input data items | |
JP2021511579A (en) | Image processing system and image processing method | |
US20190164050A1 (en) | Compression of fully connected / recurrent layers of deep network(s) through enforcing spatial locality to weight matrices and effecting frequency compression | |
CN108235116B (en) | Feature propagation method and apparatus, electronic device, and medium | |
JP2023512570A (en) | Image processing method and related device | |
CN113449839A (en) | Distributed training method, gradient communication device and computing equipment | |
CN111898484A (en) | Method and device for generating model, readable storage medium and electronic equipment | |
US20220319157A1 (en) | Temporal augmentation for training video reasoning system | |
CN108491890B (en) | Image method and device | |
WO2022115236A1 (en) | Self-optimizing video analytics pipelines | |
Mittal et al. | Accelerated computer vision inference with AI on the edge | |
Qian et al. | OsmoticGate: Adaptive edge-based real-time video analytics for the Internet of Things | |
US20220327663A1 (en) | Video Super-Resolution using Deep Neural Networks | |
CN116630514A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
WO2022100140A1 (en) | Compression encoding method and apparatus, and decompression method and apparatus | |
US20210027064A1 (en) | Parallel video processing neural networks | |
DE102023129956A1 (en) | WORKLOAD RESOURCE FORECASTING | |
CN116246026B (en) | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device | |
CN113177483A (en) | Video object segmentation method, device, equipment and storage medium | |
CN111340146A (en) | Method for accelerating video recovery task through shared feature extraction network | |
CN116757962A (en) | Image denoising method and device | |
WO2023124461A1 (en) | Video coding/decoding method and apparatus for machine vision task, device, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |