CN112019892A - Behavior identification method, device and system for separating client and server - Google Patents

Behavior identification method, device and system for separating client and server

Info

Publication number
CN112019892A
CN112019892A
Authority
CN
China
Prior art keywords
frame
convex hull
motion
map
frame difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010718353.0A
Other languages
Chinese (zh)
Inventor
曹家豪
朱松
武庆三
潘鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wantong Technology Co ltd
Original Assignee
Shenzhen Wantong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wantong Technology Co ltd
Priority to CN202010718353.0A
Publication of CN112019892A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

A behavior identification method, device and system for separating a client and a server are provided. The method comprises the following steps: a frame difference calculation step: acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map; a convex hull calculation step: obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k; a caching step: caching the k historical convex hull features in the history list as the features; an analysis step: training, classifying and analyzing according to the k historical convex hull features. The application provides a framework for the behavior recognition module that improves user experience and cost performance at the system level.

Description

Behavior identification method, device and system for separating client and server
Technical Field
The invention relates to the technical field of data analysis, and in particular to a machine-vision-based behavior recognition method, device and system with client-server separation.
Background
The input of the behavior recognition module is continuous video frames, and the output is discrete information or images. This transformation refines dynamic, high-dimensional, redundant video information into static, low-dimensional, key numerical and image information. Ideally, the extracted key information is a good representation of the original continuous video data. The refinement can be viewed as a kind of information compression, and besides the compression rate, another important evaluation criterion is fidelity: a high-fidelity representation minimizes the information distortion caused by compression. The following two main classes of methods are commonly used to extract key video information and frames:
(1) Methods using global information:
① Transmitting all images: the video captured by the camera is streamed directly to the intention recognition module, which analyzes the video frames in real time and detects the user's intent. This is equivalent to using the lossless data directly, without compression.
② Uploading video frames at intervals: the behavior recognition module filters the video frames and uploads one frame to the intention recognition module every t frames. This reduces the upload volume to 1/t of real-time streaming; the larger t is, the smaller the upload volume and the greater the information distortion.
③ Uploading frame-difference static frames: moving pictures are filtered out and only static pictures are uploaded. Motion versus stillness is judged from statistics of the pixels in the frame difference image (e.g. the mean, or a count of bright points after binarization). This method assumes that every behavior passes through a static state.
④ Global optical flow: dense optical flow is computed over the whole image to judge changes in object shape and position.
(2) Methods using local information:
⑤ Local optical flow: key points are extracted and sparse optical flow is computed.
⑥ Key point and feature extraction: recognition uses geometric and feature information.
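As a concrete illustration of the frame-difference static/moving decision described above (binarize the frame difference, then count bright points), the following sketch makes the judgment in pure NumPy. The threshold values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def is_static(curr, prev, bright_thresh=30, max_bright_ratio=0.01):
    """Judge a frame 'static' if few frame-difference pixels exceed the
    brightness threshold (illustrative thresholds, not from the patent)."""
    # cast to a signed type so uint8 subtraction cannot wrap around
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    bright = (diff > bright_thresh).sum()
    return bool(bright / diff.size < max_bright_ratio)

frame = np.zeros((64, 64), np.uint8)
moved = frame.copy(); moved[10:30, 10:30] = 200  # a bright region appears
print(is_static(frame, frame), is_static(moved, frame))  # → True False
```

An identical pair of frames yields an all-zero difference and is judged static; the frame with a new bright region changes roughly 10% of its pixels and is judged moving.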
However, each of these extraction methods has drawbacks:
Method ①: provides the most complete information to the intention recognition module, but imposes the following conditions on the client and server:
1. Sufficient bandwidth between client and server.
2. Client performance adequate for uninterrupted uploading.
3. A server supporting both high throughput and low latency.
Conditions 1 and 2 exclude low-performance devices and raise the barrier for users, while condition 3 increases the service provider's operating cost and makes rapidly growing user demand hard to support.
Method ②: uploading every t frames reduces the upload volume to 1/t of real-time streaming. Its advantage is low performance requirements on the client device; its disadvantage is large information loss — if an action is short, the interval upload may miss its start and end.
Method ③: a static frame may not exist, or may be inaccurate, during continuous motion.
Methods ④–⑥: high computing resource requirements; real-time performance cannot be achieved on low-power user devices.
Disclosure of Invention
The application provides a method, a device and a system for identifying a client-server separation behavior based on machine vision.
According to a first aspect, an embodiment provides a machine vision-based client-server separation behavior recognition method, which includes:
a frame difference calculation step: acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation step: obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k;
a caching step: caching the k historical convex hull features in the history list as the features;
an analysis step: training, classifying and analyzing according to the k historical convex hull features.
According to a second aspect, an embodiment provides a machine vision-based client-server separation behavior recognition apparatus, including:
a frame difference calculation module: for acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation module: for obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k;
a cache module: for caching the k historical convex hull features in the history list as the features;
an analysis module: for training, classifying and analyzing according to the k historical convex hull features.
According to a third aspect, an embodiment provides a machine vision-based client-server separation behavior recognition system, including:
a memory for storing a program;
a processor for implementing the method as described in the first aspect by executing the program stored by the memory.
According to a fourth aspect, an embodiment provides a computer readable storage medium comprising a program executable by a processor to implement the method according to the first aspect.
According to the embodiment, the motion history map is used for coding the time sequence information, the main body motion is obtained by calculating the convex hull, and only the key image is transmitted, so that dense uploading is changed into sparse uploading, algorithm filtering is realized, and noise influence is reduced; compared with real-time video stream uploading, the method greatly reduces the consumption of network bandwidth and server computing resources; compared with the uploading of video frames at intervals, the method has smaller delay; short-time behaviors can be detected, and unconscious behaviors of the user can be filtered; compared with frame difference static frame uploading, the method is more robust to the interference of external environment (such as shadow) and can process dynamic action; compared with a global optical flow method, a local optical flow method and key point + feature extraction, the method has low computational power requirement and real-time performance.
Drawings
FIG. 1 is a flow chart of a method for identifying client-server separation behavior based on machine vision;
fig. 2 is a schematic diagram illustrating an implementation of the identification method for client-server separation based on machine vision according to an embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, wherein like elements in different embodiments share like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials or methods, in different instances. In some instances, certain operations related to the present application are not shown or described in detail, to avoid obscuring the core of the present application with excessive description; a detailed description of these operations is unnecessary for those skilled in the art, who can fully understand them from the specification and general knowledge in the art.
Furthermore, the features, operations and characteristics described in the specification may be combined in any suitable manner to form various embodiments, and the steps or actions in the method descriptions may be reordered or transposed in ways obvious to those skilled in the art. Therefore, the various sequences in the specification and drawings serve only to describe particular embodiments and do not imply a required order, unless it is otherwise stated that a certain sequence must be followed.
In a vision-based intelligent human-computer interaction scene, a camera captures user actions which represent specific behaviors, and a computer needs to know the intention of a user according to the user behaviors and make corresponding operations. In the technical implementation of the interaction, the two most important modules are behavior recognition and intent recognition. Behavior recognition is to analyze video data to know what a user is doing; intent recognition is then based on user behavior, analyzing why the user does so, what the intent is.
The behavior recognition module and the intent recognition module may be abstractly viewed as a client-server mode. And the behavior recognition module initiates a request to the intention recognition module after analyzing the behavior and obtains a return result. The client and server may be integrated together or independent of each other, depending on the available computing resources of the device. For high performance devices, the client and the server may run locally at the same time; for low-performance devices such as embedded devices, the part with lower computational power requirement can be placed on the edge device as a client, and the part with higher computational power requirement can be placed on the cloud server as a server.
This distributed mode naturally introduces end-to-end communication. Communication protocol, delay, efficiency, fidelity and other factors must be considered when designing the system. In a vision-based intelligent human-computer interaction scene, the captured video must be transmitted to the behavior recognition module in real time; the behavior recognition module then decides when results need to be passed on to the intention recognition module. Most existing technical schemes consider only the implementation of the behavior recognition module and neglect the influence of communication on the whole system.
This application provides a framework for the behavior recognition module that improves user experience and cost performance at the system level:
1. Algorithmic filtering greatly reduces the volume of communicated data.
2. Transmitting only key images turns dense uploading into sparse uploading.
3. The server resources that previously served one user can, with the algorithm provided herein, serve many users.
Referring to fig. 1-2, the present application provides a method for identifying a behavior of client-server separation based on machine vision, including:
a frame difference calculation step: acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation step: obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k; specifically, the list before the update is [s_{t−k}, ..., s_{t−1}] and the updated list is [s_{t−k+1}, ..., s_t]. The motion history map encodes the time-sequence information, and computing the convex hull captures the main body's motion, which reduces the influence of noise;
a caching step: caching the k historical convex hull features in the history list as the features;
an analysis step: training, classifying and analyzing according to the k historical convex hull features.
Specifically, let the current image be I_t and the previous frame image be I_{t−1}. The frame difference image D_t is defined as:
D_t = abs(I_t − I_{t−1})
where abs(·) denotes the element-wise absolute value.
In some embodiments, the frame difference image is used directly as the motion energy map: MEI_t = D_t.
Specifically, the motion history map records motion over time in the two-dimensional image. Its update rule can be written as:
MHM_t(x, y) = k if MEI_t(x, y) > T, and max(MHM_{t−1}(x, y) − 1, 0) otherwise,
where the parameter k represents the history length of the motion history map, and the parameter T is the binarization threshold of the motion energy map.
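The update rule can be sketched as follows. The parameter names k and T follow the text; the decrement-by-one decay is the standard motion-history-image formulation and is assumed here, since the original formula image is not reproduced in this record.

```python
import numpy as np

def update_mhm(mhm, mei, k=30, T=25):
    """One motion-history-map update: pixels where the motion energy map
    exceeds the binarization threshold T are reset to the history length k;
    all other pixels decay by 1 toward 0 (assumed standard MHI decay)."""
    moving = mei > T
    return np.where(moving, k, np.maximum(mhm - 1, 0))

mhm = np.zeros((3, 3), np.int32)
mei = np.zeros((3, 3), np.uint8); mei[1, 1] = 200   # motion at the center
mhm = update_mhm(mhm, mei)                 # center reset to k = 30
mhm = update_mhm(mhm, np.zeros_like(mei))  # one step of decay, no motion
print(mhm[1, 1], mhm[0, 0])  # → 29 0
```

A pixel that moved recently thus holds a large value that fades linearly, so the map encodes when, as well as where, motion occurred.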
In some embodiments, the statistic s_t includes: the area, the center of gravity, the shape, the mean, and the mean after binarization with a given threshold.
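Two of the statistics named above (area and center of gravity) can be computed on a convex hull as follows. The monotone-chain hull and the shoelace formulas are standard stand-ins, not the patent's own code; "shape" could be, e.g., an aspect ratio and is omitted here.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain on an (n, 2) point array; returns the hull
    vertices in counter-clockwise order."""
    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(map(tuple, points))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def hull_stats(hull):
    """Shoelace area and polygon centroid of the hull."""
    x, y = hull[:, 0], hull[:, 1]
    cross = x * np.roll(y, -1) - np.roll(x, -1) * y
    area = cross.sum() / 2.0
    cx = ((x + np.roll(x, -1)) * cross).sum() / (6 * area)
    cy = ((y + np.roll(y, -1)) * cross).sum() / (6 * area)
    return area, (cx, cy)

pts = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2]])  # inner point ignored
area, (cx, cy) = hull_stats(convex_hull(pts))
print(area, cx, cy)  # → 16.0 2.0 2.0
```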
In some embodiments, user behavior must be sampled and labelled before training. During training, all the features in the history list are spliced into the feature vector used for training. The training data are obtained by video sampling: video data are collected, frames are extracted, and for each frame the motion history map and the corresponding statistics are computed; the end frame of each behavior is marked, the statistics of the k frames preceding the end frame are used as the features of a training sample, and the sample is labelled with the corresponding behavior class.
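The splicing of the history list into a training vector can be sketched as below. The patent does not name a classifier model, so a nearest-centroid classifier stands in for "training"; the sample data, history length k, and class values are all hypothetical.

```python
import numpy as np
from collections import deque

k = 8                      # history length (illustrative value)
history = deque(maxlen=k)  # the history list of per-frame statistics s_t

def feature_vector(history):
    """Splice all cached statistics into one training/inference vector."""
    return np.concatenate([np.atleast_1d(s) for s in history])

# Hypothetical labelled samples: the k statistics preceding each behavior's
# end frame, labelled with the behavior class (the patent's sampling step).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(10, k)) for c in (0.0, 1.0)])
y = np.repeat([0, 1], 10)

centroids = np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])
def classify(f):  # nearest-centroid stand-in for the trained classifier
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

for s in np.full(k, 1.0):   # a window of statistics resembling class 1
    history.append(s)
print(classify(feature_vector(history)))  # → 1
```

The `deque(maxlen=k)` realizes the caching step: appending s_t automatically evicts s_{t−k}, turning [s_{t−k}, ..., s_{t−1}] into [s_{t−k+1}, ..., s_t].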
In some embodiments, a first classifier (an automatically trained classifier) is obtained after training;
the k historical convex hull features are then classified jointly by a preset second classifier (a manually designed classifier) and the first classifier, combining hand-designed logic with model discrimination;
after classification, the classified data and images are uploaded to a cloud server for further analysis.
Accordingly, the present application provides a client-server separation behavior recognition apparatus based on machine vision, including:
a frame difference calculation module: for acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation module: for obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k;
a cache module: for caching the k historical convex hull features in the history list as the features;
an analysis module: for training, classifying and analyzing according to the k historical convex hull features.
Accordingly, the present application provides a machine vision-based client-server separation behavior recognition system, comprising:
a memory for storing a program;
a processor for implementing the above method by executing the program stored in the memory.
Accordingly, the present application provides a computer-readable storage medium comprising a program executable by a processor to implement the above-described method.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (10)

1. A behavior recognition method based on separation of a client side and a server side of machine vision is characterized by comprising the following steps:
a frame difference calculation step: acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation step: obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k;
a caching step: caching the k historical convex hull features in the history list as the features;
an analysis step: training, classifying and analyzing according to the k historical convex hull features.
2. The method of claim 1,
let the current image be I_t and the previous frame image be I_{t−1}; the frame difference image D_t is defined as:
D_t = abs(I_t − I_{t−1})
wherein abs(·) denotes the element-wise absolute value;
and the frame difference image is used as the motion energy map: MEI_t = D_t.
3. The method of claim 1,
the motion history map records motion over time in the two-dimensional image, and its update rule is:
MHM_t(x, y) = k if MEI_t(x, y) > T, and max(MHM_{t−1}(x, y) − 1, 0) otherwise,
wherein the parameter k represents the history length of the motion history map, and the parameter T is the binarization threshold of the motion energy map.
4. The method of claim 1, wherein the statistic s_t comprises: the area, the center of gravity, the shape, the mean, and the mean after binarization with a given threshold.
5. The method of claim 1, wherein the list before the update is [s_{t−k}, ..., s_{t−1}] and the updated list is [s_{t−k+1}, ..., s_t].
6. The method of claim 1, wherein during training all the features in the history list are spliced into the feature vector used for training; the training data are obtained by video sampling: video data are collected, frames are extracted, the motion history map and corresponding statistics of each frame are computed, the end frame of each behavior is marked, and the statistics of the k frames preceding the end frame are used as the features of a training sample labelled with the corresponding behavior class.
7. The method of claim 6,
obtaining a first classifier after training;
classifying the k historical convex hull features by a preset second classifier together with the first classifier;
and after classification, uploading the classified data and images to a cloud server for further analysis.
8. A client-server separation behavior recognition device based on machine vision, comprising:
a frame difference calculation module: for acquiring a video frame, calculating the frame difference between the current frame and the previous frame to obtain a frame difference image, taking the frame difference image as a motion energy map, and updating a motion history map with the motion energy map;
a convex hull calculation module: for obtaining a convex hull on the motion history map, calculating a statistical value s_t on the convex hull, and adding s_t to a history list of length k;
a cache module: for caching the k historical convex hull features in the history list as the features;
an analysis module: for training, classifying and analyzing according to the k historical convex hull features.
9. A machine vision-based client-server separation behavior recognition system, comprising:
a memory for storing a program;
a processor for implementing the method of any one of claims 1-7 by executing a program stored by the memory.
10. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-7.
CN202010718353.0A 2020-07-23 2020-07-23 Behavior identification method, device and system for separating client and server Pending CN112019892A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010718353.0A CN112019892A (en) 2020-07-23 2020-07-23 Behavior identification method, device and system for separating client and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010718353.0A CN112019892A (en) 2020-07-23 2020-07-23 Behavior identification method, device and system for separating client and server

Publications (1)

Publication Number Publication Date
CN112019892A true CN112019892A (en) 2020-12-01

Family

ID=73499932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010718353.0A Pending CN112019892A (en) 2020-07-23 2020-07-23 Behavior identification method, device and system for separating client and server

Country Status (1)

Country Link
CN (1) CN112019892A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6654483B1 (en) * 1999-12-22 2003-11-25 Intel Corporation Motion detection using normal optical flow
CN102592112A (en) * 2011-12-20 2012-07-18 四川长虹电器股份有限公司 Method for determining gesture moving direction based on hidden Markov model
CN103123726A (en) * 2012-09-07 2013-05-29 佳都新太科技股份有限公司 Target tracking algorithm based on movement behavior analysis
CN104778460A (en) * 2015-04-23 2015-07-15 福州大学 Monocular gesture recognition method under complex background and illumination
CN106875369A (en) * 2017-03-28 2017-06-20 深圳市石代科技有限公司 Real-time dynamic target tracking method and device
CN108647654A (en) * 2018-05-15 2018-10-12 合肥岚钊岚传媒有限公司 The gesture video image identification system and method for view-based access control model
US20190138813A1 (en) * 2016-03-11 2019-05-09 Gracenote, Inc. Digital Video Fingerprinting Using Motion Segmentation
CN110796682A (en) * 2019-09-25 2020-02-14 北京成峰科技有限公司 Detection and identification method and detection and identification system for moving target


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADRIEN ESCANDE et al.: "A Strictly Convex Hull for Computing Proximity Distances With Continuous Gradients", IEEE Transactions on Robotics *
XU Shiyi: "Research on Dynamic Gesture Recognition Based on Computer Vision", China Master's Theses Full-text Database (electronic journal) *

Similar Documents

Publication Publication Date Title
US20210160556A1 (en) Method for enhancing resolution of streaming file
JP2020508010A (en) Image processing and video compression method
CN109151501A (en) A kind of video key frame extracting method, device, terminal device and storage medium
US20120275511A1 (en) System and method for providing content aware video adaptation
CN110334753B (en) Video classification method and device, electronic equipment and storage medium
CN111107395A (en) Video transcoding method, device, server and storage medium
WO2021164216A1 (en) Video coding method and apparatus, and device and medium
CN108154137B (en) Video feature learning method and device, electronic equipment and readable storage medium
US11507324B2 (en) Using feedback for adaptive data compression
JP2009147911A (en) Video data compression preprocessing method, video data compression method employing the same and video data compression system
WO2023082453A1 (en) Image processing method and device
WO2016201683A1 (en) Cloud platform with multi camera synchronization
CN114679607B (en) Video frame rate control method and device, electronic equipment and storage medium
WO2023005740A1 (en) Image encoding, decoding, reconstruction, and analysis methods, system, and electronic device
CN111970509A (en) Video image processing method, device and system
CN112383824A (en) Video advertisement filtering method, device and storage medium
CN113992970A (en) Video data processing method and device, electronic equipment and computer storage medium
Chen et al. Quality-of-content (QoC)-driven rate allocation for video analysis in mobile surveillance networks
US11095901B2 (en) Object manipulation video conference compression
CN114650361A (en) Shooting mode determining method and device, electronic equipment and storage medium
CN114640669A (en) Edge calculation method and device
WO2023088029A1 (en) Cover generation method and apparatus, device, and medium
CN112019892A (en) Behavior identification method, device and system for separating client and server
CN110956093A (en) Big data-based model identification method, device, equipment and medium
Gupta et al. Reconnoitering the Essentials of Image and Video Processing: A Comprehensive Overview

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201201)