CN111400047A - Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation - Google Patents


Info

Publication number
CN111400047A
CN111400047A
Authority
CN
China
Prior art keywords
data
face
monitoring
image data
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010193385.3A
Other languages
Chinese (zh)
Inventor
刘斌
徐军帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Niuli Intelligent Technology Co Ltd
Original Assignee
Qingdao Niuli Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Niuli Intelligent Technology Co Ltd filed Critical Qingdao Niuli Intelligent Technology Co Ltd
Priority to CN202010193385.3A priority Critical patent/CN111400047A/en
Publication of CN111400047A publication Critical patent/CN111400047A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/502 Proximity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for detecting and recognizing human faces from a monitoring video stream through cloud-edge cooperation, which comprises the following steps: S1, scanning for and adding an image acquisition device through an edge device, and pulling the video stream of the image acquisition device; S2, separating the compressed data from the video stream and decoding it to obtain global image frame data; S3, extracting local image frame data from the global image frame data, detecting facial feature points, and generating a data packet containing face position data and face image data; S4, performing face tracking and recognition on the global image frame data according to the data packet, and outputting characteristic face image data; S5, uploading the characteristic face image data to a cloud server; after receiving the characteristic face image data, the cloud server performs comparison and recognition against a cloud database to obtain comparison and recognition data.

Description

Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation
Technical Field
The invention relates to the technical field of video image capture, face detection and recognition and the like, in particular to a method for detecting and recognizing a face from a monitoring video stream through cloud edge cooperation.
Background
Existing surveillance face recognition requires a dedicated face snapshot camera deployed at the front end to detect faces, working together with an NVR (network video recorder) device to perform face comparison and recognition.
Existing surveillance face detection and recognition has several drawbacks: an NVR device and a face snapshot camera must be deployed, which is costly; face comparison and recognition is confined to a single NVR system, so multiple NVRs cannot be aggregated and face information islands form; and ordinary video surveillance cameras cannot be reused for face detection and capture.
Disclosure of Invention
The invention provides a method for detecting and recognizing a face from a monitoring video stream through cloud-edge cooperation. It mainly uses an edge device to perform face detection on ordinary video surveillance cameras, so no dedicated face snapshot camera needs to be deployed; the edge device can process multiple video streams simultaneously, and the cloud performs face comparison and recognition, aggregating face recognition in the cloud and thereby solving the face recognition limitations and face information islands of single NVR (network video recorder) devices.
The method for detecting and identifying the human face from the monitoring video stream through cloud-edge cooperation comprises the following steps: S1, scanning for and adding an image acquisition device through an edge device, and pulling the video stream of the image acquisition device; S2, separating the compressed data from the video stream and decoding it to obtain global image frame data; S3, extracting local image frame data from the global image frame data, detecting facial feature points, and generating a data packet containing face position data and face image data; S4, performing face tracking and recognition on the global image frame data according to the data packet, and outputting characteristic face image data; S5, uploading the characteristic face image data to a cloud server; after receiving the characteristic face image data, the cloud server performs comparison and recognition against a cloud database to obtain comparison and recognition data.
According to the invention, the edge device performs real-time face detection on the streams of ordinary cameras, which reduces transmission cost; faces are aggregated in the cloud for comparison and recognition, which lowers the cost of face detection and recognition in video surveillance and solves the face recognition information island problem of traditional NVRs. No dedicated NVR device or face snapshot camera is needed: face data can be captured from an ordinary video stream, greatly reducing equipment cost.
Further, in order to improve the interoperability and flexibility of the method, in step S1 the edge device scans for and adds image capture devices through the ONVIF protocol, and pulls the video stream of each device through the RTSP protocol. The ONVIF specification defines a model, interfaces, data types, and data interaction patterns for network video, and incorporates several existing standards.
Further, the compressed data is in H.264 format, which offers a low bit rate, high quality, high fault tolerance, and strong network adaptability.
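As an illustration of the separation step, an H.264 elementary stream carries NAL units delimited by Annex B start codes (0x000001 or 0x00000001). The following is a minimal sketch, not part of the patent, of splitting such a byte stream into NAL units before decoding:

```python
def split_nal_units(stream: bytes):
    """Split an H.264 Annex B byte stream into NAL unit payloads.

    NAL units are delimited by 3- or 4-byte start codes
    (0x000001 / 0x00000001); the start codes themselves are dropped.
    """
    units = []
    i, start = 0, None
    n = len(stream)
    while i + 2 < n:
        if stream[i] == 0 and stream[i + 1] == 0 and stream[i + 2] == 1:
            # Found a start code; the 4-byte form has one extra leading zero.
            code_start = i - 1 if (i > 0 and stream[i - 1] == 0) else i
            if start is not None:
                units.append(stream[start:code_start])
            i += 3
            start = i
        else:
            i += 1
    if start is not None:
        units.append(stream[start:])  # final NAL unit runs to end of stream
    return units
```

In practice a decoder library performs this separation internally; the sketch only shows the framing the patent's "separating the compressed data" step refers to.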
Further, for convenience of real-time application, the compressed data is decoded with OpenCV to obtain the global image frame data.
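A minimal sketch of pulling an RTSP stream and decoding it into global image frames with OpenCV might look as follows; the URL layout, credentials, and helper names are illustrative assumptions, not part of the patent (OpenCV is imported lazily so the sketch loads without it installed):

```python
def rtsp_url(host, user="admin", password="admin", port=554, path="stream1"):
    """Build an RTSP URL for a camera (hypothetical path layout)."""
    return f"rtsp://{user}:{password}@{host}:{port}/{path}"

def decode_global_frames(url, max_frames=None):
    """Pull an RTSP stream and decode it into BGR frames with OpenCV.

    cv2.VideoCapture separates and decodes the compressed (e.g. H.264)
    data internally, yielding one numpy array per global image frame.
    """
    import cv2  # lazy import: keeps the sketch loadable without OpenCV
    cap = cv2.VideoCapture(url)
    try:
        count = 0
        while max_frames is None or count < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame  # global image frame data
            count += 1
    finally:
        cap.release()
```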
Further, in order to improve compatibility, the facial feature point detection in step S3 is performed by a Dlib face detection module.
Further, the specific step of extracting the local image frame data from the global image frame data in step S3 is:
randomly extracting 1 frame from every 5 consecutive frames of the global image frame data to serve as the local image frame data.
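The 1-in-5 sampling rule above can be sketched as follows (the helper name is an assumption; a trailing group of fewer than 5 frames also yields one frame):

```python
import random

def sample_local_frames(global_frames, group_size=5):
    """Randomly pick 1 frame out of every `group_size` consecutive frames.

    `global_frames` is any sequence of decoded frames; the sampled
    subset serves as the local image frame data for face detection.
    """
    local = []
    for start in range(0, len(global_frames), group_size):
        group = global_frames[start:start + group_size]
        local.append(random.choice(group))
    return local
```

Sampling one frame per group cuts detection cost on the edge device to roughly a fifth while tracking (step S4) still runs over every global frame.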
Further, the face tracking recognition in step S4 is performed by a face tracking module or a face tracking algorithm.
Further, the specific steps of acquiring the characteristic face image data in step S4 are:
importing the data packet into the face tracking module or face tracking algorithm, which compares and identifies each frame of the global image frame data against the face position data and face image data in the data packet to obtain the face image data in each frame; the face image data are then compared, and the optimal face image data is selected as the characteristic face image data.
Further, in order to increase the comparison range and accuracy, in step S5 comparison and recognition are performed against the cloud database according to the characteristic face image data; the comparison includes 1-to-1 comparison and 1-to-N comparison, where N is a natural number.
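The 1-to-1 and 1-to-N comparisons can be illustrated with a cosine-similarity sketch over face feature vectors; the threshold of 0.6 and the function names are illustrative assumptions, since the patent does not specify a similarity measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify_1_to_1(probe, enrolled, threshold=0.6):
    """1-to-1 comparison: does the probe match one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify_1_to_n(probe, database, threshold=0.6):
    """1-to-N comparison: best match among N enrolled templates, or None."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

In a cloud deployment the `database` mapping would be backed by the cloud database of enrolled face templates rather than an in-memory dict.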
Further, the edge device includes a camera, a router, a routing switch, an integrated access device, a multiplexer, and a metropolitan area network or a wide area network access device.
The invention has the beneficial effects that:
the invention provides a method for detecting and identifying a face from a monitoring video stream through cloud-edge cooperation, which mainly utilizes edge equipment to detect the face of a common video monitoring camera without independently deploying a face snapshot machine, the edge equipment can simultaneously process a plurality of paths of video streams, and the cloud end carries out face comparison and identification to form cloud end face identification convergence, thereby solving the problems of single NVR (network video recorder) equipment face identification limitation and face information island. Reduce transmission cost, high in the clouds face assembles and carries out face contrast discernment, can reduce video monitoring face detection discernment cost, and high in the clouds face assembles the discernment and solves traditional NVR face identification information island problem. The special NVR equipment or the face snapshot machine is not needed, the face data can be grabbed through the common video stream, and the equipment cost is greatly reduced.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a schematic step diagram of an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Examples
As shown in FIG. 1, the method for detecting and recognizing a human face from a surveillance video stream through cloud-edge cooperation comprises the following steps: S1, scanning for and adding an image acquisition device through an edge device, and pulling the video stream of the image acquisition device; S2, separating the compressed data from the video stream and decoding it to obtain global image frame data; S3, extracting local image frame data from the global image frame data, detecting facial feature points, and generating a data packet containing face position data and face image data; S4, performing face tracking and recognition on the global image frame data according to the data packet, and outputting characteristic face image data; S5, uploading the characteristic face image data to a cloud server; after receiving the characteristic face image data, the cloud server performs comparison and recognition against a cloud database to obtain comparison and recognition data.
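The steps S1-S5 above can be sketched as a pipeline skeleton; the stage callables are injected so the sketch stays independent of any particular camera or cloud API (all names are illustrative assumptions, not the patent's interfaces):

```python
def run_pipeline(camera_host, pull_stream, decode, detect_faces, track, upload):
    """Skeleton of steps S1-S5 of the cloud-edge face recognition method.

    Each stage is passed in as a callable so the skeleton can be wired
    to any concrete camera, decoder, detector, tracker, or cloud client.
    """
    stream = pull_stream(camera_host)                    # S1: add device, pull stream
    global_frames = decode(stream)                       # S2: decompress to frames
    packets = [detect_faces(f) for f in global_frames]   # S3: feature points -> packets
    feature_faces = track(global_frames, packets)        # S4: tracking, best-face output
    return upload(feature_faces)                         # S5: cloud comparison/recognition
```

A trivial wiring with stub callables shows the data flow end to end without any hardware.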
According to the invention, the edge device performs real-time face detection on the streams of ordinary cameras, which reduces transmission cost; faces are aggregated in the cloud for comparison and recognition, which lowers the cost of face detection and recognition in video surveillance and solves the face recognition information island problem of traditional NVRs. No dedicated NVR device or face snapshot camera is needed: face data can be captured from an ordinary video stream, greatly reducing equipment cost.
In order to improve the interoperability and flexibility of the method, in step S1 the edge device scans for and adds image capture devices through the ONVIF protocol, and pulls the video stream through the RTSP protocol. The ONVIF specification defines a model, interfaces, data types, and data interaction patterns for network video, and incorporates several existing standards. Interoperability means that products from different manufacturers can communicate through a uniform "language", which facilitates system integration. Flexibility means that end users and integrators are not bound to the proprietary solutions of particular devices, which greatly reduces development cost. The RTSP protocol has several advantages: extensibility, since new methods and parameters are easy to add to RTSP; ease of parsing, since RTSP can be parsed by a standard HTTP or MIME parser; security, since RTSP reuses web security mechanisms; and transport independence, since RTSP may use an unreliable datagram protocol (UDP) or a reliable datagram protocol (RDP), and a reliable stream protocol can be used when application-level reliability is required.
The compressed data is in H.264 format, which offers a low bit rate, high quality, high fault tolerance, and strong network adaptability.
For convenience of real-time application, the compressed data is decoded with OpenCV to obtain the global image frame data.
Dlib is a modern C++ toolkit containing machine learning algorithms and tools for building complex C++ software that solves practical problems.
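A minimal sketch of using Dlib's frontal face detector (via its Python bindings) to build the step-S3 data packet of face position data and face image data; the packet layout is an assumption, since the patent does not fix a format, and Dlib is imported lazily so the sketch loads without it:

```python
def detect_face_packet(frame):
    """Detect faces in one local image frame with Dlib and build the
    data packet of face positions plus cropped face image data."""
    import dlib  # lazy import: keeps the sketch loadable without Dlib
    detector = dlib.get_frontal_face_detector()
    rects = detector(frame, 1)  # upsample once to catch smaller faces
    boxes = [(r.left(), r.top(), r.right(), r.bottom()) for r in rects]
    return build_packet(frame, boxes)

def build_packet(frame, boxes):
    """Pack face position data and face image crops into one packet.

    Boxes are (left, top, right, bottom); crops are row/column slices,
    which works for nested lists and for numpy image arrays alike.
    """
    return {
        "positions": boxes,
        "faces": [[row[l:r] for row in frame[t:b]] for (l, t, r, b) in boxes],
    }
```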
The specific step of extracting the local image frame data from the global image frame data in step S3 is:
randomly extracting 1 frame from every 5 consecutive frames of the global image frame data to serve as the local image frame data.
The face tracking recognition in step S4 is performed by a face tracking module or a face tracking algorithm.
The specific steps of acquiring the characteristic face image data in step S4 are as follows:
the data packet is imported into the face tracking module or face tracking algorithm, which compares and identifies each frame of the global image frame data against the face position data and face image data in the data packet to obtain the face image data in each frame; the face image data are then compared, and the optimal face image data is selected as the characteristic face image data.
In this embodiment, the face tracking algorithm adopts existing techniques such as GOTURN target tracking.
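The embodiment relies on an existing tracker such as GOTURN; a common complementary step in multi-face tracking, shown here only as an illustrative sketch and not claimed by the patent, is associating new detections with existing tracks by intersection-over-union (IoU):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (left, top, right, bottom)."""
    l, t = max(a[0], b[0]), max(a[1], b[1])
    r, bo = min(a[2], b[2]), min(a[3], b[3])
    if r <= l or bo <= t:
        return 0.0  # no overlap
    inter = (r - l) * (bo - t)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedily match track boxes to detection boxes by IoU.

    Returns {track_id: detection_index}; in a full tracker, unmatched
    detections would start new tracks and unmatched tracks would age out.
    """
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best_j, best = None, min_iou
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score >= best:
                best_j, best = j, score
        if best_j is not None:
            matches[tid] = best_j
            used.add(best_j)
    return matches
```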
In this embodiment, the optimal face image data refers to the image among the face image data with the highest definition, the most pixels, and the least noise.
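The "optimal face" criterion above (highest definition, most pixels, least noise) can be approximated with a dependency-free sketch; the gradient-energy score below stands in for the Laplacian-variance sharpness measure commonly computed with OpenCV, and the ranking order (pixels first, then sharpness) is an assumption, since the embodiment does not fix one:

```python
def sharpness(gray):
    """Gradient-energy sharpness proxy for a 2-D grayscale image.

    Sums absolute horizontal and vertical neighbor differences; higher
    means more detail (variance of the Laplacian is the usual choice,
    this is a dependency-free stand-in).
    """
    score = 0
    for y in range(len(gray) - 1):
        row, nxt = gray[y], gray[y + 1]
        for x in range(len(row) - 1):
            score += abs(row[x + 1] - row[x]) + abs(nxt[x] - row[x])
    return score

def pixel_count(gray):
    return len(gray) * len(gray[0]) if gray else 0

def best_face(face_images):
    """Pick the characteristic face: most pixels first, then sharpest."""
    return max(face_images, key=lambda f: (pixel_count(f), sharpness(f)))
```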
In order to increase the comparison range and accuracy, in step S5 comparison and recognition are performed against a cloud database according to the characteristic face image data; the comparison includes 1-to-1 comparison and 1-to-N comparison, where N is a natural number.
The edge device comprises a camera, a router, a routing switch, an integrated access device, a multiplexer and a metropolitan area network or wide area network access device.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (10)

1. A method for detecting and identifying a human face from a monitoring video stream through cloud edge cooperation, characterized in that the method comprises the following steps:
S1, scanning for and adding an image acquisition device through an edge device, and pulling the video stream of the image acquisition device;
S2, separating the compressed data from the video stream, and decoding the compressed data to obtain global image frame data;
S3, extracting local image frame data from the global image frame data, detecting facial feature points, and generating a data packet containing face position data and face image data;
S4, performing face tracking and recognition on the global image frame data according to the data packet, and outputting characteristic face image data;
S5, uploading the characteristic face image data to a cloud server; after receiving the characteristic face image data, the cloud server performs comparison and recognition against a cloud database according to the characteristic face image data to obtain comparison and recognition data.
2. The method of claim 1, wherein in step S1 the edge device scans for and adds an image capture device through the ONVIF protocol, and pulls the video stream of the image capture device through the RTSP protocol.
3. The method of claim 1, wherein the compressed data is in H.264 format.
4. The method of claim 1, wherein the compressed data is decoded with OpenCV to obtain the global image frame data.
5. The method of claim 1, wherein in step S3 the facial feature points are detected by a Dlib face detection module.
6. The method of claim 1, wherein the specific step of extracting the local image frame data from the global image frame data in step S3 is:
randomly extracting 1 frame from every 5 consecutive frames of the global image frame data as the local image frame data.
7. The method of claim 1, wherein the face tracking recognition in step S4 is performed by a face tracking module or a face tracking algorithm.
8. The method of claim 7, wherein the specific steps of obtaining the characteristic face image data in step S4 are:
importing the data packet into the face tracking module or face tracking algorithm; comparing and identifying each frame of the global image frame data according to the face position data and face image data in the data packet to obtain the face image data in each frame; and comparing the face image data to select the optimal face image data as the characteristic face image data.
9. The method of claim 1, wherein in step S5 comparison and recognition are performed against a cloud database according to the characteristic face image data, the comparison including 1-to-1 comparison and 1-to-N comparison, where N is a natural number.
10. The method of claim 1, wherein the edge device comprises a camera, a router, a routing switch, an integrated access device, a multiplexer, and a metropolitan area network or wide area network access device.
CN202010193385.3A 2020-03-18 2020-03-18 Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation Pending CN111400047A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010193385.3A CN111400047A (en) 2020-03-18 2020-03-18 Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010193385.3A CN111400047A (en) 2020-03-18 2020-03-18 Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation

Publications (1)

Publication Number Publication Date
CN111400047A true CN111400047A (en) 2020-07-10

Family

ID=71430943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193385.3A Pending CN111400047A (en) 2020-03-18 2020-03-18 Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation

Country Status (1)

Country Link
CN (1) CN111400047A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514432A (en) * 2012-06-25 2014-01-15 诺基亚公司 Method, device and computer program product for extracting facial features
WO2017016516A1 (en) * 2015-07-24 2017-02-02 上海依图网络科技有限公司 Method for face recognition-based video human image tracking under complex scenes
CN109190532A (en) * 2018-08-21 2019-01-11 北京深瞐科技有限公司 It is a kind of based on cloud side fusion face identification method, apparatus and system


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633068A (en) * 2020-11-25 2021-04-09 河北汉光重工有限责任公司 Cloud system is tracked in people's car image recognition based on land defense control
CN113780156A (en) * 2021-09-08 2021-12-10 交通银行股份有限公司 Face recognition method and system based on cloud edge architecture
CN114154018A (en) * 2022-02-08 2022-03-08 中国电子科技集团公司第二十八研究所 Cloud-edge collaborative video stream processing method and system for unmanned system
CN114154018B (en) * 2022-02-08 2022-05-10 中国电子科技集团公司第二十八研究所 Cloud-edge collaborative video stream processing method and system for unmanned system
CN118586008A (en) * 2024-08-06 2024-09-03 山东新潮信息技术有限公司 Multi-user collaborative penetration test data sharing method and system

Similar Documents

Publication Publication Date Title
CN111400047A (en) Method for detecting and identifying human face from monitoring video stream through cloud edge cooperation
KR101942808B1 (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN
US20120057640A1 (en) Video Analytics for Security Systems and Methods
WO2013039062A1 (en) Facial analysis device, facial analysis method, and memory medium
WO2012142797A1 (en) Video monitoring system and method
CN114079820A (en) Interval shooting video generation centered on an event/object of interest input on a camera device by means of a neural network
CN112149551A (en) Safety helmet identification method based on embedded equipment and deep learning
CN109660762A (en) Size figure correlating method and device in intelligent candid device
CN110691204A (en) Audio and video processing method and device, electronic equipment and storage medium
CN105561536A (en) Man-machine interaction system having bodybuilding action correcting function
KR20210014988A (en) Image analysis system and method for remote monitoring
CN112565224A (en) Video processing method and device
CN110379130B (en) Medical nursing anti-falling system based on multi-path high-definition SDI video
CN108881119B (en) Method, device and system for video concentration
CN114419502A (en) Data analysis method and device and storage medium
KR101849092B1 (en) Method and Apparatus for Detecting Picture Breaks for Video Service of Real Time
CN116800725A (en) Data processing method and device
CN116939197A (en) Live program head broadcasting and replay content consistency monitoring method based on audio and video
CN110704268B (en) Automatic testing method and device for video images
KR102456189B1 (en) system for Cloud edge-based video analysis
CN116824480A (en) Monitoring video analysis method and system based on deep stream
ZA202209296B (en) System and method for analyzing videos in real-time
KR20070005247A (en) Remote video awareness system using gre(generic routing encapsulation) tunneling network
CN111918038A (en) Method for improving signal-to-noise ratio of signal by multi-camera cross-correlation fusion based on Internet of things
CN113038254B (en) Video playing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710

RJ01 Rejection of invention patent application after publication