CN111210462A - Alarm method and device - Google Patents

Alarm method and device

Info

Publication number
CN111210462A
Authority
CN
China
Prior art keywords
target area
video data
moving object
video
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911405933.8A
Other languages
Chinese (zh)
Inventor
胡贵超
谢飞
韩杰
王艳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201911405933.8A priority Critical patent/CN111210462A/en
Publication of CN111210462A publication Critical patent/CN111210462A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 19/00 Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides an alarm method and an alarm device, relating to the technical field of video networking. The method comprises the following steps: receiving monitoring video data for a target area; analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result; determining whether a moving object exists in the target area according to the analysis result; and generating and sending alarm information when the moving object exists in the target area. In this way, the monitoring video data of a specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.

Description

Alarm method and device
Technical Field
The invention relates to the technical field of video networking, in particular to an alarm method and device.
Background
With the development of information technology and the internet of things, monitoring cameras and video monitoring systems are visible everywhere in life and are also applied to the technical field of video networking.
At present, in the field of video networking technology, an Internet camera product needs to be used: the motion detection function of the Internet camera itself is used to monitor a specific area, and a protocol conversion server is required to convert between the Internet and the video network, so that the Internet camera, the video networking protocol conversion server and the video networking terminal are connected, and the video networking terminal and the Internet camera jointly monitor the specific area.
However, the protocol conversion server needs a certain amount of time for the conversion between the Internet and the video network, which delays the video networking terminal's monitoring of the specific area, so that video of the specific area cannot be obtained in real time; moreover, in the technical field of video networking, using an Internet camera product increases the cost.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed in order to provide an alarm method and apparatus that overcome the above problems or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention provides an alarm method, including:
receiving monitoring video data for a target area;
analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result;
determining whether a moving object exists in the target area according to the analysis result;
and generating and sending alarm information when the moving object exists in the target area.
Optionally, the analyzing any three continuous monitoring images in the monitoring video data by using a three-frame difference method to obtain an analysis result includes:
extracting any three continuous monitoring images in the monitoring video data;
acquiring gray values corresponding to the continuous monitoring images respectively;
determining a first difference image and a second difference image according to the three gray values;
and processing the first differential image and the second differential image to obtain a target image and a gray absolute value corresponding to the target image.
Optionally, the determining whether there is a moving object in the target area according to the analysis result includes:
and determining that a moving object exists in the target area under the condition that the gray absolute value is greater than a preset gray absolute value.
Optionally, the determining whether there is a moving object in the target area according to the analysis result includes:
and determining that the video data acquisition equipment corresponding to the target area moves under the condition that the gray absolute value is greater than a preset gray absolute value and no moving object exists in the target area.
Optionally, the generating and sending alarm information when the moving object exists in the target area includes:
generating and sending alarm information when the moving object exists in the target area;
screening out, from the monitoring video data, a target video within a first specified time interval before the alarm information is sent and within a second specified time interval after the alarm information is sent;
and saving the target video.
In order to solve the above problem, an embodiment of the present invention provides an alarm device, including:
the receiving module is used for receiving monitoring video data for the target area;
the analysis module is used for analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result;
the determining module is used for determining whether a moving object exists in the target area according to the analysis result;
and the generating module is used for generating and sending alarm information when the moving object exists in the target area.
Optionally, the analysis module comprises:
the extraction submodule is used for extracting any three continuous monitoring images in the monitoring video data;
the first obtaining submodule is used for obtaining gray values corresponding to the continuous monitoring images respectively;
the first determining submodule is used for determining a first difference image and a second difference image according to the three gray values;
and the second obtaining submodule is used for processing the first difference image and the second difference image to obtain a target image and a gray absolute value corresponding to the target image.
Optionally, the determining module includes:
and the second determining submodule is used for determining that a moving object exists in the target area under the condition that the gray absolute value is greater than a preset gray absolute value.
Optionally, the determining module includes:
and the third determining submodule is used for determining that the video data acquisition equipment corresponding to the target area has a movement condition under the condition that the gray absolute value is greater than a preset gray absolute value and no moving object exists in the target area.
Optionally, the generating module includes:
the generating submodule is used for generating and sending alarm information when the moving object exists in the target area;
the screening submodule is used for screening out, from the monitoring video data, a target video within a first specified time interval before the alarm information is sent and within a second specified time interval after the alarm information is sent;
and the storage sub-module is used for storing the target video.
In order to solve the above problem, an embodiment of the present invention provides an electronic device, including:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform any of the alarm methods described above.
In order to solve the above problem, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that causes a processor to execute any one of the alarm methods described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the monitoring video data for the target area can be received; a three-frame difference method is adopted to analyze any three continuous monitoring images in the monitoring video data to obtain an analysis result; whether a moving object exists in the target area is determined according to the analysis result; and finally, when the moving object exists in the target area, alarm information is generated and sent. In this way, the moving object can be recorded and the whole alarm event can be completely captured, the monitoring video data of the specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.
Drawings
Fig. 1 is a flow chart of an alarm method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating an alarm in a video networking scenario according to an embodiment of the present invention;
Fig. 3 is a flow chart of an alarm method according to a second embodiment of the present invention;
Fig. 4 is a schematic diagram illustrating an operation of a three-frame difference method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an alarm in another video networking scenario provided by an embodiment of the invention;
Fig. 6 is a schematic structural diagram of an alarm device provided by a third embodiment of the present invention;
Fig. 7 is a networking schematic of a video network of the present invention;
Fig. 8 is a diagram of a hardware architecture of a node server according to the present invention;
Fig. 9 is a schematic diagram of a hardware structure of an access switch of the present invention;
Fig. 10 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
One of the core concepts of the embodiments of the present invention is to receive surveillance video data for a target area, analyze any three consecutive surveillance images in the surveillance video data by using a three-frame difference method to obtain an analysis result, determine whether a moving object exists in the target area according to the analysis result, and generate and send alarm information when the moving object exists in the target area, thereby obtaining the surveillance video data of a specific area in real time.
Example one
Fig. 1 shows a flowchart of an alarm method according to an embodiment of the present invention, where the method includes:
step 501, receiving monitoring video data for a target area.
In the embodiment of the invention, the monitoring video data in the target area can be obtained through shooting by the camera, the target area can be an area corresponding to the shooting range of the camera, and the monitoring video data can comprise video data corresponding to a video picture obtained through shooting by the camera.
Fig. 2 shows an alarm schematic diagram in a video networking scenario according to an embodiment of the present invention, and as shown in fig. 2, a first video networking terminal 601 obtains monitoring video data shot by a video networking camera; a first video network terminal 601 sends monitoring video data to a first video network server 602; after receiving the surveillance video data, the first video network server 602 sends the surveillance video data to the second video network server 603.
Step 502, analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result.
The three-frame difference method is one of the interframe difference methods. The basic principle of analyzing monitoring images with an interframe difference method is as follows: because the monitoring video data collected by the terminal equipment is continuous, when no moving target exists in the target area, the change between monitoring images of continuous frames is not obvious; when a moving object exists in the target area, the change between continuous frame monitoring images is obvious, that is, the positions of the moving object in different frame monitoring images are different. The interframe difference method performs a difference operation on two or three temporally continuous monitoring images, subtracting the gray values of corresponding pixel points of the different monitoring images to obtain the gray difference.
Specifically, any three continuous monitoring images in the monitoring video data may be extracted, the gray values respectively corresponding to the continuous monitoring images are obtained, a first difference image and a second difference image are determined according to the three gray values, and the first difference image and the second difference image are processed to obtain a target image and a gray absolute value corresponding to the target image; the gray absolute value is the analysis result.
And step 503, determining whether the moving object exists in the target area according to the analysis result.
In the embodiment of the present invention, it is possible to determine whether a moving object exists in the target area by determining the absolute value of the grayscale. The analysis result may include a first result and a second result, and how to determine whether there is a moving object in the target area according to the first result and the second result is described below, respectively:
and determining that the moving object exists in the target area under the condition that the gray absolute value is greater than the preset gray absolute value, namely under the condition that the analysis result is the first result.
The preset gray scale absolute value may be set according to a specific application scenario, which is not limited in the embodiment of the present invention.
For example, when a person illegally intrudes, the absolute value of the gray scale may be larger than the preset absolute value of the gray scale, and it may be determined that a moving object exists in the target area.
And under the condition that the gray absolute value is greater than the preset gray absolute value and no moving object exists in the target area, namely under the condition that the analysis result is the second result, determining that the video data acquisition equipment corresponding to the target area moves.
For example, when the video data acquisition device is a camera and the gray absolute value is greater than the preset gray absolute value but no moving object exists in the target area, this indicates that the camera itself has been moved, for example by a person.
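The embodiments state the two possible outcomes but do not specify how, once the gray absolute value exceeds the preset value, the "moving object" case is told apart from the "camera moved" case. Below is a minimal Python sketch of one plausible heuristic, assuming a localized change indicates a moving object while a near-global change indicates the camera itself was moved; the function name, parameters and thresholds are illustrative, not from the patent.

```python
import numpy as np

def classify_change(target_image, preset_gray_abs, pixel_threshold=15,
                    global_change_ratio=0.6):
    """target_image: per-pixel absolute gray difference D'_n (2-D uint8 array);
    preset_gray_abs: the preset gray absolute value."""
    gray_abs = float(target_image.sum())          # the "gray absolute value"
    if gray_abs <= preset_gray_abs:
        return "no_motion"                        # analysis result below the preset value
    changed = np.count_nonzero(target_image > pixel_threshold)
    if changed / target_image.size >= global_change_ratio:
        # Nearly the whole frame changed: assume the camera itself was moved.
        return "camera_moved"
    return "moving_object"                        # localized change in the target area
```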
And step 504, generating and sending alarm information when the moving object exists in the target area.
When a moving object exists in the target area, at least one of sound alarm information and text alarm information is generated and sent; a target video within a first specified time interval before the alarm information is sent and within a second specified time interval after the alarm information is sent is screened out from the monitoring video data and stored. In this way the moving object is recorded and the whole process of the alarm event can be completely acquired.
In the embodiment of the invention, the monitoring video data for the target area can be received; a three-frame difference method is adopted to analyze any three continuous monitoring images in the monitoring video data to obtain an analysis result; whether a moving object exists in the target area is determined according to the analysis result; and finally, when the moving object exists in the target area, alarm information is generated and sent. In this way, the moving object can be recorded and the whole alarm event can be completely captured, the monitoring video data of the specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.
Example two
Fig. 3 shows a flowchart of an alarm method according to a second embodiment of the present invention. Referring to fig. 3, the method includes:
step 701, receiving monitoring video data for a target area.
In the embodiment of the invention, the monitoring video data can be obtained by shooting through the camera.
As shown in fig. 2, a first video network terminal 601 obtains monitoring video data shot by a video network camera; a first video network terminal 601 sends monitoring video data to a first video network server 602; after receiving the surveillance video data, the first video network server 602 sends the surveillance video data to the second video network server 603.
Step 702, any three continuous monitoring images in the monitoring video data are extracted.
In the embodiment of the present invention, after receiving the monitoring video data, the second video network server 603 sends the monitoring video data to the second video network terminal 604; the second video network terminal 604 performs image processing on the monitoring video data through the inter-frame difference detection algorithm module therein, and extracts any three continuous monitoring images in the monitoring video data.
Step 703, obtaining the gray values corresponding to the continuous monitoring images respectively.
Fig. 4 shows an operation schematic diagram of a three-frame difference method according to an embodiment of the present invention. As shown in fig. 4, the monitoring images of the (n+1)th frame, the nth frame and the (n−1)th frame are denoted f_{n+1}, f_n and f_{n−1} respectively, and the gray values of the corresponding pixel points of the three frames of monitoring images are f_{n+1}(x, y), f_n(x, y) and f_{n−1}(x, y) respectively.
Step 704, determining a first difference image and a second difference image according to the three gray values.
Referring to fig. 4, according to equation (1): D_{n+1}(x, y) = |f_{n+1}(x, y) − f_n(x, y)|, the difference image D_{n+1} is obtained; according to equation (2): D_n(x, y) = |f_n(x, y) − f_{n−1}(x, y)|, the difference image D_n is obtained.
Step 705, the first difference image and the second difference image are processed to obtain a target image and a gray absolute value corresponding to the target image.
In the embodiment of the present invention, according to formula (3): D′_n(x, y) = |f_{n+1}(x, y) − f_n(x, y)| ∩ |f_n(x, y) − f_{n−1}(x, y)|, the difference images D_{n+1} and D_n are combined to obtain the image D′_n, which is the target image; threshold processing and connectivity analysis are then performed on the image D′_n to determine the gray absolute value corresponding to the target image.
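Below is a minimal NumPy sketch of equations (1) to (3): the three consecutive grayscale frames are differenced pairwise, the two absolute-difference images are combined (the ∩ operation is read here as a pixel-wise minimum, one common implementation), and the result is summed to give the gray absolute value. All names are illustrative; this is only a sketch of the described computation, not the patent's implementation.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next):
    """f_prev, f_curr, f_next: the consecutive grayscale frames f_{n-1}, f_n and
    f_{n+1}, given as 2-D uint8 arrays of identical shape."""
    f_prev = f_prev.astype(np.int16)
    f_curr = f_curr.astype(np.int16)
    f_next = f_next.astype(np.int16)

    d_next = np.abs(f_next - f_curr)   # equation (1): D_{n+1}(x, y)
    d_curr = np.abs(f_curr - f_prev)   # equation (2): D_n(x, y)

    # Equation (3): D'_n = D_{n+1} ∩ D_n, read here as the pixel-wise minimum,
    # so only pixels that changed in both difference images survive.
    target_image = np.minimum(d_next, d_curr).astype(np.uint8)

    # The "gray absolute value" of the target image, used as the analysis result.
    gray_abs = int(target_image.sum())
    return target_image, gray_abs
```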
In addition, in the interframe difference method the selection of the threshold T is an important parameter: if the selected threshold is too small, noise in the difference image cannot be suppressed; if it is too large, part of the information of the moving object in the difference image may be masked; and a fixed threshold cannot adapt to light changes and similar conditions in the scene of the monitored target area. Therefore, when threshold processing is performed on D′_n, an additive term sensitive to the overall illumination is added, and the threshold judgment formula is modified into formula (4):

Σ_{(x,y)∈A} D′_n(x, y) > T + λ · (1/N_A) · Σ_{(x,y)∈A} |f_n(x, y) − f_{n−1}(x, y)|

where N_A is the total number of pixels in the region A to be detected, λ is the suppression coefficient of illumination, and A can be set to the whole frame of the monitoring image. The additive term λ · (1/N_A) · Σ_{(x,y)∈A} |f_n(x, y) − f_{n−1}(x, y)| represents the change of illumination over the whole frame of the monitoring image. If the light change in the scene of the monitored target area is small, the value of the additive term tends to zero; if the light change is obvious, the value of the additive term increases significantly, so that the right side of formula (4) increases adaptively and the final judgment is that no moving target exists. The influence of light changes on the moving-target detection result is thereby effectively suppressed, and the detection result is more accurate.
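A small sketch of the threshold judgment of formula (4) as reconstructed above, assuming the region A is the whole frame: the accumulated difference of the target image is compared against the fixed threshold T plus the illumination-sensitive additive term. The exact decision form, parameter names and types are assumptions based on the description.

```python
import numpy as np

def has_moving_target(target_image, f_curr, f_prev, T, lam):
    """Judgment of formula (4), with region A taken as the whole frame.
    target_image: D'_n from the three-frame difference;
    f_curr, f_prev: grayscale frames f_n and f_{n-1} (uint8 arrays);
    T: fixed threshold; lam: illumination suppression coefficient lambda."""
    n_a = target_image.size                       # N_A: number of pixels in region A
    accumulated_diff = float(target_image.sum())  # left side of formula (4)
    # Additive term: lambda / N_A * sum of |f_n - f_{n-1}| over the whole frame.
    illumination_term = lam / n_a * float(
        np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)).sum())
    # If the overall light level changed, the right side grows adaptively and
    # the frame tends to be judged as containing no moving target.
    return accumulated_diff > T + illumination_term
```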
And step 706, determining whether the moving object exists in the target area according to the analysis result.
In one implementable embodiment of the present invention, step 706 may include:
and determining that the moving object exists in the target area under the condition that the gray absolute value is greater than the preset gray absolute value.
The preset gray scale absolute value may be set according to a specific application scenario, which is not limited in the embodiment of the present invention.
For example, when a person illegally intrudes, the absolute value of the gray scale may be larger than the preset absolute value of the gray scale, and it may be determined that a moving object exists in the target area.
In another alternative embodiment, the step 706 may include:
and under the condition that the gray absolute value is greater than the preset gray absolute value and no moving object exists in the target area, determining that the video data acquisition equipment corresponding to the target area moves.
For example, when the video data acquisition device is a camera and the gray absolute value is greater than the preset gray absolute value but no moving object exists in the target area, this indicates that the camera itself has been moved, for example by a person.
In step 707, when a moving object exists in the target area, alarm information is generated and transmitted.
In the embodiment of the invention, the alarm information comprises at least one of sound alarm information and text alarm information.
Optionally, referring to fig. 2, when the monitored video data has a moving object, the interframe difference detection algorithm module sends a call instruction to the scheduling module in the second video networking terminal 604; and after receiving the calling instruction, the scheduling module calls a buzzer to give out an alarm sound.
Step 708, a target video within a first specified time interval before the alarm information is sent and within a second specified time interval after the alarm information is sent is screened out from the monitoring video data.
The first specified time interval may be five seconds, six seconds, or the like, which is not specifically limited in the embodiment of the present invention.
Step 709, save the target video.
In the embodiment of the invention, a user watches the monitoring video data in a live-watching mode. The monitoring video data is image-processed by the interframe difference detection algorithm module of the terminal, which cooperates with an interface function to realize the video recording function. Usually only picture preview monitoring and motion detection are carried out: the encoded data is not written into a file but only temporarily written into a first-in-first-out (FIFO) buffer area. After the alarm information is generated and sent, the data that was in the buffer area before the alarm information was sent is written into a file, and the encoded data is then written into the file in real time. After the alarm is released, writing to the file is stopped after a delay of a period of time, and the terminal returns to the buffer-writing state. In this way, the video of the whole process before and after the alarm information is sent can be obtained, the whole alarm event can be completely captured, and resources can be saved: within the same storage space, the time for which video can be stored is greatly prolonged.
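A minimal sketch of the pre-/post-alarm recording described above, assuming the encoded data arrives in one-second chunks: a bounded FIFO keeps the most recent pre-alarm seconds, an alarm flushes that buffer into a file and keeps writing in real time, and writing stops a fixed number of seconds after the alarm is released. Class and method names are illustrative, not the terminal's actual interface.

```python
from collections import deque

class AlarmRecorder:
    """Keeps the most recent pre-alarm seconds in a FIFO buffer and writes a file
    only around an alarm, as described above."""
    def __init__(self, pre_alarm_seconds=5, post_alarm_seconds=5):
        self.buffer = deque(maxlen=pre_alarm_seconds)  # rolling pre-alarm window
        self.post_alarm_seconds = post_alarm_seconds
        self.remaining_post = 0
        self.outfile = None

    def on_encoded_second(self, chunk, alarm_active, path):
        """chunk: one second of encoded video data (bytes)."""
        if self.outfile is None and alarm_active:
            # Alarm raised: flush the buffered pre-alarm data into a new file.
            self.outfile = open(path, "wb")
            while self.buffer:
                self.outfile.write(self.buffer.popleft())
            self.remaining_post = self.post_alarm_seconds
        if self.outfile is not None:
            self.outfile.write(chunk)            # write in real time during the alarm
            if not alarm_active:
                self.remaining_post -= 1
                if self.remaining_post <= 0:     # delay elapsed: stop writing and
                    self.outfile.close()         # return to the buffering state
                    self.outfile = None
        else:
            self.buffer.append(chunk)            # no alarm: only keep the FIFO buffer
```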
For example, referring to fig. 2, assume that a second video network terminal 604 of video network A monitors a specific target area by watching a live broadcast of a first video network terminal 601 of video network B. When there is an abnormality in the monitoring video data obtained by the first video network terminal 601, for example the video network camera moves or the position of an object in the monitored target area changes, the first video network terminal 601 sends the monitoring video data to the first video network server 602; after receiving the monitoring video data, the first video network server 602 sends it to the second video network server 603; after receiving the monitoring video data, the second video network server 603 sends it to the second video network terminal 604. After receiving the monitoring video data, the second video network terminal 604 performs the following steps: the second video network terminal decodes and displays the monitoring video data through a decoder; the second video network terminal 604 performs image processing on the decoded and displayed monitoring video data through its interframe difference detection algorithm module and extracts any three continuous monitoring images from the monitoring video data; when a moving object exists in the monitoring video data, the interframe difference detection algorithm module sends a call instruction to the scheduling module in the second video network terminal 604; and after receiving the call instruction, the scheduling module calls a buzzer to give out an alarm sound.
In addition, when the monitoring video data does not have a moving object, the second video network terminal decodes and displays the monitoring video data through a decoder; the decoder sends the monitoring video data for decoding and displaying to the storage server; the storage server stores the monitoring video data; and when the storage time of the storage server for the monitoring video data exceeds the preset storage time and the monitoring video data does not have moving objects all the time, deleting the monitoring video data.
Specifically, fig. 5 shows an alarm schematic diagram in another video networking scenario provided by the embodiment of the present invention. As shown in fig. 5, assume that terminal A monitors a specific target area by watching the live broadcast of terminal B. The specific process can comprise the following steps: terminal B publishes the live broadcast, acquires and encodes the monitoring video data through its data module, and then sends the encoded monitoring video data to the video network server through the terminal B scheduling module. The video network server receives the monitoring video data through the terminal B scheduling module, processes it through the terminal B data module, and then forwards it to terminal A through the terminal B scheduling module. Terminal A receives the monitoring video data sent by the video network server through the terminal A scheduling module, sends it to the terminal A data module for processing, processes the resulting monitoring video data through the terminal A interframe difference algorithm module, and finally judges whether an alarm is needed according to the return information of the terminal A interframe difference algorithm module. If no alarm is needed and 5 seconds of data have already been stored, the head element of the queue is dequeued, that is, the data of the first second in the queue is deleted. If an alarm is needed, the monitoring video data is sent to a decoder for display, and the video data is then sent to the storage server scheduling module through the terminal A scheduling module; the storage server scheduling module takes the 5 seconds of data at the tail of the queue out of the storage queue and stores it into a file, then keeps storing new data into the file until the alarm ends, and finally writes a further 5 seconds of data into the file.
The file name format is: the box number of the alarming terminal + the alarm start time. Because the interframe difference algorithm module is added in the terminal, the corresponding monitoring function can be completed without depending on an Internet camera and the Internet and without the participation of a protocol conversion server, which saves cost and makes the network safer and more reliable.
The alarm method provided by the embodiment of the invention can be applied to unattended scenes, for example an important archive room: when no one is on duty, the live broadcast can be checked in real time through the video network.
In the embodiment of the invention, a terminal receives monitoring video data for a target area, extracts any three continuous monitoring images from the monitoring video data, acquires the gray values respectively corresponding to the continuous monitoring images, determines a first difference image and a second difference image according to the three gray values, and processes the first difference image and the second difference image to obtain a target image and the gray absolute value corresponding to the target image. Whether a moving object exists in the target area is determined according to the analysis result; when the moving object exists in the target area, alarm information is generated and sent; and a target video within a first specified time interval before the alarm information is sent and within a second specified time interval after the alarm information is sent is screened out from the monitoring video data and saved. In this way, the moving object can be recorded and the whole alarm event can be completely captured, the monitoring video data of the specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
EXAMPLE III
Referring to fig. 6, there is shown an alarm device 800 comprising:
a receiving module 801, configured to receive monitoring video data for a target area;
the analysis module 802 is configured to analyze any three continuous monitoring images in the monitoring video data by using a three-frame difference method to obtain an analysis result;
a determining module 803, configured to determine whether a moving object exists in the target area according to the analysis result;
the generating module 804 is configured to generate and send alarm information when a moving object exists in the target area.
Optionally, the analysis module comprises:
the extraction submodule is used for extracting any three continuous monitoring images in the monitoring video data;
the first acquisition submodule is used for acquiring gray values corresponding to the continuous monitoring images respectively;
the first determining submodule is used for determining a first difference image and a second difference image according to the three gray values;
and the second obtaining submodule is used for processing the first difference image and the second difference image to obtain a target image and a gray absolute value corresponding to the target image.
Optionally, the determining module includes:
and the second determining submodule is used for determining that the moving object exists in the target area under the condition that the gray absolute value is greater than the preset gray absolute value.
Optionally, the determining module includes:
and the third determining submodule is used for determining that the video data acquisition equipment corresponding to the target area has a movement condition under the condition that the gray absolute value is greater than the preset gray absolute value and no moving object exists in the target area.
Optionally, the generating module includes:
the generation submodule is used for generating and sending alarm information when a moving object exists in the target area;
the screening submodule is used for screening out a first specified time interval before the alarm information is sent and a target video within a second specified time interval after the alarm information is sent from the monitoring video data;
and the storage sub-module is used for storing the target video.
In the embodiment of the invention, the monitoring video data for the target area can be received through the receiving module; the analysis module analyzes any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result; the determining module determines whether a moving object exists in the target area according to the analysis result; and finally the generating module generates and sends alarm information when the moving object exists in the target area. In this way, the moving object can be recorded and the whole alarm event can be completely captured, the monitoring video data of the specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the alarm method provided in the first embodiment or the second embodiment of the invention.
An embodiment of the present invention further provides a computer-readable storage medium, in which a stored computer program causes a processor to execute the alarm method according to the first embodiment or the second embodiment.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, and it pushes many Internet applications toward high-definition video and high-definition face-to-face interaction.
The video networking adopts real-time high-definition video exchange technology and can integrate dozens of services required on the network platform, such as video, voice, pictures, text, communication and data, into one system platform, for example high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-run) channels, intelligent video broadcast control and information distribution, and realizes high-definition-quality video broadcasting through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network technology (network technology)
Network technology innovation in the video networking improves on traditional Ethernet to face the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video networking technology adopts Packet Switching to meet the Streaming demand. The video networking technology has the flexibility, simplicity and low price of packet switching, and at the same time has the quality and safety guarantee of circuit switching, thereby realizing seamless connection of whole-network switched virtual circuits and of the data format.
Switching Technology
The video networking retains the two advantages of Ethernet, asynchronism and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It provides end-to-end seamless connection across the whole network, communicates directly with the user terminal, and directly carries IP data packets. User data does not require any format conversion across the entire network. The video networking is a higher-level form of Ethernet and a real-time exchange platform; it can realize whole-network, large-scale, real-time transmission of high-definition video that the existing Internet cannot, and pushes many network video applications toward high definition and unification.
Server Technology
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology
To adapt to media content of super-large capacity and super-large flow, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. The program information in the server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 second. The optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of that of an IP Internet system of the same grade, yet concurrent flow 3 times larger than that of a traditional hard disk array is generated, and the overall efficiency is improved by more than 10 times.
Network Security Technology
Through independent permission control for each service, complete isolation of equipment and user data, and similar measures, the structural design of the video networking structurally eliminates the network security problems that trouble the Internet. It generally needs no antivirus programs or firewalls, is protected from hacker and virus attacks, and provides users with a structurally worry-free secure network.
Service Innovation Technology
The unified video platform integrates services and transmission; whether for a single user, a private network user or a network aggregate, connection is established automatically once. The user terminal, set-top box or PC connects directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so that complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 7, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 8, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 9, the network interface module (downlink network interface module 301, uplink network interface module 302), switching engine module 303 and CPU module 304 are mainly included;
wherein, a packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated and the packet enters the switching engine module 303, otherwise the packet is discarded; a packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303; an incoming data packet of the CPU module 304 enters the switching engine module 303; the switching engine module 303 looks up the address table 306 for the incoming packet, thereby obtaining the direction information of the packet; if the packet entering the switching engine module 303 goes from a downlink network interface to an uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if the queue of the packet buffer 307 is nearly full, the packet is discarded; if the packet entering the switching engine module 303 does not go from a downlink network interface to an uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the direction information of the packet; if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for packet buffer queues from all downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
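A small sketch of the forwarding rule described for the downlink-to-uplink queues of the access switch: a packet is forwarded only when the port send buffer has room, the queue packet counter is greater than zero, and a token from the rate control module is available, with tokens added at a programmable interval. The class and method names are illustrative and do not represent the device's actual firmware.

```python
class UpstreamQueue:
    """One downlink-to-uplink packet buffer queue of the access switch."""
    def __init__(self):
        self.packets = []   # queued packets; the packet counter is len(self.packets)
        self.tokens = 0     # tokens granted by the rate control module

    def add_token(self):
        # Called by the rate control module once per programmed interval.
        self.tokens += 1

    def try_forward(self, port_send_buffer_free):
        # Forwarding conditions: 1) the port send buffer is not full,
        # 2) the queue packet counter is greater than zero,
        # 3) a token from the rate control module is available.
        if port_send_buffer_free and self.packets and self.tokens > 0:
            self.tokens -= 1
            return self.packets.pop(0)
        return None
```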
Ethernet protocol conversion gateway
As shown in fig. 10, the system mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein, a data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC deleting module 410 then strips the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MACSA of the ethernet coordination gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
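A sketch of the header handling implied by the gateway description: on ingress the MAC deleting module strips the Ethernet MAC DA, MAC SA and length/frame type before the video networking packet is buffered, and on egress the MAC adding module prepends the terminal's MAC DA, the gateway's MAC SA and the length/type field. The 14-byte header length follows standard Ethernet; the function names are illustrative.

```python
ETH_HEADER_LEN = 14  # MAC DA (6) + MAC SA (6) + length/frame type (2)

def strip_ethernet_header(frame):
    """MAC deleting module: drop the Ethernet header and keep the video networking packet."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(packet, terminal_mac, gateway_mac, eth_type):
    """MAC adding module: prepend the terminal's MAC DA, the gateway's MAC SA and the
    length/frame type (6 + 6 + 2 bytes) before sending the packet to the terminal."""
    return terminal_mac + gateway_mac + eth_type + packet
```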
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to the type of the datagram: it is 64 bytes if the datagram is a protocol packet of any type, and 32+1024 = 1056 bytes if it is a unicast data packet; of course, the length is not limited to the above 2 cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more than 2 connections between two devices, i.e., there may be 2 or more connections between a node switch and a node server, or between a node switch and a node switch. However, the metro network address of a metro network device is unique, and in order to accurately describe the connection relationship between metro network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the Label is similar to that of the Label of MPLS (Multi-Protocol Label Switching). Assuming that there are two connections between device A and device B, a packet from device A to device B has 2 available labels, and a packet from device B to device A also has 2 available labels. Labels are classified into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metro network is a process under centralized control, that is, both address allocation and label allocation of the metro network are dominated by the metropolitan area server, and the node switch and the node server execute them passively. This is different from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes (Reserved), Label, payload (PDU) and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
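A small sketch of the label handling described for the metropolitan area network: the 32-bit label sits between the reserved bytes and the payload, only its lower 16 bits are used, and a node switch rewrites each packet's incoming label with the outgoing label assigned under the metropolitan area server's centralized control. The byte order and the label-table contents are assumptions for illustration.

```python
def swap_label(packet, label_table):
    """Rewrite the incoming label of a metro packet with the outgoing label taken from
    the centrally assigned label table. Assumed layout: DA(8) + SA(8) + Reserved(2) +
    Label(4, upper 16 bits reserved) + Payload + CRC(4), big-endian label."""
    label_offset = 8 + 8 + 2                      # the label follows the reserved bytes
    in_label = int.from_bytes(packet[label_offset + 2:label_offset + 4], "big")
    out_label = label_table[in_label]             # e.g. {0x0000: 0x0001}
    return (packet[:label_offset + 2]             # keep the reserved upper 16 bits
            + out_label.to_bytes(2, "big")
            + packet[label_offset + 4:])
```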
Based on the characteristics of the video networking, one of the core concepts of the embodiments of the present invention is proposed. Following the protocols of the video networking, monitoring video data for a target area is received; any three continuous monitoring images in the monitoring video data are analyzed by a three-frame difference method to obtain an analysis result; whether a moving object exists in the target area is determined according to the analysis result; and alarm information is generated and sent when the moving object exists in the target area. In this way the monitoring video data of a specific area can be obtained in real time, and in the technical field of video networking a video networking camera product can be used, so that the cost can be reduced to a certain extent.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The alarm method and device provided by the present invention have been introduced in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of these examples is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. An alarm method applied to a video network, characterized by comprising the following steps:
receiving monitoring video data aiming at a target area;
analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result;
determining whether a moving object exists in the target area according to the analysis result;
and generating and sending alarm information when the moving object exists in the target area.
2. The method of claim 1, wherein the analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result comprises:
extracting any three continuous monitoring images in the monitoring video data;
acquiring gray values respectively corresponding to the three continuous monitoring images;
determining a first difference image and a second difference image according to the three gray values;
and processing the first differential image and the second differential image to obtain a target image and a gray absolute value corresponding to the target image.
3. The method of claim 2, wherein said determining whether a moving object is present in the target area based on the analysis comprises:
and determining that a moving object exists in the target area under the condition that the gray absolute value is greater than a preset gray absolute value.
4. The method of claim 3, wherein said determining whether a moving object is present in the target area based on the analysis comprises:
and determining that the video data acquisition equipment corresponding to the target area has moved, under the condition that the gray absolute value is greater than a preset gray absolute value and no moving object exists in the target area.
5. The method according to claim 1, wherein the generating and transmitting alarm information when the moving object exists in the target area comprises:
generating and sending alarm information when the moving object exists in the target area;
screening out, from the monitoring video data, a target video within a first specified time interval before the alarm information is sent and a second specified time interval after the alarm information is sent;
and saving the target video.
6. An alarm device applied to a video network is characterized by comprising:
the receiving module is used for receiving monitoring video data aiming at the target area;
the analysis module is used for analyzing any three continuous monitoring images in the monitoring video data by adopting a three-frame difference method to obtain an analysis result;
the determining module is used for determining whether a moving object exists in the target area according to the analysis result;
and the generating module is used for generating and sending alarm information when the moving object exists in the target area.
7. The apparatus of claim 6, wherein the analysis module comprises:
the extraction submodule is used for extracting any three continuous monitoring images in the monitoring video data;
the first obtaining submodule is used for acquiring gray values respectively corresponding to the three continuous monitoring images;
the first determining submodule is used for determining a first difference image and a second difference image according to the three gray values;
and the second obtaining submodule is used for processing the first difference image and the second difference image to obtain a target image and a gray absolute value corresponding to the target image.
8. The apparatus of claim 7, wherein the determining module comprises:
and the second determining submodule is used for determining that a moving object exists in the target area under the condition that the gray absolute value is greater than a preset gray absolute value.
9. The apparatus of claim 8, wherein the determining module comprises:
and the third determining submodule is used for determining that the video data acquisition equipment corresponding to the target area has moved, under the condition that the gray absolute value is greater than a preset gray absolute value and no moving object exists in the target area.
10. The apparatus of claim 6, wherein the generating module comprises:
the generating submodule is used for generating and sending alarm information when the moving object exists in the target area;
the screening submodule is used for screening out a target video from the monitoring video data within a first specified time interval before the alarm information is sent to a second specified time interval after the alarm information is sent;
and the storage sub-module is used for storing the target video.
11. An electronic device, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the alert method of any of claims 1 to 5.
12. A computer-readable storage medium, characterized in that it stores a computer program causing a processor to execute the alarm method according to any one of claims 1 to 5.
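As one possible reading of the target-video screening recited in claims 5 and 10 (the class name, default intervals and timestamp handling below are assumptions for illustration, not part of the claims), a rolling frame buffer could be clipped around the alarm time as follows:

from collections import deque
import time

class AlarmVideoBuffer:
    # Keeps (timestamp, frame) pairs so that, once an alarm has been sent, the
    # frames from a first interval before the alarm to a second interval after
    # it can be saved as the target video.

    def __init__(self, pre_seconds=10.0, post_seconds=10.0):
        self.pre = pre_seconds
        self.post = post_seconds
        self.frames = deque()

    def add_frame(self, frame, timestamp=None):
        timestamp = time.time() if timestamp is None else timestamp
        self.frames.append((timestamp, frame))
        # Drop frames too old to fall inside any future alarm window.
        while self.frames and self.frames[0][0] < timestamp - (self.pre + self.post):
            self.frames.popleft()

    def target_video(self, alarm_timestamp):
        # Call after post_seconds have elapsed so the trailing frames exist.
        return [frame for ts, frame in self.frames
                if alarm_timestamp - self.pre <= ts <= alarm_timestamp + self.post]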
CN201911405933.8A 2019-12-30 2019-12-30 Alarm method and device Pending CN111210462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405933.8A CN111210462A (en) 2019-12-30 2019-12-30 Alarm method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911405933.8A CN111210462A (en) 2019-12-30 2019-12-30 Alarm method and device

Publications (1)

Publication Number Publication Date
CN111210462A true CN111210462A (en) 2020-05-29

Family

ID=70787030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405933.8A Pending CN111210462A (en) 2019-12-30 2019-12-30 Alarm method and device

Country Status (1)

Country Link
CN (1) CN111210462A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251544A (en) * 2016-08-05 2016-12-21 吉林大学 A kind of intrusion alarm method based on Android intelligent and alarm device
CN108964963A (en) * 2017-09-20 2018-12-07 北京视联动力国际信息技术有限公司 A method of warning system and realization alarm based on view networking
CN110110624A (en) * 2019-04-24 2019-08-09 江南大学 A kind of Human bodys' response method based on DenseNet network and the input of frame difference method feature

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787290A (en) * 2020-07-29 2020-10-16 上海船舶研究设计院(中国船舶工业集团公司第六0四研究院) Ship data transmission method and device and control terminal
CN114898577A (en) * 2022-07-13 2022-08-12 环球数科集团有限公司 Road intelligent management system and method for peak period access management
WO2024103871A1 (en) * 2022-11-14 2024-05-23 新特能源股份有限公司 Polycrystalline silicon monitoring method and apparatus, and related device

Similar Documents

Publication Publication Date Title
CN110636257B (en) Monitoring video processing method and device, electronic equipment and storage medium
CN110769310B (en) Video processing method and device based on video network
CN109587002B (en) State detection method and system for video network monitoring equipment
CN110572607A (en) Video conference method, system and device and storage medium
CN110049273B (en) Video networking-based conference recording method and transfer server
CN111210462A (en) Alarm method and device
CN109191808B (en) Alarm method and system based on video network
CN110557606B (en) Monitoring and checking method and device
CN111510759A (en) Video display method, device and readable storage medium
CN110149305B (en) Video network-based multi-party audio and video playing method and transfer server
CN110049268B (en) Video telephone connection method and device
CN109743284B (en) Video processing method and system based on video network
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110769297A (en) Audio and video data processing method and system
CN109698953B (en) State detection method and system for video network monitoring equipment
CN111447396A (en) Audio and video transmission method and device, electronic equipment and storage medium
CN111447407A (en) Monitoring resource transmission method and device
CN110830763A (en) Monitoring video inspection method and device
CN108574655B (en) Conference monitoring and broadcasting method and device
CN110795008B (en) Picture transmission method and device and computer readable storage medium
CN109714641B (en) Data processing method and device based on video network
CN110572608B (en) Frame rate setting method and device, electronic equipment and storage medium
CN110830185B (en) Data transmission method and device
CN110139061B (en) Video stream screen display method and device
CN110418105B (en) Video monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination