CN113079371A - Recovery and analysis method, device and equipment for video Internet of things - Google Patents


Info

Publication number
CN113079371A
Authority
CN
China
Prior art keywords: video, working state, target, data, equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110623947.8A
Other languages
Chinese (zh)
Other versions
CN113079371B (en)
Inventor
王滨
张峰
万里
何承润
刘松
徐文渊
冀晓宇
殷丽华
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110623947.8A priority Critical patent/CN113079371B/en
Publication of CN113079371A publication Critical patent/CN113079371A/en
Application granted granted Critical
Publication of CN113079371B publication Critical patent/CN113079371B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a recovery and analysis method, device, and equipment for a video Internet of Things. The method comprises the following steps: acquiring a traffic data packet, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data; acquiring a target encoding mode corresponding to the traffic data packet; determining identity information of the video device based on the traffic characteristic information; performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type, and data associated with the target encoding mode; performing image analysis on a video picture corresponding to the video file to obtain a working state of the video device, namely a normal working state or an abnormal working state; and, if the working state is an abnormal working state, generating alarm data that at least comprises the identity information and the working state of the video device. Through this technical solution, the effectiveness and security of the video Internet of Things can be improved.

Description

Recovery and analysis method, device and equipment for video Internet of things
Technical Field
The application relates to the technical field of information security, in particular to a method, a device and equipment for recovering and analyzing a video Internet of things.
Background
The video Internet of Things is a video network composed of video devices, forwarding devices, management devices, and the like. For transmission efficiency and stability, video data transmitted inside the video Internet of Things is generally not encrypted; encryption protection is applied only when the video data needs to be transmitted out of the video Internet of Things.
To learn whether a video device is abnormal, the management device may acquire the video stream from the video device, parse out the video pictures captured by the video device, analyze the content of those pictures, and determine from the analysis result whether the video device is abnormal, for example, whether it is powered off or faulty.
However, the management device first needs to acquire information such as the user name and password of the video device, authenticate with those credentials, and only after successful authentication can it acquire the video stream. Because a large number of video devices exist in the video Internet of Things, the management device has to collect the user names and passwords of all of them, which makes this manner of anomaly detection cumbersome, and the management device may be unable to obtain the credentials of every video device. Moreover, the management device can only tell whether a video device is powered off or faulty; even a device that is powered on and fault-free may produce an invalid video picture, so an erroneous detection result is obtained.
Disclosure of Invention
The application provides a recovery and analysis method for a video Internet of Things. The video Internet of Things comprises a management device and a video device; the management device is connected with the video device through a forwarding device, and an analysis device is deployed on a bypass of the forwarding device. The method is applied to the analysis device and comprises the following steps:
acquiring, from the forwarding device, a traffic data packet sent by the video device, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data;
acquiring a target encoding mode corresponding to the traffic data packet;
determining identity information of the video device based on the traffic characteristic information;
performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type, and data associated with the target encoding mode;
performing image analysis on a video picture corresponding to the video file to obtain a working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and if the working state is an abnormal working state, generating alarm data, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
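The alarm-generation step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the field names, the state labels, and the timestamp field are assumptions added for clarity.

```python
from datetime import datetime, timezone

ABNORMAL = "abnormal"  # assumed label for the abnormal working state
NORMAL = "normal"      # assumed label for the normal working state

def generate_alarm_data(identity_info, working_state):
    """Build alarm data when the analyzed working state is abnormal.

    Returns None for a normal working state (no alarm is raised).
    The dictionary keys are illustrative, not taken from the patent.
    """
    if working_state != ABNORMAL:
        return None
    return {
        "device_identity": identity_info,  # the alarm at least carries the identity...
        "working_state": working_state,    # ...and the working state of the device
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```

A normal state produces no alarm, matching the claim's conditional wording.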
Illustratively, the obtaining of the target encoding mode corresponding to the traffic data packet includes:
if the traffic data packet further comprises an encoding mode, determining the encoding mode in the traffic data packet as the target encoding mode corresponding to the traffic data packet; or,
acquiring, from the forwarding device, a stream-fetching instruction sent by the management device to the video device, wherein the stream-fetching instruction comprises the encoding mode corresponding to the traffic data packet, and determining the encoding mode in the stream-fetching instruction as the target encoding mode corresponding to the traffic data packet; or,
determining a preset encoding mode as the target encoding mode corresponding to the traffic data packet.
Illustratively, the traffic data packet further includes invalid traffic data, and the performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type includes: stripping the invalid traffic data from the traffic data packet to obtain the effective traffic data, wherein the invalid traffic data comprises the physical-layer, link-layer, network-layer, and transport-layer data in the traffic data packet, and the effective traffic data comprises the application-layer data in the traffic data packet; deconstructing the effective traffic data based on the target encoding mode to obtain video data, wherein the video data comprises the effective traffic data and data associated with the target encoding mode; and recombining the video data based on a target type to obtain a video file of the target type, wherein the video file comprises the video data and data associated with the target type.
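The stripping step can be illustrated with a minimal header parser. This sketch assumes a captured Ethernet II / IPv4 / UDP frame and returns the application-layer payload (e.g., RTP); handling of VLAN tags, IPv6, TCP-interleaved RTSP, or IP fragmentation is deliberately omitted and would be needed in practice.

```python
import struct

def strip_invalid_traffic_data(frame: bytes) -> bytes:
    """Strip link-, network-, and transport-layer headers from a captured
    Ethernet frame, returning the application-layer payload.

    Assumes Ethernet II + IPv4 + UDP (an illustrative assumption)."""
    ETH_HDR = 14
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:
        raise ValueError("not an IPv4 packet")
    ihl = (frame[ETH_HDR] & 0x0F) * 4            # IPv4 header length in bytes
    proto = frame[ETH_HDR + 9]
    if proto != 17:                              # protocol 17 = UDP
        raise ValueError("not a UDP datagram")
    udp_off = ETH_HDR + ihl
    udp_len = struct.unpack("!H", frame[udp_off + 4:udp_off + 6])[0]
    return frame[udp_off + 8: udp_off + udp_len]  # UDP payload only
```

Everything before the UDP payload corresponds to the "invalid traffic data" of the claim; the returned bytes are the "effective traffic data".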
Illustratively, the performing image analysis on the video picture corresponding to the video file to obtain the working state of the video device includes: playing the video file, and collecting a video picture in the playing process of the video file; inputting the video picture to a trained first target classifier model, and outputting a first classification result corresponding to the video picture by the first target classifier model; determining an operating state of the video device based on the first classification result.
Illustratively, the determining the working state of the video device based on the first classification result includes: if the first target classifier model is trained on normal video pictures and abnormal video pictures, the first classification result is that the video picture is normal or that the video picture is abnormal; if the first classification result is that the video picture is normal, the working state of the video device is determined to be the normal working state; if the first classification result is that the video picture is abnormal, the working state of the video device is determined to be the abnormal working state.
Alternatively, if the first target classifier model is trained on normal video pictures, low-quality video pictures, occluded video pictures, and invalid video pictures, the first classification result is that the video picture is normal, of low quality, occluded, or invalid; if the first classification result is that the video picture is normal, the working state of the video device is determined to be the normal working state;
and if the first classification result is that the video picture is of low quality, occluded, or invalid, the working state of the video device is determined to be the abnormal working state.
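The mapping from classification result to working state can be written as a small lookup. The label strings below are assumptions for illustration; the patent does not fix concrete label names.

```python
# Assumed classifier labels: "picture_normal" for the normal class, and the
# three/one abnormal classes from either training variant described above.
NORMAL_LABELS = {"picture_normal"}
ABNORMAL_LABELS = {"picture_abnormal", "low_quality", "occluded", "picture_invalid"}

def working_state_from_classification(label: str) -> str:
    """Map a first-classification-result label to the device working state."""
    if label in NORMAL_LABELS:
        return "normal"
    if label in ABNORMAL_LABELS:
        return "abnormal"
    raise ValueError(f"unknown classification result: {label}")
```

Both training variants reduce to the same two working states, which is why the alarm logic only needs the normal/abnormal distinction.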
Illustratively, the performing image analysis on the video picture corresponding to the video file to obtain the working state of the video device includes: inputting the video file into a trained second target classifier model, which outputs a second classification result corresponding to the video picture in the video file; and determining the working state of the video device based on the second classification result. If the second target classifier model is trained on normal video files and abnormal video files, the second classification result is that the video picture in the video file is normal or that it is abnormal; if the second classification result is that the video picture in the video file is normal, the working state of the video device is determined to be the normal working state; and if the second classification result is that the video picture in the video file is abnormal, the working state of the video device is determined to be the abnormal working state.
Illustratively, when a self-update condition of the first target classifier model is met, a first training data set is obtained, where the first training data set comprises normal video pictures and abnormal video pictures; the first target classifier model is retrained based on the first training data set; and the retrained first target classifier model is used to output a first classification result corresponding to a video picture in a video file. Alternatively, when the self-update condition of the first target classifier model is met, a second training data set is obtained, where the second training data set comprises normal video pictures, low-quality video pictures, occluded video pictures, and invalid video pictures; the first target classifier model is retrained based on the second training data set; and the retrained first target classifier model is used to output a first classification result corresponding to a video picture in a video file.
Illustratively, when a self-update condition of the second target classifier model is met, a third training data set is obtained, where the third training data set comprises normal video files and abnormal video files; the second target classifier model is retrained based on the third training data set; and the retrained second target classifier model is used to output a second classification result corresponding to the video picture in a video file.
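One plausible self-update condition is "enough newly labelled samples have accumulated". The threshold and the retrain hook below are assumptions; the patent leaves the condition and the training procedure unspecified.

```python
class SelfUpdatingClassifier:
    """Sketch of the self-update condition: retrain once a threshold of
    newly labelled video pictures has accumulated (assumed trigger)."""

    def __init__(self, retrain_fn, threshold=100):
        self.retrain_fn = retrain_fn   # callback that retrains the model
        self.threshold = threshold
        self.pending = []              # newly labelled (picture, label) pairs

    def add_sample(self, picture, label):
        self.pending.append((picture, label))
        if len(self.pending) >= self.threshold:  # self-update condition met
            self.retrain_fn(self.pending)        # retrain on the new data set
            self.pending = []                    # start accumulating afresh
```

Other triggers (a timer, a drop in classification confidence) would fit the same interface.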
For example, after the video file recovery is performed on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type, the method further includes:
and generating data to be displayed, wherein the data to be displayed at least comprises the identity information of the video equipment and the video file, and sending the data to be displayed to user equipment.
The application further provides a recovery and analysis apparatus for a video Internet of Things. The video Internet of Things comprises a management device and a video device; the management device is connected with the video device through a forwarding device, and an analysis device is deployed on a bypass of the forwarding device. The apparatus is applied to the analysis device and comprises:
an obtaining module, configured to obtain, from the forwarding device, a traffic data packet sent by the video device, where the traffic data packet at least includes traffic characteristic information and effective traffic data, and to acquire a target encoding mode corresponding to the traffic data packet;
a determining module, configured to determine identity information of the video device based on the traffic characteristic information;
a recovery module, configured to perform video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, where the video file includes the effective traffic data, data associated with the target type, and data associated with the target coding mode;
the analysis module is used for carrying out image analysis on a video picture corresponding to the video file to obtain the working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and the generating module is used for generating alarm data if the working state is an abnormal working state, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
The application further provides an analysis device. The video Internet of Things comprises a management device and a video device; the management device is connected with the video device through a forwarding device, and the analysis device is deployed on a bypass of the forwarding device. The analysis device comprises a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the following steps:
acquiring a traffic data packet sent by the video equipment from the forwarding equipment, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data;
acquiring a target encoding mode corresponding to the traffic data packet;
determining identity information of the video device based on the traffic characteristic information;
performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type, and data associated with the target encoding mode;
performing image analysis on a video picture corresponding to the video file to obtain a working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and if the working state is an abnormal working state, generating alarm data, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
According to the above technical solution, the analysis device is deployed in bypass mode, acquires the traffic data packets sent by the video device, and analyzes the working state of the video device from those packets. Information such as the user name and password of the video device does not need to be obtained, no authentication based on such credentials is required, and the anomaly-detection manner is therefore simple. The analysis device can learn not only whether the video device is powered off or faulty, but also abnormalities such as low video picture quality, an occluded video picture, or an invalid video picture (for example, the picture becomes invalid if the video device is moved to a meaningless angle), so a correct detection result is obtained. Because the traffic data packets are obtained in bypass mode, the video pictures of the video device can be extracted by parsing, deconstructing, and recombining the packets without generating interfering data traffic; the working state of the video device is learned by analyzing the video pictures, and an alarm is then raised for any video device that is not running normally. This helps the staff understand the running state of the video devices, reduces working cost, improves working efficiency, and improves the effectiveness and security of the video Internet of Things. Moreover, the video pictures are analyzed on a third-party device (namely the analysis device, rather than the management device or a web page of the video device) to detect whether the video device is abnormal.
Drawings
To describe the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application, and those skilled in the art can obtain other drawings according to these drawings.
Fig. 1 is a schematic networking diagram of a video internet of things in an embodiment of the present application;
fig. 2 is another networking schematic diagram of a video internet of things in an embodiment of the present application;
fig. 3 is a flowchart of a recovery and analysis method of a video internet of things according to an embodiment of the present application;
fig. 4 is a structural diagram of a recovery and analysis device of a video internet of things according to an embodiment of the present application;
fig. 5 is a structural diagram of a recovery and analysis device of a video internet of things according to an embodiment of the present application;
fig. 6 is a hardware configuration diagram of an analysis device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Moreover, depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The video Internet of Things is a video network formed by video devices, forwarding devices, management devices, and the like. A video device is a device that receives instructions from the management device and collects video pictures; it may be, for example, an analog camera, an IPC (IP Camera), or an NVR (Network Video Recorder), and its type is not limited. The management device is a device that sends instructions to the video device; it may be an Internet of Things platform, a Personal Computer (PC), a terminal device, a notebook computer, a smart phone, or the like, and its type is not limited. The forwarding device may be a switch, a router, or the like, configured to forward the instructions of the management device to the video device and to forward the video pictures collected by the video device to the management device; its type is likewise not limited.
Referring to fig. 1, a networking schematic diagram of the video internet of things is shown, the number of the video devices may be at least one, and the video devices may be connected to the management device through the forwarding device.
In one possible implementation, the management device may obtain information such as the user name and password of the video device, authenticate with those credentials, acquire the video stream from the video device after successful authentication, parse out the video pictures captured by the video device, analyze their content, and determine from the analysis result whether the video device is abnormal, for example, whether it is powered off or faulty.
However, since a large number of video devices exist in the video Internet of Things, the management device needs to acquire the user names and passwords of all of them, which makes this manner of anomaly detection cumbersome; and if the management device cannot obtain the user name and password of some video device, it cannot perform anomaly detection on that device at all.
Moreover, the management device can only tell whether the video device is powered off or faulty; even a device that is powered on and fault-free may produce an invalid video picture, so an erroneous detection result is obtained. For example, when the orientation of the video device is manually moved (to an invalid angle facing a wall, the ground, the sky, or the like), the video picture captured by the video device is invalid, but the management device cannot detect such an abnormality.
To solve the above problems, an embodiment of the application provides a recovery and analysis method for a video Internet of Things in which an analysis device is deployed in bypass mode. The analysis device acquires the traffic data packets sent by the video device; acquiring them in bypass mode generates no interfering data traffic. The video pictures of the video device are extracted by parsing, deconstructing, and recombining the traffic data packets, and the pictures are analyzed to obtain the working state of the video device, so that an alarm is raised for any video device that is not running normally. This helps the staff understand the running state of the video devices, reduces working cost, improves working efficiency, and improves the effectiveness and security of the video Internet of Things. In this embodiment, information such as the user name and password of the video device does not need to be obtained, and no authentication based on such credentials is required, so the anomaly-detection manner is simpler. Besides learning whether the video device is powered off or faulty, the analysis device can also learn abnormalities such as low video picture quality, an occluded video picture, or an invalid video picture (for example, the picture becomes invalid when the orientation of the video device is manually moved to an invalid angle facing a wall, the ground, the sky, or the like), so a correct detection result is obtained.
In this embodiment, the video file recovery process comprises stripping invalid data from the traffic data packets and recombining the video data to recover a video file, and then taking screenshots of the video file to obtain video pictures. In the image analysis process, whether the video device works normally is judged from the video pictures, for example: whether the orientation of the video device has been manually moved (to an invalid angle facing a wall, the ground, the sky, or the like), whether the picture is occluded, and whether the picture quality is up to standard.
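The "deconstruct and recombine" step can be sketched for the common case of H.264 carried over RTP (RFC 6184): RTP payloads are reassembled into NAL units and emitted as an Annex-B byte stream, which a container muxer or decoder can then consume. This handles only single-NAL and FU-A packetization and assumes in-order, loss-free payloads; it is an illustration, not the patent's implementation.

```python
def rtp_h264_to_annexb(rtp_payloads):
    """Reassemble H.264 NAL units from RTP payloads into an Annex-B
    byte stream (minimal sketch: single-NAL and FU-A packets only)."""
    out = bytearray()
    fu_buf = bytearray()
    for p in rtp_payloads:
        nal_type = p[0] & 0x1F
        if 1 <= nal_type <= 23:                  # single NAL unit packet
            out += b"\x00\x00\x00\x01" + p       # prepend Annex-B start code
        elif nal_type == 28:                     # FU-A fragmentation unit
            fu_header = p[1]
            if fu_header & 0x80:                 # S bit: first fragment
                # rebuild the original NAL header from indicator + FU header
                nal_hdr = (p[0] & 0xE0) | (fu_header & 0x1F)
                fu_buf = bytearray([nal_hdr])
            fu_buf += p[2:]
            if fu_header & 0x40:                 # E bit: last fragment
                out += b"\x00\x00\x00\x01" + fu_buf
                fu_buf = bytearray()
        # other packetization modes (STAP-A, MTAP, ...) omitted in this sketch
    return bytes(out)
```

Wrapping the resulting stream into the target file type (e.g., MP4) is the final "recombine based on a target type" step, typically delegated to a muxing library.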
Referring to fig. 2, another networking schematic diagram of the video internet of things is shown, where the video internet of things includes video devices, forwarding devices, and management devices, the management devices are connected with the video devices through the forwarding devices, the number of the video devices may be at least one, and the video devices may all be connected to the management devices through the forwarding devices.
On this basis, an analysis device is deployed on a bypass of the forwarding device; that is, the analysis device is deployed in bypass mode rather than in-line. Traffic data packets transmitted between the video device and the management device do not pass through the analysis device. Instead, the analysis device obtains the traffic data packets from the forwarding device: while forwarding each traffic data packet normally, the forwarding device copies it and sends the copy to the analysis device.
In summary, in this embodiment of the application, the analysis device can acquire the traffic data packets in the video Internet of Things without generating interfering traffic, without obtaining information such as the user name and password of the video device, and without relying on the video device to provide any information. Acting as a third-party device, it acquires the traffic data packets, then parses, deconstructs, and recombines them to obtain the video pictures of the video device, and finally analyzes the pictures to obtain the working state of the video device.
In the application scenario, referring to fig. 3, a schematic flow chart of a recovery and analysis method of a video internet of things is shown, where the method may be applied to an analysis device, and the method may include:
step 301, obtaining a traffic data packet sent by a video device from a forwarding device, where the traffic data packet may include at least traffic characteristic information, valid traffic data, and invalid traffic data.
For example, for the traffic data packets sent by each video device (one video device is taken as an example below), each time the forwarding device receives a traffic data packet sent by the video device, it forwards the packet normally and additionally sends a copy to the analysis device, so the analysis device can obtain from the forwarding device the traffic data packets sent by the video device.
For example, the traffic data packet sent by the video device may be a traffic data packet sent by the video device to the management device, for example, the management device sends a stream fetching instruction to the video device, and the video device sends the traffic data packet to the management device after receiving the stream fetching instruction, or the video device actively sends the traffic data packet to the management device. The traffic data packet sent by the video device may also be a traffic data packet sent by the video device to the non-management device, and the receiver of the traffic data packet is not limited.
For example, video data transmitted in the video internet of things is not encrypted (for the transmission efficiency and the transmission stability of the video data), and encryption protection is performed when the video data needs to be transmitted outside the video internet of things.
For example, the traffic packet may be a traffic packet of an RTSP (Real Time Streaming Protocol) Protocol or an RTP (Real-Time Transport Protocol) Protocol, or may be a traffic packet of another Protocol, and the type of the traffic packet is not limited.
Step 302, a target encoding mode corresponding to the traffic data packet is obtained.
Illustratively, the target encoding mode corresponding to the traffic data packet may be obtained in any of the following modes:
Mode 1: if the traffic data packet further includes an encoding mode, the encoding mode in the traffic data packet may be determined as the target encoding mode corresponding to the traffic data packet.
For example, a traffic data packet sent by a video device may include traffic characteristic information, effective traffic data, invalid traffic data, an encoding mode, and the like, where the encoding mode indicates how the carried video is encoded, such as H.264 or H.265; the type of encoding mode is not limited. On this basis, the encoding mode in the traffic data packet may be directly determined as the target encoding mode corresponding to the traffic data packet. For example, if the traffic data packet carries the characteristics of the H.264 encoding mode (e.g., a=rtpmap:H264), the target encoding mode is H.264.
Mode 2: a stream-fetching instruction sent by the management device to the video device is obtained from the forwarding device, where the stream-fetching instruction may include the encoding mode corresponding to the traffic data packet, and the encoding mode in the stream-fetching instruction is determined as the target encoding mode corresponding to the traffic data packet (e.g., a traffic data packet sent by the video device to the management device).
For example, the management device sends a stream-fetching instruction, such as an RTSP stream-fetching instruction, to the video device through the forwarding device; while sending the instruction to the video device normally, the forwarding device copies it and sends the copy to the analysis device, so the analysis device can obtain the stream-fetching instruction. If the stream-fetching instruction includes the encoding mode corresponding to the traffic data packet, the analysis device may acquire the encoding mode, such as H.264 or H.265, from the instruction. After receiving the stream-fetching instruction, the video device sends traffic data packets, such as RTP traffic data packets, to the management device through the forwarding device; while forwarding them normally, the forwarding device copies each packet and sends the copy to the analysis device, so the analysis device can obtain the traffic data packets and determine the encoding mode in the stream-fetching instruction as the target encoding mode corresponding to them. For example, if the stream-fetching instruction carries the characteristics of the H.264 encoding mode (e.g., a=rtpmap:H264), the target encoding mode is H.264.
Mode 3, determining a preset encoding mode as the target encoding mode corresponding to the traffic data packet.
For example, if it is known in advance that the video device will encode the traffic data packet with h.264, h.264 may be used as the preset encoding mode; on this basis, after acquiring each traffic data packet, the analysis device may use h.264 as the target encoding mode corresponding to that traffic data packet.
For another example, all possible encoding modes (e.g., h.264, h.265, etc.) may be used as preset encoding modes. On this basis, after the analysis device acquires each traffic data packet, each possible encoding mode is used as a target encoding mode corresponding to the traffic data packet; that is, the target encoding modes may be h.264 and h.265, so there may be multiple target encoding modes. The following description takes one target encoding mode as an example.
For example, if the target encoding mode corresponding to the traffic data packet is not obtained through mode 1 and/or mode 2, the preset encoding mode is determined as the target encoding mode corresponding to the traffic data packet.
Step 303, determining the identity information of the video device based on the traffic characteristic information in the traffic data packet.
For example, the analysis device may obtain identity information of each video device in the video internet of things in advance. For each video device, asset information of the video device is obtained, the asset information including, but not limited to, at least one of: IP address, MAC address, device manufacturer, device model, device serial number, and the like. The identity information (such as an identity identifier) of the video device is generated based on the asset information of the video device, and the generation manner of the identity information is not limited. For example, the character strings of all asset information of the video device are concatenated, and the concatenated character string is used as the identity information of the video device; or, after the character strings of all asset information of the video device are concatenated, a hash operation is performed, and the hash value is used as the identity information of the video device.
After obtaining the identity information of the video device, the analysis device may further record a mapping relationship between an IP address (or an IP address and a MAC address) of the video device and the identity information of the video device.
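A minimal sketch of the identity generation and the IP-to-identity mapping described above, assuming SHA-256 as the hash operation and a fixed field order (both illustrative choices; the embodiment does not mandate a particular hash or order):

```python
import hashlib

def identity_of(asset):
    """Concatenate all asset fields in a fixed order and hash the result.

    The field order and the SHA-256 choice are illustrative assumptions;
    the description only requires that the concatenated string (or its
    hash value) serve as the identity information.
    """
    fields = ("ip", "mac", "vendor", "model", "serial")
    joined = "|".join(str(asset.get(f, "")) for f in fields)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

camera = {"ip": "192.168.1.64", "mac": "AA:BB:CC:DD:EE:FF",
          "vendor": "ExampleVendor", "model": "IPC-1", "serial": "SN001"}
identity = identity_of(camera)

# Mapping from IP address to identity information, as recorded above.
ip_to_identity = {camera["ip"]: identity}
```

A traffic data packet's source IP address can then be looked up in ip_to_identity to recover the identity information of the sending video device.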
For example, in step 303, the traffic characteristic information in the traffic data packet may include a source IP address of the traffic data packet (i.e., an IP address of the video device), and of course, the traffic characteristic information may also include contents such as a source port, a destination IP address, a destination port, a protocol type, and the like, so that the mapping relationship may be queried through the source IP address in the traffic data packet to obtain identity information corresponding to the source IP address, i.e., identity information of the video device, so as to obtain the identity information of the video device based on the traffic characteristic information.
To sum up, for each traffic data packet, the identity information and the target encoding mode corresponding to the traffic data packet may be obtained and recorded as [ identity information, target encoding mode, traffic data packet ].
Step 304, performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of the target type. For example, the video file may include the effective traffic data, data associated with the target type, and data associated with the target encoding mode.
For example, after obtaining the traffic data packet, operations such as stripping of invalid traffic data, deconstruction and reassembly of valid traffic data may be performed on the traffic data packet to obtain the target type video file.
In one possible implementation, the following steps may be taken to obtain a video file of the target type:
step 3041, strip the invalid traffic data in the traffic data packet to obtain valid traffic data, where the invalid traffic data may include physical layer data, link layer data, network layer data, and transport layer data in the traffic data packet, and the valid traffic data may include application layer data in the traffic data packet.
For example, since the traffic data packet may include invalid traffic data and valid traffic data, after the invalid traffic data in the traffic data packet is stripped off, the remaining traffic data in the traffic data packet is the valid traffic data. In this embodiment, the traffic data packet may include physical layer data, link layer data, network layer data, transport layer data, and application layer data, the physical layer data, link layer data, network layer data, and transport layer data may be used as invalid traffic data, and the application layer data may be used as valid traffic data.
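The stripping step can be sketched as follows for the common case of an Ethernet/IPv4/UDP capture; the offsets follow the standard header layouts, and handling of VLAN tags, IPv6, or TCP is omitted for brevity:

```python
import struct

def strip_to_application_layer(frame):
    """Strip Ethernet (link layer), IPv4 (network layer) and UDP
    (transport layer) headers, returning the application-layer payload,
    i.e., the valid traffic data.

    Assumes an untagged Ethernet II / IPv4 / UDP packet; real capture
    code would also handle VLAN tags, IPv6, and TCP.
    """
    ETH_LEN = 14
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                  # not IPv4
        return None
    ihl = (frame[ETH_LEN] & 0x0F) * 4        # IPv4 header length in bytes
    proto = frame[ETH_LEN + 9]
    if proto != 17:                          # not UDP
        return None
    udp_start = ETH_LEN + ihl
    return frame[udp_start + 8:]             # skip the 8-byte UDP header
```

Everything removed here (Ethernet, IPv4, UDP headers) corresponds to the invalid traffic data; the returned payload is the application layer data carried by the packet.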
Step 3042, deconstructing the effective traffic data (i.e. the application layer data) in the traffic data packet based on the target coding mode to obtain video data, where the video data may include the effective traffic data and data associated with the target coding mode. For each traffic data packet, after the video data corresponding to the traffic data packet is obtained, the [ identity information, target encoding mode, video data ] may be recorded for the traffic data packet.
For example, when the effective traffic data (i.e., the application layer data) in the traffic data packet is deconstructed based on the target coding method to obtain the video data, the deconstruction process is not limited in this embodiment, as long as the effective traffic data can be converted into the video data. For example, the effective traffic data may include a real data value and additional information related to encoding (for indicating how the real data value is encoded), and based on the target encoding scheme and the additional information, data encoded by the video device, which is referred to as data associated with the target encoding scheme, may be recovered, and this process may be referred to as a deconstruction process.
In summary, the effective traffic data in the traffic data packet may be deconstructed based on the target encoding method to obtain the video data, and the video data includes the effective traffic data and the data associated with the target encoding method. For example, if the effective traffic data in the traffic data packet is 12, and the data recovered to be encoded by the video device is 01 (data associated with the target encoding scheme) based on the target encoding scheme and the additional information in the effective traffic data, the video data may be 0112. Of course, the above is only a simple example of deconstructing the effective traffic data based on the target coding method to obtain the video data, and this is not limited thereto.
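For RTP-carried traffic, the deconstruction step amounts to separating the RTP header fields (the additional information related to encoding) from the codec payload (the data encoded by the video device). A minimal sketch following the RFC 3550 fixed-header layout, with padding handling omitted, might look like:

```python
import struct

def rtp_payload(app_data):
    """Deconstruct one RTP packet: split the fixed header and any
    extensions from the codec payload.

    Returns (sequence_number, timestamp, payload). Minimal sketch per
    RFC 3550; padding is ignored for brevity.
    """
    vpxcc, mpt, seq, ts, ssrc = struct.unpack("!BBHII", app_data[:12])
    csrc_count = vpxcc & 0x0F
    offset = 12 + 4 * csrc_count                 # skip CSRC identifiers
    if vpxcc & 0x10:                             # extension (X) bit set
        _, ext_len = struct.unpack("!HH", app_data[offset:offset + 4])
        offset += 4 + 4 * ext_len                # skip the header extension
    return seq, ts, app_data[offset:]
```

The returned payload is the data associated with the target encoding mode (e.g., h.264 NAL units), and the sequence number is what the later reassembly step can order packets by.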
Step 3043, the video data is reassembled based on the target type to obtain a video file of the target type, where the video file includes the video data and data associated with the target type, that is, the video file includes the effective traffic data, data associated with the target type, and data associated with the target coding mode.
For example, the video data corresponding to multiple traffic data packets may be reassembled into one video file of the target type based on the target type, and the video file is a playable file. The target type (which may also be referred to as a target file format) may be flv, MP4, avi, or the like; if the target type is flv, the video data corresponding to the multiple traffic data packets is reassembled into a video file of the flv type. After the video file of the target type is obtained, [identity information, video file] may be recorded for the video file; that is, the multiple traffic data packets corresponding to the identity information are reassembled into the video file. Of course, in practical application, the video data corresponding to the multiple traffic data packets may also be directly reassembled into a playable video file without being stored as a video file of the target type, which is not limited herein.
For example, when the video data is reassembled based on the target type to obtain the video file of the target type, the reassembling process is not limited in this embodiment as long as the video data can be reassembled into the video file. For example, the process of recombining video data corresponding to a plurality of traffic packets and adding data associated with a target type may be referred to as a recombination process. In summary, the video data may be reassembled to obtain a video file of a target type, and the video file may include video data corresponding to a plurality of traffic packets (the video data corresponding to each traffic packet may include effective traffic data and data associated with a target coding scheme), and data associated with the target type.
For example, if the video data corresponding to the first traffic data packet is 0112, the video data corresponding to the second traffic data packet is 34, the video data corresponding to the third traffic data packet is 0156, the video data corresponding to the fourth traffic data packet is 78, and the video data corresponding to the fifth traffic data packet is 9, the video data corresponding to the traffic data packets may be reassembled to obtain data 0112340156789. Data associated with the target type may then be added to the data 0112340156789, such as 0000, such as adding 0000 before data 0112340156789 resulting in video file 00000112340156789, or adding 0000 after data 0112340156789 resulting in video file 01123401567890000. Of course, the above is only a simple example of reconstructing video data to obtain a video file, and the method is not limited to this.
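The reassembly step above can be sketched as ordering the per-packet video data and prepending data associated with the target type; the 4-byte placeholder header below stands in for a real container structure such as an flv header:

```python
def reassemble(packets, container_header=b"\x00\x00\x00\x00"):
    """Reassemble per-packet video data into one video file.

    packets: list of (sequence_number, video_data) tuples. The data is
    ordered by sequence number and concatenated; container_header is a
    placeholder for the data associated with the target type (a real
    implementation would emit a proper flv/MP4/avi structure, and would
    handle RTP sequence-number wrap-around, which is ignored here).
    """
    ordered = sorted(packets)
    return container_header + b"".join(data for _, data in ordered)
```

With the five packets from the example above, reassemble produces the 0000-prefixed byte string 00000112340156789.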
In summary, by performing operations such as invalid traffic data stripping, effective traffic data deconstruction and reassembly on the traffic data packet, a target type video file, i.e., a video file that can be played, can be obtained.
In a possible implementation manner, after the video file of the target type is obtained, data to be presented may further be generated, where the data to be presented at least includes the identity information of the video device and the video file, and the data to be presented is sent to a user device, i.e., a user device of a worker. For example, the data to be presented is [identity information, video file], which may be sent to the worker; the worker can learn the picture content of the video device (i.e., the picture content in the video file) by watching online or downloading, and can view and analyze that picture content. Illustratively, the data to be presented may also include a timestamp of the video file, such as a start time and an end time of the video file.
Step 305, performing image analysis on the video frame corresponding to the video file to obtain a working state of the video device, where the working state may be a normal working state or an abnormal working state.
In one possible embodiment, the operating state of the video device may be obtained as follows:
the method comprises the steps of 1, playing the video file, collecting a video picture in the playing process of the video file, inputting the video picture to a trained first target classifier model, outputting a first classification result corresponding to the video picture by the first target classifier model, and determining the working state of the video equipment based on the first classification result.
For example, the first target classifier model may be trained based on normal video pictures and abnormal video pictures, and the first classification result is normal video pictures or abnormal video pictures. Based on the above, if the first classification result is that the video picture is normal, determining that the working state of the video equipment is a normal working state; and if the first classification result is that the video picture is abnormal, determining that the working state of the video equipment is an abnormal working state.
For example, a training data set 1 may be obtained, where the training data set 1 includes a large number of video pictures, and the video pictures may include a normal video picture (i.e., a video picture having a tag value of a first value that indicates that the video picture is a normal video picture) and an abnormal video picture (i.e., a video picture having a tag value of a second value that indicates that the video picture is an abnormal video picture). Then, training the initial classifier model based on the training data set 1 to obtain a trained classifier model, and referring the trained classifier model as a first target classifier model without limitation to the training process. Since the first target classifier model is trained based on the normal video picture and the abnormal video picture, the classification result of the first target classifier model is that the video picture is normal (as indicated by the first value) or the video picture is abnormal (as indicated by the second value).
On the basis, after the video picture in the video file playing process is input to the first target classifier model, the first target classifier model can output a first classification result corresponding to the video picture, and the first classification result is that the video picture is normal or the video picture is abnormal. Based on this, if the first classification result is that the video picture is normal, the working state of the video equipment can be determined to be a normal working state; if the first classification result is that the video picture is abnormal, the working state of the video equipment can be determined to be an abnormal working state.
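The decision logic of mode 1 can be sketched as follows. The classify_frame stub below uses a trivial brightness heuristic purely as a stand-in for the trained first target classifier model, and requiring every sampled frame to classify as normal is likewise an illustrative aggregation choice, not something the embodiment prescribes:

```python
NORMAL, ABNORMAL = "normal working state", "abnormal working state"

def classify_frame(frame_pixels):
    """Stub standing in for the trained first target classifier model.

    An almost-black frame is treated as abnormal; this heuristic merely
    replaces the real (e.g., convolutional) classifier for illustration.
    Returns the first classification result: True if the picture is normal.
    """
    mean_brightness = sum(frame_pixels) / max(len(frame_pixels), 1)
    return mean_brightness > 10

def working_state(frames):
    """Mode 1 decision: if any sampled video picture classifies as
    abnormal, the video device is in the abnormal working state."""
    return NORMAL if all(classify_frame(f) for f in frames) else ABNORMAL
```

In a real deployment, classify_frame would be replaced by inference on the trained first target classifier model described above.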
Mode 2, playing the video file, collecting a video picture during the playing of the video file, inputting the video picture to the trained first target classifier model, outputting, by the first target classifier model, a first classification result corresponding to the video picture, and determining the working state of the video device based on the first classification result.
For example, the first target classifier model may be trained based on normal video pictures, low-quality video pictures, occluded video pictures, and invalid video pictures, and the first classification result may be that the video picture is normal, the video picture quality is low, the video picture is occluded, or the video picture is invalid. Based on this, if the first classification result is that the video picture is normal, the working state of the video device is determined to be the normal working state; if the first classification result is that the video picture quality is low, the video picture is occluded, or the video picture is invalid, the working state of the video device is determined to be the abnormal working state. Further, when the first classification result is that the video picture quality is low, the abnormal type of the video device may be determined as low picture quality; when the first classification result is that the video picture is occluded, the abnormal type may be determined as picture occlusion; when the first classification result is that the video picture is invalid, the abnormal type may be determined as picture invalid.
For example, a training data set 2 may be obtained, where the training data set 2 includes a plurality of video pictures, and the video pictures may include normal video pictures (i.e., the video pictures have a tag value of a first value, which represents a normal video picture), and at least one of the following video pictures: a low-quality video picture (i.e., the label value of the video picture is the second value and represents the low-quality video picture), a blocked video picture (i.e., the label value of the video picture is the third value and represents the blocked video picture), and an invalid video picture (i.e., the label value of the video picture is the fourth value and represents the invalid video picture). For convenience of description, the training data set 2 is subsequently exemplified to include a normal video picture, a low-quality video picture, an occlusion video picture, and an invalid video picture.
Training the initial classifier model based on the training data set 2 to obtain a trained classifier model, and calling the trained classifier model as a first target classifier model without limitation to the training process. Since the first target classifier model is trained based on the normal video picture, the low-quality video picture, the occlusion video picture and the invalid video picture, the classification result of the first target classifier model is that the video picture is normal (represented by the first value), or the video picture is low in quality (represented by the second value), or the video picture is occluded (represented by the third value), or the video picture is invalid (represented by the fourth value).
On the basis, after the video picture in the video file playing process is input to the first target classifier model, the first target classifier model can output a first classification result corresponding to the video picture. If the first classification result is that the video picture is normal, the working state of the video equipment is a normal working state; if the first classification result is that the video image quality is low, the working state of the video equipment is an abnormal working state, and the abnormal type is that the image quality is low; if the first classification result is video image occlusion, the working state of the video equipment is an abnormal working state, and the abnormal type is image occlusion; if the first classification result is that the video picture is invalid, the working state of the video equipment is an abnormal working state, and the abnormal type is picture invalid.
Mode 3, inputting the video file to a trained second target classifier model, and outputting a second classification result corresponding to a video picture in the video file by the second target classifier model; an operating state of the video device is determined based on the second classification result. For example, the second target classifier model may be trained based on a normal video file and an abnormal video file, and the second classification result is that a video frame in the video file is normal or a video frame in the video file is abnormal. Based on the above, if the second classification result is that the video picture in the video file is normal, determining that the working state of the video equipment is a normal working state; and if the second classification result is that the video picture in the video file is abnormal, determining that the working state of the video equipment is an abnormal working state.
For example, a training data set 3 may be obtained, where the training data set 3 includes a large number of video files (that is, the video files themselves as binary data, not the video pictures shown when the video files are played), and the video files include normal video files (i.e., the tag value of the video file is a first value, indicating a normal video file) and abnormal video files (i.e., the tag value of the video file is a second value, indicating an abnormal video file). The initial classifier model is trained based on the training data set 3 to obtain a trained classifier model, which is referred to as the second target classifier model; the training process is not limited. Since the second target classifier model is trained based on normal video files and abnormal video files, the classification result of the second target classifier model is that the video file is normal (represented by the first value) or the video file is abnormal (represented by the second value).
After the video file itself is input to the second target classifier model, the second target classifier model may output a second classification result corresponding to the video picture in the video file. If the second classification result is that the video picture in the video file is normal, the working state of the video equipment is a normal working state; and if the second classification result is that the video picture in the video file is abnormal, the working state of the video equipment is an abnormal working state.
In the above embodiment, the first target classifier model may be a classifier model of any structure, such as a classifier model based on a convolutional neural network, without limitation. The second target classifier model may be any structure of classifier models, such as a convolutional neural network-based classifier model, without limitation.
Step 306, if the working state of the video device is the abnormal working state, alarm data is generated, where the alarm data may at least include the identity information of the video device and the working state of the video device (i.e., the abnormal working state). Illustratively, the alarm data may further include at least one of the following: the video file, video pictures collected during the playing of the video file (i.e., the video pictures input to the first target classifier model), and the abnormal type (for mode 2, the abnormal type may be low picture quality, picture occlusion, picture invalid, etc.).
In summary, the alarm data may be [ identity information, abnormal operating state, video file, video picture, abnormal type ], and of course, the above is only an example of the alarm data, and the above is not limited thereto, and may include at least one of identity information, abnormal operating state, video file, video picture, and abnormal type, and the abnormal type may be low picture quality (i.e. picture blur), picture occlusion, and picture invalidity.
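The alarm record described above maps naturally onto a small data structure; the field names and defaults below are illustrative assumptions, with only the identity information and working state mandatory:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlarmData:
    """Alarm record: identity information and the (abnormal) working
    state are mandatory; the remaining fields are optional extras."""
    identity: str
    working_state: str = "abnormal working state"
    video_file: Optional[bytes] = None
    video_picture: Optional[bytes] = None
    # e.g. "low picture quality", "picture occlusion", "picture invalid"
    anomaly_type: Optional[str] = None

alarm = AlarmData(identity="device-01", anomaly_type="picture occlusion")
```

Such a record can then be serialized and delivered to the worker's user device by mail, message, page pop-up, and so on, as described below.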
In one possible embodiment, the alarm data may be sent to a user device, i.e. a user device of a staff member, for example, by mail, message, page popup, etc.
In one possible implementation, regarding mode 1 of step 305, when a self-update condition of the first target classifier model is satisfied, a first training data set may be obtained, where the first training data set includes a normal video frame and an abnormal video frame, for example, the training data set 1 is updated, such as a new normal video frame and an abnormal video frame are added, so as to obtain the first training data set. Retraining the first target classifier model based on the first training data set, where the retrained first target classifier model is used to output a first classification result corresponding to a video frame in a video file, that is, when step 305 is executed again, obtaining the operating state of the video device based on the retrained first target classifier model by using a method 1.
With respect to mode 2 of step 305, when the self-updating condition of the first target classifier model is satisfied, a second training data set may be obtained, where the second training data set may include a normal video picture, a low-quality video picture, an occlusion video picture, and an invalid video picture, for example, the training data set 2 may be updated, such as adding a new normal video picture, a low-quality video picture, an occlusion video picture, and an invalid video picture, to obtain the second training data set. Retraining the first target classifier model based on the second training data set, where the retrained first target classifier model is used to output a first classification result corresponding to a video frame in the video file, that is, when step 305 is executed again, obtaining the operating state of the video device based on the retrained first target classifier model by adopting a method 2.
With respect to mode 3 of step 305, when the self-updating condition of the second target classifier model is satisfied, a third training data set may be obtained, where the third training data set may include a normal video file and an abnormal video file, for example, the training data set 3 is updated, such as a new normal video file and a new abnormal video file are added, so as to obtain the third training data set. Then, the second target classifier model is retrained based on the third training data set, and the retrained second target classifier model is used for outputting a second classification result corresponding to the video picture in the video file, that is, when step 305 is executed again, the operating state of the video device is obtained by adopting the method 3 based on the retrained second target classifier model.
In the above embodiments, the self-updating condition of the first target classifier model is satisfied, or the self-updating condition of the second target classifier model is satisfied, which may include but is not limited to: 1. periodic triggering, such as triggering once every month to satisfy the self-updating condition. 2. And (3) threshold triggering, namely if the abnormal alarm is judged as normal data by a worker or the video file is judged as abnormal data by the worker, increasing the manual judgment times, and triggering to meet the self-updating condition when the manual judgment times reach the threshold. 3. And manual triggering, such as manual triggering of a worker, meets the self-updating condition. Of course, the above are only a few examples and are not limiting.
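The three self-update conditions can be sketched as a small trigger object; the one-month period and the manual-judgment threshold below are example values only, not values fixed by the embodiment:

```python
import time

class SelfUpdateTrigger:
    """Sketch of the three self-update conditions: periodic triggering,
    a threshold on the manual-judgment count, and manual triggering.
    Period and threshold values are illustrative."""

    def __init__(self, period_seconds=30 * 24 * 3600, threshold=100):
        self.period_seconds = period_seconds
        self.threshold = threshold
        self.last_retrain = time.time()
        self.manual_judgments = 0
        self.manually_triggered = False

    def record_manual_judgment(self):
        # A worker overruled the model (e.g., an abnormal alarm was
        # judged as normal data, or a video file was judged abnormal).
        self.manual_judgments += 1

    def should_retrain(self, now=None):
        now = time.time() if now is None else now
        return (now - self.last_retrain >= self.period_seconds
                or self.manual_judgments >= self.threshold
                or self.manually_triggered)
```

When should_retrain returns True, the corresponding training data set is updated and the first or second target classifier model is retrained, after which the counters would be reset.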
In the above embodiment, each time the self-updating condition of the first target classifier model or the second target classifier model is satisfied, the first target classifier model or the second target classifier model is retrained, so that the self-updating of the first target classifier model or the second target classifier model is realized, and the effectiveness of data analysis is ensured.
According to the technical scheme, the analysis device is deployed in a bypass mode: the analysis device acquires the traffic data packets sent by the video device and analyzes the working state of the video device based on those traffic data packets. Information such as the user name and password of the video device is not needed, no verification based on a user name and password is required, and the manner of abnormality detection is simple. The analysis device can learn not only whether the video device is powered off or faulty, but also whether abnormalities such as low video picture quality, video picture occlusion, and video picture invalidation exist (for example, if the video device is moved to a meaningless angle, the video picture is invalid), so as to obtain a correct detection result. Because the traffic data packets are obtained in a bypass mode, the video pictures of the video device can be extracted through analysis, deconstruction, and reassembly of the traffic data packets without generating interfering data traffic; the working state of the video device is learned through analysis of the video pictures, and an alarm is raised for video devices that are not running normally. This helps workers understand the running state of the video devices, reduces working cost, improves working efficiency, and improves the effectiveness and security of the video internet of things. The video pictures are analyzed on a third-party device (i.e., the analysis device, rather than the management device or a web page of the video device) to detect whether the video device is abnormal.
The following describes a recovery and analysis method of the video internet of things in combination with a specific application scenario.
In the embodiment of the application, the flow data packet in the video internet of things is detected in real time in a bypass deployment mode, the flow is not interfered, the user name and the password of the video equipment do not need to be known, the information is not provided by the video equipment, the video image is recovered in a third-party equipment mode, and the video image is analyzed and abnormal alarm is given. In the embodiment of the application, the recovery and analysis method of the video internet of things may include the following steps:
and S1, detecting the traffic data packet in the video Internet of things in real time. On one hand, asset information of video equipment in the video Internet of things, such as an IP address, an MAC address, a manufacturer, a model, a serial number and the like, is obtained, and identity information of each piece of video equipment in the video Internet of things is determined according to the asset information. On the other hand, a traffic data packet of video equipment in the video internet of things, namely a traffic data packet of a video picture, is obtained.
(1) Acquiring the asset information of the video devices in the video internet of things; the acquisition manner is not limited. The identity information of each video device is generated according to the asset information, such as the IP address, MAC address, manufacturer, model, and serial number; the generation manner is not limited, such as direct character string concatenation, or a hash value of the concatenated character string.
(2) The method for acquiring the traffic data packet of the video equipment in the video internet of things is not limited, and the traffic data packet can be the traffic data packet of a monitoring RTSP protocol and an RTP protocol. In general, the management device may send a streaming instruction of the RTSP protocol to the video device, where the streaming instruction carries an encoding mode, such as h.264, h.265, and the video device sends a traffic packet of the RTP protocol to the management device.
In summary, the traffic data packet of the video device in the video internet of things can be acquired.
Step S2, analyzing, deconstructing, and recombining the traffic data packet, obtaining a playable video file after the recombination, marking the video file as "identity information of the video device + timestamp", and displaying the extracted video file, that is, providing the video file to a worker for viewing and analyzing.
(1) And analyzing the traffic data packet. According to the source IP address, the source port, the destination IP address, the destination port, the protocol type, and the like of the traffic data packet, the identity information of the video device corresponding to the traffic data packet is determined, and the target encoding mode of the traffic data packet is analyzed and stored as [ identity information, target encoding mode, traffic data packet ], and the determination modes of the target encoding mode and the identity information are referred to in the above embodiments and will not be described herein again.
(2) And deconstructing the traffic data packet. For each flow data packet, the invalid flow data (such as physical layer data, link layer data, network layer data and transport layer data) in the flow data packet is stripped to obtain the effective flow data (such as application layer data) in the flow data packet, and the effective flow data in the flow data packet is deconstructed based on a target coding mode to obtain video data. For each traffic packet, the [ identity information, target coding mode, video data ] may be stored for that traffic packet.
Generally, the traffic data packet is transmitted by an RTP protocol, and the process of analyzing the video data from the traffic data packet of the RTP protocol is performed according to the standard of the RTP protocol, which is not described herein again.
(3) Reassembling the traffic data packets. The video data corresponding to multiple traffic data packets may be reassembled into a video file of a target type (i.e., a target format), such as flv, MP4, or avi. Alternatively, the video data corresponding to the multiple traffic data packets is reassembled directly into a playable video file rather than a file of the flv, MP4, avi, or similar types. After the video file is obtained, [identity information, video file] may be stored for these traffic data packets.
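For H.264 video, the "playable file rather than a container" branch can be sketched by ordering RTP packets by sequence number and concatenating their payloads with Annex-B start codes into a raw elementary stream. This sketch assumes single-NAL-unit packetization only; RFC 6184 also defines FU-A/STAP-A modes that a full implementation must handle.

```python
import struct

def reassemble_annexb(rtp_packets):
    """Reorder RTP packets by sequence number and join their payloads
    with Annex-B start codes into a raw .h264 elementary stream."""
    def seq(pkt):  # sequence number = bytes 2..3 of the 12-byte RTP header
        return struct.unpack("!H", pkt[2:4])[0]
    stream = b""
    for pkt in sorted(rtp_packets, key=seq):
        stream += b"\x00\x00\x00\x01" + pkt[12:]  # strip fixed RTP header
    return stream

def rtp(seq_no, nal):
    # Minimal 12-byte RTP header: V=2, PT=96, given sequence number.
    return b"\x80\x60" + struct.pack("!H", seq_no) + b"\x00" * 8 + nal

# Packets arriving out of order are put back in sequence.
out = reassemble_annexb([rtp(2, b"\x41BBB"), rtp(1, b"\x67AAA")])
print(out == b"\x00\x00\x00\x01\x67AAA\x00\x00\x00\x01\x41BBB")  # True
```
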
(4) Displaying the video file. The [identity information, video file] record can be displayed to workers, who can view the video pictures in the video file by online watching, downloading, and so on.
Step S3, the video file is analyzed to obtain the working state of the video device, and an alarm is raised for any video device in an abnormal state. For example, based on the video file obtained in step S2, the file is analyzed to determine whether the video device is abnormal, and any detected abnormality is reported to workers by mail, alarm message, or the like.
(1) Analyzing the video file. The video file is analyzed to determine whether the working state of the video device is abnormal, for example whether anomalies such as low video picture quality, video picture occlusion, or an invalid video picture exist.
Illustratively, the video file may be analyzed with an anomaly detection method based on a classifier trained in advance. For example, normal video pictures, low-quality video pictures, occluded video pictures, and invalid video pictures are collected and combined into a labelled data set, and a classifier is trained on this data set; the structure of the classifier is not limited and may be, for example, a convolutional neural network. With the trained classifier, the video pictures (i.e., frame screenshots) in a video file can be classified to determine whether anomalies such as low picture quality, picture occlusion, or an invalid picture exist.
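As an illustrative stand-in for such a classifier, the sketch below uses a tiny nearest-centroid classifier over two hand-picked frame statistics (mean brightness and variance) instead of a CNN; the interface — frame in, class label out — is the same shape as the patent describes, but the method and labels are this example's own simplification.

```python
def frame_features(pixels):
    """Two toy features of a flattened grayscale frame."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return (mean, var)

def train_centroids(labelled_frames):
    """Average the feature vectors per label."""
    sums = {}
    for pixels, label in labelled_frames:
        f = frame_features(pixels)
        acc = sums.setdefault(label, [0.0, 0.0, 0])
        acc[0] += f[0]; acc[1] += f[1]; acc[2] += 1
    return {lab: (a / n, b / n) for lab, (a, b, n) in sums.items()}

def classify(pixels, centroids):
    """Label of the nearest centroid in feature space."""
    f = frame_features(pixels)
    return min(centroids,
               key=lambda lab: (f[0] - centroids[lab][0]) ** 2
                             + (f[1] - centroids[lab][1]) ** 2)

# Tiny labelled "data set": textured frames vs. all-dark (invalid) frames.
training = [([0, 64, 128, 192] * 8, "normal"),
            ([10, 70, 130, 190] * 8, "normal"),
            ([0] * 32, "invalid"),
            ([2] * 32, "invalid")]
centroids = train_centroids(training)
print(classify([5, 60, 125, 185] * 8, centroids))  # normal
print(classify([1] * 32, centroids))               # invalid
```
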
(2) Abnormality alarm. If a video picture is abnormal, i.e., the video device is abnormal, the anomaly detection method generates alarm data [identity information, video file, video picture, abnormality description] and sends it to workers. The abnormality description may include the abnormality type, such as low video picture quality, video picture occlusion, or an invalid video picture, and may also include the abnormal state, indicating that the video device is abnormal.
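Assembling that alarm record can be sketched as below; field names, the filename, and the notification channel (a plain callback here, standing in for mail or a message alarm) are all hypothetical.

```python
def make_alarm(identity, video_file, picture, anomaly_type):
    """Build the [identity, video file, video picture, description] record."""
    return {
        "identity": identity,
        "video_file": video_file,
        "picture": picture,
        "description": {"type": anomaly_type, "state": "abnormal"},
    }

def raise_alarm(alarm, notify):
    """Hand an abnormal-state alarm to a notification channel."""
    if alarm["description"]["state"] == "abnormal":
        notify(alarm)

sent = []
raise_alarm(make_alarm("camera-entrance-01", "cam01_1622764800.mp4",
                       "frame_0042.jpg", "video picture occluded"),
            sent.append)
print(len(sent))  # 1
```
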
Step S4, the video file analysis method is self-updated to ensure its continued effectiveness.
(1) Self-updating trigger conditions. First, self-updating may be triggered periodically, for example once a month. Second, self-updating may be triggered by a threshold: if a worker judges an abnormality alarm to be normal data, or judges a video file to be abnormal data of a certain type, the count of manual judgements is updated, and self-updating is triggered once this count exceeds the threshold. Third, self-updating may be triggered manually by workers.
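The three trigger conditions can be combined in a small helper like the one below; the class name, period, and threshold defaults are illustrative assumptions, not values from the patent.

```python
import time

class SelfUpdateTrigger:
    """Periodic, threshold-based, and manual triggers for retraining."""

    def __init__(self, period_s=30 * 24 * 3600, correction_threshold=50):
        self.period_s = period_s
        self.correction_threshold = correction_threshold
        self.last_update = time.time()
        self.manual_corrections = 0

    def record_manual_judgement(self):
        # A worker relabelled an alarm or a video file.
        self.manual_corrections += 1

    def should_update(self, manual_request=False, now=None):
        now = time.time() if now is None else now
        return (manual_request
                or now - self.last_update >= self.period_s
                or self.manual_corrections >= self.correction_threshold)

trig = SelfUpdateTrigger(period_s=100, correction_threshold=2)
trig.last_update = 0
print(trig.should_update(now=50))          # False: too early, no corrections
trig.record_manual_judgement()
trig.record_manual_judgement()
print(trig.should_update(now=50))          # True: threshold reached
```
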
(2) Self-updating process. The classifier is retrained on the stored video pictures of each category (such as normal, low-quality, occluded, and invalid video pictures).
In a possible implementation, referring to fig. 4, the recovery and analysis method for the video Internet of Things may be implemented with the following modules: a traffic data detection module, a traffic data packet processing module, a video data analysis module, and a self-updating module. Illustratively, the traffic data detection module detects traffic data packets in the video Internet of Things in real time, and may execute step S1. The traffic data packet processing module parses, deconstructs, and reassembles the traffic data packets to obtain a playable video file, and may execute step S2. The video data analysis module analyzes the video file, obtains the working state of the video device, raises an alarm for a video device in an abnormal state, and judges whether the video device is abnormal, for example: the angle of the video device is abnormal, the video device is occluded, the picture of the video device is blurred, and so on; it may execute step S3. The self-updating module implements self-updating of the video file analysis method, and may execute step S4.
Based on the same application concept as the method, an embodiment of the present application provides a recovery and analysis apparatus for a video internet of things, where the video internet of things includes a management device and a video device, the management device is connected to the video device through a forwarding device, and an analysis device is deployed in a bypass of the forwarding device, as shown in fig. 5, which is a schematic structural diagram of the apparatus, and the apparatus is applied to the analysis device, and the apparatus may include:
an obtaining module 51, configured to obtain, from the forwarding device, a traffic data packet sent by the video device, where the traffic data packet at least includes traffic characteristic information and effective traffic data; acquiring a target coding mode corresponding to the flow data packet; a determining module 52, configured to determine identity information of the video device based on the traffic characteristic information; a recovery module 53, configured to perform video file recovery on the effective traffic data in the traffic data packet based on the target coding manner to obtain a video file of a target type, where the video file includes the effective traffic data, data associated with the target type, and data associated with the target coding manner; an analysis module 54, configured to perform image analysis on a video picture corresponding to the video file to obtain a working state of the video device, where the working state is a normal working state or an abnormal working state; a generating module 55, configured to generate alarm data if the working state is an abnormal working state, where the alarm data at least includes the identity information of the video device and the working state of the video device.
The obtaining module 51 is specifically configured to, when obtaining the target encoding mode corresponding to the traffic data packet: if the traffic data packet further comprises a coding mode, determining the coding mode in the traffic data packet as a target coding mode corresponding to the traffic data packet; or, acquiring, from the forwarding device, a stream fetching instruction sent by the management device to the video device, where the stream fetching instruction includes a coding mode corresponding to the traffic data packet, and determining a coding mode in the stream fetching instruction as a target coding mode corresponding to the traffic data packet; or, determining a preset encoding mode as a target encoding mode corresponding to the traffic data packet.
Illustratively, the traffic data packet further includes invalid traffic data, and the recovery module 53, when performing video file recovery on the effective traffic data in the traffic data packet based on the target encoding mode to obtain a video file of a target type, is specifically configured to: strip the invalid traffic data in the traffic data packet to obtain the effective traffic data, wherein the invalid traffic data comprises physical layer data, link layer data, network layer data and transport layer data in the traffic data packet, and the effective traffic data comprises application layer data in the traffic data packet; deconstruct the effective traffic data based on the target encoding mode to obtain video data, wherein the video data comprises the effective traffic data and data associated with the target encoding mode; and reassemble the video data based on a target type to obtain a video file of the target type, wherein the video file comprises the video data and data associated with the target type.
For example, the analysis module 54 performs image analysis on the video picture corresponding to the video file, and when the working state of the video device is obtained, is specifically configured to: playing the video file, and collecting a video picture in the playing process of the video file; inputting the video picture to a trained first target classifier model, and outputting a first classification result corresponding to the video picture by the first target classifier model; determining an operating state of the video device based on the first classification result.
Illustratively, the analyzing module 54 is specifically configured to, when determining the operating status of the video device based on the first classification result: if the first target classifier model is obtained based on normal video picture and abnormal video picture training, the first classification result is that the video picture is normal or the video picture is abnormal; if the first classification result is that the video picture is normal, determining that the working state of the video equipment is a normal working state; if the first classification result is that the video picture is abnormal, determining that the working state of the video equipment is an abnormal working state; or if the first target classifier model is obtained based on normal video picture, low-quality video picture, occlusion video picture and invalid video picture training, the first classification result is that the video picture is normal, or the video picture quality is low, or the video picture is occluded, or the video picture is invalid; if the first classification result is that the video picture is normal, determining that the working state of the video equipment is a normal working state; and if the first classification result is that the video picture quality is low, or the video picture is blocked, or the video picture is invalid, determining that the working state of the video equipment is an abnormal working state.
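Whichever training regime is used, the mapping from classification result to working state has the same shape and can be sketched as follows; the label strings are illustrative stand-ins for the classifier's actual outputs.

```python
# Results that indicate an abnormal working state in either the
# two-class or the four-class variant described above.
ABNORMAL_RESULTS = {"video picture abnormal", "low quality",
                    "occluded", "invalid"}

def working_state(first_classification_result):
    """Map a first classification result to the device working state."""
    if first_classification_result == "normal":
        return "normal working state"
    if first_classification_result in ABNORMAL_RESULTS:
        return "abnormal working state"
    raise ValueError("unknown classification result: %r"
                     % (first_classification_result,))

print(working_state("occluded"))  # abnormal working state
```
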
For example, the analysis module 54 performs image analysis on the video picture corresponding to the video file, and when the working state of the video device is obtained, is specifically configured to: inputting the video file to a trained second target classifier model, and outputting a second classification result corresponding to a video picture in the video file by the second target classifier model; determining an operating state of the video device based on the second classification result; if the second target classifier model is obtained based on normal video file and abnormal video file training, the second classification result is that the video picture in the video file is normal or the video picture in the video file is abnormal; if the second classification result is that the video picture in the video file is normal, determining that the working state of the video equipment is a normal working state; and if the second classification result is that the video picture in the video file is abnormal, determining that the working state of the video equipment is an abnormal working state.
Based on the same application concept as the method, an embodiment of the present application provides an analysis device. The video Internet of Things includes a management device and a video device, the management device is connected to the video device through a forwarding device, and the analysis device is deployed on a bypass of the forwarding device. As shown in fig. 6, the analysis device includes: a processor 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the processor 61; the processor 61 is configured to execute the machine-executable instructions to perform the following steps:
acquiring a traffic data packet sent by the video equipment from the forwarding equipment, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data;
acquiring a target coding mode corresponding to the traffic data packet;
determining identity information of the video device based on the traffic characteristic information;
performing video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type and data associated with the target coding mode;
performing image analysis on a video picture corresponding to the video file to obtain a working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and if the working state is an abnormal working state, generating alarm data, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where a plurality of computer instructions are stored, and when the computer instructions are executed by a processor, the method for restoring and analyzing a video internet of things disclosed in the above examples of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A recovery and analysis method for a video Internet of Things, wherein the video Internet of Things comprises a management device and a video device, the management device is connected to the video device through a forwarding device, an analysis device is deployed on a bypass of the forwarding device, and the method is applied to the analysis device, the method comprising the following steps:
acquiring a traffic data packet sent by the video equipment from the forwarding equipment, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data;
acquiring a target coding mode corresponding to the traffic data packet;
determining identity information of the video device based on the traffic characteristic information;
performing video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type and data associated with the target coding mode;
performing image analysis on a video picture corresponding to the video file to obtain a working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and if the working state is an abnormal working state, generating alarm data, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
2. The method of claim 1,
the obtaining of the target coding mode corresponding to the traffic data packet includes:
if the traffic data packet further comprises a coding mode, determining the coding mode in the traffic data packet as a target coding mode corresponding to the traffic data packet; alternatively,
acquiring a stream fetching instruction sent to the video device by the management device from the forwarding device, wherein the stream fetching instruction comprises a coding mode corresponding to the traffic data packet, and determining the coding mode in the stream fetching instruction as a target coding mode corresponding to the traffic data packet; alternatively,
and determining a preset coding mode as a target coding mode corresponding to the traffic data packet.
3. The method according to claim 1, wherein the traffic data packet further includes invalid traffic data, and the performing video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type comprises:
stripping the invalid traffic data in the traffic data packet to obtain the effective traffic data, wherein the invalid traffic data comprises physical layer data, link layer data, network layer data and transport layer data in the traffic data packet, and the effective traffic data comprises application layer data in the traffic data packet;
deconstructing the effective traffic data based on the target coding mode to obtain video data, wherein the video data comprises the effective traffic data and data associated with the target coding mode;
and recombining the video data based on a target type to obtain a video file of the target type, wherein the video file comprises the video data and data associated with the target type.
4. The method according to claim 1, wherein the performing image analysis on the video picture corresponding to the video file to obtain the operating state of the video device comprises:
playing the video file, and collecting a video picture in the playing process of the video file;
inputting the video picture to a trained first target classifier model, and outputting a first classification result corresponding to the video picture by the first target classifier model;
determining an operating state of the video device based on the first classification result.
5. The method of claim 4,
the determining the operating state of the video device based on the first classification result comprises:
if the first target classifier model is obtained based on normal video picture and abnormal video picture training, the first classification result is that the video picture is normal or the video picture is abnormal; if the first classification result is that the video picture is normal, determining that the working state of the video equipment is a normal working state; if the first classification result is that the video picture is abnormal, determining that the working state of the video equipment is an abnormal working state;
or if the first target classifier model is obtained based on normal video picture, low-quality video picture, occlusion video picture and invalid video picture training, the first classification result is that the video picture is normal, or the video picture quality is low, or the video picture is occluded, or the video picture is invalid; if the first classification result is that the video picture is normal, determining that the working state of the video equipment is a normal working state;
and if the first classification result is that the video picture quality is low, or the video picture is blocked, or the video picture is invalid, determining that the working state of the video equipment is an abnormal working state.
6. The method according to claim 1, wherein the performing image analysis on the video picture corresponding to the video file to obtain the operating state of the video device comprises:
inputting the video file to a trained second target classifier model, and outputting a second classification result corresponding to a video picture in the video file by the second target classifier model;
determining an operating state of the video device based on the second classification result;
if the second target classifier model is obtained based on normal video file and abnormal video file training, the second classification result is that the video picture in the video file is normal or the video picture in the video file is abnormal; if the second classification result is that the video picture in the video file is normal, determining that the working state of the video equipment is a normal working state; and if the second classification result is that the video picture in the video file is abnormal, determining that the working state of the video equipment is an abnormal working state.
7. The method according to claim 5 or 6,
when a self-updating condition of a first target classifier model is met, acquiring a first training data set, wherein the first training data set comprises a normal video picture and an abnormal video picture; retraining the first target classifier model based on the first training data set; the retrained first target classifier model is used for outputting a first classification result corresponding to a video picture in a video file; or when the self-updating condition of the first target classifier model is met, acquiring a second training data set, wherein the second training data set comprises a normal video picture, a low-quality video picture, a shielding video picture and an invalid video picture; retraining the first target classifier model based on the second training data set; the retrained first target classifier model is used for outputting a first classification result corresponding to a video picture in a video file;
when the self-updating condition of the second target classifier model is met, a third training data set is obtained, wherein the third training data set comprises a normal video file and an abnormal video file; retraining a second target classifier model based on the third training data set; and the retrained second target classifier model is used for outputting a second classification result corresponding to the video picture in the video file.
8. The method of claim 1,
after the video file recovery is performed on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, the method further includes:
and generating data to be displayed, wherein the data to be displayed at least comprises the identity information of the video equipment and the video file, and sending the data to be displayed to user equipment.
9. A recovery and analysis apparatus for a video Internet of Things, the video Internet of Things comprising a management device and a video device, the management device being connected to the video device through a forwarding device, wherein an analysis device is deployed on a bypass of the forwarding device, the apparatus is applied to the analysis device, and the apparatus comprises:
an obtaining module, configured to obtain, from the forwarding device, a traffic data packet sent by the video device, where the traffic data packet at least includes traffic characteristic information and effective traffic data; acquiring a target coding mode corresponding to the flow data packet;
a determining module, configured to determine identity information of the video device based on the traffic characteristic information;
a recovery module, configured to perform video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, where the video file includes the effective traffic data, data associated with the target type, and data associated with the target coding mode;
the analysis module is used for carrying out image analysis on a video picture corresponding to the video file to obtain the working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and the generating module is used for generating alarm data if the working state is an abnormal working state, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
10. An analysis device, wherein a video Internet of Things includes a management device and a video device, the management device is connected to the video device through a forwarding device, the analysis device is deployed on a bypass of the forwarding device, and the analysis device includes: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring a traffic data packet sent by the video equipment from the forwarding equipment, wherein the traffic data packet at least comprises traffic characteristic information and effective traffic data;
acquiring a target coding mode corresponding to the traffic data packet;
determining identity information of the video device based on the traffic characteristic information;
performing video file recovery on the effective traffic data in the traffic data packet based on the target coding mode to obtain a video file of a target type, wherein the video file comprises the effective traffic data, data associated with the target type and data associated with the target coding mode;
performing image analysis on a video picture corresponding to the video file to obtain a working state of the video equipment, wherein the working state is a normal working state or an abnormal working state;
and if the working state is an abnormal working state, generating alarm data, wherein the alarm data at least comprises the identity information of the video equipment and the working state of the video equipment.
CN202110623947.8A 2021-06-04 2021-06-04 Recovery and analysis method, device and equipment for video Internet of things Active CN113079371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110623947.8A CN113079371B (en) 2021-06-04 2021-06-04 Recovery and analysis method, device and equipment for video Internet of things


Publications (2)

Publication Number Publication Date
CN113079371A true CN113079371A (en) 2021-07-06
CN113079371B CN113079371B (en) 2021-09-21


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385931A (en) * 2023-04-04 2023-07-04 北京中科睿途科技有限公司 Method and device for detecting video monitoring picture, electronic equipment and storage medium
CN116546191A (en) * 2023-07-05 2023-08-04 杭州海康威视数字技术股份有限公司 Video link quality detection method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110064267A1 (en) * 2009-09-17 2011-03-17 Wesley Kenneth Cobb Classifier anomalies for observed behaviors in a video surveillance system
CN105376092A (en) * 2015-11-19 2016-03-02 杭州当虹科技有限公司 HLS flow real-time monitoring and alarming system based on switch port mirroring
CN107133654A (en) * 2017-05-25 2017-09-05 大连理工大学 A kind of method of monitor video accident detection
CN110049317A (en) * 2019-04-30 2019-07-23 睿石网云(北京)科技有限公司 A kind of online fault detection method, system and the electronic equipment of video monitoring system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385931A (en) * 2023-04-04 2023-07-04 北京中科睿途科技有限公司 Method and device for detecting video monitoring picture, electronic equipment and storage medium
CN116385931B (en) * 2023-04-04 2023-08-29 北京中科睿途科技有限公司 Method and device for detecting video monitoring picture, electronic equipment and storage medium
CN116546191A (en) * 2023-07-05 2023-08-04 杭州海康威视数字技术股份有限公司 Video link quality detection method, device and equipment
CN116546191B (en) * 2023-07-05 2023-09-29 杭州海康威视数字技术股份有限公司 Video link quality detection method, device and equipment

Also Published As

Publication number Publication date
CN113079371B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
Mughal A comprehensive study of practical techniques and methodologies in incident-based approaches for cyber forensics
CN113079371B (en) Recovery and analysis method, device and equipment for video Internet of things
KR102156818B1 (en) Action recognition in a video sequence
CN111586361B (en) Image processing method and related device
US8079083B1 (en) Method and system for recording network traffic and predicting potential security events
US20120078864A1 (en) Electronic data integrity protection device and method and data monitoring system
Joshi et al. Fundamentals of Network Forensics
Ferrando et al. Classification of device behaviour in internet of things infrastructures: towards distinguishing the abnormal from security threats
Deshpande et al. Security and Data Storage Aspect in Cloud Computing
CN106851231A A video monitoring method and system
CN113012034A (en) Method, device and system for image display processing
CN113141335B (en) Network attack detection method and device
CN116707965A (en) Threat detection method and device, storage medium and electronic equipment
CN113839925A (en) IPv6 network intrusion detection method and system based on data mining technology
US11398091B1 (en) Repairing missing frames in recorded video with machine learning
CN112468454B (en) Remote file management system and remote file management method thereof
CN114866310A Malicious encrypted traffic detection method, terminal device and storage medium
CN114363035A Traffic diversion method and device
CN113141274A Method, system and storage medium for real-time detection of sensitive data leakage based on network holography
CN116582700B (en) Real-time video monitoring tamper detection method, device and equipment
Matusek et al. NIVSS: a nearly indestructible video surveillance system
CN115557409B (en) Intelligent early warning system of tower crane
CN112449237B (en) Method, device and system for detecting video code stream
CN114189371B (en) Audit method and device for camera management and control behaviors, electronic equipment and storage medium
US11699334B2 (en) Quantum computing-based video alert system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant