CN116886951A - Unified storage method based on video equipment management cloud platform - Google Patents

Unified storage method based on video equipment management cloud platform

Info

Publication number
CN116886951A
CN116886951A
Authority
CN
China
Prior art keywords
preset
media
cloud platform
method based
management cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310836394.3A
Other languages
Chinese (zh)
Inventor
梁帅
孙维
戴书球
文学峰
蒋波
韩麟之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Original Assignee
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Smart City Science And Technology Research Institute Co ltd, CCTEG Chongqing Research Institute Co Ltd filed Critical Chongqing Smart City Science And Technology Research Institute Co ltd
Priority to CN202310836394.3A priority Critical patent/CN116886951A/en
Publication of CN116886951A publication Critical patent/CN116886951A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application relates to the technical field of video monitoring, and in particular to a unified storage method based on a video equipment management cloud platform, comprising the following steps: a protocol conversion step: acquiring video data and converting its protocol into a preset protocol; a slicing step: slicing the protocol-converted video data to generate a description file and a plurality of media segments; an identification step: judging whether a media segment contains a preset identifier; if so, jumping to the correction step, and if not, jumping to the storage step; a correction step: performing noise reduction on the media segment; a storage step: storing the description file and the noise-reduced media segments. The technical scheme of the application can reduce picture-quality loss.

Description

Unified storage method based on video equipment management cloud platform
Technical Field
The application relates to the technical field of video monitoring, in particular to a unified storage method based on a video equipment management cloud platform.
Background
The traditional video monitoring system is an analog video monitoring system composed of monitoring terminals and a television wall: a front-end camera sends analog video signals over a video cable to a matrix monitoring host, which forwards the received signals to the television wall of the master control room for viewing by the user. However, when there are many cameras or surveillance video is captured over long periods, the stored video data is constrained by the capacity of the storage device, which shortens the retention time of the video data and makes remote access and real-time playback inconvenient.
For this reason, schemes have appeared that slice video data into media segments, store the segments on a cloud server, and transmit them to a client by video streaming for remote playback. However, video data consists of a sequence of frames, and slicing cannot perfectly separate consecutive frames at the boundary between two consecutive media segments; several frames may be lost, causing a loss of picture quality.
Therefore, there is a need for a unified storage method based on a video device management cloud platform that can reduce picture-quality loss.
Disclosure of Invention
The application provides a unified storage method based on a video equipment management cloud platform, which can reduce the loss of picture quality.
In order to solve the technical problems, the application provides the following technical scheme:
a unified storage method based on a video equipment management cloud platform comprises the following steps:
A protocol conversion step: acquiring video data, and converting the protocol of the video data into a preset protocol;
a slicing step: slicing the protocol-converted video data to generate a description file and a plurality of media segments;
an identification step: judging whether a media segment contains a preset identifier; if so, jumping to the correction step, and if not, jumping to the storage step;
a correction step: performing noise reduction on the media segment;
a storage step: storing the description file and the noise-reduced media segments.
The principle and beneficial effects of this basic scheme are as follows:
In this scheme, protocol conversion of the video data ensures that the slicing operation is performed under a unified protocol, and noise reduction on the sliced media segments effectively improves picture quality. However, applying noise reduction to every media segment would consume substantial computing resources. Moreover, when no person or object passes through the camera's monitoring range, the picture is static, the difference between successive frames is small, and even if several frames are lost the picture quality is not unduly affected. In this scheme, the preset identifier is chosen according to the monitored object, e.g. a human body or a vehicle. For example, if the preset identifier is a vehicle, noise reduction is performed on a media segment when it contains a vehicle: once a vehicle appears in the picture, the difference between successive frames grows, so losing several frames would noticeably degrade picture quality. Performing noise reduction at this point improves picture clarity, offsets the possible loss of several frames, and thus reduces picture-quality loss. In addition, a media segment in which a vehicle appears is more likely to be retrieved and viewed later by staff, and noise reduction improves its viewing experience.
Further, in the correction step, the media segment is converted to grayscale and then noise-reduced with Gaussian filtering.
Grayscale conversion removes the colour dimension and reduces the amount of data to process.
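The correction step can be sketched in pure Python (a minimal illustration under stated assumptions, not the patent's implementation; a real system would use an image-processing library such as OpenCV). The function names and the 3-tap kernel are illustrative.

```python
# Minimal sketch of the correction step: grayscale conversion followed by
# Gaussian filtering. Frames are nested lists of (R, G, B) tuples; all
# names here are illustrative, not taken from the patent.

def to_grayscale(frame):
    """ITU-R BT.601 luma approximation; drops the colour dimension."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in frame]

def gaussian_blur_rows(gray, kernel=(0.25, 0.5, 0.25)):
    """1-D Gaussian smoothing along each row (edge pixels clamped)."""
    blurred = []
    for row in gray:
        n = len(row)
        out = []
        for i in range(n):
            left = row[max(i - 1, 0)]
            right = row[min(i + 1, n - 1)]
            out.append(kernel[0] * left + kernel[1] * row[i] + kernel[2] * right)
        blurred.append(out)
    return blurred

frame = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)]]
gray = to_grayscale(frame)          # luma values, one per pixel
smooth = gaussian_blur_rows(gray)   # noise-reduced values
```

A production pipeline would apply a 2-D Gaussian kernel per frame; the 1-D version above only shows the shape of the computation.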
Further, the description file records the shooting date and total duration of the video data and the number and duration of each media segment.
Further, the method further comprises a playback request step: receiving a playback request, acquiring the corresponding description file based on the playback request, and issuing the description file.
Further, the method further comprises a playback step: receiving and parsing the description file, and downloading and playing the media segments in sequence according to their numbers.
Further, in the identification step, binarization is performed on the media segment, and it is judged whether the binarized media segment contains the preset identifier; if so, jump to the correction step, and if not, jump to the storage step.
Binarization reduces the information in the picture, lowering the amount of data to process and improving processing efficiency.
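The binarization pre-processing can be sketched as follows. The threshold value and the foreground-ratio heuristic are illustrative assumptions; the patent does not specify how the preset identifier is actually detected.

```python
# Sketch of the identification step's pre-processing: binarize each
# grayscale frame to 0/255 so that whatever detector follows works on
# far less information. The threshold value and the foreground-ratio
# heuristic are illustrative assumptions, not taken from the patent.

def binarize(gray, threshold=128):
    """Reduce a grayscale frame to pure black/white pixels."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

def foreground_ratio(binary):
    """Fraction of white pixels: a crude hint that something entered an
    otherwise dark, static scene. A real system would run a person or
    vehicle detector on the binarized frame instead."""
    total = sum(len(row) for row in binary)
    white = sum(px == 255 for row in binary for px in row)
    return white / total if total else 0.0

gray = [[10, 200, 250], [30, 40, 220]]
binary = binarize(gray)
```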
Further, in the slicing step, the video data is sliced in units of milliseconds.
Slicing with millisecond precision is highly accurate and reduces the number of frames lost at segment boundaries, thereby reducing picture-quality loss.
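The effect of millisecond-precision slicing can be sketched as a deterministic mapping from frame timestamps to segment numbers. The 1-second segment duration follows the embodiment's example; the function name is illustrative.

```python
# Sketch: with millisecond-precision slicing, every frame timestamp maps
# deterministically into a numbered media segment, so only frames that
# straddle a boundary are at risk. The 1-second segment duration follows
# the embodiment's example; the function name is illustrative.

SEGMENT_MS = 1000  # 1-second segments, as in the embodiment

def segment_number(timestamp_ms):
    """Return the 1-based segment number for a frame timestamp in ms."""
    return timestamp_ms // SEGMENT_MS + 1

# e.g. a frame at t = 1,500 ms falls in segment 002
```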
Further, the method further comprises a cleaning step: judging, based on the shooting date of the video data in the description file, whether a preset storage duration has been exceeded, and if so, deleting the description file and its corresponding media segments.
Regularly cleaning the description files and media segments reduces the storage space occupied by outdated video and eases storage pressure.
Further, in the identification step, if the preset identifier is contained, the type of the preset identifier is also judged; a first preset number is determined based on the type, the identification step is omitted for the first preset number of media segments following the current one, and the correction step is performed directly.
Drawings
Fig. 1 is a flowchart of a unified storage method based on a video device management cloud platform according to an embodiment.
Detailed Description
The following is a further detailed description of the embodiments:
example 1
As shown in fig. 1, the unified storage method based on the video device management cloud platform of the embodiment includes the following steps:
Protocol conversion step: video data is acquired, and its protocol is converted into a preset protocol. In this embodiment, the protocol of the video data is the GB/T 28181-2016 protocol or the RTSP protocol, and the preset protocol is the HLS protocol. Specifically, the ZLMediaKit service framework is used for protocol conversion; ZLMediaKit supports conversion among protocols such as RTSP/RTMP/HLS/HTTP-FLV/WebSocket-FLV/GB/T 28181/HTTP-TS/WebSocket-TS/HTTP-fMP4/WebSocket-fMP4/MP4.
Slicing step: the protocol-converted video data is sliced to generate a description file and a plurality of media segments. The description file records the shooting date and total duration of the video data and the number and duration of each media segment. In this embodiment, the video data is sliced in units of milliseconds.
In this embodiment, the description file is an m3u8 file and the media segments are ts files. For example, if the total duration of the video data is 10 seconds, it is sliced into 10 ts files, each 1 second long and numbered 001 to 010. An example shooting date is 2021-07-15 12:01:00.001.
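The embodiment's example (10 seconds sliced into ts files 001-010) can be sketched as generating a minimal m3u8-style description file. The playlist uses basic HLS tags; carrying the shooting date in a comment line is an illustrative assumption, not the patent's stated format.

```python
# Sketch of generating the description file for the embodiment's example:
# 10 seconds of video sliced into ten 1-second ts files numbered 001-010.
# Basic HLS m3u8 tags are used; the shooting-date comment line is an
# illustrative assumption, not the patent's format.

def make_description(shooting_date, n_segments, seg_seconds=1):
    lines = ["#EXTM3U",
             f"#EXT-X-TARGETDURATION:{seg_seconds}",
             f"# shooting-date: {shooting_date}"]
    for i in range(1, n_segments + 1):
        lines.append(f"#EXTINF:{seg_seconds:.1f},")  # segment duration
        lines.append(f"{i:03d}.ts")                  # numbered media segment
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = make_description("2021-07-15 12:01:00.001", 10)
```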
Identification step: binarization is performed on the media segment, and it is judged whether the binarized media segment contains a preset identifier; if so, jump to the correction step, and if not, jump to the storage step. In this embodiment, binarization is performed on every frame of the media segment.
The preset identifier may be a human body, a vehicle, and the like. For example, for a camera monitoring people, the corresponding preset identifier is a human body; for a camera monitoring vehicles entering and leaving a road gate, the corresponding preset identifier is a vehicle.
Correction step: noise reduction is performed on the media segment. In this embodiment, the media segment is first converted to grayscale and then noise-reduced with Gaussian filtering.
Storage step: the description file and the noise-reduced media segments are stored.
Playback request step: a playback request is received, the corresponding description file is acquired based on the playback request, and the description file is issued. In this embodiment, the playback request includes the playback date of the video, and the corresponding description file is determined from that date and the shooting date in the description file.
Playback step: the description file is received and parsed, and the media segments are downloaded and played in sequence according to their numbers.
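The playback step can be sketched as parsing the description file and ordering the segments by number before downloading each in turn. A real client would use an HLS library; this line-based parser is an illustrative assumption.

```python
# Sketch of the playback step: parse a description file and list the media
# segments in numbered order before downloading and playing each in turn.
# A real client would use an HLS library; this parser is illustrative.

def ordered_segments(description):
    """Extract ts file names and sort them by their zero-padded numbers."""
    files = [line.strip() for line in description.splitlines()
             if line.strip().endswith(".ts")]
    return sorted(files)  # zero-padded numbers sort correctly as strings

desc = "#EXTM3U\n#EXTINF:1.0,\n002.ts\n#EXTINF:1.0,\n001.ts\n"
segments = ordered_segments(desc)
```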
Cleaning step: based on the shooting date of the video data in the description file, it is judged whether the preset storage duration has been exceeded; if so, the description file and its corresponding media segments are deleted. In this embodiment, the storage duration is 30 days.
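The cleaning step above can be sketched with the embodiment's 30-day storage duration. The in-memory catalogue dict stands in for real cloud storage and is purely illustrative.

```python
# Sketch of the cleaning step: drop description files (and, by extension,
# their media segments) whose shooting date exceeds the embodiment's
# 30-day storage duration. The catalogue dict is an illustrative stand-in
# for real cloud storage.

from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

def clean(catalogue, now):
    """catalogue maps description-file name -> shooting date; returns
    only the entries still within the retention window."""
    return {name: date for name, date in catalogue.items()
            if now - date <= RETENTION}

now = datetime(2023, 7, 7)
catalogue = {"a.m3u8": datetime(2023, 7, 1),   # 6 days old: kept
             "b.m3u8": datetime(2023, 5, 1)}   # 67 days old: deleted
survivors = clean(catalogue, now)
```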
This embodiment also provides a video equipment management cloud platform comprising a camera, a server, and a client. The camera collects video data and uploads it to the server. After receiving the video data, the server performs the protocol conversion, slicing, identification, correction, storage, playback request, and cleaning steps. The client sends a playback request to the server and performs the playback step after receiving the description file issued by the server.
Embodiment 2
This embodiment differs from the first in that, in its identification step, if the preset identifier is contained, the type of the preset identifier is also judged; a first preset number is determined based on the type, the identification step is omitted for the first preset number of media segments following the current one, and the correction step is performed directly.
Because the monitoring range of a camera is fixed, the time a human body or a vehicle takes to pass under the camera normally falls within a certain range (a vehicle, being faster, takes less time than a person). When a vehicle or person appears in the camera's monitoring range, it is reflected in the media segment, i.e. the preset identifier is contained. Since the vehicle or person needs a certain time to leave the monitoring range, the media segments within that time are very likely to also contain it; omitting the identification step for them and proceeding directly to the correction step further simplifies identification and saves computing resources. In this embodiment, the first preset number is determined comprehensively from the estimated average speed of the preset identifier, the duration of a media segment, and the camera's monitoring range. The estimated average speed may be determined from the camera's installation location: for example, a camera on a highway and a camera on a road inside a residential compound use different values for the estimated average vehicle speed.
Further, in this embodiment, the actual average speed of the preset identifier is calculated from its movement distance across the media segment, and it is judged whether the ratio of the absolute difference between the actual and estimated average speeds to the estimated average speed exceeds a threshold, i.e. |V1 − V2| / V2, where V1 is the actual average speed and V2 is the estimated average speed.
If the ratio exceeds the threshold, the first preset number is determined comprehensively from the estimated average speed of the preset identifier, the duration of a media segment, and the camera's monitoring range. If the ratio is less than or equal to the threshold, the first preset number is instead determined from the actual average speed, the duration of a media segment, and the monitoring range. In this embodiment, the threshold is 20%.
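Embodiment 2's logic can be sketched as follows. Reading "determined comprehensively" as monitoring range / speed / segment duration is an illustrative assumption; the 20% threshold is from the embodiment.

```python
# Sketch of embodiment 2: compare actual vs estimated average speed via
# |V1 - V2| / V2, then size the skip window (the "first preset number").
# The range/speed/duration formula is an illustrative reading of
# "determined comprehensively"; the 20% threshold is from the embodiment.

THRESHOLD = 0.2  # 20%, as in the embodiment

def speed_for_skip(actual, estimated):
    """Pick which speed sizes the skip window: trust the measured speed
    unless it deviates from the estimate by more than the threshold."""
    ratio = abs(actual - estimated) / estimated
    return estimated if ratio > THRESHOLD else actual

def first_preset_number(speed_m_s, range_m, seg_seconds=1):
    """Segments to skip: time to cross the monitored range, in segments."""
    return max(1, round(range_m / speed_m_s / seg_seconds))
```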
The foregoing is merely an embodiment of the present application, and the application is not limited to this embodiment. Structures and features well known in the art are not described in detail here: those skilled in the art know the prior art as of the application or priority date, can apply conventional experimental means, and can, in light of this application and their own abilities, implement the scheme; typical known structures or methods should not be an obstacle to practising the application. Modifications and improvements made without departing from the structure of the present application should also be considered within its scope and do not affect the effect of its implementation or the utility of the patent. The protection scope of the present application is defined by the claims; the description of specific embodiments in the specification may be used to interpret the content of the claims.

Claims (9)

1. A unified storage method based on a video device management cloud platform, characterized by comprising the following steps:
a protocol conversion step: acquiring video data, and converting the protocol of the video data into a preset protocol;
a slicing step: slicing the protocol-converted video data to generate a description file and a plurality of media segments;
an identification step: judging whether a media segment contains a preset identifier; if so, jumping to the correction step, and if not, jumping to the storage step;
a correction step: performing noise reduction on the media segment;
a storage step: storing the description file and the noise-reduced media segments.
2. The unified storage method based on the video device management cloud platform according to claim 1, wherein: in the correction step, the media segment is converted to grayscale and then noise-reduced with Gaussian filtering.
3. The unified storage method based on the video device management cloud platform according to claim 2, wherein: the description file records the shooting date and total duration of the video data and the number and duration of each media segment.
4. The unified storage method based on the video device management cloud platform according to claim 3, wherein: the method further comprises a playback request step: receiving a playback request, acquiring the corresponding description file based on the playback request, and issuing the description file.
5. The unified storage method based on the video device management cloud platform according to claim 4, wherein: the method further comprises a playback step: receiving and parsing the description file, and downloading and playing the media segments in sequence according to their numbers.
6. The unified storage method based on the video device management cloud platform according to claim 1, wherein: in the identification step, binarization is performed on the media segment, and it is judged whether the binarized media segment contains the preset identifier; if so, jump to the correction step, and if not, jump to the storage step.
7. The unified storage method based on the video device management cloud platform according to claim 1, wherein: in the slicing step, the video data is sliced in units of milliseconds.
8. The unified storage method based on the video device management cloud platform according to claim 3, wherein: the method further comprises a cleaning step: judging, based on the shooting date of the video data in the description file, whether a preset storage duration has been exceeded, and if so, deleting the description file and its corresponding media segments.
9. The unified storage method based on the video device management cloud platform according to claim 6, wherein: in the identification step, if the preset identifier is contained, the type of the preset identifier is also judged; a first preset number is determined based on the type of the preset identifier, the identification step is omitted for the first preset number of media segments following the current media segment, and the correction step is performed directly.
CN202310836394.3A 2023-07-07 2023-07-07 Unified storage method based on video equipment management cloud platform Pending CN116886951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310836394.3A CN116886951A (en) 2023-07-07 2023-07-07 Unified storage method based on video equipment management cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310836394.3A CN116886951A (en) 2023-07-07 2023-07-07 Unified storage method based on video equipment management cloud platform

Publications (1)

Publication Number Publication Date
CN116886951A true CN116886951A (en) 2023-10-13

Family

ID=88261545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310836394.3A Pending CN116886951A (en) 2023-07-07 2023-07-07 Unified storage method based on video equipment management cloud platform

Country Status (1)

Country Link
CN (1) CN116886951A (en)

Similar Documents

Publication Publication Date Title
CA2936217C (en) Storage management of data streamed from a video source device
US20060077256A1 (en) High resolution pre-event record
EP2268029A1 (en) Wireless video distribution system, content bit rate control method, and computer readable recording medium having content bit rate control program stored therein
US11050924B2 (en) Method and system for auto-setting of cameras
CN109348279B (en) Plug flow method, device, equipment and storage medium
US20150036736A1 (en) Method, device and system for producing a merged digital video sequence
CN107948605A (en) Method, apparatus, equipment and the storage medium of vehicle-mounted monitoring video data storage
CN114640886B (en) Self-adaptive bandwidth audio/video transmission method, device, computer equipment and medium
CN111726657A (en) Live video playing processing method and device and server
CN110198475B (en) Video processing method, device, equipment, server and readable storage medium
CN105898625B (en) Playing processing method and terminal equipment
CN111131786A (en) Video monitoring storage system applying cloud storage
CN111277800A (en) Monitoring video coding and playing method and device, electronic equipment and storage medium
EP3975133A1 (en) Processing of images captured by vehicle mounted cameras
US11438545B2 (en) Video image-based media stream bandwidth reduction
US11042752B2 (en) Aligning advertisements in video streams
CN116886951A (en) Unified storage method based on video equipment management cloud platform
CN109308778B (en) Mobile detection alarm method, device, acquisition equipment and storage medium
CN114531528A (en) Method for video processing and image processing apparatus
CN112565693A (en) Method, system and equipment for monitoring video on demand
CN110602507A (en) Frame loss processing method, device and system
US11716475B2 (en) Image processing device and method of pre-processing images of a video stream before encoding
CN111800649A (en) Method and device for storing video and method and device for generating video
WO2024108950A1 (en) Bitstream control method and apparatus, and electronic device
JP7419151B2 (en) Server device, information processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination