CN112926385A - Video processing method of monitoring equipment and related product

Info

Publication number
CN112926385A
Authority
CN
China
Prior art keywords
video
monitoring
videos
static
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110081207.6A
Other languages
Chinese (zh)
Other versions
CN112926385B (en)
Inventor
危平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongguang (Shaoxing Keqiao) Cable Information Network Co., Ltd.
Original Assignee
Shenzhen Yibailong Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yibailong Technology Co., Ltd.
Priority to CN202110081207.6A
Publication of CN112926385A
Application granted
Publication of CN112926385B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06V20/41: PHYSICS; COMPUTING; IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scenes; Scene-specific elements in video content; Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F18/22: PHYSICS; COMPUTING; ELECTRIC DIGITAL DATA PROCESSING; Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F3/0608: PHYSICS; COMPUTING; ELECTRIC DIGITAL DATA PROCESSING; Input/output arrangements; Interfaces specially adapted for storage systems; Saving storage space on storage systems
    • G06F3/0652: PHYSICS; COMPUTING; ELECTRIC DIGITAL DATA PROCESSING; Input/output arrangements; Interfaces specially adapted for storage systems; Horizontal data movement; Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G06V40/10: PHYSICS; COMPUTING; IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Recognition of biometric, human-related or animal-related patterns in image or video data; Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: PHYSICS; COMPUTING; IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Recognition of biometric, human-related or animal-related patterns in image or video data; Human faces, e.g. facial parts, sketches or expressions

Abstract

The invention discloses a video processing method of monitoring equipment, which comprises the following steps. S1, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states. S2, primary classification: a plurality of video frames are extracted from the collected monitoring video, images of the plurality of video frames are extracted, and persons in the images are identified; if a person is present in the images, the video is determined to be a dynamic video. The video processing method of the monitoring equipment provided by the invention mainly processes the static video: only one static video picture is stored and most of the static video is deleted, so that the amount of video stored is reduced and cost is lowered.

Description

Video processing method of monitoring equipment and related product
Technical Field
The invention relates to the technical field of internet videos, in particular to a video processing method of monitoring equipment and a related product.
Background
With the advancement of safe-city construction, large numbers of monitoring cameras have been deployed in many cities in China, and the massive video data they generate is both an opportunity and a great challenge for security and protection in public areas.
A monitoring system is composed of five parts: cameras, transmission, control, display, and recording. The cameras transmit video images to the control host over coaxial video cable, network cable, or optical fiber, and the control host distributes the video signals to each monitor and video recording device; voice signals to be transmitted can be recorded into the video recorder synchronously. Through the control host, an operator can issue instructions to control the up, down, left, and right movements of the pan-tilt head and the focusing and zooming of the lens, and can switch among multiple cameras and pan-tilt heads. Using a dedicated video recording and processing mode, the images can be recorded, played back, and processed, so that the recording effect is optimal.
However, existing monitoring equipment handles the storage of monitoring video poorly. If the monitoring video is stored in full, it occupies a large amount of space; once the storage space is full, there is a risk of video being overwritten. Storing everything also requires a large amount of storage media, so storage cost is high, while a large amount of useless video occupies space and reduces the effective storage capacity.
Disclosure of Invention
The invention aims to solve the problems that existing monitoring video occupies a large amount of storage space, easily causes video overwriting, and entails high storage cost and small effective storage capacity.
In order to achieve the purpose, the invention adopts the following technical scheme:
the video processing method of the monitoring equipment comprises the following steps:
s1, collecting monitoring videos: collecting videos through monitoring and shooting, collecting the videos from monitoring and shooting in a centralized manner, transmitting the collected videos to a control host, and classifying video states through the control host;
s2, primary classification: extracting a plurality of video frames in the collected monitoring video, extracting images of the plurality of video frames, identifying persons in the images of the plurality of video frames, determining the images as dynamic videos if the images have the persons, determining the images as static videos if the images do not have the persons, classifying the static videos again, and determining the states of the videos;
s3, reclassifying: performing secondary filtering on the static video, when scene monitoring is needed, extracting a plurality of video frames in the collected monitoring video, comparing the extracted images, if the comparison is consistent, determining the images as the static video, and if the comparison is inconsistent, determining the images as the dynamic video;
when the monitoring subject needs to be monitored, confirming that a certain object in a corresponding scene in the monitoring video is the monitoring subject, identifying and confirming the monitoring subject, confirming whether the monitoring subject is complete in the monitoring video, confirming that the monitoring subject is a static video if the monitoring subject is in the complete state, and directly confirming that the monitoring subject is an incomplete state when the monitoring subject is shielded;
s4, video arrangement: classifying the surveillance videos which are still confirmed to be static videos after being classified again, classifying the surveillance videos which are confirmed to be dynamic videos after being classified primarily and classified again, and storing the surveillance videos serving as the static videos and the dynamic videos separately;
s5, playback: playing back the separately stored static video and dynamic video, and separately repeating the operations of the steps S2 and S3 on the video, confirming the static video and the dynamic video, repeating the operations of the steps S2 and S3 when the video states of the static video and the dynamic video are confirmed to have errors, reclassifying the monitoring video, and playing back the monitoring video after classification, repeating the operations of the steps S2 and S3 until the video states of the monitoring video are confirmed to be consistent at least twice, confirming that the video states of the static video and the dynamic video are both correct, and performing the next operation when the video states of the static video and the dynamic video are both confirmed to be correct;
s6, storing: and processing the classified monitoring videos of the static videos which are confirmed to be correct, only storing one video picture, deleting all the rest static monitoring parts, and directly storing the classified dynamic videos which are confirmed to be correct.
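As a rough illustration of how steps S2 and S3 could be realized, consider the following Python sketch. It is hypothetical: the patent prescribes no implementation, and the OpenCV HOG person detector, the ten sampled frames, and the mean-absolute-difference threshold used here are all assumptions.

```python
import cv2
import numpy as np

def classify_clip(path, n_samples=10, diff_threshold=2.0):
    """Classify a surveillance clip as 'dynamic' or 'static' (steps S2-S3, sketched)."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frames = []
    for i in np.linspace(0, max(total - 1, 0), n_samples, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if not ok:
            continue
        # S2, primary classification: any detected person makes the clip dynamic.
        boxes, _ = hog.detectMultiScale(frame)
        if len(boxes) > 0:
            cap.release()
            return "dynamic"
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()

    # S3, reclassifying: compare the sampled frames; a significant change means dynamic.
    for a, b in zip(frames, frames[1:]):
        if np.mean(cv2.absdiff(a, b)) > diff_threshold:
            return "dynamic"
    return "static"
```

A clip flagged "dynamic" would then be stored in full under step S6, while a "static" clip is reduced to a single picture.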
Preferably, the person recognition involved in step S2 includes infrared recognition and thermal recognition; the infrared recognition includes face recognition and human-body recognition, the thermal recognition is temperature recognition, and the temperature range for person recognition is controlled between 35-40 °C. During person recognition, face recognition, body recognition, and temperature recognition constrain one another and are compared cyclically; the number of cycles is 2-5.
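A minimal sketch of this cyclic cross-check follows. It is largely an assumption: the patent names the three cues and the 2-5 cycles but not how the cues are fused, so here a person is confirmed only when face, body, and a 35-40 °C reading agree in every cycle, and detect_face, detect_body, and read_temperature are placeholder callables.

```python
def person_confirmed(frame, thermal_frame, detect_face, detect_body,
                     read_temperature, cycles=3):
    """Cyclically cross-check face, body, and temperature cues (step S2, sketched).

    detect_face(frame) and detect_body(frame) return booleans;
    read_temperature(thermal_frame) returns a temperature in deg C.
    cycles should be 2-5 per the description.
    """
    for _ in range(cycles):
        face_ok = detect_face(frame)
        body_ok = detect_body(frame)
        temp_ok = 35.0 <= read_temperature(thermal_frame) <= 40.0
        # The three recognitions constrain one another: a person counts only
        # when all three cues agree in every cycle.
        if not (face_ok and body_ok and temp_ok):
            return False
    return True
```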
Preferably, the method for comparing video-frame images in step S3 proceeds as follows: the first frame video image is temporarily stored, and the second frame video image is compared with it; if the comparison is consistent, the second frame video image is automatically temporarily stored as static video and the third frame video image moves up to become the second; the image comparison is repeated in this way until a differing video image frame appears, which is then used as the new first frame video image for comparison, and so on until all comparisons are finished. During the comparison, differing first-frame video images are temporarily stored as dynamic video and identical video images as static video.
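A sketch of this running comparison, under the assumption that "consistent" means a small mean absolute pixel difference (the patent does not define the comparison criterion):

```python
import cv2
import numpy as np

def split_static_dynamic(frames, tol=1.0):
    """Partition a frame sequence into static/dynamic buffers (step S3, sketched).

    frames: list of grayscale numpy arrays; tol: mean absolute pixel
    difference below which two frames count as consistent (an assumption).
    """
    if not frames:
        return [], []
    static_buf, dynamic_buf = [], []
    ref = frames[0]                          # temporarily stored first frame
    for frame in frames[1:]:
        if np.mean(cv2.absdiff(ref, frame)) < tol:
            static_buf.append(frame)         # consistent: static video
        else:
            dynamic_buf.append(frame)        # a differing frame becomes the
            ref = frame                      # new first frame for comparison
    return static_buf, dynamic_buf
```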
Preferably, the monitoring subject matter involved in step S3 includes all stationary objects in the image; the monitoring subject matter serves as the monitoring subject, and the number of monitoring subjects is not limited.
Compared with the prior art, the invention has the following beneficial effects: the video processing method of the monitoring equipment identifies the type of the monitoring video online and processes the static video so that only one static video picture is stored; most useless static video is thus deleted while dynamic video is stored normally. This reduces the storage space occupied by static video, avoids video overwriting caused by static video, relatively prolongs the storage time of the monitoring video, and reduces the cost of video storage.
In summary, the video processing method of the monitoring device processes the static video so that only one static video picture is stored and most of the static video is deleted, reducing the amount of video stored and the cost; this effectively solves the problems that existing monitoring video occupies a large amount of storage space, easily causes video overwriting, has high storage cost, and offers small effective storage capacity.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are clearly only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a video processing method of a monitoring device.
Fig. 2 is a schematic diagram of the synthetic data provided herein.
Fig. 3 is a schematic diagram of a three-dimensional matrix provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to Fig. 1, Fig. 1 shows a video processing method of a monitoring device, including the following steps:
S11, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states;
S12, primary classification: a plurality of video frames are extracted from the collected monitoring video, images of the plurality of video frames are extracted, and persons in the images are identified; if a person is present in the images, the video is determined to be a dynamic video, and if no person is present, a static video; the static videos are classified again to determine the states of the videos;
S13, reclassifying: secondary filtering is performed on the static video. When scene monitoring is needed, a plurality of video frames are extracted from the collected monitoring video and the extracted images are compared; if the comparison is consistent, the video is confirmed as static, and if inconsistent, as dynamic. An object requiring key monitoring is set as the monitoring subject; when the monitoring subject is monitored, the monitoring subject is first confirmed in the corresponding scene of the monitoring video, the monitoring subject is identified and confirmed, and it is confirmed whether the monitoring subject is complete in the monitoring video; if the monitoring subject is complete, the video is confirmed as static, and if the monitoring subject is occluded and therefore incomplete, the video is directly confirmed as dynamic;
S14, video arrangement: the surveillance videos still confirmed as static after reclassification are grouped together, the surveillance videos confirmed as dynamic in the primary classification or the reclassification are grouped together, and the static videos and dynamic videos are stored separately;
S15, playback: the separately stored static and dynamic videos are played back and the operations of steps S12 and S13 are repeated on each to confirm the static and dynamic videos; when an error is found in a confirmed video state, the operations of steps S12 and S13 are repeated, the monitoring video is reclassified, and the classified video is played back again; this is repeated until the video state is confirmed consistently at least twice, at which point the video states of the static and dynamic videos are confirmed correct, and the next operation is performed;
S16, storing: the classified monitoring videos confirmed correct as static are processed so that only one video picture of the static video is stored and all the remaining static monitoring parts are deleted, while the classified dynamic videos confirmed correct are stored directly.
In this embodiment, the person identification involved in step S12 includes infrared identification and thermal identification; the infrared identification includes face identification and body identification, the thermal identification is temperature identification, and the temperature range for person identification is controlled between 35-40 °C. During person identification, face identification, body identification, and temperature identification constrain one another and are compared cyclically; the number of cycles is 5.
Storing only one video picture of the static video and deleting all the remaining static monitoring parts may specifically include: synthesizing and mapping the static video to obtain a video picture and a mapping relation, storing the video picture and the mapping relation, and deleting all the remaining static monitoring parts.
Synthesizing and mapping the static video to obtain the video picture and the mapping relation may specifically include:
the static video is arranged in ascending order according to the frame number to obtain a first sequence, RGB values of pictures of all frames in the first sequence form a three-dimensional matrix, and the three-dimensional matrix of the first picture of the first sequence is extracted1And a three-dimensional matrix of a second picture2(ii) a Combining three-dimensional matrices1The RGB values of the first column of pixels (i.e. the three-dimensional matrix)1The first column element value in three depth directions) to obtain a partial three-dimensional matrix1(ii) a By part of a three-dimensional matrix1Is of a basic size1From three-dimensional matrices2Cutting to the basic size1Same plurality of partial three-dimensional matrices2(ii) a Separately computing partial three-dimensional matrices1With a plurality of partial three-dimensional matrices2Obtaining a plurality of difference matrices if the difference matrix in the plurality of difference matrices1Is greater than a quantity threshold value, determining a difference matrix1Corresponding partial three-dimensional matrix2 1And part of the three-dimensional matrix1Similarly, a three-dimensional matrix is formed2Partial three-dimensional matrix of2 1Adding the cutting data to the three-dimensional matrix1Obtaining intermediate synthetic data; performing a synthesizing operation on a third frame of picture in the first sequence, wherein the synthesizing operation specifically includes: combining the RGB values of the third frame of picture into a three-dimensional matrix3Cutting the RGB values of the front n columns or the rear n columns to obtain a plurality of partial three-dimensional matrixes3Forming a three-dimensional matrix of a plurality of portions3A partial three-dimensional matrix of3Cutting m intermediate partial matrices from the intermediate composite data as basic sizes, and dividing a partial three-dimensional matrix3Respectively carrying out difference operation with the m intermediate part matrixes to obtain m difference value matrixes, and if the difference value matrixes in the m difference value matrixes are different, obtaining the m difference value matrixesxIs greater than the number threshold, the three-dimensional matrix is divided into three-dimensional matrix3Difference matrix inxCorresponding partial three-dimensional matrix3The final composite data (i.e. one video picture, it should be noted that one video picture is a composite video picture, andpictures that do not refer to a frame in still video); and (3) taking the RGB values corresponding to the pixels forming the front y rows of the three-dimensional matrix by the RGB values of each frame of picture in the first sequence as index values, establishing a mapping relation between the frame number and the index values, storing the synthesized data and the mapping relation, and deleting the rest of static monitoring videos. The above n is 1, 2 or 3. Y is 1, 2 or 3. Partial three-dimensional matrix2 1Subscript 2 in (1) indicates the frame number and superscript 1 indicates the number of the partial three-dimensional matrix.
The static video may be video data of a fixed scene panning in a single direction (for a monitoring scene, the camera generally sweeps back and forth), for example surveillance video moving in one direction, in which the pixel points of most data frames are the same and every frame of the shot has the same number of pixel rows and columns. In this case the video data are synthesized to obtain the composite data, and the data of the corresponding frame can then be recovered through the mapping relation. The extraction may specifically include: determining the current frame number i; obtaining the index value i corresponding to frame number i from the mapping relation; performing a sliding-window difference operation over the composite data with the index value i as the kernel to obtain a plurality of difference values; if the number of zero elements of one difference value is greater than the quantity threshold, determining the difference input corresponding to that difference value as the pixel data of the initial y columns for frame number i; extracting that difference input from the composite data; and taking the RGB values of a set size from that position (the set size may be the RGB values corresponding to z columns of pixels) as the RGB values of the pixels corresponding to frame number i.
The sliding-window difference operation may specifically be as follows: the index value i serves as the kernel and its size as the window size; the composite data are cut on the basis of the window size to obtain the RGB values of y columns of pixels; at each cut, the matrix corresponding to the RGB values of those y columns of pixels is determined as a difference input, and a difference operation between the difference input and the index value i yields a difference value.
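Continuing the sketch above, recovering a frame then reduces to locating its index strip inside the composite and reading a window of the original frame width from that position. Again hypothetical: frame_width (the z columns) and the zero-element threshold are assumptions.

```python
import numpy as np

def extract_frame(composite, index_strip, frame_width, zero_ratio=0.95):
    """Recover one frame from the composite picture (sketch of the extraction).

    index_strip: the H x y x 3 strip stored in the mapping for this frame number.
    frame_width: the pixel width z of an original frame.
    """
    comp = composite.astype(np.int16)
    strip = index_strip.astype(np.int16)
    y = strip.shape[1]
    for col in range(comp.shape[1] - y + 1):
        window = comp[:, col:col + y, :]        # the 'difference input'
        diff = window - strip                   # sliding-window difference
        if np.mean(diff == 0) >= zero_ratio:    # enough zero elements: found it
            return composite[:, col:col + frame_width, :]
    return None  # index strip not located in the composite
```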
Referring to Fig. 2, Fig. 2 is a schematic diagram of the composite data and the frame pictures. Since the shooting scene is fixed (no moving objects), when the camera pans in a single direction the scene effectively shifts within the composite data; the arrows in Fig. 2 mark the start column of the pixel points of each frame of data, taking uniform movement as an example. Referring to Fig. 3, Fig. 3 is a schematic diagram of the RGB three-dimensional matrix of a picture. The RGB values corresponding to the first y columns of pixels may be the first y columns of element values of the three-dimensional matrix of Fig. 3, shown in gray; each square in Fig. 3 represents an element value corresponding to the R, G, or B value of a pixel. Thus, during extraction, the RGB values of all the pixels of a frame can be extracted quickly merely by locating, through comparison, the starting pixel column corresponding to the frame number, after which the frame picture is displayed. Because the composite data do not store repeated pixel points, the data volume of the video is reduced and the cost of data storage is lowered.
Further, the method for comparing video-frame images in step S13 proceeds as follows: the first frame video image is temporarily stored, and the second frame video image is compared with it; if the comparison is consistent, the second frame video image is automatically temporarily stored as static video and the third frame video image moves up to become the second; the image comparison is repeated until a differing video image frame appears, which is then used as the new first frame video image for comparison, and so on until all comparisons are finished. During the comparison, differing first-frame video images are temporarily stored as dynamic video and identical video images as static video.
In addition, the monitoring subject matter involved in step S13 includes all stationary objects in the image; the monitoring subject matter serves as the monitoring subject, and the number of monitoring subjects is not limited. The monitoring subject should be set and confirmed before the images are classified; since the video processing method runs in the monitoring device, the monitoring subject should be set and confirmed during monitoring, to prevent the method from deleting video by mistake during operation.
Example 2
The video processing method of the monitoring equipment comprises the following steps:
S21, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states;
S22, primary classification: a plurality of video frames are extracted from the collected monitoring video, images of the plurality of video frames are extracted, and persons in the images are identified; if a person is present in the images, the video is determined to be a dynamic video, and if no person is present, a static video; the static videos are classified again to determine the states of the videos;
S23, reclassifying: when a monitoring subject needs to be monitored, a certain object in the corresponding scene of the monitoring video is confirmed as the monitoring subject, the monitoring subject is identified and confirmed, and it is confirmed whether the monitoring subject is complete in the monitoring video; if the monitoring subject is complete, the video is confirmed as static, and if the monitoring subject is occluded and therefore incomplete, the video is directly confirmed as dynamic;
S24, video arrangement: the surveillance videos still confirmed as static after reclassification are grouped together, the surveillance videos confirmed as dynamic in the primary classification or the reclassification are grouped together, and the static videos and dynamic videos are stored separately;
S25, playback: the separately stored static and dynamic videos are played back and the operations of steps S22 and S23 are repeated on each to confirm the static and dynamic videos; when an error is found in a confirmed video state, the operations of steps S22 and S23 are repeated, the monitoring video is reclassified, and the classified video is played back again; this is repeated until the video state is confirmed consistently at least twice, at which point the video states of the static and dynamic videos are confirmed correct, and the next operation is performed;
S26, storing: the classified monitoring videos confirmed correct as static are processed so that only one video picture is stored and all the remaining static monitoring parts are deleted, while the classified dynamic videos confirmed correct are stored directly.
In this embodiment, the person identification involved in step S22 includes infrared identification and thermal identification; the infrared identification includes face identification and body identification, the thermal identification is temperature identification, and the temperature range for person identification is controlled between 35-40 °C. During person identification, face identification, body identification, and temperature identification constrain one another and are compared cyclically; the number of cycles is 2.
In addition, the monitoring subject matter involved in step S23 includes all stationary objects in the image; the monitoring subject matter serves as the monitoring subject, and the number of monitoring subjects is not limited. The monitoring subject should be set and confirmed before the images are classified; since the video processing method runs in the monitoring device, the monitoring subject should be set and confirmed during monitoring, to prevent the method from deleting video by mistake during operation.
Example 3
The video processing method of the monitoring equipment comprises the following steps:
S31, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states;
S32, primary classification: a plurality of video frames are extracted from the collected monitoring video, images of the plurality of video frames are extracted, and persons in the images are identified; if a person is present in the images, the video is determined to be a dynamic video, and if no person is present, a static video; the static videos are classified again to determine the states of the videos;
S33, reclassifying: secondary filtering is performed on the static video; when scene monitoring is needed, a plurality of video frames are extracted from the collected monitoring video and the extracted images are compared; if the comparison is consistent, the video is confirmed as static, and if inconsistent, as dynamic;
S34, video arrangement: the surveillance videos still confirmed as static after reclassification are grouped together, the surveillance videos confirmed as dynamic in the primary classification or the reclassification are grouped together, and the static videos and dynamic videos are stored separately;
S35, playback: the separately stored static and dynamic videos are played back and the operations of steps S32 and S33 are repeated on each to confirm the static and dynamic videos; when an error is found in a confirmed video state, the operations of steps S32 and S33 are repeated, the monitoring video is reclassified, and the classified video is played back again; this is repeated until the video state is confirmed consistently at least twice, at which point the video states of the static and dynamic videos are confirmed correct, and the next operation is performed;
S36, storing: the classified monitoring videos confirmed correct as static are processed so that only one video picture is stored and all the remaining static monitoring parts are deleted, while the classified dynamic videos confirmed correct are stored directly.
In this embodiment, the person identification involved in step S32 includes infrared identification and thermal identification; the infrared identification includes face identification and body identification, the thermal identification is temperature identification, and the temperature range for person identification is controlled between 35-40 °C. During person identification, face identification, body identification, and temperature identification constrain one another and are compared cyclically; the number of cycles is 3.
In addition, the method for comparing video-frame images in step S33 proceeds as follows: the first frame video image is temporarily stored, and the second frame video image is compared with it; if the comparison is consistent, the second frame video image is automatically temporarily stored as static video and the third frame video image moves up to become the second; the image comparison is repeated until a differing video image frame appears, which is then used as the new first frame video image for comparison, and so on until all comparisons are finished. During the comparison, differing first-frame video images are temporarily stored as dynamic video and identical video images as static video.
Example 4
The video processing method of the monitoring equipment comprises the following steps:
S41, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states;
S42, video classification: secondary filtering is performed on the static video. When scene monitoring is needed, a plurality of video frames are extracted from the collected monitoring video and the extracted images are compared; if the comparison is consistent, the video is confirmed as static, and if inconsistent, as dynamic. An object requiring key monitoring is set as the monitoring subject; when the monitoring subject is monitored, the monitoring subject is first confirmed in the corresponding scene of the monitoring video, the monitoring subject is identified and confirmed, and it is confirmed whether the monitoring subject is complete in the monitoring video; if the monitoring subject is complete, the video is confirmed as static, and if the monitoring subject is occluded and therefore incomplete, the video is directly confirmed as dynamic;
S43, video arrangement: the surveillance videos still confirmed as static after classification are grouped together, the surveillance videos confirmed as dynamic are grouped together, and the static videos and dynamic videos are stored separately;
S44, playback: the separately stored static and dynamic videos are played back and the operation of step S42 is repeated on each to confirm the static and dynamic videos; when an error is found in a confirmed video state, the operation of step S42 is repeated, the monitoring video is reclassified, and the classified video is played back again; this is repeated until the state of the monitoring video is confirmed consistently at least twice, at which point the video states of the static and dynamic videos are confirmed correct, and the next operation is performed;
S45, storing: the classified monitoring videos confirmed correct as static are processed so that only one video picture is stored and all the remaining static monitoring parts are deleted, while the classified dynamic videos confirmed correct are stored directly.
In this embodiment, the method for comparing video-frame images in step S42 proceeds as follows: the first frame video image is temporarily stored, and the second frame video image is compared with it; if the comparison is consistent, the second frame video image is automatically temporarily stored as static video and the third frame video image moves up to become the second; the image comparison is repeated until a differing video image frame appears, which is then used as the new first frame video image for comparison, and so on until all comparisons are finished. During the comparison, differing first-frame video images are temporarily stored as dynamic video and identical video images as static video.
In addition, the monitoring subject matter involved in step S42 includes all stationary objects in the image; the monitoring subject matter serves as the monitoring subject, and the number of monitoring subjects is not limited. The monitoring subject should be set and confirmed before the images are classified; since the video processing method runs in the monitoring device, the monitoring subject should be set and confirmed during monitoring, to prevent the method from deleting video by mistake during operation.
Compared with the prior art, the video processing method of the monitoring device provided in the above embodiments identifies the type of the monitoring video online and processes the static video so that only one static video picture is stored; most useless static video is thus deleted while dynamic video is stored normally. This reduces the storage occupied by static video, avoids video overwriting caused by static video, relatively prolongs the storage time of the monitoring video, and reduces the cost of video storage, thereby effectively solving the problems that existing monitoring video occupies a large amount of storage space, easily causes video overwriting, has high storage cost, and offers small effective storage capacity.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solutions and the inventive concept thereof, shall fall within the protection scope of the present invention.

Claims (5)

1. A video processing method of monitoring equipment, characterized by comprising the following steps:
S1, collecting monitoring videos: videos are captured by the monitoring cameras, gathered centrally, and transmitted to a control host, and the control host classifies the video states;
S2, primary classification: a plurality of video frames are extracted from the collected monitoring video, images of the plurality of video frames are extracted, and persons in the images are identified; if a person is present in the images, the video is determined to be a dynamic video, and if no person is present, a static video; the static videos are classified again to determine the states of the videos;
S3, reclassifying: secondary filtering is performed on the static video; when scene monitoring is needed, a plurality of video frames are extracted from the collected monitoring video and the extracted images are compared; if the comparison is consistent, the video is confirmed as static, and if inconsistent, as dynamic;
when a monitoring subject needs to be monitored, a certain object in the corresponding scene of the monitoring video is confirmed as the monitoring subject, the monitoring subject is identified and confirmed, and it is confirmed whether the monitoring subject is complete in the monitoring video; if the monitoring subject is complete, the video is confirmed as static, and if the monitoring subject is occluded and therefore incomplete, the video is directly confirmed as dynamic;
S4, video arrangement: the surveillance videos still confirmed as static after reclassification are grouped together, the surveillance videos confirmed as dynamic in the primary classification or the reclassification are grouped together, and the static videos and dynamic videos are stored separately;
S5, playback: the separately stored static and dynamic videos are played back and the operations of steps S2 and S3 are repeated on each to confirm the static and dynamic videos; when an error is found in a confirmed video state, the operations of steps S2 and S3 are repeated, the monitoring video is reclassified, and the classified video is played back again; this is repeated until the video state is confirmed consistently at least twice, at which point the video states of the static and dynamic videos are confirmed correct, and the next operation is performed;
S6, storing: the classified monitoring videos confirmed correct as static are processed so that only one video picture is stored and all the remaining static monitoring parts are deleted, while the classified dynamic videos confirmed correct are stored directly.
2. The video processing method of the monitoring device according to claim 1, wherein the person recognition involved in step S2 comprises infrared recognition and thermal recognition; the infrared recognition comprises face recognition and human-body recognition, the thermal recognition is temperature recognition, and the temperature range for person recognition is controlled between 35-40 °C.
3. The video processing method of the monitoring device according to claim 1, wherein the video-frame image comparison of step S3 proceeds as follows: a first frame video image is temporarily stored and a second frame video image is compared with it; if the comparison is consistent, the second frame video image is automatically temporarily stored as static video and the third frame video image moves up to become the second; the image comparison is repeated until a differing video image frame appears, which is then compared as the new first frame video image, and so on until all comparisons are finished.
4. The video processing method of the monitoring device according to claim 1, wherein the monitoring subject matter involved in step S3 includes stationary objects in the image, the monitoring subject matter serves as the monitoring subject, and the number of monitoring subjects is not limited.
5. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-4.
CN202110081207.6A 2021-01-21 2021-01-21 Video processing method of monitoring equipment and related product Active CN112926385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110081207.6A CN112926385B (en) 2021-01-21 2021-01-21 Video processing method of monitoring equipment and related product


Publications (2)

Publication Number Publication Date
CN112926385A (en) 2021-06-08
CN112926385B (en) 2023-01-13

Family

ID=76163915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110081207.6A Active CN112926385B (en) 2021-01-21 2021-01-21 Video processing method of monitoring equipment and related product

Country Status (1)

Country Link
CN (1) CN112926385B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120082381A1 (en) * 2010-09-30 2012-04-05 Kabushiki Kaisha Toshiba Video analysis method and system
US20130051613A1 (en) * 2011-08-29 2013-02-28 International Business Machines Corporation Modeling of temporarily static objects in surveillance video data
CN104010151A (en) * 2014-06-13 2014-08-27 深圳市景阳科技股份有限公司 Method for compressing monitoring video file
CN104836992A (en) * 2015-05-08 2015-08-12 无锡天脉聚源传媒科技有限公司 Surveillance video recording method and device
CN108229300A (en) * 2017-11-02 2018-06-29 深圳市商汤科技有限公司 Video classification methods, device, computer readable storage medium and electronic equipment
US20200380260A1 (en) * 2019-05-31 2020-12-03 Apple Inc. Automated media editing operations in consumer devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Shuai et al.: "A surveillance video compression algorithm sensitive to moving regions", Journal of Hefei University of Technology (Natural Science Edition) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114095750A (en) * 2021-11-20 2022-02-25 深圳市伊登软件有限公司 Cloud platform monitoring method and related product
CN116797993A (en) * 2023-05-13 2023-09-22 全景智联(武汉)科技有限公司 Monitoring method, system, medium and equipment based on intelligent community scene
CN116797993B (en) * 2023-05-13 2024-03-19 全景智联(武汉)科技有限公司 Monitoring method, system, medium and equipment based on intelligent community scene

Also Published As

Publication number Publication date
CN112926385B (en) 2023-01-13

Similar Documents

Publication Publication Date Title
US9305331B2 (en) Image processor and image combination method thereof
CN112926385B (en) Video processing method of monitoring equipment and related product
CN108282644B (en) Single-camera imaging method and device
US10728510B2 (en) Dynamic chroma key for video background replacement
CN104574331B (en) A kind of data processing method, device, computer storage medium and user terminal
CA2607633A1 (en) Stereoscopic image generation device and program
DE102020124815A1 (en) SYSTEM AND DEVICE FOR USER CONTROLLED VIRTUAL CAMERA FOR VOLUMETRIC VIDEO
JPH0993588A (en) Moving image processing method
EP3573327B1 (en) Method and device for displaying target object
KR100964726B1 (en) Method for tracking moving objects using characteristics of moving objects in image camera system
US20190005133A1 (en) Method, apparatus and arrangement for summarizing and browsing video content
CN116431857B (en) Video processing method and system for unmanned scene
WO2018008871A1 (en) Compact video generation device and method, and recording medium in which computer program is recorded
CN112489103B (en) High-resolution depth map acquisition method and system
CN108200390A (en) Video structure analyzing method and device
CN112437253B (en) Video splicing method, device, system, computer equipment and storage medium
US20060001833A1 (en) Moving picture search apparatus
CN111988520B (en) Picture switching method and device, electronic equipment and storage medium
CN106961551A (en) A kind of VR panoramic videos later stage stabilization technology
CN109657098B (en) Video fingerprint extraction method and device
JP5111313B2 (en) Data collection device
CN111031243A (en) Panoramic image generation method and device
KR20130104024A (en) Apparatus for searching image data
KR20190051924A (en) Method and Apparatus for Searching Object of Interest by Selection of Object
CN111177449A (en) Multi-dimensional information integration method based on picture and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20221222
Address after: 312000 Floor 2-4, No. 598 Huayu Road, Keqiao Street, Keqiao District, Shaoxing City, Zhejiang Province
Applicant after: Zhongguang (Shaoxing Keqiao) Cable Information Network Co., Ltd.
Address before: 1201 Qiurui Building, 207 Minkang Road, Zhangkeng Community, Minzhi Street, Longhua District, Shenzhen, Guangdong 518000
Applicant before: Shenzhen Yibailong Technology Co., Ltd.
GR01 Patent grant