CN112532938B - Video monitoring system based on big data technology - Google Patents


Info

Publication number
CN112532938B
CN112532938B (application CN202011357633.XA)
Authority
CN
China
Prior art keywords
image
video
monitoring
frame image
nth frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011357633.XA
Other languages
Chinese (zh)
Other versions
CN112532938A (en)
Inventor
许力
陈敏
赵志超
潘金鹤
万军
王伟
傅尤杰
梁映霞
陈晟杰
李双意
田军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Hongshu Information Technology Co ltd
Original Assignee
Wuhan Hongshu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Hongshu Information Technology Co ltd filed Critical Wuhan Hongshu Information Technology Co ltd
Priority to CN202011357633.XA
Publication of CN112532938A
Application granted
Publication of CN112532938B
Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a video monitoring system based on big data technology, which comprises a monitoring video acquisition module, a monitoring video distributed storage module, a monitoring video identification module and an identification result display module. The monitoring video acquisition module captures video of a monitoring area to obtain a monitoring video; the distributed storage module divides the monitoring video into a plurality of video data blocks and stores them on a plurality of data storage nodes; the monitoring video identification module identifies the video data blocks stored on each data storage node and sends the identification results to the identification result display module; and the identification result display module receives and displays the identification results. Identification tasks are distributed to the storage nodes through big data technology, and image identification processing is performed on the storage nodes in parallel, which increases the speed at which the monitoring video is identified.

Description

Video monitoring system based on big data technology
Technical Field
The invention relates to the field of monitoring, in particular to a video monitoring system based on big data technology.
Background
In prior-art video monitoring systems, monitoring data is generally stored centrally on a single storage server and then read back from that server for identification processing. This storage approach, however, is ill-suited to the fast identification of monitoring videos with huge data volumes.
Disclosure of Invention
In view of the above problems, the present invention is directed to a video monitoring system based on big data technology.
The invention provides a video monitoring system based on big data technology, which comprises a monitoring video acquisition module, a monitoring video distributed storage module, a monitoring video identification module and an identification result display module;
the monitoring video acquisition module is used for carrying out video shooting on a monitoring area to obtain a monitoring video and sending the monitoring video to the distributed storage module;
the distributed storage module is used for dividing the monitoring video into a plurality of video data blocks and respectively storing the video data blocks to a plurality of data storage nodes;
the monitoring video identification module is used for respectively identifying the video data blocks stored in each data storage node to obtain an identification result and sending the identification result to an identification result display module;
the identification result display module is used for receiving and displaying the identification result.
Preferably, the distributed storage module comprises a Hadoop storage unit and an HDFS storage unit;
the Hadoop storage unit is used for receiving and caching the monitoring video from the monitoring video acquisition module to form a video cache pool;
the HDFS storage unit is used for acquiring the monitoring video from the video cache pool, dividing the monitoring video into a plurality of video data blocks and sending each video data block to a corresponding data storage node.
Preferably, the surveillance video acquisition module includes a plurality of surveillance cameras, each of which captures surveillance video of its own surveillance area.
Preferably, the identifying the video data block stored in each data storage node to obtain an identification result includes:
respectively performing image identification processing on each frame image in the video data block, judging whether the image contains a preset type of abnormal condition, and, if so, generating an identification result from the shooting place and time of the image and the type of the abnormal condition.
Preferably, the receiving and displaying the recognition result includes:
receiving the identification result from the monitoring video identification module, sorting, for each shooting place contained in the identification result, the abnormal condition types of that place by the shooting time of the images, and displaying them.
Preferably, the performing image recognition processing on each frame of image in the video data block to determine whether the image contains a preset type of abnormal condition includes:
performing image difference processing on the nth frame image in a video data block and the (n-1)th frame image, and judging whether a moving object exists in the nth frame image; if so, further identifying the nth frame image; if not, not further identifying it, where n ∈ [2, numN] and numN represents the total number of frames contained in the video data block;
preferably, the further performing identification processing includes:
carrying out image preprocessing on the nth frame image to obtain a preprocessed image;
extracting the feature information contained in the preprocessed image and matching it against pre-stored feature information corresponding to each type of abnormal condition; if the matching succeeds, the nth frame image contains that preset type of abnormal condition;
and outputting the type of the abnormal condition contained in the nth frame image.
Compared with the prior art, the invention has the advantages that:
the monitoring video is stored in a distributed storage mode, when video identification is needed, an identification task is distributed to each storage node through a big data technology, and the plurality of storage nodes perform image identification processing in parallel, so that the speed of identifying the monitoring video is improved. When the speed of automatically identifying a monitoring video with huge data volume needs to be increased, the traditional storage mode of a single server is usually only capable of increasing a processor, but the cost required by the increasing mode is too high, and the performance improvement has an upper limit. And the method adopts a distributed storage mode, and when the speed of identification processing needs to be improved, the number of the storage nodes only needs to be increased, so that the cost is lower.
Drawings
The invention is further illustrated by the accompanying drawings; the embodiments shown in the drawings do not limit the invention in any way, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a video surveillance system based on big data technology according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides a video monitoring system based on big data technology, which comprises a monitoring video acquisition module, a monitoring video distributed storage module, a monitoring video identification module and an identification result display module;
the monitoring video acquisition module is used for carrying out video shooting on a monitoring area to obtain a monitoring video and sending the monitoring video to the distributed storage module;
the distributed storage module is used for dividing the monitoring video into a plurality of video data blocks and respectively storing the video data blocks to a plurality of data storage nodes;
the monitoring video identification module is used for respectively identifying the video data blocks stored in each data storage node to obtain an identification result and sending the identification result to an identification result display module;
the identification result display module is used for receiving and displaying the identification result.
Preferably, the distributed storage module comprises a Hadoop storage unit and an HDFS storage unit;
the Hadoop storage unit is used for receiving and caching the monitoring video from the monitoring video acquisition module to form a video cache pool;
the HDFS storage unit is used for acquiring the monitoring video from the video cache pool, dividing the monitoring video into a plurality of video data blocks and sending each video data block to a corresponding data storage node.
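As a concrete illustration, the block-splitting behavior of the HDFS storage unit can be sketched as follows. This is a minimal sketch, not the patented implementation: the 64 MB block size, the file paths, and the use of the standard `hdfs dfs -put` command-line tool are all assumptions.

```python
# Sketch: divide a cached surveillance video into fixed-size data blocks and
# upload each block to HDFS, which replicates blocks across its data storage
# nodes. Block size and paths are illustrative assumptions.
import os
import subprocess

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB per video data block (assumed)

def store_video_blocks(local_path: str, hdfs_dir: str) -> None:
    with open(local_path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            block_name = f"block_{index:05d}.bin"
            with open(block_name, "wb") as out:
                out.write(chunk)
            # `hdfs dfs -put` copies a local file into the distributed file system
            subprocess.run(
                ["hdfs", "dfs", "-put", block_name, f"{hdfs_dir}/{block_name}"],
                check=True,
            )
            os.remove(block_name)
            index += 1

store_video_blocks("cache_pool/camera01.mp4", "/surveillance/camera01")
```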
In one embodiment, the surveillance video acquisition module includes a plurality of surveillance cameras, each of which captures surveillance video of its own surveillance area.
In one embodiment, the identifying the stored video data block in each data storage node to obtain an identification result includes:
respectively performing image identification processing on each frame image in the video data block, judging whether the image contains a preset type of abnormal condition, and, if so, generating an identification result from the shooting place and time of the image and the type of the abnormal condition.
Preferably, the receiving and displaying the recognition result includes:
receiving the identification result from the monitoring video identification module, sorting, for each shooting place contained in the identification result, the abnormal condition types of that place by the shooting time of the images, and displaying them.
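The display module's grouping and sorting step is simple enough to sketch directly; the tuple layout of an identification result (place, time, abnormal-condition type) is an assumption, since the patent does not fix a data format.

```python
# Sketch: group identification results by shooting place and sort each
# place's abnormal condition types chronologically, as the display module does.
from collections import defaultdict

def arrange_results(results):
    """results: iterable of (place, time, abnormal_type) tuples (assumed layout)."""
    by_place = defaultdict(list)
    for place, time, abnormal_type in results:
        by_place[place].append((time, abnormal_type))
    for entries in by_place.values():
        entries.sort()  # chronological order within each shooting place
    return dict(by_place)
```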
In an embodiment, the performing image recognition processing on each frame of image in the video data block to determine whether the image includes a preset type of abnormal condition includes:
the identification process for the nth frame image in the video data block includes:
carrying out image difference processing on the nth frame image and the (n-1)th frame image, and judging whether a moving object exists in the nth frame image; if so, further identifying the nth frame image; if not, proceeding to image recognition processing of the (n+1)th frame image, where n ∈ [2, numN] and numN represents the total number of frames contained in the video data block;
the further identification processing comprises:
carrying out image preprocessing on the nth frame image to obtain a preprocessed image;
extracting the feature information contained in the preprocessed image and matching it against pre-stored feature information corresponding to each type of abnormal condition; if the matching succeeds, the nth frame image contains that preset type of abnormal condition;
and outputting the type of the abnormal condition contained in the nth frame image.
If no moving object is detected between the adjacent nth and (n-1)th frames, it can be concluded that the frame content has, in all likelihood, not changed, so the next frame, i.e. the (n+1)th frame, is processed directly without further identification of the nth frame. This design markedly increases the speed of automatic monitoring-video identification and shortens the time needed to automatically identify monitoring videos with huge data volumes.
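A minimal sketch of this frame-difference gate is given below, using OpenCV. The per-pixel difference threshold and the changed-pixel count that constitutes a "moving object" are assumptions; the patent does not specify how the difference image is judged.

```python
# Sketch: only pass the nth frame on for further identification when
# differencing it against frame n-1 suggests a moving object.
import cv2

DIFF_THRESHOLD = 25       # per-pixel gray-level difference (assumed)
MIN_CHANGED_PIXELS = 500  # changed pixels needed to count as motion (assumed)

def has_moving_object(prev_frame, curr_frame) -> bool:
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) >= MIN_CHANGED_PIXELS

cap = cv2.VideoCapture("video_block.avi")
ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    if not ok:
        break
    if has_moving_object(prev, curr):
        pass  # hand the frame to the further identification processing
    prev = curr
```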
In one embodiment, extracting feature information contained in the preprocessed image comprises:
extracting feature information contained in the preprocessed image using a SIFT algorithm.
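A sketch of this feature step follows. The patent says only that extracted features are "matched" against pre-stored features for each abnormal-condition type; the ratio test and the acceptance count below are conventional choices, not the patented rule.

```python
# Sketch: extract SIFT descriptors from the preprocessed image and match them
# against descriptors pre-stored for one abnormal-condition type.
import cv2

sift = cv2.SIFT_create()

def extract_features(preprocessed_gray):
    # descriptors is None when no keypoints are found
    _, descriptors = sift.detectAndCompute(preprocessed_gray, None)
    return descriptors

def matches_condition(descriptors, stored_descriptors, min_good=10) -> bool:
    if descriptors is None or stored_descriptors is None:
        return False
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(descriptors, stored_descriptors, k=2)
    # Lowe's ratio test keeps only distinctive matches (assumed criterion)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    return len(good) >= min_good
```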
In one embodiment, the image preprocessing the nth frame image to obtain a preprocessed image includes:
carrying out graying processing on the nth frame image to obtain a grayscale image;
performing image noise reduction processing on the gray level image to obtain a noise-reduced image;
performing image segmentation processing on the noise reduction image to obtain a foreground area image;
and taking the foreground area image as a preprocessing image.
In one embodiment, the graying the nth frame image to obtain a grayscale image includes:
converting the nth frame image from an RGB color space to an HSV color space;
modeling the graying process to obtain the following graying functions:
[equation image: graying function f(x, y)]
in the formula, f(x, y) represents the gray value of the pixel point at position (x, y) in the nth frame image, and V(x, y), S(x, y) and H(x, y) respectively represent the component values of the brightness, saturation and hue components of that pixel point in the HSV color space; α and β represent two proportional parameters, and tp represents a preset calculation parameter with tp ∈ [0, 255],
and carrying out graying processing on each pixel point of the nth frame image by using the graying function in the HSV color space, and converting the nth frame image into a grayscale image.
In one embodiment, α and β are obtained by:
the pixel point at position (x, y) in the nth frame image is marked as r, and the set of pixel points in the k × k neighborhood of the pixel point r is marked as neiU_r;
Establishing an optimal function model to be solved,
[equation images: optimal function model d(α, β) to be solved, with its constraint equations]
in the formula, d(α, β) represents the optimal function model to be solved; V(s), S(s) and H(s) respectively represent the component values of the brightness, saturation and hue components, in the HSV color space, of a pixel point s in the nth frame image; V(r), S(r) and H(r) respectively represent the corresponding component values of the pixel point r; Thre represents a preset threshold parameter; and fh represents a value-taking function,
[equation image: definition of the value-taking function fh]
the values of α and β that minimize d(α, β) are taken as the final values of α and β.
Traditional image graying methods such as the weighted average method consider only the weights of the R, G and B color components, and those weights are the same for every pixel in the image. As a result, two completely different pixels can end up with the same gray value after graying, and the grayscale image loses much of its detail. In the graying method above, the weights of the graying function adapt to each pixel's neighborhood, so the detail of the image is effectively preserved during graying. Moreover, the grayscale image is converted from the HSV color space with particular weight given to the brightness component, which further avoids the weighted-average problem of pixels with large brightness differences mapping to the same gray value, and so improves graying accuracy. Retaining more detail for the subsequent feature extraction improves the accuracy of the automatic identification of the monitoring video.
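Because the patent's actual graying function survives only as an equation image, any code can only be indicative. The sketch below is a hypothetical stand-in built from the variables the text names (V, S, H, α, β, tp); the piecewise rule is invented for illustration and should not be read as the patented formula.

```python
# Hypothetical sketch of an HSV-space graying function f(x, y) parameterized
# by proportional weights alpha, beta and a calculation parameter tp.
import cv2
import numpy as np

def gray_from_hsv(bgr_image, alpha=0.7, beta=0.3, tp=128):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    _, s, v = [c.astype(np.float32) for c in cv2.split(hsv)]
    # invented rule: weight brightness more heavily above tp, saturation below
    f = np.where(v > tp, alpha * v + beta * s, beta * v + alpha * s)
    return np.clip(f, 0, 255).astype(np.uint8)
```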
In one embodiment, performing image noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
acquiring a high-frequency coefficient image and a low-frequency coefficient image which are obtained after wavelet decomposition of the gray level image;
performing the following threshold processing on the high-frequency coefficient image:
[equation image: threshold processing applied to the high-frequency coefficient image]
in the formula, ap represents the result of threshold processing on the high-frequency coefficient image p, sgn represents the sign function, and Thre represents a preset processing threshold;
and processing the low-frequency coefficient image as follows:
[equation image: weighted filtering applied to the low-frequency coefficient image]
in the formula, alp represents the processed low-frequency coefficient image; alp(a) represents the pixel value of a pixel point a in alp; nei_a represents the set of pixel points in a c × c neighborhood of pixel point a; dis(a, b) represents the spatial distance between pixel point a and a pixel point b in nei_a; the remaining terms represent the pixel values of pixel points a and b in the low-frequency coefficient image before processing; and ug(a) and ug(b) respectively represent the component values, in the HSV color space, of the brightness components of pixel points a and b in the low-frequency coefficient image before processing,
[equation image: filter weight definition involving avedis(a)]
wherein avedis(a) represents the average of the spatial distances between the pixel points in nei_a and the pixel point a, and nofnei_a represents the total number of pixel points in nei_a,
[equation images: definitions of avedis(a) and the standard deviation parameters]
and reconstructing from the processed low-frequency coefficient image alp and high-frequency coefficient image ap to obtain the noise-reduced image.
Processing the grayscale image after wavelet decomposition preserves the edge information of the image more effectively while reducing noise, providing a high-quality noise-reduced image for the subsequent identification. Specifically, the processing of the low-frequency coefficient image considers not only the spatial distance and pixel-value difference between the currently processed pixel and its neighboring pixels, but also the difference between their brightness components in the HSV color space, so noise reduction works well in high-brightness regions; that is, overexposed images are denoised better. This improves the tolerance of the automatic identification processing, allowing monitoring videos of differing quality to be identified accurately. Furthermore, the standard deviations of the spatial distance, pixel value and brightness component used in the weight parameter adapt automatically as the currently processed pixel changes, tracking its neighborhood well and avoiding the loss of image detail that a globally fixed standard deviation would cause.
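For orientation, the wavelet stage can be sketched as below. The patent's threshold and low-frequency weighting formulas are equation images that did not survive extraction, so the high-frequency branch uses conventional soft thresholding (consistent with the sgn/Thre description in the text) and the low-frequency weighted filtering is omitted; both choices are assumptions.

```python
# Sketch: one-level wavelet decomposition, soft thresholding of the
# high-frequency (detail) coefficients, then reconstruction.
import numpy as np
import pywt

def wavelet_denoise(gray_image, thre=20.0, wavelet="db4"):
    low, (lh, hl, hh) = pywt.dwt2(gray_image.astype(np.float32), wavelet)

    def soft(c):
        # sgn(c) * (|c| - thre) where |c| > thre, else 0
        return np.sign(c) * np.maximum(np.abs(c) - thre, 0.0)

    denoised = pywt.idwt2((low, (soft(lh), soft(hl), soft(hh))), wavelet)
    return np.clip(denoised, 0, 255).astype(np.uint8)
```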
In one embodiment, performing image segmentation processing on the noise-reduced image to obtain a foreground region image includes:
the following operation is performed on the nth frame image and the (n-1)th frame image:
[equation image: mark value ys(n, n-1, h)]
in the formula, ys(n, n-1, h) represents the mark value of a pixel point h in the nth frame image after this operation on the nth and (n-1)th frame images; bn(n, h) represents the component value of the brightness component of pixel point h in the nth frame image in the HSV color space; bnnthre represents a preset brightness processing threshold; and bn(n-1, h) represents the component value, in the HSV color space, of the brightness component of the pixel point in the (n-1)th frame image at the position corresponding to pixel point h;
For each pixel point with a mark value of 1 in the nth frame image, the gray value of that pixel point in the noise-reduced image is enhanced:
[equation image: gray value enhancement formula]
in the formula, gray(q) represents the gray value, in the noise-reduced image, of the qth pixel point in the nth frame image whose mark value is 1; agray(q) represents the gray value of that pixel point after gray enhancement processing; and gray represents a preset enhancement parameter;
in the noise-reduced image, the image obtained by enhancing the gray values of all pixel points whose mark value in the nth frame image is 1 is denoted alownoi;
dividing alownoi into numcut sub-images of equal area;
performing threshold segmentation on each of the numcut sub-images using the Otsu algorithm to obtain the foreground region of each sub-image;
and combining the foreground regions of all the sub-images to obtain the foreground region image.
During segmentation, the pixels of the motion region, i.e. the pixels marked 1 in the nth frame image, are specifically enhanced so that they are correctly assigned to the foreground region in the subsequent threshold segmentation. Because the motion-region pixels record what has changed in the image, this region is precisely where automatic identification of the monitoring video must focus. The result is a more accurate foreground region for the subsequent feature extraction, which improves identification accuracy.
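The block-wise Otsu segmentation can be sketched as follows. A 4 × 4 grid stands in for numcut, and the input `enhanced` is assumed to be the 8-bit image alownoi after the motion-pixel gray enhancement; both are assumptions for illustration.

```python
# Sketch: cut the enhanced image into equal-area sub-images, threshold each
# with Otsu's method, and stitch the foreground masks back together.
import cv2
import numpy as np

def segment_foreground(enhanced, grid=4):
    h, w = enhanced.shape
    mask = np.zeros_like(enhanced)
    bh, bw = h // grid, w // grid  # remainder rows/columns ignored for brevity
    for i in range(grid):
        for j in range(grid):
            sub = enhanced[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # Otsu picks a threshold per sub-image, adapting to local contrast
            _, sub_mask = cv2.threshold(
                sub, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            mask[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = sub_mask
    return mask
```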
Compared with the prior art, the invention has the advantages that:
the monitoring video is stored in a distributed storage mode, when video identification is needed, an identification task is distributed to each storage node through a big data technology, and the plurality of storage nodes perform image identification processing in parallel, so that the speed of identifying the monitoring video is improved. When the speed of automatically identifying a monitoring video with huge data volume needs to be increased, the traditional storage mode of a single server is usually only capable of increasing a processor, but the cost required by the increasing mode is too high, and the performance improvement has an upper limit. And the method adopts a distributed storage mode, when the speed of identification processing needs to be improved, only the number of storage nodes needs to be increased, the cost is lower, and theoretically, the performance improvement has no upper limit.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (3)

1. A video monitoring system based on big data technology is characterized by comprising a monitoring video acquisition module, a monitoring video distributed storage module, a monitoring video identification module and an identification result display module;
the monitoring video acquisition module is used for carrying out video shooting on a monitoring area to obtain a monitoring video and sending the monitoring video to the distributed storage module;
the distributed storage module is used for dividing the monitoring video into a plurality of video data blocks and respectively storing the video data blocks to a plurality of data storage nodes;
the monitoring video identification module is used for respectively identifying the video data blocks stored in each data storage node to obtain an identification result and sending the identification result to an identification result display module;
the identification result display module is used for receiving and displaying the identification result;
the distributed storage module comprises a Hadoop storage unit and an HDFS storage unit;
the Hadoop storage unit is used for receiving and caching the monitoring video from the monitoring video acquisition module to form a video cache pool;
the HDFS storage unit is used for acquiring the monitoring video from the video cache pool, dividing the monitoring video into a plurality of video data blocks and sending each video data block to a corresponding data storage node;
the identifying the video data block stored in each data storage node to obtain an identification result includes:
respectively performing image identification processing on each frame image in the video data block, judging whether the image contains a preset type of abnormal condition, and, if so, generating an identification result from the shooting place and time of the image and the type of the abnormal condition;
the image recognition processing of each frame image in the video data block to determine whether the image contains a preset type of abnormal condition includes:
performing image difference processing on the nth frame image in a video data block and the (n-1)th frame image, and judging whether a moving object exists in the nth frame image; if so, further identifying the nth frame image; if not, not further identifying it, where n ∈ [2, numN] and numN represents the total number of frames contained in the video data block;
the further identification processing comprises:
carrying out image preprocessing on the nth frame image to obtain a preprocessed image;
extracting the feature information contained in the preprocessed image and matching it against pre-stored feature information corresponding to each type of abnormal condition; if the matching succeeds, the nth frame image contains that preset type of abnormal condition;
outputting the type of the abnormal condition contained in the nth frame image;
performing image preprocessing on the nth frame image to obtain a preprocessed image, wherein the image preprocessing comprises the following steps:
carrying out graying processing on the nth frame image to obtain a grayscale image;
performing image noise reduction processing on the gray level image to obtain a noise-reduced image;
performing image segmentation processing on the noise reduction image to obtain a foreground area image;
taking the foreground area image as a preprocessing image;
the graying processing on the nth frame image to obtain a grayscale image comprises the following steps:
converting the nth frame image from an RGB color space to an HSV color space;
modeling the graying process to obtain the following graying functions:
[equation image: graying function f(x, y)]
in the formula, f(x, y) represents the gray value of the pixel point at position (x, y) in the nth frame image, and V(x, y), S(x, y) and H(x, y) respectively represent the component values of the brightness, saturation and hue components of that pixel point in the HSV color space; α and β represent two proportional parameters, and tp represents a preset calculation parameter with tp ∈ [0, 255],
graying each pixel point of the nth frame image by using the graying function in the HSV color space, and converting the nth frame image into a grayscale image;
α and β are obtained by:
the pixel point at position (x, y) in the nth frame image is marked as r, and the set of pixel points in the k × k neighborhood of the pixel point r is marked as neiU_r;
Establishing an optimal function model to be solved,
[equation images: optimal function model d(α, β) to be solved, with its constraint equations]
in the formula, d(α, β) represents the optimal function model to be solved; V(s), S(s) and H(s) respectively represent the component values of the brightness, saturation and hue components, in the HSV color space, of a pixel point s in the nth frame image; V(r), S(r) and H(r) respectively represent the corresponding component values of the pixel point r; Thre represents a preset threshold parameter; and fh represents a value-taking function,
[equation image: definition of the value-taking function fh]
the values of α and β that minimize d(α, β) are taken as the final values of α and β.
2. The video monitoring system based on big data technology as claimed in claim 1, wherein the surveillance video acquisition module comprises a plurality of surveillance cameras, each of which captures surveillance video of its own surveillance area.
3. The video monitoring system based on big data technology as claimed in claim 1, wherein said receiving and presenting said recognition result comprises:
receiving the identification result from the monitoring video identification module, sorting, for each shooting place contained in the identification result, the abnormal condition types of that place by the shooting time of the images, and displaying them.
CN202011357633.XA 2020-11-26 2020-11-26 Video monitoring system based on big data technology Active CN112532938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011357633.XA CN112532938B (en) 2020-11-26 2020-11-26 Video monitoring system based on big data technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011357633.XA CN112532938B (en) 2020-11-26 2020-11-26 Video monitoring system based on big data technology

Publications (2)

Publication Number Publication Date
CN112532938A CN112532938A (en) 2021-03-19
CN112532938B (en) 2021-08-31

Family

ID=74994057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011357633.XA Active CN112532938B (en) 2020-11-26 2020-11-26 Video monitoring system based on big data technology

Country Status (1)

Country Link
CN (1) CN112532938B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115408557B (en) * 2022-11-01 2023-02-03 吉林信息安全测评中心 Safety monitoring system based on big data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289948A (en) * 2011-09-02 2011-12-21 浙江大学 Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN106203261A (en) * 2016-06-24 2016-12-07 大连理工大学 Unmanned vehicle field water based on SVM and SURF detection and tracking
CN106327437A (en) * 2016-08-10 2017-01-11 大连海事大学 Color text image correction method and system
CN109214322A (en) * 2018-08-27 2019-01-15 厦门哲林软件科技有限公司 A kind of optimization method and system of file and picture visual effect
CN111369478A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Face image enhancement method and device, computer equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8503539B2 (en) * 2010-02-26 2013-08-06 Bao Tran High definition personal computer (PC) cam
CN102521578B (en) * 2011-12-19 2013-10-30 中山爱科数字科技股份有限公司 Method for detecting and identifying intrusion
JP6451133B2 (en) * 2014-08-01 2019-01-16 株式会社リコー Anomaly detection device, anomaly detection method, anomaly detection system, and program
CN105828052A (en) * 2016-06-02 2016-08-03 中国联合网络通信集团有限公司 Video monitoring method and monitoring system based on Storm technology
CN106341658A (en) * 2016-08-31 2017-01-18 广州精点计算机科技有限公司 Intelligent city security state monitoring system
CN110019898A (en) * 2017-08-08 2019-07-16 航天信息股份有限公司 A kind of animation image processing system
CN108040221B (en) * 2017-11-30 2020-05-12 江西洪都航空工业集团有限责任公司 Intelligent video analysis and monitoring system
CN111161313B (en) * 2019-12-16 2023-03-14 华中科技大学鄂州工业技术研究院 Multi-target tracking method and device in video stream
CN111177469A (en) * 2019-12-20 2020-05-19 国久大数据有限公司 Face retrieval method and face retrieval device
CN111930799A (en) * 2020-07-10 2020-11-13 江苏玻二代网络科技有限公司 Shared cloud warehouse platform system based on big data and use method
CN111912819B (en) * 2020-07-15 2023-11-07 北京华云星地通科技有限公司 Ecological detection method based on satellite data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289948A (en) * 2011-09-02 2011-12-21 浙江大学 Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN106203261A (en) * 2016-06-24 2016-12-07 大连理工大学 Unmanned vehicle field water based on SVM and SURF detection and tracking
CN106327437A (en) * 2016-08-10 2017-01-11 大连海事大学 Color text image correction method and system
CN109214322A (en) * 2018-08-27 2019-01-15 厦门哲林软件科技有限公司 A kind of optimization method and system of file and picture visual effect
CN111369478A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Face image enhancement method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Moving Object Detection and Tracking Methods in Video Image Sequences; Zhao Jia (赵佳); China Master's Theses Full-text Database, Information Science and Technology; 2012-02-15; full text *

Also Published As

Publication number Publication date
CN112532938A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN108229526B (en) Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN107563985B (en) Method for detecting infrared image air moving target
CN111325051A (en) Face recognition method and device based on face image ROI selection
CN113781421A (en) Underwater-based target identification method, device and system
CN113592776A (en) Image processing method and device, electronic device and storage medium
CN115063331B (en) Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method
CN107564041B (en) Method for detecting visible light image aerial moving target
CN113743378B (en) Fire monitoring method and device based on video
CN111192213A (en) Image defogging adaptive parameter calculation method, image defogging method and system
CN112532938B (en) Video monitoring system based on big data technology
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN111027637A (en) Character detection method and computer readable storage medium
CN112396016B (en) Face recognition system based on big data technology
CN110298796B (en) Low-illumination image enhancement method based on improved Retinex and logarithmic image processing
CN110136085B (en) Image noise reduction method and device
CN107704864B (en) Salient object detection method based on image object semantic detection
WO2024016632A1 (en) Bright spot location method, bright spot location apparatus, electronic device and storage medium
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN116403200A (en) License plate real-time identification system based on hardware acceleration
CN112070771B (en) Adaptive threshold segmentation method and device based on HS channel and storage medium
CN114693543A (en) Image noise reduction method and device, image processing chip and image acquisition equipment
CN110276260B (en) Commodity detection method based on depth camera
CN113486788A (en) Video similarity determination method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant