CN115065798B - Big data-based video analysis monitoring system

Info

Publication number
CN115065798B
CN115065798B
Authority
CN
China
Prior art keywords
video
frame
image
video frame
big data
Prior art date
Legal status
Active
Application number
CN202210991968.XA
Other languages
Chinese (zh)
Other versions
CN115065798A (en)
Inventor
陈志明
陈博允
Current Assignee
Guangzhou Zhilian Information Technology Co ltd
Guangzhou Intelligent Computing Information Technology Co ltd
Original Assignee
Guangzhou Zhilian Information Technology Co ltd
Guangzhou Intelligent Computing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Zhilian Information Technology Co ltd, Guangzhou Intelligent Computing Information Technology Co ltd filed Critical Guangzhou Zhilian Information Technology Co ltd
Priority to CN202210991968.XA priority Critical patent/CN115065798B/en
Publication of CN115065798A publication Critical patent/CN115065798A/en
Application granted granted Critical
Publication of CN115065798B publication Critical patent/CN115065798B/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/96: Management of image or video recognition tasks
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a big data-based video analysis monitoring system, which comprises a shooting module, a frame extraction module, a big data identification module and an alarm module. The shooting module is used for shooting the monitoring area to obtain a monitoring video; the frame extraction module is used for performing frame extraction on the monitoring video at an adaptive frame number interval to obtain video frames; the big data identification module is used for identifying the video frames by big data technology to obtain an identification result; and the alarm module is used for prompting the person on duty according to the identification result. When the monitoring area is monitored by video, video frames are extracted at an adaptive frame number interval for big data identification, effectively avoiding the waste of computing resources caused by always extracting video frames at a fixed frame number interval.

Description

Big data-based video analysis monitoring system
Technical Field
The invention relates to the field of monitoring, and in particular to a big data-based video analysis monitoring system.
Background
Monitoring is the physical basis for real-time surveillance of key departments or important places in various industries. Through a monitoring system, a management department can obtain effective data, image or sound information and monitor and record the course of sudden abnormal events in time, so as to support efficient and timely command and dispatch, police force deployment, case handling, and the like. With the rapid development and popularization of computer applications, a strong wave of digitalization has risen globally, and the digitalization of all kinds of equipment has become a primary objective of security protection. Digital monitoring and alarm systems are characterized by real-time display of monitoring pictures, single-channel adjustment of video image quality, independently settable recording speed for each channel, fast retrieval, multiple recording modes, automatic backup, pan-tilt/lens control, network transmission, and the like.
With the development of big data technology, existing monitoring systems have also developed the function of analyzing the content of the monitored video in real time. In the prior art, frames are generally extracted from the video at a fixed frame number interval, and big data technology is then used to identify the content of the monitoring video and judge whether an event of a set type has occurred. However, during normal monitoring the probability that an event of the set type occurs is very small, so extracting frames at a fixed frame number interval obviously wastes computing resources.
Disclosure of Invention
The invention aims to disclose a big data-based video analysis monitoring system, to solve the problem that an existing video monitoring system extracts frames at a fixed frame number interval and then identifies the frame pictures to judge whether an event of a set type has occurred, thereby wasting computing resources.
In order to achieve the purpose, the invention adopts the following technical scheme:
a video analysis monitoring system based on big data comprises a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for performing frame extracting processing on the monitoring video by adopting a self-adaptive frame number interval to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frame
Figure 224325DEST_PATH_IMAGE001
To is aligned with
Figure 741894DEST_PATH_IMAGE001
Performing identification processing, if the obtained identification result is
Figure 704033DEST_PATH_IMAGE001
If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted is calculated by adopting the following method:
Figure 801302DEST_PATH_IMAGE002
if the obtained identification result is
Figure 430867DEST_PATH_IMAGE001
If the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
Figure 322600DEST_PATH_IMAGE003
wherein the content of the first and second substances,
Figure 571703DEST_PATH_IMAGE004
indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,
Figure 738242DEST_PATH_IMAGE005
indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,
Figure 956734DEST_PATH_IMAGE006
indicates the number of the preset number of frames,
Figure 19368DEST_PATH_IMAGE007
a preset lower limit value of the number of frames is shown,
Figure 690520DEST_PATH_IMAGE008
indicating a preset upper limit value of the number of frames.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
Preferably, the frame number of the k-th extracted video frame in the monitoring video is recorded as num_k; the frame number of the (k+1)-th video frame to be extracted in the monitoring video is then calculated as:
num_{k+1} = num_k + fraitr(k+1)
wherein num_{k+1} denotes the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
Preferably, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
Preferably, the identification result is either that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging whether the identification result is
Figure 536454DEST_PATH_IMAGE001
When the event contains the set type of event, a prompt instruction is sent to a prompt unit;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying of the video frame to obtain the grayscale image includes:
graying the video frame using the following formula:
hdp(x, y) = w1 × R(x, y) + w2 × G(x, y) + w3 × B(x, y)
wherein hdp(x, y) denotes the pixel value of the pixel point with coordinates (x, y) in the grayscale image hdp, w1, w2 and w3 denote preset scale factors, and R(x, y), G(x, y) and B(x, y) denote the pixel values of the pixel point with coordinates (x, y) in the red component image, the green component image and the blue component image respectively; the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in RGB color space.
When the monitoring area is monitored by video, video frames are extracted at an adaptive frame number interval for big data identification, effectively avoiding the waste of computing resources caused by always extracting video frames at a fixed frame number interval.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an embodiment of a big data based video analysis monitoring system according to the present invention.
FIG. 2 is a diagram of an embodiment of obtaining a pre-processed image according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the present invention provides a video analysis monitoring system based on big data, which includes a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frame
Figure 155730DEST_PATH_IMAGE001
To is aligned with
Figure 537033DEST_PATH_IMAGE001
Performing identification processing, if the obtained identification result is
Figure 439130DEST_PATH_IMAGE001
If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
Figure 185369DEST_PATH_IMAGE018
if the obtained identification result is
Figure 540127DEST_PATH_IMAGE001
If the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
Figure 256279DEST_PATH_IMAGE019
wherein the content of the first and second substances,
Figure 747303DEST_PATH_IMAGE004
indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,
Figure 726761DEST_PATH_IMAGE020
indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,
Figure 34726DEST_PATH_IMAGE006
indicates a preset number of frames,
Figure 492253DEST_PATH_IMAGE007
represents a preset lower limit value of the number of frames,
Figure 368942DEST_PATH_IMAGE008
represents a preset upper limit value of the number of frames.
When the monitoring area is monitored by video, video frames are extracted at an adaptive frame number interval for big data identification, effectively avoiding the waste of computing resources caused by always extracting video frames at a fixed frame number interval.
When an event is detected, the invention shortens the frame extraction interval to raise the security level of the system; when no event is detected, it increases the frame interval to avoid wasting computing resources.
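The update rule can be sketched in Python as follows. The additive step and the max/min clamping are an assumed reading of the formulas, which are reproduced only as images in the source, but they are consistent with the stated behavior of shortening the interval on detection and lengthening it otherwise; the default values of timer, miffraitr and mafraitr are illustrative, not patent values.

```python
# Sketch of the adaptive frame-interval update; the additive step and the
# clamping to preset limits are assumptions consistent with the described
# behavior. timer, miffraitr and mafraitr are the preset step, lower limit
# and upper limit named in the claims (default values are illustrative).
def next_interval(fraitr_k: int, event_detected: bool,
                  timer: int = 5, miffraitr: int = 1, mafraitr: int = 50) -> int:
    """Return fraitr(k+1) given fraitr(k) and the recognition result for fra_k."""
    if event_detected:
        # Set-type event found in fra_k: sample more densely, never below the floor.
        return max(fraitr_k - timer, miffraitr)
    # No event: back off to save computing resources, never above the ceiling.
    return min(fraitr_k + timer, mafraitr)

# Frame number of the next frame to extract (claim 3): num_{k+1} = num_k + fraitr(k+1).
def next_frame_number(num_k: int, fraitr_k1: int) -> int:
    return num_k + fraitr_k1
```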
Preferably, the event of the set type can be set according to the monitoring area, and a data set of the corresponding type must also be used to train the corresponding recognition model. For example, when an escalator is monitored, the event of the set type may be that a person riding the escalator falls, that the number of passengers exceeds a set value, and the like.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
Preferably, the frame number of the k-th extracted video frame in the monitoring video is recorded as num_k; the frame number of the (k+1)-th video frame to be extracted in the monitoring video is then calculated as:
num_{k+1} = num_k + fraitr(k+1)
wherein num_{k+1} denotes the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
Preferably, the recognition model of the invention is trained in a distributed computing mode: the training tasks are distributed to a plurality of nodes for computation, and the computation results are finally collected to obtain the training result.
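A minimal sketch of such distributed training, assuming PyTorch and its DistributedDataParallel wrapper (the patent names no framework): each node computes gradients on its own shard of the data, and the gradients are all-reduced across nodes, matching the distribute-then-collect description above.

```python
# Sketch only: distributed training with PyTorch DDP, assumed as the
# distribution mechanism. Launch with torchrun, which supplies the
# rendezvous environment variables (MASTER_ADDR, RANK, WORLD_SIZE, ...).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model: torch.nn.Module, loader, epochs: int = 1):
    dist.init_process_group(backend="gloo")          # one process per node
    ddp_model = DDP(model)
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:                # each node sees its own shard
            opt.zero_grad()
            loss = loss_fn(ddp_model(images), labels)
            loss.backward()                          # gradients are all-reduced here
            opt.step()
    dist.destroy_process_group()
```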
Preferably, as shown in fig. 2, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
Performing noise reduction before foreground extraction effectively reduces the influence of noise on the foreground extraction and improves the accuracy with which the extracted preprocessed image contains only the pixels of the foreground part.
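As an illustration, the three preprocessing steps can be sketched with OpenCV as follows; the concrete operators here are simple stand-ins, since the patent's own noise-reduction and foreground-extraction schemes are detailed in the following paragraphs.

```python
# Minimal OpenCV sketch of the preprocessing pipeline: graying, noise
# reduction, foreground extraction. Operators are illustrative stand-ins.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # graying
    denoised = cv2.fastNlMeansDenoising(gray, h=10)      # noise reduction
    # Foreground extraction: Otsu threshold as a stand-in for the combined
    # Otsu + watershed scheme described later in this document.
    _, fg = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return fg
```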
Preferably, the identification result is either that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging whether the identification result is
Figure 44326DEST_PATH_IMAGE001
When the event contains the set type of event, a prompt instruction is sent to a prompt unit;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying of the video frame to obtain the grayscale image includes:
graying the video frame using the following formula:
hdp(x, y) = w1 × R(x, y) + w2 × G(x, y) + w3 × B(x, y)
wherein hdp(x, y) denotes the pixel value of the pixel point with coordinates (x, y) in the grayscale image hdp, w1, w2 and w3 denote preset scale factors, and R(x, y), G(x, y) and B(x, y) denote the pixel values of the pixel point with coordinates (x, y) in the red component image, the green component image and the blue component image respectively; the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in RGB color space.
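A sketch of this weighted graying in Python; the default weights shown are the common luminance coefficients and are an assumption, since the patent only calls w1, w2, w3 preset scale factors.

```python
import numpy as np

# Weighted graying: hdp(x, y) = w1*R(x, y) + w2*G(x, y) + w3*B(x, y).
# Default weights are the usual luminance coefficients, not patent values.
def to_gray(frame_rgb: np.ndarray, w=(0.299, 0.587, 0.114)) -> np.ndarray:
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return (w[0] * r + w[1] * g + w[2] * b).astype(np.uint8)
```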
Preferably, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out edge detection on the gray level image by using a Canny algorithm to obtain a set oneSet of edge pixel points;
noise detection is carried out on pixel points in the gray-scale image based on oneSet, and a set twoSet of the noise pixel points is obtained;
and carrying out noise reduction processing on the pixel points in the twoSet to obtain a noise reduction image.
Unlike a general noise reduction scheme, the invention does not apply noise reduction directly to all pixel points: the high complexity of the noise reduction algorithm would slow the noise reduction and thus harm the real-time performance with which the monitoring system correctly identifies events of the preset type. The invention therefore performs edge detection first, then performs noise detection based on the edge detection result, and finally applies noise reduction to the set of pixel points obtained from the noise detection.
Preferably, the performing noise detection on the pixel points in the gray-scale image based on oneSet to obtain a set twoSet of noise pixel points includes:
respectively calculating the noise parameter of each pixel point in the gray level image;
and storing the pixel points with the noise parameters larger than the set parameter threshold value into the set twoSet.
Preferably, the noise parameter is calculated as follows:
for a pixel point wtj, the noise parameter nsc_wtj of wtj is calculated using the following formula:
[formula given only as an image in the original]
wherein nsc_wtj denotes the noise parameter of wtj, α and β denote preset weight coefficients, niset denotes the set of pixel points in a K × K region centered on wtj, hdp_wtj and hdp_i denote the pixel values of pixel points wtj and i respectively, cfniset denotes the standard deviation of the pixel values of the pixel points in niset, sdt(wtj, i) denotes the length of the line between pixel points wtj and i, and dfniset denotes the standard deviation of the lengths of the lines between the pixel points in niset and wtj; psd_wtj denotes a similarity parameter:
[formula given only as an image in the original]
numbs denotes the number of pixel points in niset whose gradient direction is the same as that of wtj; best denotes the edge judgment parameter, which is 1.5 if wtj belongs to the set oneSet and 0.5 if wtj does not belong to the set oneSet; and Φ denotes a preset constant coefficient.
The noise parameter of the invention is related to the result of edge detection: when a pixel point belongs to the set oneSet, the probability that it is an edge pixel point is very high, so the value of the right part of the above formula is correspondingly reduced. However, because the invention performs edge detection before noise reduction, some noise pixel points may be wrongly identified as edge pixel points; in that case the value of the left part becomes very high, so such noise pixel points can still be correctly identified through the set parameter threshold. The left part considers the differences in line length and pixel value between the pixel point currently being calculated and the pixel points within the set range; the larger the weighted difference between the pixel point and its region, the larger the probability that it is a noise pixel point. This arrangement therefore improves the accuracy of the noise pixel point detection result.
Preferably, the performing noise reduction processing on the pixel point in the twoSet to obtain a noise-reduced image includes:
and carrying out noise reduction processing on the pixel points in the twoSet by using a non-local mean filtering algorithm to obtain a noise reduction image.
Preferably, the performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image includes:
performing foreground extraction processing on the noise-reduced image by using an Otsu method to obtain a set rdSet of foreground pixels;
performing foreground extraction processing on the noise-reduced image by using a watershed algorithm to obtain a set sdSet of foreground pixel points;
acquiring an intersection tdSet of the rdSet and the sdSet;
and taking the edge pixel points in the tdSet as seed pixel points, and performing image growth processing to obtain a preprocessed image.
An existing foreground extraction algorithm generally extracts with a single algorithm, so the continuity of the resulting preprocessed image is poor. The invention therefore processes the image with two algorithms, takes the intersection of the processing results, and repairs holes based on that intersection, thereby improving the continuity of the image.
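A sketch of the two-algorithm scheme: Otsu thresholding gives rdSet, a marker-based watershed gives sdSet, their intersection is tdSet, and the final mask is grown from tdSet. The dilation-based growth step here is a simple stand-in for the seeded image-growth processing described above.

```python
# Sketch of dual-algorithm foreground extraction: Otsu (rdSet), watershed
# (sdSet), intersection (tdSet), then growth from the intersection.
import cv2
import numpy as np

def extract_foreground(denoised: np.ndarray) -> np.ndarray:
    _, rd_set = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu foreground
    # Watershed needs a 3-channel image and int32 markers.
    markers = np.zeros(denoised.shape, dtype=np.int32)
    markers[rd_set == 255] = 2                                       # probable foreground
    markers[cv2.erode(255 - rd_set, None) == 255] = 1                # probable background
    color = cv2.cvtColor(denoised, cv2.COLOR_GRAY2BGR)
    sd_set = (cv2.watershed(color, markers) == 2).astype(np.uint8) * 255
    td_set = cv2.bitwise_and(rd_set, sd_set)                         # intersection tdSet
    # Seeded growth stand-in: dilate tdSet, keep pixels supported by either mask.
    union = cv2.bitwise_or(rd_set, sd_set)
    grown = cv2.dilate(td_set, np.ones((5, 5), np.uint8), iterations=2)
    return cv2.bitwise_and(grown, union)
```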
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (6)

1. A video analysis monitoring system based on big data is characterized by comprising a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k-th video frame fra k To fra k Performing identification processing, if the obtained identification result is fra k If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
Figure FDA0003897111980000011
if the obtained identification result is fra k If the event does not contain the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted is calculated by adopting the following method:
Figure FDA0003897111980000012
wherein frailt (k + 1) represents a frame number interval between a (k + 1) th video frame to be extracted and a (k) th extracted video frame, frailt (k) represents a frame number interval between the (k) th extracted video frame and the (k-1) th extracted video frame, timer represents a preset number of frame numbers, miffrailt represents a preset lower limit value of frame numbers, and mafrailt represents a preset upper limit value of frame numbers;
the method for identifying and processing the video frame by adopting the big data technology to obtain the identification result comprises the following steps:
preprocessing a video frame to obtain a preprocessed image;
inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result;
the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image;
the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out edge detection on the gray level image by using a Canny algorithm to obtain a set oneSet of edge pixel points;
noise detection is carried out on pixel points in the gray image based on oneSet to obtain a set twoSet of noise pixel points;
carrying out noise reduction processing on pixel points in the twoSet to obtain a noise reduction image;
the noise detection is performed on the pixel points in the gray level image based on the oneSet to obtain a set twoSet of the noise pixel points, and the method comprises the following steps:
respectively calculating the noise parameter of each pixel point in the gray level image;
storing the pixel points with the noise parameters larger than the set parameter threshold value into a set twoSet;
the noise parameter is calculated in the following way:
for pixel wtj, the following formula is used to calculate the noise parameter of wtj:
[formula given only as an image in the original]
wherein nsc_wtj denotes the noise parameter of wtj, α and β denote preset weight coefficients, niset denotes the set of pixel points in a K × K region centered on wtj, hdp_wtj and hdp_i denote the pixel values of pixel points wtj and i respectively, cfniset denotes the standard deviation of the pixel values of the pixel points in niset, sdt(wtj, i) denotes the length of the line between wtj and i, and dfniset denotes the standard deviation of the lengths of the lines between the pixel points in niset and wtj; psd_wtj denotes a similarity parameter:
[formula given only as an image in the original]
numbs denotes the number of pixel points in niset whose gradient direction is the same as that of wtj; best denotes an edge judgment parameter, which is 1.5 if wtj belongs to the set oneSet and 0.5 if wtj does not belong to the set oneSet; and Φ denotes a preset constant coefficient.
2. The big data based video analysis monitoring system according to claim 1, wherein the photographing module comprises a photographing unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
3. The big data-based video analysis monitoring system according to claim 1, wherein the frame number of the k-th extracted video frame in the monitoring video is recorded as num_k, and the frame number of the (k+1)-th video frame to be extracted in the monitoring video is calculated as:
num_{k+1} = num_k + fraitr(k+1)
wherein num_{k+1} denotes the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
4. The big data-based video analysis monitoring system according to claim 1, wherein the identification result is either that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
5. The big data based video analysis monitoring system according to claim 1, wherein the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging that the identification result is fra k When the event contains the set type of event, sending a prompt instruction to a prompt unit;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
6. The big data based video analysis monitoring system according to claim 1, wherein the graying the video frame to obtain a grayscale image comprises:
graying the video frame by using the following formula:
hdp(x, y) = w1 × R(x, y) + w2 × G(x, y) + w3 × B(x, y)
wherein hdp(x, y) denotes the pixel value of the pixel point with coordinates (x, y) in the grayscale image hdp, w1, w2 and w3 denote preset scale factors, and R(x, y), G(x, y) and B(x, y) denote the pixel values of the pixel point with coordinates (x, y) in the red component image, the green component image and the blue component image respectively; the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in RGB color space.
CN202210991968.XA 2022-08-18 2022-08-18 Big data-based video analysis monitoring system Active CN115065798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210991968.XA CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210991968.XA CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Publications (2)

Publication Number Publication Date
CN115065798A CN115065798A (en) 2022-09-16
CN115065798B (en) 2022-11-22

Family

ID=83208138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210991968.XA Active CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Country Status (1)

Country Link
CN (1) CN115065798B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761571A (en) * 2022-10-26 2023-03-07 北京百度网讯科技有限公司 Video-based target retrieval method, device, equipment and storage medium
CN115408557B (en) * 2022-11-01 2023-02-03 吉林信息安全测评中心 Safety monitoring system based on big data
CN116805433B (en) * 2023-06-27 2024-02-13 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN117404636A (en) * 2023-09-15 2024-01-16 山东省金海龙建工科技有限公司 Intelligent street lamp for parking lot based on image processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572357A (en) * 2011-12-31 2012-07-11 中兴通讯股份有限公司 Video monitoring system front end memory method and video monitoring system
CN104618679A (en) * 2015-03-13 2015-05-13 南京知乎信息科技有限公司 Method for extracting key information frame from monitoring video
CN111064924A (en) * 2019-11-26 2020-04-24 天津易华录信息技术有限公司 Video monitoring method and system based on artificial intelligence
CN111523347A (en) * 2019-02-01 2020-08-11 北京奇虎科技有限公司 Image detection method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN115065798A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN115065798B (en) Big data-based video analysis monitoring system
KR101942808B1 (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN
KR102194499B1 (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN and Driving Method Thereof
CN106056079B (en) A kind of occlusion detection method of image capture device and human face five-sense-organ
US20080152236A1 (en) Image processing method and apparatus
CN113887412B (en) Detection method, detection terminal, monitoring system and storage medium for pollution emission
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN112287823A (en) Facial mask identification method based on video monitoring
CN110781735A (en) Alarm method and system for identifying on-duty state of personnel
CN113065568A (en) Target detection, attribute identification and tracking method and system
CN110728212B (en) Road well lid monitoring device and monitoring method based on computer vision
CN111581679A (en) Method for preventing screen from shooting based on deep network
CN113989732A (en) Real-time monitoring method, system, equipment and readable medium based on deep learning
CN109740527A (en) Image processing method in a kind of video frame
CN111325731A (en) Installation detection method and device of remote control device
CN111145219B (en) Efficient video moving target detection method based on Codebook principle
CN113743235B (en) Electric power inspection image processing method, device and equipment based on edge calculation
JP2019211921A (en) Object recognition system and object recognition method
CN111553408B (en) Automatic test method for video recognition software
CN114627434A (en) Automobile sales exhibition room passenger flow identification system based on big data
CN114913438A (en) Yolov5 garden abnormal target identification method based on anchor frame optimal clustering
CN117011288B (en) Video quality diagnosis method and system
CN112200036A (en) Student behavior remote monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant