CN115065798B - Big data-based video analysis monitoring system - Google Patents
- Publication number: CN115065798B
- Application number: CN202210991968.XA
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- image
- video frame
- big data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/96—Management of image or video recognition tasks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a big-data-based video analysis monitoring system comprising a shooting module, a frame extracting module, a big data identification module and an alarm module. The shooting module is used for shooting a monitored area to obtain a monitoring video; the frame extracting module is used for performing frame extraction on the monitoring video at a self-adaptive frame number interval to obtain video frames; the big data identification module is used for identifying the video frames by big data technology to obtain an identification result; and the alarm module is used for prompting the person on duty according to the identification result. When the monitored area is monitored by video, extracting video frames at a self-adaptive frame number interval for big data identification effectively avoids the waste of computing resources caused by always extracting video frames at a fixed frame number interval.
Description
Technical Field
The invention relates to the field of monitoring, in particular to a video analysis monitoring system based on big data.
Background
Monitoring is the physical basis for real-time observation of the key departments or important places of various industries. Through a monitoring system, a management department can acquire effective data, image or sound information, and monitor and record the course of sudden abnormal events in time, so as to support efficient and timely command and dispatch, police force deployment, case handling and the like. With the rapid development and popularization of computer applications, a wave of digitization has risen globally, and making every kind of equipment digital is a primary objective of security protection. Digital monitoring and alarm systems are typically characterized by: real-time display of monitoring pictures, per-channel adjustment of video image quality, independently settable recording speed for each channel, quick retrieval, multiple recording modes, automatic backup, pan-tilt/lens control, network transmission, and the like.
With the development of big data technology, existing monitoring systems have also gained the function of analyzing the content of the monitoring video in real time. In the prior art, frame extraction is generally performed on the video at a fixed frame number interval, and big data technology is then used to identify the content of the extracted frames and judge whether an event of a set type occurs. However, during normal monitoring the probability that an event of the set type occurs is very small, so extracting frames at a fixed frame number interval obviously wastes computing resources.
Disclosure of Invention
The invention aims to disclose a big-data-based video analysis monitoring system that solves the problem of the prior art, in which frames are extracted at a fixed frame number interval and then identified to judge whether an event of a set type occurs, thereby wasting computing resources.
In order to achieve the purpose, the invention adopts the following technical scheme:
a video analysis monitoring system based on big data comprises a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for performing frame extracting processing on the monitoring video by adopting a self-adaptive frame number interval to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frameTo is aligned withPerforming identification processing, if the obtained identification result isIf the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted is calculated by adopting the following method:
if the obtained identification result isIf the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
wherein,indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,indicates the number of the preset number of frames,a preset lower limit value of the number of frames is shown,indicating a preset upper limit value of the number of frames.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
Preferably, the frame number of the extracted k-th video frame in the monitoring video is denoted num_k; the frame number of the (k+1)-th video frame to be extracted in the monitoring video is then calculated as:

num_{k+1} = num_k + fraitr(k+1)

wherein num_{k+1} represents the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
Preferably, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
Preferably, the identification result comprises that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for sending a prompt instruction to the prompt unit when the identification result is that the video frame contains an event of the set type;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying the video frame to obtain the grayscale image includes:
graying the video frame by using the following formula:
hdp(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)

wherein hdp(x,y) represents the pixel value of the pixel point with coordinates (x,y) in the grayscale image hdp, w1, w2 and w3 represent preset scale factors, and R(x,y), G(x,y) and B(x,y) respectively represent the pixel values of the pixel point with coordinates (x,y) in the red component image, the green component image and the blue component image; the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in the RGB color space.
When the monitored area is monitored by video, extracting video frames at a self-adaptive frame number interval for big data identification effectively avoids the waste of computing resources caused by always extracting video frames at a fixed frame number interval.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an embodiment of a big data based video analysis monitoring system according to the present invention.
FIG. 2 is a diagram of an embodiment of obtaining a pre-processed image according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the present invention provides a video analysis monitoring system based on big data, which includes a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frameTo is aligned withPerforming identification processing, if the obtained identification result isIf the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
if the obtained identification result isIf the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
wherein,indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,indicates a preset number of frames,represents a preset lower limit value of the number of frames,represents a preset upper limit value of the number of frames.
When the monitored area is monitored by video, extracting video frames at a self-adaptive frame number interval for big data identification effectively avoids the waste of computing resources caused by always extracting video frames at a fixed frame number interval.
When an event of the set type is detected, the invention shortens the frame number interval so as to improve the security level of the system; when no event is detected, it increases the frame number interval so as to avoid wasting computing resources.
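The adaptive update can be sketched in a few lines of Python. This is a minimal illustration assuming the clamped additive form of the interval formulas above; the values of timer, miffrait and mafrait are illustrative assumptions, not taken from the filing.

```python
def next_interval(prev_interval: int, event_detected: bool,
                  timer: int = 5, miffrait: int = 1, mafrait: int = 125) -> int:
    """Return fraitr(k+1) given fraitr(k) and the recognition result for frame k."""
    if event_detected:
        # An event of the set type was found: shorten the interval, clamped below.
        return max(prev_interval - timer, miffrait)
    # No event: lengthen the interval, clamped above.
    return min(prev_interval + timer, mafrait)

# Frame-number bookkeeping per the description: num(k+1) = num(k) + fraitr(k+1).
num_k, fraitr_k = 0, 25
for _ in range(5):
    event = False  # stand-in for the big data identification result
    fraitr_k = next_interval(fraitr_k, event)
    num_k += fraitr_k
    print(f"extract frame #{num_k} (interval {fraitr_k})")
```

With these illustrative values the interval grows from 25 frames toward the 125-frame ceiling while the scene stays quiet, and falls back toward the lower limit as soon as an event is recognized.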
Preferably, the event of the set type can be configured according to the monitored area, and a data set of the corresponding type must likewise be used for training to obtain the corresponding recognition model. For example, when an escalator is monitored, the event of the set type may be that a passenger on the escalator falls, that the number of passengers exceeds a set value, and the like.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
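A hypothetical sketch of the light supplementing decision, assuming brightness is estimated as the mean gray level of the latest frame; the threshold value is an assumption:

```python
import cv2
import numpy as np

BRIGHTNESS_THRESHOLD = 60.0  # assumed mean gray level below which light is supplemented

def needs_fill_light(frame_bgr: np.ndarray) -> bool:
    """Decide whether the light supplementing unit should illuminate the area."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) < BRIGHTNESS_THRESHOLD
```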
Preferably, the frame number of the extracted k-th video frame in the monitoring video is denoted num_k; the frame number of the (k+1)-th video frame to be extracted in the monitoring video is then calculated as:

num_{k+1} = num_k + fraitr(k+1)

wherein num_{k+1} represents the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
Preferably, the recognition model of the invention is trained in a distributed computing mode: the training task is distributed to a plurality of nodes for computation, and the computation results are finally aggregated to obtain the training result.
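As a toy sketch of this distributed mode, the snippet below fits a least-squares model on data shards in parallel worker processes and averages the per-node results; the patent does not fix the model or the aggregation rule, so both are assumptions made only for illustration.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fit_shard(shard):
    """Train on one node's shard (closed-form least squares as a stand-in model)."""
    X, y = shard
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def distributed_fit(X, y, nodes=4):
    """Distribute the training task to several workers and aggregate the results."""
    shards = list(zip(np.array_split(X, nodes), np.array_split(y, nodes)))
    with ProcessPoolExecutor(max_workers=nodes) as pool:
        weights = list(pool.map(fit_shard, shards))
    return np.mean(weights, axis=0)  # aggregate per-node results by averaging

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)
    print(distributed_fit(X, y))
```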
Preferably, as shown in fig. 2, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
Performing the noise reduction before the foreground extraction effectively reduces the influence of noise on the foreground extraction and improves the accuracy with which the extracted preprocessed image contains only the pixels of the foreground part.
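The three-stage pipeline can be outlined with standard OpenCV operators standing in for the operators detailed later (weighted graying, edge-guided noise detection, Otsu/watershed foreground extraction); treat this as a sketch, not the patented processing itself.

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Graying -> noise reduction -> foreground extraction, per the description."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)            # grayscale image
    denoised = cv2.fastNlMeansDenoising(gray, None, h=10)         # noise-reduced image
    _, fg_mask = cv2.threshold(denoised, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(denoised, denoised, mask=fg_mask)      # foreground pixels only
```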
Preferably, the identification result comprises that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for sending a prompt instruction to the prompt unit when the identification result is that the video frame contains an event of the set type;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying the video frame to obtain the grayscale image includes:
graying the video frame by using the following formula:
hdp(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)

wherein hdp(x,y) represents the pixel value of the pixel point with coordinates (x,y) in the grayscale image hdp, w1, w2 and w3 represent preset scale factors, and R(x,y), G(x,y) and B(x,y) respectively represent the pixel values of the pixel point with coordinates (x,y) in the red component image, the green component image and the blue component image; the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in the RGB color space.
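A direct implementation of this graying formula; the BT.601 luma weights are used only as a plausible choice for the preset scale factors w1, w2, w3, which the patent does not fix:

```python
import numpy as np

def to_gray(frame_rgb: np.ndarray, w=(0.299, 0.587, 0.114)) -> np.ndarray:
    """hdp(x,y) = w1*R(x,y) + w2*G(x,y) + w3*B(x,y) over the whole frame."""
    R, G, B = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    hdp = w[0] * R.astype(np.float64) + w[1] * G + w[2] * B
    return np.clip(hdp, 0, 255).astype(np.uint8)
```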
Preferably, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out edge detection on the grayscale image by using the Canny algorithm to obtain a set oneSet of edge pixel points;
carrying out noise detection on the pixel points in the grayscale image based on oneSet to obtain a set twoSet of noise pixel points;
and carrying out noise reduction processing on the pixel points in twoSet to obtain a noise-reduced image.
Unlike a common noise reduction approach, the invention does not directly perform noise reduction on all pixel points, because the high complexity of the noise reduction algorithm would slow down the noise reduction processing and thereby impair the real-time capability of the monitoring system to correctly identify events of the set type. Therefore, the invention first performs edge detection, then performs noise detection based on the edge detection result, and finally performs noise reduction on the set of pixel points obtained by the noise detection.
Preferably, the performing noise detection on the pixel points in the gray-scale image based on oneSet to obtain a set twoSet of noise pixel points includes:
respectively calculating the noise parameter of each pixel point in the gray level image;
and storing the pixel points with the noise parameters larger than the set parameter threshold value into the set twoSet.
Preferably, the noise parameter is calculated as follows:
for pixel wtj, the following formula is used to calculate the noise parameter of wtj:
wherein,representing the noise parameter of wtj,、denotes a predetermined weight coefficient, and niset denotes a value centered around wtjA collection of pixel points within a region of size,andrespectively representing pixel values of pixel wtj and pixel i,the standard deviation of the pixel values representing the pixels in niset,indicating the length of the connection between pixel wtj and pixel i,the standard deviation of the length of the connecting line between the pixel point in the niset and the pixel point wtj is represented;a similar parameter is indicated and is,numbs represents the number of pixels in niset that have the same gradient direction as wtj,represents the edge judgment parameter, if wtj belongs to the set oneSetIs 1.5, if wtj does not belong to the set oneSetThe content of the acid-resistant acrylic resin is 0.5,representing a preset constant coefficient.
The noise parameter is related to the result of the edge detection: when a pixel point belongs to the set oneSet, the probability that it is an edge pixel point is very high, so the edge-related term of the noise parameter is correspondingly reduced. However, because the edge detection is performed before the noise reduction, some noise pixel points may be wrongly identified as edge pixel points; when a noise pixel point is wrongly identified as an edge pixel point, the deviation term becomes very large, so that such a noise pixel point can still be correctly identified through the set parameter threshold. The deviation term considers the differences in connecting-line length and pixel value between the pixel point currently being calculated and the pixel points within the set region: the larger the deviation of the pixel point from the weighted result of its region, the larger the probability that it is a noise pixel point. This arrangement therefore improves the accuracy of the noise pixel point detection result.
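Since the original filing reproduces the noise-parameter formula only as an image, the sketch below is an assumed combination of the quantities defined above: a deviation term built from pixel-value and connecting-line differences, plus a gradient-direction similarity term damped by the edge judgment parameter bes. The weights, the gradient-direction tolerance and the form of psd_wtj are all assumptions.

```python
import numpy as np

def noise_parameter(gray, y, x, one_set, K=5, alpha=0.5, beta=0.5, phi=1.0):
    """Assumed noise parameter for the pixel at (row y, column x)."""
    half, (h, w) = K // 2, gray.shape
    ys, xs = np.mgrid[max(0, y - half):min(h, y + half + 1),
                      max(0, x - half):min(w, x + half + 1)]
    vals = gray[ys, xs].astype(np.float64)            # hdp_i over niset
    dists = np.hypot(ys - y, xs - x)                  # sdt(wtj, i) over niset
    cfniset = vals.std() + 1e-6                       # std of pixel values in niset
    dfniset = dists.std() + 1e-6                      # std of connecting-line lengths
    deviation = (alpha * np.abs(vals - float(gray[y, x])) / cfniset
                 + beta * dists / dfniset).mean()     # deviation term
    gy, gx = np.gradient(gray.astype(np.float64))     # per-pixel gradient direction
    ang = np.arctan2(gy[ys, xs], gx[ys, xs])
    numbs = int((np.abs(ang - np.arctan2(gy[y, x], gx[y, x])) < 0.1).sum())
    psd = phi * numbs / vals.size                     # similarity parameter (assumed form)
    bes = 1.5 if one_set[y, x] else 0.5               # edge judgment parameter
    return deviation + psd / bes                      # similarity term damped by bes
```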
Preferably, the performing noise reduction processing on the pixel point in the twoSet to obtain a noise-reduced image includes:
and carrying out noise reduction processing on the pixel points in the twoSet by using a non-local mean filtering algorithm to obtain a noise reduction image.
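A sketch of the selective write-back follows; for brevity the non-local means filter is run on the whole frame and only the flagged pixels are replaced, whereas a production version would evaluate the patch averages solely at the twoSet pixels to realize the speed-up the description aims at:

```python
import cv2
import numpy as np

def denoise_noise_pixels(gray: np.ndarray, two_set: np.ndarray) -> np.ndarray:
    """Replace only the pixels flagged in the boolean mask two_set."""
    nlm = cv2.fastNlMeansDenoising(gray, None, h=10,
                                   templateWindowSize=7, searchWindowSize=21)
    out = gray.copy()
    out[two_set] = nlm[two_set]   # pixels outside twoSet stay untouched
    return out
```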
Preferably, the performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image includes:
performing foreground extraction processing on the noise-reduced image by using an Otsu method to obtain a set rdSet of foreground pixels;
performing foreground extraction processing on the noise-reduced image by using a watershed algorithm to obtain a set sdSet of foreground pixel points;
acquiring an intersection tdSet of the rdSet and the sdSet;
and taking the edge pixel points in tdSet as seed pixel points and performing region growing processing to obtain the preprocessed image.
Existing foreground extraction generally uses a single algorithm, so the continuity of the obtained preprocessed image is poor. The invention therefore processes the image with two algorithms, takes the intersection of the two processing results, and repairs holes based on that intersection, thereby improving the continuity of the image.
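The two-algorithm extraction and intersection can be sketched as below; the watershed seeding follows the usual OpenCV recipe, and a morphological closing stands in for the seed-based growing step, whose exact operator the patent does not specify:

```python
import cv2
import numpy as np

def extract_foreground(denoised: np.ndarray) -> np.ndarray:
    """Otsu mask (rdSet) intersected with watershed mask (sdSet), then hole repair."""
    _, rd = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Watershed seeded from sure-foreground / sure-background regions.
    sure_bg = cv2.dilate(rd, np.ones((3, 3), np.uint8), iterations=3)
    dist = cv2.distanceTransform(rd, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(cv2.cvtColor(denoised, cv2.COLOR_GRAY2BGR), markers)
    sd = np.where(markers > 1, 255, 0).astype(np.uint8)
    td = cv2.bitwise_and(rd, sd)                       # intersection tdSet
    # Closing repairs small holes, standing in for the seed-based growing step.
    return cv2.morphologyEx(td, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```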
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (6)
1. A video analysis monitoring system based on big data is characterized by comprising a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k-th video frame fra_k, fra_k is subjected to identification processing; if the obtained identification result is that fra_k contains an event of the set type, the frame number interval between the (k+1)-th video frame to be extracted and the extracted k-th video frame is calculated as:

fraitr(k+1) = max(fraitr(k) - timer, miffrait)

if the obtained identification result is that fra_k does not contain an event of the set type, the frame number interval between the (k+1)-th video frame to be extracted and the extracted k-th video frame is calculated as:

fraitr(k+1) = min(fraitr(k) + timer, mafrait)

wherein fraitr(k+1) represents the frame number interval between the (k+1)-th video frame to be extracted and the extracted k-th video frame, fraitr(k) represents the frame number interval between the extracted k-th video frame and the extracted (k-1)-th video frame, timer represents a preset number of frames, miffrait represents a preset lower limit of the frame number interval, and mafrait represents a preset upper limit of the frame number interval;
the method for identifying and processing the video frame by adopting the big data technology to obtain the identification result comprises the following steps:
preprocessing a video frame to obtain a preprocessed image;
inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result;
the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image;
the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out edge detection on the gray level image by using a Canny algorithm to obtain a set oneSet of edge pixel points;
noise detection is carried out on pixel points in the gray image based on oneSet to obtain a set twoSet of noise pixel points;
carrying out noise reduction processing on pixel points in the twoSet to obtain a noise reduction image;
the noise detection is performed on the pixel points in the gray level image based on the oneSet to obtain a set twoSet of the noise pixel points, and the method comprises the following steps:
respectively calculating the noise parameter of each pixel point in the gray level image;
storing the pixel points with the noise parameters larger than the set parameter threshold value into a set twoSet;
the noise parameter is calculated in the following way:
for a pixel point wtj, the noise parameter nsc_wtj of wtj is calculated from the following quantities:
α and β, preset weight coefficients; niset, the set of pixel points in a K×K region centered on wtj; hdp_wtj and hdp_i, the pixel values of pixel points wtj and i respectively; cfniset, the standard deviation of the pixel values of the pixel points in niset; sdt(wtj, i), the length of the connecting line between pixel points wtj and i; dfniset, the standard deviation of the lengths of the connecting lines between the pixel points in niset and wtj; psd_wtj, a similarity parameter based on numbs, the number of pixel points in niset whose gradient direction is the same as that of wtj; bes, an edge judgment parameter equal to 1.5 if wtj belongs to the set oneSet and 0.5 if wtj does not belong to it; and Φ, a preset constant coefficient.
2. The big data based video analysis monitoring system according to claim 1, wherein the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
3. The big data based video analysis monitoring system according to claim 1, wherein the frame number of the extracted k-th video frame in the monitoring video is denoted num_k, and the frame number of the (k+1)-th video frame to be extracted in the monitoring video is calculated as:

num_{k+1} = num_k + fraitr(k+1)

wherein num_{k+1} represents the frame number of the (k+1)-th video frame to be extracted in the monitoring video.
4. The big data based video analysis monitoring system according to claim 1, wherein the identification result comprises that the video frame contains an event of the set type or that the video frame does not contain an event of the set type.
5. The big data based video analysis monitoring system according to claim 1, wherein the alarm module comprises a control unit and a prompt unit;
the control unit is used for sending a prompt instruction to the prompt unit when the identification result is that fra_k contains an event of the set type;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
6. The big data based video analysis monitoring system according to claim 1, wherein the graying the video frame to obtain a grayscale image comprises:
graying the video frame by using the following formula:
hdp(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)

wherein hdp(x,y) represents the pixel value of the pixel point with coordinates (x,y) in the grayscale image hdp; w1, w2 and w3 are preset scale factors; R(x,y), G(x,y) and B(x,y) respectively represent the pixel values of the pixel point with coordinates (x,y) in the red component image, the green component image and the blue component image; and the red component image, the green component image and the blue component image are the images of the red, green and blue components of the video frame in the RGB color space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210991968.XA CN115065798B (en) | 2022-08-18 | 2022-08-18 | Big data-based video analysis monitoring system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210991968.XA CN115065798B (en) | 2022-08-18 | 2022-08-18 | Big data-based video analysis monitoring system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115065798A CN115065798A (en) | 2022-09-16 |
CN115065798B true CN115065798B (en) | 2022-11-22 |
Family
ID=83208138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210991968.XA Active CN115065798B (en) | 2022-08-18 | 2022-08-18 | Big data-based video analysis monitoring system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115065798B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115761571A (en) * | 2022-10-26 | 2023-03-07 | 北京百度网讯科技有限公司 | Video-based target retrieval method, device, equipment and storage medium |
CN115408557B (en) * | 2022-11-01 | 2023-02-03 | 吉林信息安全测评中心 | Safety monitoring system based on big data |
CN116805433B (en) * | 2023-06-27 | 2024-02-13 | 北京奥康达体育科技有限公司 | Human motion trail data analysis system |
CN117404636A (en) * | 2023-09-15 | 2024-01-16 | 山东省金海龙建工科技有限公司 | Intelligent street lamp for parking lot based on image processing |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572357A (en) * | 2011-12-31 | 2012-07-11 | 中兴通讯股份有限公司 | Video monitoring system front end memory method and video monitoring system |
CN104618679A (en) * | 2015-03-13 | 2015-05-13 | 南京知乎信息科技有限公司 | Method for extracting key information frame from monitoring video |
CN111523347A (en) * | 2019-02-01 | 2020-08-11 | 北京奇虎科技有限公司 | Image detection method and device, computer equipment and storage medium |
CN111064924A (en) * | 2019-11-26 | 2020-04-24 | 天津易华录信息技术有限公司 | Video monitoring method and system based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
CN115065798A (en) | 2022-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115065798B (en) | Big data-based video analysis monitoring system | |
KR101942808B1 (en) | Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN | |
CN110348312A (en) | A kind of area video human action behavior real-time identification method | |
KR102194499B1 (en) | Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN and Driving Method Thereof | |
CN111401311A (en) | High-altitude parabolic recognition method based on image detection | |
WO2019114145A1 (en) | Head count detection method and device in surveillance video | |
CN106851229B (en) | Security and protection intelligent decision method and system based on image recognition | |
US20100027842A1 (en) | Object detection method and apparatus thereof | |
CN113887412A (en) | Detection method, detection terminal, monitoring system and storage medium for pollution emission | |
CN112287823A (en) | Facial mask identification method based on video monitoring | |
CN110781735A (en) | Alarm method and system for identifying on-duty state of personnel | |
CN113065568A (en) | Target detection, attribute identification and tracking method and system | |
CN110728212B (en) | Road well lid monitoring device and monitoring method based on computer vision | |
CN109740527A (en) | Image processing method in a kind of video frame | |
CN113989732A (en) | Real-time monitoring method, system, equipment and readable medium based on deep learning | |
CN117557937A (en) | Monitoring camera image anomaly detection method and system | |
CN116749817A (en) | Remote control method and system for charging pile | |
CN116310953A (en) | Pontoon off-grid intelligent detection method of cold source interception net | |
CN111325731A (en) | Installation detection method and device of remote control device | |
CN111145219B (en) | Efficient video moving target detection method based on Codebook principle | |
JP2019211921A (en) | Object recognition system and object recognition method | |
CN114627434A (en) | Automobile sales exhibition room passenger flow identification system based on big data | |
CN111553408B (en) | Automatic test method for video recognition software | |
CN117011288B (en) | Video quality diagnosis method and system | |
CN115408557B (en) | Safety monitoring system based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||