CN113014876A - Video monitoring method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN113014876A (application CN202110212326.0A)
- Authority
- CN
- China
- Prior art keywords
- preset
- video image
- video
- detection
- video monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/4425—Monitoring of client processing errors or hardware failure
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application discloses a video monitoring method, a video monitoring device, electronic equipment and a readable storage medium, wherein the method comprises the following steps: under the condition that the video monitoring terminal is controlled to move according to a preset moving track, acquiring video data acquired by the video monitoring terminal at a current preset position, wherein the preset moving track comprises N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to the detection models one to one, the N preset positions comprise the current preset position, and N is an integer greater than 1; decoding the video data to obtain a video image; detecting the video image by using a target detection model to obtain a detection result, wherein the target detection model is the detection model corresponding to the electronic fence of the current preset position; and if the detection result is abnormal, generating alarm information. Therefore, the hardware cost of the video monitoring system can be reduced, and the utilization rate of the video monitoring terminal can be improved.
Description
Technical Field
The application belongs to the technical field of video monitoring, and particularly relates to a video monitoring method and device, electronic equipment and a readable storage medium.
Background
With the development of video monitoring technology, video monitoring systems are widely used in various fields, such as industrial manufacturing, medical treatment, transportation, environmental protection, public safety, and the like. At present, because a video monitoring terminal can only obtain video data of a fixed preset position, each video monitoring terminal can only monitor the video data in a single scene. When the video monitoring system needs to monitor a plurality of scenes, a plurality of video monitoring terminals need to be arranged, so that the hardware cost of the video monitoring system is increased, and the utilization rate of a single video monitoring terminal is not high.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video monitoring method, an apparatus, an electronic device, and a readable storage medium, which can solve the problem in existing video monitoring methods that the hardware cost of the video monitoring system is high and the utilization rate of each video monitoring terminal is low.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video monitoring method, where the method includes:
under the condition that a video monitoring terminal is controlled to move according to a preset moving track, acquiring video data acquired by the video monitoring terminal at a current preset position, wherein the preset moving track comprises N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to detection models one to one, the N preset positions comprise the current preset position, and N is an integer greater than 1;
decoding the video data to obtain a video image;
detecting the video image by using a target detection model to obtain a detection result, wherein the target detection model is the detection model corresponding to the electronic fence of the current preset position;
and if the detection result is abnormal, generating alarm information.
Further, the detecting the video image by using the target detection model to obtain a detection result includes:
acquiring the position parameters of the electronic fence of the current preset position;
cutting the video image according to the position parameter to obtain a cut video image;
and detecting the cut video image by using the target detection model to obtain a detection result.
Further, the cutting the video image according to the position parameter to obtain a cut video image includes:
acquiring target parameters in the position parameters, wherein the target parameters comprise a minimum value and a maximum value in a horizontal direction and a minimum value and a maximum value in a vertical direction in a preset coordinate system;
determining a cutting area according to the minimum value and the maximum value in the horizontal direction and the minimum value and the maximum value in the vertical direction;
and according to the cutting area, cutting the video image to obtain a cut video image.
Further, the cutting the video image according to the cutting area to obtain a cut video image includes:
according to the cutting area, cutting the video image to obtain a first intermediate video image;
adjusting the first intermediate video image to a preset size to obtain a second intermediate video image;
and filtering the second intermediate video image to obtain a cut video image.
Furthermore, each preset position in the N preset positions is also provided with a preset stay time and an available time range; the preset stay time refers to the stay time of the video monitoring terminal when the video monitoring terminal moves to each preset position, and the available time range refers to the effective time range of the detection model corresponding to the electronic fence set in each preset position.
Further, before the detecting the video image by using the object detection model, the method further includes:
acquiring sample video images corresponding to different scenes;
training a basic model according to the sample video image to obtain detection models corresponding to different scenes;
and respectively associating the detection models corresponding to the different scenes with the electronic fences arranged in the preset positions.
Further, the sample video image includes a training data set and a testing data set;
the training of the basic model according to the sample video image to obtain the detection models corresponding to different scenes comprises:
clustering the sample video images according to a preset clustering algorithm to obtain prior frame parameters corresponding to different scenes;
respectively training the basic model according to the prior frame parameters, preset configurable hyper-parameters and the training data set to obtain a plurality of candidate models corresponding to different scenes;
according to the test data set, evaluating a plurality of candidate models corresponding to different scenes to obtain evaluation results corresponding to the different scenes;
and determining the detection models corresponding to different scenes according to the evaluation results corresponding to the different scenes.
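The claimed training pipeline starts by clustering sample bounding boxes into prior frame (anchor) parameters before training candidate models. The patent only names "a preset clustering algorithm" (its concept list mentions k-means), so the sketch below assumes a plain k-means over bounding-box sizes; all function and variable names are illustrative, not from the patent:

```python
import random

def kmeans_anchor_boxes(box_sizes, k, iters=50, seed=0):
    # box_sizes: list of (w, h) bounding-box sizes taken from the sample
    # video images of one scene. Returns k prior-frame (anchor) sizes.
    rng = random.Random(seed)
    centers = rng.sample(box_sizes, k)  # initial centers: k distinct boxes
    for _ in range(iters):
        # Assign each box to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for w, h in box_sizes:
            j = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[j].append((w, h))
        # Move each center to the mean of its cluster (keep it if empty).
        for j, pts in enumerate(clusters):
            if pts:
                centers[j] = (sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts))
    return sorted(centers)
```

The resulting anchor sizes would then feed into training the base model alongside the preset configurable hyper-parameters, with the candidate models evaluated on the test data set as the claims describe.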
In a second aspect, an embodiment of the present application provides a video monitoring apparatus, including:
the first acquisition module is used for acquiring video data collected by the video monitoring terminal at a current preset position under the condition that the video monitoring terminal is controlled to move according to a preset moving track, wherein the preset moving track comprises N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to the detection models one to one, the N preset positions comprise the current preset position, and N is an integer greater than 1;
the decoding module is used for decoding the video data to obtain a video image;
the detection module is used for detecting the video image by using a target detection model to obtain a detection result, wherein the target detection model is a detection model corresponding to the electronic fence of the current preset position;
and the generating module is used for generating alarm information if the detection result is abnormal.
Further, the detection module includes:
the obtaining submodule is used for obtaining the position parameters of the electronic fence of the current preset position;
the cutting submodule is used for cutting the video image according to the position parameter to obtain a cut video image;
and the detection submodule is used for detecting the cut video image by using the target detection model to obtain a detection result.
Further, the cropping sub-module includes:
an obtaining unit, configured to obtain a target parameter in the position parameters, where the target parameter includes a minimum value and a maximum value in a horizontal direction and a minimum value and a maximum value in a vertical direction in a preset coordinate system;
the determining unit is used for determining a cutting area according to the minimum value and the maximum value in the horizontal direction and the minimum value and the maximum value in the vertical direction;
and the cutting unit is used for cutting the video image according to the cutting area to obtain the cut video image.
Further, the clipping unit is specifically configured to:
according to the cutting area, cutting the video image to obtain a first intermediate video image;
adjusting the first intermediate video image to a preset size to obtain a second intermediate video image;
and filtering the second intermediate video image to obtain a cut video image.
Furthermore, each preset position in the N preset positions is also provided with a preset stay time and an available time range; the preset stay time refers to the stay time of the video monitoring terminal when the video monitoring terminal moves to each preset position, and the available time range refers to the effective time range of the detection model corresponding to the electronic fence set in each preset position.
Further, the apparatus further comprises:
the second acquisition module is used for acquiring sample video images corresponding to different scenes;
the training module is used for training a basic model according to the sample video image to obtain detection models corresponding to different scenes;
and the association module is used for associating the detection models corresponding to the different scenes with the electronic fences arranged in the preset positions respectively.
Further, the sample video image includes a training data set and a testing data set; the training module comprises:
the clustering submodule is used for clustering the sample video images according to a preset clustering algorithm to obtain prior frame parameters corresponding to different scenes;
the training sub-module is used for respectively training the basic model according to the prior frame parameters, preset configurable hyper-parameters and the training data set to obtain a plurality of candidate models corresponding to different scenes;
the evaluation sub-module is used for evaluating a plurality of candidate models corresponding to different scenes according to the test data set to obtain evaluation results corresponding to the different scenes;
and the determining submodule is used for determining the detection models corresponding to different scenes according to the evaluation results corresponding to the different scenes.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In the embodiment of the application, under the condition that a video monitoring terminal is controlled to move according to a preset moving track, video data acquired by the video monitoring terminal at a current preset position is acquired, wherein the preset moving track comprises N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to the detection models one to one, the N preset positions comprise the current preset position, and N is an integer greater than 1; the video data is decoded to obtain a video image; the video image is detected by using a target detection model to obtain a detection result, wherein the target detection model is the detection model corresponding to the electronic fence of the current preset position; and if the detection result is abnormal, alarm information is generated. In this way, the video monitoring terminal can be controlled to move among the N preset positions to collect video data at each of them; meanwhile, by setting at least one electronic fence at each of the N preset positions, the video image corresponding to each electronic fence can be acquired and detected with the detection model corresponding to that electronic fence, thereby realizing video monitoring of different scenes. Therefore, the number of video monitoring terminals used can be reduced, which lowers the hardware cost of the video monitoring system; at the same time, the same video monitoring terminal can be used for monitoring video images of a plurality of different scenes, which improves the utilization rate of the video monitoring terminal.
Drawings
Fig. 1 is a flowchart of a video monitoring method according to an embodiment of the present disclosure;
fig. 2 is a second flowchart of a video monitoring method according to an embodiment of the present application;
fig. 3 is a flowchart for setting a moving track according to an embodiment of the present application;
fig. 4 is a third flowchart of a video monitoring method according to an embodiment of the present application;
fig. 5 is a structural diagram of a video monitoring apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein, and that the terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of terms, e.g., the first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The video monitoring method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of a video monitoring method according to an embodiment of the present disclosure. As shown in fig. 1, the video monitoring method specifically includes the following steps:
Step 101, under the condition that the video monitoring terminal is controlled to move according to a preset moving track, acquiring video data collected by the video monitoring terminal at the current preset position.
Specifically, the video monitoring method of the embodiment of the application is applied to a video monitoring system, and the video monitoring system comprises at least one video monitoring terminal and a server connected with the video monitoring terminal. The video monitoring terminal includes, but is not limited to, a network camera, or a terminal device with a camera, such as a mobile phone, a computer, etc. The server is used for detecting the collected video data by using the detection model and sending out alarm information when an abnormal condition is detected.
The preset position refers to a physical position where the video monitoring terminal acquires video data. The electronic fence is used for indicating the area needing important monitoring on each preset position, and the server can conveniently cut the video image of the preset position based on the electronic fence. For example, if the preset position a needs to monitor the conditions of vehicles on both sides of a road, two electronic fences can be arranged at the preset position a, and the two electronic fences are respectively arranged in the areas on the left and right sides of the road, so that when video data of the preset position a is analyzed, only video images in the two cut electronic fences need to be analyzed, and thus the monitoring of the vehicles on both sides of the road is realized. And if the preset position B is required to monitor the exhaust emission and the wastewater emission of a certain factory, two electronic fences can be arranged at the preset position B and are respectively arranged above the smoke exhaust port and the sewage exhaust port of the factory, so that when the video data of the preset position B are analyzed, only the video images in the two cut electronic fences need to be analyzed, and the monitoring of the exhaust emission and the wastewater emission of the factory is realized.
It should be noted that the monitoring scenes corresponding to the electronic fences may be the same or different, and thus the detection models corresponding to the electronic fences may be the same or different according to the types of the monitoring scenes. Each electronic fence is in one-to-one correspondence with one detection model, and therefore the server can detect the monitoring scene corresponding to the electronic fence according to the detection model corresponding to the electronic fence.
In the embodiment of the application, the server is preset with a preset moving track corresponding to the video monitoring terminal. The preset moving track comprises N preset positions, and each preset position is provided with at least one electronic fence, so that the server can control the video monitoring terminal to move according to the preset moving track, and in the moving process, video data collected by the preset positions are acquired.
Step 102, decoding the video data to obtain a video image.
After the server acquires the video data, the server can decode the video data to obtain corresponding video images. Specifically, since the video data acquired by the video monitoring terminal is a video stream, such as an RTSP (Real Time Streaming Protocol) stream, the server needs to transcode the video stream into video images according to a preset frame rate when receiving the video stream transmitted by the video monitoring terminal, for example into image data in base64 format. As an implementation manner, decoding of a video stream may be implemented by using the VideoCapture class for processing pictures and videos in OpenCV (a BSD-licensed cross-platform computer vision and machine learning software library); it may, of course, also be implemented with other video decoding software, and the present application is not specifically limited in this respect.
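As an illustrative sketch, the decoding step described above could use OpenCV's VideoCapture roughly as follows; the RTSP URL, frame rates, and helper names are hypothetical assumptions, not taken from the patent:

```python
import base64

def sample_indices(src_fps, target_fps, n_frames):
    """Indices of the frames kept when downsampling a stream captured at
    src_fps to the preset target_fps (simple uniform sampling)."""
    step = max(1, round(src_fps / target_fps))
    return list(range(0, n_frames, step))

def frame_to_base64(jpeg_bytes):
    """Encode one JPEG-compressed frame as base64 text, the image format
    mentioned in the description."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

if __name__ == "__main__":
    import cv2  # OpenCV: pulls frames from the RTSP stream

    cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical URL
    keep = set(sample_indices(src_fps=25, target_fps=5, n_frames=250))
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            ok, buf = cv2.imencode(".jpg", frame)  # compress frame to JPEG
            if ok:
                image_b64 = frame_to_base64(buf.tobytes())
        idx += 1
    cap.release()
```

The base64 images would then be handed to the target detection model in the next step.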
Step 103, detecting the video image by using a target detection model to obtain a detection result, wherein the target detection model is a detection model corresponding to the electronic fence of the current preset position.
After the server obtains the video image, the electronic fence corresponding to the current preset position can be obtained according to the current preset position, and then the target detection model corresponding to the electronic fence is obtained. Therefore, the server can detect the video image according to the obtained target detection model to obtain a detection result.
It should be noted that the number of target detection models matches the number of electronic fences of the current preset position. For example, assuming that the current preset position is provided with 2 electronic fences, the number of target detection models is 2. The target detection model here may be any object detection model, such as an R-CNN (Region-based Convolutional Neural Network) model, a Fast R-CNN model, an SSD (Single Shot MultiBox Detector) model, or a YOLO (You Only Look Once) model.
Step 104, if the detection result is abnormal, generating alarm information.
When the detection result is abnormal, the server can generate alarm information to prompt the user. Specifically, an abnormal detection result here can be understood as meaning that a target object exists in the video image, or that the number of target objects reaches a preset threshold. For example, if the target detection model is used for monitoring illegal vehicles on both sides of a road, the detection result is judged to be abnormal when illegal vehicles appear on either side of the road in the video image; and if the target detection model is used for monitoring the traffic flow of the morning rush hour, the detection result is judged to be abnormal when the number of vehicles appearing in the video image reaches a preset threshold, and so on. The alarm information includes, but is not limited to, alarm information in the form of text, pictures, sound, and the like.
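A minimal sketch of the abnormality decision described above (a target object present, or the object count reaching a preset threshold); the detection-record format, class names, and threshold default are assumptions for illustration only:

```python
def is_abnormal(detections, target_class, count_threshold=1):
    """Return True when the number of detected objects of the monitored
    class reaches the preset threshold (threshold 1 covers the simple
    'target object exists' case)."""
    count = sum(1 for d in detections if d["class"] == target_class)
    return count >= count_threshold
```

With a threshold of 1 this flags any illegal vehicle; with a larger threshold it implements the rush-hour traffic-flow example.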
In this embodiment, the server can control the video monitoring terminal to move among the N preset positions and collect video data at each of them. Meanwhile, by setting at least one electronic fence at each of the N preset positions, the video image corresponding to each electronic fence can be acquired and detected with the detection model corresponding to that electronic fence, thereby realizing video monitoring of different scenes. Therefore, the number of video monitoring terminals used can be reduced, which lowers the hardware cost of the video monitoring system; at the same time, the same video monitoring terminal can be used for monitoring video images of a plurality of different scenes, which improves the utilization rate of the video monitoring terminal.
Further, referring to fig. 2, fig. 2 is a second flowchart of a video monitoring method according to an embodiment of the present application. Based on the embodiment shown in fig. 1, the step 103 of detecting the video image by using the target detection model to obtain the detection result specifically includes the following steps:
Step 201, acquiring the position parameters of the electronic fence of the current preset position.
After the server acquires the video image, it can acquire the position parameters of the electronic fence of the current preset position. The position parameters here can be understood as the position, in the preset coordinate system, of the area enclosed by the electronic fence.
Step 202, cutting the video image according to the position parameters to obtain a cut video image.
After the server obtains the position parameters of the electronic fence of the current preset position, the video image can be cut according to the position parameters. Specifically, as one embodiment, the edge information of each electronic fence may be determined according to the position parameters of the electronic fence at the current preset position, and the video image may be cut along the edge of each electronic fence. As another embodiment, the maximum and minimum values in the horizontal direction and the maximum and minimum values in the vertical direction, in the preset coordinate system, may be determined from the position parameters of each electronic fence at the current preset position; the clipping area of each electronic fence may then be determined from these values, and clipping performed according to the clipping area of each electronic fence. In this way, by removing unimportant areas of the video image, the server only needs to identify and detect the cut video image, which avoids identifying and detecting the whole video image and improves the efficiency of identification and detection.
Step 203, detecting the cut video image by using the target detection model to obtain a detection result.
The cut video image obtained based on each electronic fence is input into the target detection model corresponding to that electronic fence, and the target detection model detects the cut video image to obtain the detection result corresponding to each scene. The video data collected at each of the N preset positions can be detected in this way, so that the video images corresponding to the electronic fences of every preset position are detected.
In the embodiment, by cutting the video image, only the cut video image needs to be identified and detected, so that the identification and detection of the whole video image are avoided, and the identification and detection efficiency is improved. In addition, the cut video image is input to the corresponding target detection model for detection, so that the detection result corresponding to each scene can be obtained, the video monitoring function of a plurality of different scenes can be realized through one video monitoring terminal, and the flexibility and the utilization rate of the video monitoring terminal are effectively improved.
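The per-fence dispatch described above can be sketched as follows. This is a hypothetical illustration: each video image cropped from an electronic fence is routed to the detection model associated with that fence, and fences whose result is abnormal are collected for alarming. All identifiers and the toy stand-in "model" are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of per-fence dispatch: route each cropped image to the
# detection model associated with its electronic fence, collect abnormal
# results for alarming. Fence ids and the toy "model" are illustrative.

def detect_all(cropped_images, models):
    """cropped_images: {fence_id: image}; models: {fence_id: callable model}."""
    return {fence_id: models[fence_id](image)
            for fence_id, image in cropped_images.items()}

# Toy stand-in for a detection model keyed by fence id.
models = {"001001001": lambda img: "abnormal" if sum(img) > 10 else "normal"}
results = detect_all({"001001001": [5, 9]}, models)
alarms = [f for f, r in results.items() if r == "abnormal"]
print(results, alarms)  # {'001001001': 'abnormal'} ['001001001']
```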
Further, the step 202 of cropping the video image according to the position parameter to obtain a cropped video image includes:
acquiring target parameters in the position parameters, wherein the target parameters comprise a minimum value and a maximum value in the horizontal direction and a minimum value and a maximum value in the vertical direction in a preset coordinate system;
determining a cropping area according to the minimum value and the maximum value in the horizontal direction and the minimum value and the maximum value in the vertical direction;
and cropping the video image according to the cropping area to obtain the cropped video image.
In an embodiment, the server may obtain the position parameters of each electronic fence, determine in a preset coordinate system the maximum value xmax and the minimum value xmin in the horizontal direction and the maximum value ymax and the minimum value ymin in the vertical direction among the position parameters of each electronic fence, and determine the cropping area of each electronic fence accordingly. The cropping area of each electronic fence is the rectangular area enclosed by the four points with coordinates (xmin, ymin), (xmin, ymax), (xmax, ymin) and (xmax, ymax). Finally, cropping is performed according to the cropping area of each electronic fence to obtain the cropped video image. In this way, the server can remove the parts of the video image outside the electronic fences and perform identification and detection only on the cropped video image, which avoids identifying and detecting the whole video image and improves the efficiency of identification and detection.
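The extremum-based cropping described above can be sketched in a few lines. This is a hypothetical illustration: the fence's position parameters are taken as polygon vertices in a preset coordinate system, the (xmin, ymin, xmax, ymax) extrema define the rectangular cropping area, and the image is cropped to that rectangle. All names and coordinates are illustrative assumptions.

```python
# Hypothetical sketch: derive the rectangular cropping area of an electronic
# fence from its position parameters (polygon vertices in a preset coordinate
# system) as the (xmin, ymin, xmax, ymax) extrema, then crop the video image.

def fence_crop_area(points):
    """Return (xmin, ymin, xmax, ymax) enclosing the fence's vertices."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def crop_image(image, area):
    """Crop a row-major image (list of pixel rows) to the fence rectangle."""
    xmin, ymin, xmax, ymax = area
    return [row[xmin:xmax] for row in image[ymin:ymax]]

fence = [(40, 30), (200, 25), (210, 160), (35, 150)]  # illustrative vertices
print(fence_crop_area(fence))  # (35, 25, 210, 160)
```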
Further, the above step of cropping the video image according to the cropping area to obtain the cropped video image specifically includes the following steps:
cropping the video image according to the cropping area to obtain a first intermediate video image;
adjusting the first intermediate video image to a preset size to obtain a second intermediate video image;
and filtering the second intermediate video image to obtain the cropped video image.
In an embodiment, the server may crop the video image based on the cropping area corresponding to each electronic fence to obtain a first intermediate video image corresponding to each electronic fence. After the first intermediate video images are obtained, each first intermediate video image is resized to a preset size to obtain a second intermediate video image corresponding to each electronic fence. Each second intermediate video image is then filtered to remove noise interference, yielding the cropped video image. Resizing and filtering the video image in this way can effectively improve the image quality and the accuracy of image identification, and also improves the accuracy of the detection result when the cropped video image is subsequently detected by the target detection model.
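The resize-then-filter step can be sketched as follows. This is a minimal pure-Python illustration with assumed names: the cropped image is resized to a preset size by nearest-neighbour sampling, then a 3x3 mean filter stands in for the noise-filtering step. A production system would typically use a library such as OpenCV (e.g. `cv2.resize` and a Gaussian blur) instead.

```python
# Illustrative sketch (names assumed): resize the cropped fence image to a
# preset size by nearest-neighbour sampling, then apply a 3x3 mean filter as
# a stand-in for the noise-filtering step described in the text.

def resize_nearest(img, size):
    """Resize a 2-D image (list of rows) to (height, width), nearest neighbour."""
    h, w = size
    src_h, src_w = len(img), len(img[0])
    return [[img[r * src_h // h][c * src_w // w] for c in range(w)]
            for r in range(h)]

def mean_filter3(img):
    """3x3 mean filter with edge clamping."""
    src_h, src_w = len(img), len(img[0])
    def px(r, c):  # clamp coordinates to the image border
        return img[min(max(r, 0), src_h - 1)][min(max(c, 0), src_w - 1)]
    return [[sum(px(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0
             for c in range(src_w)] for r in range(src_h)]

first = [[float(4 * r + c) for c in range(4)] for r in range(4)]  # cropped image
second = resize_nearest(first, (8, 8))                            # preset size
final = mean_filter3(second)
print(len(final), len(final[0]))  # 8 8
```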
It should be noted that, as another implementation, the resizing and the filtering of the video image may be performed simultaneously, or the video image may be filtered first and then resized; this application is not limited in this respect.
Of course, as yet another implementation, after the video image is resized and filtered, algorithms such as defogging and raindrop removal may be applied selectively according to the scene, so as to eliminate the influence of the environment on the video image quality in different scenes and thereby effectively improve the quality of the video image.
Furthermore, each preset position in the N preset positions is also provided with a preset stay time and an available time range; the preset stay time refers to the stay time of the video monitoring terminal when the video monitoring terminal moves to each preset position, and the available time range refers to the effective time range of the detection model corresponding to the electronic fence arranged in each preset position.
Specifically, the preset stay time refers to the time the video monitoring terminal stays when it moves to each preset position, and may be any duration, such as 5 seconds, 10 seconds, 5 minutes, or 1 hour; the stay time at each preset position may be the same or different. The available time range refers to the effective time range of the detection model corresponding to the electronic fence set in each preset position, and the available time ranges corresponding to the electronic fences may be the same or different. The preset stay time and the available time range can be set according to actual needs; this application is not specifically limited.
Specifically, the correspondence between each video monitoring terminal, preset position, electronic fence, detection model, preset stay time, and available time range may be stored in the server in advance. When the server collects video data and invokes the target detection model to detect video images, it can collect and detect based on this correspondence. For example, assume the correspondence between each video monitoring terminal, preset position, electronic fence, detection model, preset stay time, and available time range is as shown in the following table:
it can be seen that, for the video monitoring terminal 001, there may be 3 preset positions, namely 001001, 001002 and 001003, and each preset position includes at least one electronic fence. Each electronic fence corresponds to one detection model and is used for video monitoring of a different scene. Each preset position also corresponds to a preset stay time, so that the server can control the video monitoring terminal 001 to move among the 3 preset positions and stay for the corresponding preset stay time when moving to the corresponding preset position. For example, when the server controls the video monitoring terminal 001 to move to the preset position 001001 at the time point of 9:00, the video monitoring terminal 001 may stay at the preset position 001001 for 2 hours, and the server may determine whether to collect the video data of the preset stay time according to the available time range (i.e., the start time and the end time) and the valid flag of each detection model. Since the time that the video monitoring terminal 001 stays at the preset position 001001 falls within the available time range of the electronic fence 001001001 but not within that of the electronic fence 001001002, the server only needs to monitor the video image scene corresponding to the electronic fence 001001001. At this time, the server may request the video monitoring terminal 001 to upload the video data collected at the preset position 001001, decode the video data to obtain a video image, crop the video image according to the position parameters of the electronic fence 001001001 to obtain the cropped video image, and finally detect the video image corresponding to the electronic fence 001001001 by using the detection model 03.
When the server controls the video monitoring terminal 001 to move to the preset position 001002 at the time point of 11:00 and stay there for 1 hour, the stay of the video monitoring terminal 001 at the preset position 001002 falls within the available time ranges of the electronic fences 001002001 and 001002002, so the server needs to monitor the video image scenes corresponding to both. At this time, the server may request the video monitoring terminal 001 to upload the video data collected at the preset position 001002, decode the video data to obtain a video image, crop the video image according to the position parameters of the electronic fences 001002001 and 001002002 to obtain the cropped video images, and finally detect the video image corresponding to the electronic fence 001002001 by using the detection model 13 and the video image corresponding to the electronic fence 001002002 by using the detection model 06. When the server controls the video monitoring terminal 001 to move to the preset position 001003 at the time point of 12:00 and stay there for 0.5 hour, the stay of the video monitoring terminal 001 at the preset position 001003 falls within the available time range of the electronic fence 001003002, so the server needs to monitor the video image scene corresponding to the electronic fence 001003002.
At this time, the server may request the video monitoring terminal 001 to upload the video data collected at the preset position 001003, decode the video data to obtain a video image, crop the video image according to the position parameters of the electronic fence 001003002 to obtain the cropped video image, and finally detect the video image corresponding to the electronic fence 001003002 by using the detection model 11. By analogy, after the server completes monitoring at the preset position 001003, it may return to the preset position 001001 and perform the above loop again. Of course, in practical application, parameters such as the movement sequence of the preset positions in the preset movement trajectory, the stay time at each preset position, the number of electronic fences at each preset position, the detection models corresponding to the electronic fences, the available time ranges of those detection models, and whether the detection models are valid can all be flexibly set; this application is not specifically limited.
In this embodiment, each preset position is provided with a corresponding electronic fence as well as a corresponding preset stay time and available time range. The server can therefore control the video monitoring terminal to stay at each preset position for the preset stay time, ensuring that the video data required at each preset position is collected. The server can also control the effective period of each detection model according to the available time range, i.e., the effective time range of the detection model corresponding to the electronic fence set at each preset position: the server detects only the video data collected within the effective time range of each detection model and need not detect video data collected outside it, which effectively reduces the load of the server and improves its operating efficiency.
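The scheduling rule above can be sketched as a simple interval check. This is a hypothetical illustration: a fence's detection model is applied only when the terminal's dwell interval at the preset position falls within the model's available time range and the model is flagged valid. All names and example values are illustrative assumptions, not the patent's data.

```python
from datetime import time

# Hypothetical sketch of the scheduling rule: a fence's detection model is
# invoked only when the terminal's dwell interval at the preset position lies
# within the model's available time range and the model's valid flag is set.

def model_active(arrive, leave, avail_start, avail_end, valid=True):
    """True if the dwell interval [arrive, leave] lies within the available range."""
    return valid and avail_start <= arrive and leave <= avail_end

# Terminal 001 arrives at preset position 001001 at 9:00 and stays 2 hours.
arrive, leave = time(9, 0), time(11, 0)
print(model_active(arrive, leave, time(8, 0), time(12, 0)))   # True  -> detect
print(model_active(arrive, leave, time(13, 0), time(18, 0)))  # False -> skip
```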
Further, based on the embodiment shown in fig. 1, before the step 103 of detecting the video image by using the object detection model, the method further includes:
step 301, sample video images corresponding to different scenes are obtained.
The sample video images refer to a large number of video images collected for training and evaluating models. When obtaining the sample video images, surveillance videos in different scenes may be obtained according to the scenes corresponding to the detection models, frames may be extracted from the surveillance videos to obtain a large number of video images, and the video images may then be sorted, screened, and counted to ensure sample balance. Next, the video images are annotated with image annotation software such as LabelImg to obtain annotation files in XML format. In order to enhance the robustness and generalization of the detection models, data augmentation techniques such as random flipping, random cropping, noise addition, and random variation of brightness, channels, contrast, saturation, and hue are introduced to amplify the sample video images. Finally, the amplified video images and the annotation data are divided into a training data set and a test data set according to a preset ratio. The preset ratio may be 8:2, 9:1, or the like; this application is not specifically limited.
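The sample-preparation step above can be sketched as follows. All names are assumptions: the labelled samples are amplified with simple augmentation transforms, then split into training and test sets at a preset ratio (8:2 here); the single transform stands in for the augmentations listed above (random flipping, cropping, noise, brightness variation, and so on).

```python
import random

# Illustrative sketch (names assumed) of sample preparation: amplify the
# labelled samples with augmentation transforms, then split at a preset ratio.

def augment(samples, transforms):
    """Keep the originals and append one transformed copy per transform."""
    out = list(samples)
    for img, label in samples:
        for t in transforms:
            out.append((t(img), label))
    return out

def split(samples, ratio=0.8, seed=0):
    """Shuffle deterministically, then split at the preset ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * ratio)
    return shuffled[:k], shuffled[k:]

flip = lambda img: img[::-1]  # stands in for random flipping
samples = [([1, 2, 3], "scene_a"), ([4, 5, 6], "scene_b")]
train_set, test_set = split(augment(samples, [flip]))
print(len(train_set), len(test_set))  # 3 1
```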
Step 302, training the basic model according to the sample video image to obtain detection models corresponding to different scenes.
The basic model is a deep learning model, and the detection model is a target detection model based on the deep learning model, such as an R-CNN (Region-based Convolutional Neural Network) model, a Fast R-CNN (Fast Region-based Convolutional Neural Network) model, an SSD (Single Shot MultiBox Detector) model, or a YOLO (You Only Look Once) model.
Specifically, after the server acquires the sample video image, the training data set can be input to the basic model to obtain a plurality of candidate models in different scenes, then the plurality of candidate models are evaluated based on the test data set, and the model with the optimal performance is selected as the detection model.
And 303, associating the detection models corresponding to different scenes with the electronic fences arranged in the preset positions respectively.
After the server obtains the detection models corresponding to different scenes, it may associate the detection models corresponding to the different scenes with the electronic fences set in the respective preset positions, so that when the server obtains the video data of a preset position, the corresponding target detection model can be determined according to the electronic fence, thereby realizing the video monitoring function for different scenes.
In this embodiment, detection models corresponding to different scenes need to be established, and an association relationship between each detection model and each electronic fence needs to be established, so that the server can detect video images of different scenes based on different detection models.
Further, the sample video image includes a training data set and a testing data set;
the step 302 of training the basic model according to the sample video image to obtain the detection models corresponding to different scenes includes:
clustering the sample video images according to a preset clustering algorithm to obtain prior frame parameters corresponding to different scenes;
respectively training the basic model according to the prior frame parameters, the preset configurable hyperparameters and the training data set to obtain a plurality of candidate models corresponding to different scenes;
according to the test data set, evaluating a plurality of candidate models corresponding to different scenes to obtain evaluation results corresponding to the different scenes;
and determining the detection models corresponding to different scenes according to the evaluation results corresponding to the different scenes.
The preset clustering algorithm may be any one of a k-means clustering algorithm, a mean-shift clustering algorithm, and the like. The prior frame parameters refer to the anchor values used for training the detection model.
In an embodiment, a k-means algorithm may be used to cluster the training data set in the sample video images to obtain the sizes and aspect ratios of the box labels in the training data set, thereby obtaining anchor values, which then replace the initial anchor values in the model configuration file. The configurable hyperparameters of the basic model, such as learning_rate and burn_in, are initialized, and the basic model is trained according to the obtained anchor values, the preset configurable hyperparameters, and the training data set to obtain a plurality of candidate models corresponding to different scenes. The test data set is used to evaluate the candidate models generated in each round, and the evaluation indexes of each candidate model are recorded, such as precision, recall, intersection over union (IoU), average precision (AP), and mean average precision (mAP). The evaluation index data of the candidate models in the different scenes are comprehensively compared, and the model with the best performance in each scene is taken as the detection model for that scene.
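The anchor-clustering step can be sketched with a minimal k-means over box (width, height) pairs. This is an illustration under stated assumptions: plain Euclidean distance is used for brevity, whereas YOLO-style anchor clustering typically uses an IoU-based distance; all box sizes are hypothetical.

```python
import random

# Minimal k-means sketch over box (width, height) pairs, standing in for the
# clustering step that derives prior-frame (anchor) parameters. Euclidean
# distance is used for brevity; values are illustrative.

def kmeans_anchors(boxes, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to its nearest center.
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [(sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return sorted(centers)

# Hypothetical ground-truth box sizes gathered from the training data set.
boxes = [(10, 12), (11, 14), (52, 60), (55, 63), (120, 90), (118, 95)]
anchors = kmeans_anchors(boxes, k=3)
print(anchors)
```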
In this embodiment, a plurality of candidate models may be obtained through training, and the detection model may then be determined from the candidate models based on the test data set, thereby improving the accuracy of the detection results of the detection model.
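Of the evaluation indexes listed above, intersection over union (IoU) can be computed as a short worked example. The box coordinates below are illustrative assumptions; boxes are axis-aligned and given as (xmin, ymin, xmax, ymax).

```python
# Sketch of the intersection-over-union (IoU) evaluation index for
# axis-aligned boxes given as (xmin, ymin, xmax, ymax); values illustrative.

def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 1/7 ≈ 0.1429
```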
In an application example, referring to fig. 3, fig. 3 is a flowchart for setting a movement trajectory. As shown in fig. 3, the setting process of the movement trajectory includes the following steps:
the parameters of the preset position i include, but are not limited to, information such as a horizontal angle, an inclination angle and a camera focal length of the pan-tilt.
the detection rules include, but are not limited to, rules such as a preset stay time, an available time range, and the like.
if more preset positions are to be set, increment i and repeat steps 310 to 350; if no more preset positions are to be set, execute step 370.
In this way, the setting process of the movement trajectory can be completed. When the camera is started to collect video data, the camera can cruise according to this movement trajectory.
In this application example, by setting a plurality of preset positions for the camera and setting at least one electronic fence in each preset position, the utilization rate of the camera can be improved.
In another application example, referring to fig. 4, fig. 4 is a specific flowchart of a video monitoring method. As shown in fig. 4, the video monitoring method includes the following steps:
at this moment, the server can also load information such as the camera angle and the lens focal length of the preset position i +1, so that the camera can be conveniently controlled to quickly reach the preset position i + 1.
the parameters in the detection rule and the preset table are the same as those in step 320 and step 350 in the above application example, and are not described herein again.
the above steps 404 to 407 are described in detail in the above embodiments, and are not described again.
if the detection result is abnormal, execute step 409; if the detection result is normal, execute step 410.
if cruising continues, increment i and repeat steps 401 to 409; if cruising does not continue, end the cruise.
In the application example, the video images of the scenes are detected respectively through the detection models corresponding to the electronic fences, so that video monitoring of different scenes can be realized. Therefore, the use amount of the video monitoring terminal can be reduced, the hardware cost of the video monitoring system is reduced, the same video monitoring terminal can be used for monitoring video images of a plurality of different scenes, and the utilization rate of the video monitoring terminal is improved.
It should be noted that, in the video monitoring method provided in the embodiments of the present application, the execution subject may be a video monitoring apparatus, or a control module in the video monitoring apparatus for executing the video monitoring method. In the embodiments of the present application, a video monitoring apparatus executing the video monitoring method is taken as an example to describe the video monitoring apparatus provided in the embodiments of the present application.
Referring to fig. 5, fig. 5 is a structural diagram of a video monitoring apparatus according to an embodiment of the present application. As shown in fig. 5, the video monitoring apparatus 500 includes:
the first obtaining module 501 is configured to obtain video data, which is collected by the video monitoring terminal at a current preset position, under the condition that the video monitoring terminal is controlled to move according to a preset movement track, where the preset movement track includes N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to the detection models one to one, the N preset positions include the current preset position, and N is an integer greater than 1;
a decoding module 502, configured to decode video data to obtain a video image;
the detection module 503 is configured to detect the video image by using a target detection model to obtain a detection result, where the target detection model is a detection model corresponding to the currently preset electronic fence;
the generating module 504 is configured to generate alarm information if the detection result is abnormal.
Further, the detection module 503 includes:
the acquisition submodule is used for acquiring the position parameters of the electronic fence of the current preset position;
the cropping submodule is used for cropping the video image according to the position parameters to obtain a cropped video image;
and the detection submodule is used for detecting the cropped video image by using the target detection model to obtain a detection result.
Further, the cropping sub-module includes:
the acquisition unit is used for acquiring target parameters in the position parameters, wherein the target parameters comprise a minimum value and a maximum value in the horizontal direction and a minimum value and a maximum value in the vertical direction in a preset coordinate system;
the determining unit is used for determining a cropping area according to the minimum value and the maximum value in the horizontal direction and the minimum value and the maximum value in the vertical direction;
and the cropping unit is used for cropping the video image according to the cropping area to obtain the cropped video image.
Further, the cropping unit is specifically configured to:
crop the video image according to the cropping area to obtain a first intermediate video image;
adjust the first intermediate video image to a preset size to obtain a second intermediate video image;
and filter the second intermediate video image to obtain the cropped video image.
Furthermore, each preset position in the N preset positions is also provided with a preset stay time and an available time range; the preset stay time refers to the stay time of the video monitoring terminal when the video monitoring terminal moves to each preset position, and the available time range refers to the effective time range of the detection model corresponding to the electronic fence arranged in each preset position.
Further, the video monitoring apparatus 500 further includes:
the second acquisition module is used for acquiring sample video images corresponding to different scenes;
the training module is used for training the basic model according to the sample video image to obtain detection models corresponding to different scenes;
and the association module is used for associating the detection models corresponding to different scenes with the electronic fences arranged in the preset positions respectively.
Further, the sample video image includes a training data set and a testing data set; the training module comprises:
the clustering submodule is used for clustering the sample video images according to a preset clustering algorithm to obtain prior frame parameters corresponding to different scenes;
the training sub-module is used for respectively training the basic model according to the prior frame parameters, the preset configurable hyper-parameters and the training data set to obtain a plurality of candidate models corresponding to different scenes;
the evaluation sub-module is used for evaluating a plurality of candidate models corresponding to different scenes according to the test data set to obtain evaluation results corresponding to the different scenes;
and the determining submodule is used for determining the detection models corresponding to different scenes according to the evaluation results corresponding to the different scenes.
The video monitoring apparatus 500 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video monitoring apparatus 500 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The video monitoring apparatus 500 provided in this embodiment of the application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in this embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the above-mentioned embodiment of the video monitoring method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video monitoring method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer-readable storage media, such as Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A video surveillance method, the method comprising:
under the condition that a video monitoring terminal is controlled to move according to a preset moving track, acquiring video data acquired by the video monitoring terminal at a current preset position, wherein the preset moving track comprises N preset positions, each preset position in the N preset positions is provided with at least one electronic fence, the electronic fences correspond to detection models one to one, the N preset positions comprise the current preset position, and N is an integer greater than 1;
decoding the video data to obtain a video image;
detecting the video image by using a target detection model to obtain a detection result, wherein the target detection model is a detection model corresponding to the electronic fence of the current preset position;
and if the detection result is abnormal, generating alarm information.
2. The method according to claim 1, wherein the detecting the video image by using the target detection model to obtain a detection result comprises:
acquiring the position parameters of the electronic fence of the current preset position;
cropping the video image according to the position parameters to obtain a cropped video image;
and detecting the cropped video image by using the target detection model to obtain a detection result.
3. The method according to claim 2, wherein the cropping the video image according to the position parameter to obtain a cropped video image comprises:
acquiring target parameters in the position parameters, wherein the target parameters comprise a minimum value and a maximum value in a horizontal direction and a minimum value and a maximum value in a vertical direction in a preset coordinate system;
determining a cropping area according to the minimum value and the maximum value in the horizontal direction and the minimum value and the maximum value in the vertical direction;
and cropping the video image according to the cropping area to obtain a cropped video image.
4. The method according to claim 3, wherein the cropping the video image according to the cropping area to obtain a cropped video image comprises:
cropping the video image according to the cropping area to obtain a first intermediate video image;
adjusting the first intermediate video image to a preset size to obtain a second intermediate video image;
and filtering the second intermediate video image to obtain a cropped video image.
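Claims 2-4 together define a crop, resize, and filter pipeline over the fence's bounding box. The sketch below implements it with plain NumPy under stated assumptions: nearest-neighbour resizing and a 3x3 mean filter are arbitrary choices, since the claims do not fix a particular resizing or filtering method.

```python
import numpy as np

def crop_resize_filter(image, x_min, x_max, y_min, y_max, size=(4, 4)):
    """Crop the video image to the fence's bounding box (min/max values in the
    horizontal and vertical directions), resize the first intermediate image
    to a preset size, then smooth the second intermediate image."""
    cropped = image[y_min:y_max, x_min:x_max]            # first intermediate video image
    h, w = cropped.shape
    rows = np.arange(size[0]) * h // size[0]             # nearest-neighbour row indices
    cols = np.arange(size[1]) * w // size[1]             # nearest-neighbour column indices
    resized = cropped[np.ix_(rows, cols)].astype(float)  # second intermediate video image
    padded = np.pad(resized, 1, mode="edge")
    # 3x3 mean filter: average the nine shifted windows around each pixel.
    filtered = sum(padded[i:i + size[0], j:j + size[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return filtered

demo = crop_resize_filter(np.arange(100.0).reshape(10, 10),
                          x_min=2, x_max=8, y_min=1, y_max=9)
```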
5. The method according to claim 1, wherein each preset position of the N preset positions is further provided with a preset dwell time and an available time range; the preset dwell time refers to the time for which the video monitoring terminal stays when it moves to each preset position, and the available time range refers to the effective time range of the detection model corresponding to the electronic fence set at each preset position.
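The per-position dwell time and available time range of claim 5 are naturally expressed as configuration. The snippet below is an illustrative layout only; the field names and values are invented, not taken from the patent.

```python
from datetime import time

# Hypothetical per-preset-position configuration: each preset position carries
# a dwell time (seconds the terminal stays there) and an available time range
# during which its fence's detection model is effective.
PRESETS = {
    1: {"dwell_seconds": 30, "available": (time(8, 0), time(20, 0))},
    2: {"dwell_seconds": 45, "available": (time(0, 0), time(23, 59))},
}

def model_active(preset, now):
    """True if the detection model for this preset position is within its
    effective (available) time range at wall-clock time `now`."""
    start, end = PRESETS[preset]["available"]
    return start <= now <= end
```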
6. The method of claim 1, wherein prior to the detecting the video image by using the target detection model, the method further comprises:
acquiring sample video images corresponding to different scenes;
training a basic model according to the sample video image to obtain detection models corresponding to different scenes;
and respectively associating the detection models corresponding to the different scenes with the electronic fences arranged in the preset positions.
7. The method of claim 6, wherein the sample video image comprises a training data set and a test data set;
the training of the basic model according to the sample video image to obtain the detection models corresponding to different scenes comprises:
clustering the sample video images according to a preset clustering algorithm to obtain prior frame parameters corresponding to different scenes;
respectively training the basic model according to the prior frame parameters, preset configurable hyper-parameters and the training data set to obtain a plurality of candidate models corresponding to different scenes;
according to the test data set, evaluating a plurality of candidate models corresponding to different scenes to obtain evaluation results corresponding to the different scenes;
and determining the detection models corresponding to different scenes according to the evaluation results corresponding to the different scenes.
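Claim 7's model-selection pipeline (cluster sample boxes into prior-frame parameters, train candidate models per hyper-parameter setting, evaluate on the test data set, keep the best) can be sketched as below. The k-means clustering mirrors the algorithm named in the description; the toy data, candidate tuples, and function names are hypothetical.

```python
import numpy as np

def kmeans_priors(boxes, k, iters=20, seed=0):
    """k-means over (width, height) box sizes from the sample video images,
    yielding k prior-frame (anchor) parameters; a simplified stand-in for the
    preset clustering algorithm of claim 7."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = boxes[labels == j].mean(axis=0)
    return centers

def pick_best(candidates, evaluate):
    """Evaluate every candidate model on the test data set and keep the one
    with the best evaluation result."""
    return max(candidates, key=evaluate)

# Toy data: two clearly separated box-size clusters.
boxes = np.array([[10.0, 10.0], [11.0, 9.0], [50.0, 52.0], [49.0, 48.0]])
priors = kmeans_priors(boxes, k=2)

# Candidate models as (name, accuracy-on-test-set) pairs.
best = pick_best([("cand_a", 0.71), ("cand_b", 0.83)], evaluate=lambda c: c[1])
```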
8. A video monitoring apparatus, the apparatus comprising:
a first acquiring module, configured to acquire video data collected by a video monitoring terminal at a current preset position under the condition that the video monitoring terminal is controlled to move along a preset moving track, wherein the preset moving track comprises N preset positions, each of the N preset positions is provided with at least one electronic fence, the electronic fences are in one-to-one correspondence with detection models, the N preset positions comprise the current preset position, and N is an integer greater than 1;
a decoding module, configured to decode the video data to obtain a video image;
a detection module, configured to detect the video image by using a target detection model to obtain a detection result, wherein the target detection model is the detection model corresponding to the electronic fence of the current preset position;
and a generating module, configured to generate alarm information if the detection result is abnormal.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video monitoring method according to any one of claims 1-7.
10. A readable storage medium, having stored thereon a program or instructions which, when executed by a processor, implement the steps of the video monitoring method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110212326.0A CN113014876B (en) | 2021-02-25 | 2021-02-25 | Video monitoring method and device, electronic equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110212326.0A CN113014876B (en) | 2021-02-25 | 2021-02-25 | Video monitoring method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113014876A true CN113014876A (en) | 2021-06-22 |
CN113014876B CN113014876B (en) | 2023-06-02 |
Family
ID=76386804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110212326.0A Active CN113014876B (en) | 2021-02-25 | 2021-02-25 | Video monitoring method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113014876B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113808376A (en) * | 2021-09-18 | 2021-12-17 | 中国工商银行股份有限公司 | Safety monitoring method and device for self-service teller machine equipment |
CN115412704A (en) * | 2022-07-15 | 2022-11-29 | 浙江大华技术股份有限公司 | Control method of video monitoring system, video monitoring system and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100139576A1 (en) * | 2008-11-04 | 2010-06-10 | Dt Systems, Inc. | Electronic fence system |
CN103280041A (en) * | 2013-05-08 | 2013-09-04 | 广东电网公司珠海供电局 | Monitoring method and monitoring system for automatic deploying virtual electronic fence |
CN106600951A (en) * | 2017-01-19 | 2017-04-26 | 西安电子科技大学 | Vehicle monitoring system with vehicle monitoring terminal and method thereof |
CN108900812A (en) * | 2018-07-20 | 2018-11-27 | 合肥云联电子科技有限公司 | It is a kind of based on the indoor video monitoring method remotely controlled |
CN111131783A (en) * | 2019-12-27 | 2020-05-08 | 泰斗微电子科技有限公司 | Monitoring method and device based on electronic fence, terminal equipment and storage medium |
CN111163294A (en) * | 2020-01-03 | 2020-05-15 | 重庆特斯联智慧科技股份有限公司 | Building safety channel monitoring system and method for artificial intelligence target recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110660066B (en) | Training method of network, image processing method, network, terminal equipment and medium | |
CN110149482B (en) | Focusing method, focusing device, electronic equipment and computer readable storage medium | |
CN108447091B (en) | Target positioning method and device, electronic equipment and storage medium | |
CN109299703B (en) | Method and device for carrying out statistics on mouse conditions and image acquisition equipment | |
CN110493527B (en) | Body focusing method and device, electronic equipment and storage medium | |
CN110580428A (en) | image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN110650291B (en) | Target focus tracking method and device, electronic equipment and computer readable storage medium | |
CN111489342B (en) | Video-based flame detection method, system and readable storage medium | |
US20140064626A1 (en) | Adaptive image processing apparatus and method based in image pyramid | |
CN110659391A (en) | Video detection method and device | |
CN109145771A (en) | A kind of face snap method and device | |
KR102297217B1 (en) | Method and apparatus for identifying object and object location equality between images | |
CN107005655A (en) | Image processing method | |
CN113014876B (en) | Video monitoring method and device, electronic equipment and readable storage medium | |
CN110991385A (en) | Method and device for identifying ship driving track and electronic equipment | |
CN113158773B (en) | Training method and training device for living body detection model | |
CN109698906A (en) | Dithering process method and device, video monitoring system based on image | |
CN108040244B (en) | Snapshot method and device based on light field video stream and storage medium | |
CN113225550A (en) | Offset detection method and device, camera module, terminal equipment and storage medium | |
CN113010736B (en) | Video classification method and device, electronic equipment and storage medium | |
CN111783732A (en) | Group mist identification method and device, electronic equipment and storage medium | |
CN111476132A (en) | Video scene recognition method and device, electronic equipment and storage medium | |
CN110688926B (en) | Subject detection method and apparatus, electronic device, and computer-readable storage medium | |
CN116912517B (en) | Method and device for detecting camera view field boundary | |
CN111091089B (en) | Face image processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Room 101, floors 1-3, building 14, North District, yard 9, dongran North Street, Haidian District, Beijing 100029 Applicant after: CHINA TOWER Co.,Ltd. Address before: 100142 19th floor, 73 Fucheng Road, Haidian District, Beijing Applicant before: CHINA TOWER Co.,Ltd. |
GR01 | Patent grant | ||