WO2017177902A1 - Video recording method, server, system and storage medium - Google Patents
- Publication number: WO2017177902A1 (application PCT/CN2017/080113)
- Authority: WIPO (PCT)
Classifications
- H04N5/76—Television signal recording
- H04N5/77—Interface circuits between a recording apparatus and a television camera
- H04N5/91—Television signal processing therefor
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
- G06V20/44—Event detection
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30232—Surveillance
Definitions
- the present application relates to the field of video processing technologies, and in particular, to a video recording method, a server, a system, and a storage medium.
- the monitoring system will record continuously 24 hours a day, so recording also takes place when no car is being repaired, and a large number of static video frames will be generated. Retaining a large number of such static video frames wastes storage space and network bandwidth.
- finding key information among long runs of static video frames wastes time and effort, and key frames may even be missed.
- the existing video recording method monitors the full scene and triggers recording only when motion occurs; such a function can alleviate the above problems to some extent.
- the problem is that unrelated motion can also trigger recording. For example, when a pedestrian passes 5 meters away from the vehicle to be repaired, the motion has nothing to do with the vehicle to be repaired, yet video recording is still triggered, causing information redundancy.
- the present application provides a video recording method, server, system, and storage medium that can reduce the recording of unnecessary video frames.
- the video recording method provided by the application includes:
- the surveillance camera is controlled to start video recording from the currently extracted second image.
- the server provided by the application includes a storage device and a processor, wherein:
- the storage device is configured to store a video recording system
- the processor is configured to invoke and execute the video recording system to perform the following steps:
- the surveillance camera is controlled to start video recording from the currently extracted second image.
- the video recording system provided by the present application includes:
- a first image acquisition module configured to extract a first image captured by a surveillance camera every first preset time period
- a modeling module configured to perform area detection on the extracted first image by using a pre-established model, to extract a region of interest containing part or all of the target;
- a screening module configured to perform motion-region screening on the region of interest by using an analysis rule, to screen out the target region;
- a segmentation module configured to segment the selected target regions according to a segmentation rule, to divide each target region into a plurality of sub-regions
- a second image acquisition module configured to extract a second image captured by the surveillance camera every second preset time
- a motion detection module configured to compare the image block in each sub-region of the second image with the image block in the same sub-region of the previously extracted second image, to determine whether a motion event has occurred in each sub-region;
- the video recording module is configured to control the surveillance camera to perform video recording from the currently extracted second image when a motion event occurs in a certain sub-area.
- the present application provides a non-volatile storage medium having computer readable instructions executable by one or more processors to perform the following steps:
- the surveillance camera is controlled to start video recording from the currently extracted second image.
- the video recording method, and the server, system, and storage medium to which the method is applicable, can reduce unnecessary video frame recording and reduce the waste of storage space and network bandwidth.
- FIG. 1 is a schematic diagram of a server application environment of a first preferred embodiment of a video recording system of the present application.
- FIG. 2 is a schematic diagram of a terminal application environment of a second preferred embodiment of the video recording system of the present application.
- FIG. 3 is a functional block diagram of a preferred embodiment of the video recording system of the present application.
- FIG. 4 is a flow chart of a method implementation of a preferred embodiment of the video recording method of the present application.
- FIG. 5 is a detailed implementation flowchart of determining whether a motion event has occurred in each sub-area in the preferred embodiment of the video recording method of FIG. 4.
- referring to FIG. 1, it is a schematic diagram of the server application environment of a first preferred embodiment of the video recording system of the present application.
- the video recording system 10 can be installed and run in a server.
- the server may be a monitoring server 1.
- the monitoring server 1 can be communicatively coupled to one or more surveillance cameras 3 installed in a monitoring location 2 via a communication module (not shown).
- the monitoring place 2 can be a school, a kindergarten, a shopping mall, a hospital, a park, a city square, an underground pedestrian passage, or a similar public place, or a special area that needs to be monitored, such as a home, a small supermarket, or an ATM (Automatic Teller Machine). In this embodiment, the monitoring place 2 is an automobile repair shop, such as a 4S shop.
- the surveillance camera 3 can be an analog camera.
- the analog camera converts the analog video signal generated by the video capture device into a digital signal through a dedicated video capture card, and the digital signal is then transmitted to and stored in the monitoring server 1.
- alternatively, the surveillance camera 3 is a network camera. After the network camera is mounted, it is connected by a network cable to a router, and through the router to the monitoring server 1, and the monitoring server 1 performs the video output.
- the monitoring server 1 may include a processor and a storage device (not shown).
- the processor is the computing core and control unit of the server, and is used to interpret computer instructions and process data in computer software.
- the storage device stores a database, an operating system, and the video recording system 10 described above.
- the storage device includes an internal memory and a non-volatile storage medium; the video recording system, the operating system, and the database are stored on the non-volatile storage medium, and the internal memory provides a cached runtime environment for the operating system, the database, and the video recording system 10.
- the video recording system 10 includes at least one computer-executable program instruction code, which can be executed by a processor to implement the following operations.
- the pre-established model is a Convolutional Neural Network (CNN) model.
- the model generation steps include:
- the CNN model of the preset model structure is trained using a preset number of images in which the area where the vehicle is located has been labeled, to generate a CNN model that identifies the region of interest in an image.
- the purpose of training is to optimize the weight values within the CNN so that the network model as a whole is better suited to identifying the region of interest.
- the network model has seven layers in total: five convolutional layers, one downsampling layer, and one fully connected layer.
- each convolutional layer is formed of feature maps built from a plurality of feature vectors; the function of a feature map is to extract key features using a convolution filter.
- the function of the downsampling layer is to remove redundantly expressed feature points and reduce the number of extracted features by sampling, thereby improving the efficiency of data exchange between network layers.
- the available sampling methods include the max sampling method, the mean sampling method, and the random sampling method.
- the role of the fully connected layer is to connect the outputs of the preceding convolutional and downsampling layers and to compute the weight matrix used for the subsequent classification.
- after an image enters the model, each layer takes part in a forward pass and a backward pass; each iteration produces a probability distribution, the distributions from successive iterations are superimposed, and the system selects the category with the largest value in the combined distribution as the final classification result.
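The downsampling described above can be illustrated with a short sketch. This is not the patent's implementation; it is a minimal pure-Python illustration of the max-sampling and mean-sampling methods applied to 2x2 blocks of a feature map (the function name `pool2x2` and the example values are assumptions for illustration).

```python
def pool2x2(feature_map, mode="max"):
    """Downsample a 2D feature map by taking the max or mean of each 2x2 block."""
    pooled = []
    for i in range(0, len(feature_map) - 1, 2):
        row = []
        for j in range(0, len(feature_map[0]) - 1, 2):
            block = [feature_map[i][j], feature_map[i][j + 1],
                     feature_map[i + 1][j], feature_map[i + 1][j + 1]]
            row.append(max(block) if mode == "max" else sum(block) / 4.0)
        pooled.append(row)
    return pooled

fm = [[1, 3, 2, 0],
      [4, 2, 1, 1],
      [0, 5, 6, 2],
      [3, 1, 2, 2]]
print(pool2x2(fm, "max"))   # [[4, 2], [5, 6]]
print(pool2x2(fm, "mean"))  # [[2.5, 1.0], [2.25, 3.0]]
```

Both variants shrink the 4x4 map to 2x2, keeping one representative value per block; random sampling would instead pick an arbitrary element of each block.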
- the analysis rule is: analyze whether the extracted region of interest lies within a preset pixel area. For example, the preset pixel area comprises an abscissa range and an ordinate range, where the abscissa range is (X1, X2) and the ordinate range is (Y1, Y2); X1 denotes the X1-th pixel column, X2 denotes the X2-th pixel column, and X1 is smaller than X2; Y1 denotes the Y1-th pixel row, Y2 denotes the Y2-th pixel row, and Y1 is smaller than Y2. If the extracted region of interest is within the preset pixel area, the region of interest is confirmed to be the target region.
- the principle of the analysis rule is that repair-shop monitoring is generally aimed at a repair station, so that the vehicle occupies the main area of the frame, that is, the middle area. The preset pixel area should therefore cover the main area of the frame as far as possible, but should be neither so large that several regions of interest fall within it, nor so small that the target region falls outside it. The abscissa range and the ordinate range can be verified manually and adjusted: reduced if too large, enlarged if too small.
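The containment test that the analysis rule describes can be sketched as follows. The function name and the (x_min, y_min, x_max, y_max) bounding-box representation are illustrative assumptions, not part of the application; the preset ranges below are invented example values.

```python
def is_target_region(region, x_range, y_range):
    """Analysis rule sketch: a region of interest (x_min, y_min, x_max, y_max),
    in pixel columns/rows, is confirmed as the target region if it lies
    entirely inside the preset pixel area (X1, X2) x (Y1, Y2)."""
    x_min, y_min, x_max, y_max = region
    X1, X2 = x_range
    Y1, Y2 = y_range
    return X1 <= x_min and x_max <= X2 and Y1 <= y_min and y_max <= Y2

# Hypothetical preset pixel area covering the middle of a 1920x1080 frame.
preset_x, preset_y = (400, 1500), (200, 900)
print(is_target_region((500, 300, 1400, 850), preset_x, preset_y))  # True
print(is_target_region((100, 300, 1400, 850), preset_x, preset_y))  # False
```

A region that extends past any boundary of the preset area is rejected, matching the rule that only a region of interest fully inside the preset ranges becomes the target region.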
- the segmentation rule is: a uniform segmentation mode is adopted, that is, the segmented sub-regions are consistent in size and shape, and the target region is divided into N*N sub-regions, where N is a positive integer greater than 1, for example 8*8.
- once motion is detected in any sub-region, the frame can be saved, and the other sub-regions need not be examined. For example, with 8*8 sub-regions, if motion is detected in the first sub-region, the remaining 63 sub-regions need not be checked, improving efficiency by up to a factor of 64.
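The uniform segmentation rule can be sketched as follows; the tuple-based region representation and function name are assumptions for illustration. The target region is cut into N*N sub-regions of equal size and shape.

```python
def split_into_subregions(region, n):
    """Divide a target region (x_min, y_min, x_max, y_max) into n*n
    sub-regions of equal size and shape (uniform segmentation)."""
    x_min, y_min, x_max, y_max = region
    w = (x_max - x_min) / n   # sub-region width
    h = (y_max - y_min) / n   # sub-region height
    return [(x_min + j * w, y_min + i * h,
             x_min + (j + 1) * w, y_min + (i + 1) * h)
            for i in range(n) for j in range(n)]

subs = split_into_subregions((0, 0, 800, 800), 8)
print(len(subs))   # 64 sub-regions for N = 8
print(subs[0])     # (0.0, 0.0, 100.0, 100.0)
```

Motion detection then iterates over this list, which is what makes the early exit on the first moving sub-region possible.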
- the step of determining whether a motion event has occurred in each sub-region comprises: computing, for each sub-region, the difference between the pixel value of each pixel of the image block of the currently extracted second image and the pixel value of the corresponding pixel of the image block in the same sub-region of the previously extracted second image; summing all the differences corresponding to the image block in each sub-region and dividing the calculated sum by the number of pixels of the image block, to obtain the average difference corresponding to the image block in each sub-region; and, if the average difference corresponding to the image block in a sub-region is greater than a preset threshold, determining that a motion event has occurred in that sub-region.
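The per-sub-region comparison just described amounts to a mean absolute pixel difference checked against a threshold. A minimal sketch, assuming grayscale image blocks given as flat lists of pixel values and a hypothetical threshold value:

```python
def motion_in_subregion(current_block, previous_block, threshold):
    """Sum the per-pixel differences between the current and previous image
    blocks, divide by the pixel count, and compare the average difference
    to a preset threshold to decide whether a motion event occurred."""
    diffs = [abs(c - p) for c, p in zip(current_block, previous_block)]
    average = sum(diffs) / len(diffs)
    return average > threshold

prev = [10, 10, 10, 10]
curr = [10, 60, 80, 10]   # two pixels changed noticeably
print(motion_in_subregion(curr, prev, threshold=25))  # True  (average = 30.0)
print(motion_in_subregion(prev, prev, threshold=25))  # False (average = 0.0)
```

Averaging over a single sub-region rather than the whole target region is what keeps a small motion from being diluted by the static remainder of the frame.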
- the video recording system 10 can also be installed and run in any one of the terminal devices, such as the mobile terminal 4 shown in FIG.
- the mobile terminal 4 can be any electronic device with certain data processing functions, such as a smart phone, a tablet computer, a notebook computer, a wearable watch, wearable glasses, and the like.
- the terminal device 2 also includes a processor and a storage device (not shown). The video recording system 10 includes at least one computer-executable program instruction code stored in the storage device of the terminal device 2; when executed by the processor of the terminal device 2, the program instruction code implements the operations described in the first embodiment.
- FIG. 1 and FIG. 2 are only block diagrams of the partial structures related to the solution of the present application and do not constitute a limitation on the server or the terminal device to which the solution is applied. A specific electronic device may include more or fewer components than shown in the figures, combine some components, or have a different arrangement of components.
- the non-volatile storage medium in the foregoing embodiments may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
- the storage device can be built in or externally connected to the monitoring server 1 or the terminal device 2.
- referring to FIG. 3, it is a functional block diagram of a preferred embodiment of the video recording system of the present application.
- the program code of the video recording system 10 can be divided into a plurality of functional modules according to different functions thereof.
- the video recording system 10 may include a first image acquisition module 100, a modeling module 101, a screening module 102, a segmentation module 103, a second image acquisition module 104, a motion detection module 105, and a video recording module 106.
- the first image acquisition module 100 is configured to extract a first image captured by the surveillance camera 3 every first preset time period, such as every 5 minutes.
- the modeling module 101 is configured to perform area detection on the extracted first image by using a pre-established model to extract a region of interest containing the target, such as part or all of the vehicle.
- the pre-established model is a convolutional neural network (Convolutional Neural Network, CNN) model.
- the preset model generation steps include:
- the area where the vehicle is located is marked in each of the collected photos; during labeling, the position of the vehicle can be marked with a rectangular frame and given a label. This can be done by crowdsourcing or by manual labeling within the company, and the marked vehicle area is the region of interest.
- the CNN model of the preset model structure is trained using a preset number of images in which the area where the vehicle is located has been labeled, to generate a CNN model that identifies the region of interest in an image.
- the purpose of training is to optimize the weight values within the CNN so that the network model as a whole is better suited to identifying the region of interest.
- the network model has seven layers in total: five convolutional layers, one downsampling layer, and one fully connected layer.
- each convolutional layer is formed of feature maps built from a plurality of feature vectors; the function of a feature map is to extract key features using a convolution filter.
- the function of the downsampling layer is to remove redundantly expressed feature points and reduce the number of extracted features by sampling, thereby improving the efficiency of data exchange between network layers.
- the available sampling methods include the max sampling method, the mean sampling method, and the random sampling method.
- the role of the fully connected layer is to connect the outputs of the preceding convolutional and downsampling layers and to compute the weight matrix used for the subsequent classification. After an image enters the model, each layer takes part in a forward pass and a backward pass; each iteration produces a probability distribution, the distributions from successive iterations are superimposed, and the system selects the category with the largest value in the combined distribution as the final classification result.
- the screening module 102 is configured to perform motion-region screening on the region of interest by using an analysis rule, to screen out the target region.
- the analysis rule is: analyze whether the extracted region of interest lies within a preset pixel area; for example, the preset pixel area comprises an abscissa range (X1, X2) and an ordinate range (Y1, Y2). If the extracted region of interest is within the preset pixel area, the region of interest is confirmed to be the target region.
- the principle of the analysis rule is that repair-shop monitoring is generally aimed at a repair station, so that the vehicle occupies the main area of the frame, that is, the middle area.
- the preset pixel area should cover the main area of the frame as far as possible;
- the range should be neither so large that several regions of interest fall within it, nor so small that the target region falls outside it; the abscissa range and the ordinate range can be verified manually and adjusted: reduced if too large, enlarged if too small.
- the segmentation module 103 is configured to segment the selected target regions according to a segmentation rule to divide each target region into a plurality of sub-regions.
- the segmentation rule is: a uniform segmentation mode is adopted, that is, the segmented sub-regions are consistent in area and shape, and the target region is divided into N*N sub-regions, where N is a positive integer greater than 1, for example 8*8.
- motion detection is performed on the N*N sub-regions rather than on the target region as a whole, for two reasons.
- the first is accuracy: if pixel values were compared only for the target region as a whole, a small motion could be averaged out by the other, static parts, and such subtle motion could not be detected. The second is efficiency: motion may occur in only one sub-region.
- in that case the frame can be saved immediately, and the other sub-regions need not be examined. For example, with 8*8 sub-regions, if motion is detected in the first sub-region, the remaining 63 sub-regions need not be checked, improving efficiency by up to a factor of 64.
- the second image acquisition module 104 is configured to extract the second image captured by the surveillance camera 3 every second preset time period, such as every 0.5 seconds.
- the motion detection module 105 is configured to compare the image block in each sub-region of the second image with the image block in the same sub-region of the previously extracted second image, to determine whether a motion event has occurred in each sub-region.
- the step of determining whether a motion event has occurred in each sub-region comprises: computing the difference between the pixel value of each pixel of the image block of the currently extracted second image in one of the sub-regions and the pixel value of the corresponding pixel of the image block in the same sub-region of the previously extracted second image; summing all the differences corresponding to the image block in the sub-region and dividing the calculated sum by the number of pixels of the image block, to obtain the average difference corresponding to the image block in the sub-region; and, if the average difference corresponding to the image block in the sub-region is greater than a preset threshold, determining that a motion event has occurred in the sub-region.
- the video recording module 106 is configured to control the surveillance camera 3 to perform video recording from the currently extracted second image when a motion event occurs in a certain sub-area.
- referring to FIG. 4, it is a flowchart of a preferred embodiment of the video recording method of the present application.
- the video recording method in this embodiment is not limited to the steps shown in the flowchart. In addition, in the steps shown in the flowchart, some steps may be omitted, and the order between the steps may be changed.
- step S10 the first image acquisition module 100 extracts a first image captured by the surveillance camera 3 every first preset time period, such as every 5 minutes.
- step S11 the modeling module 101 performs area detection on the extracted first image by using a pre-established model to extract a region of interest containing the target, such as part or all of the vehicle.
- the pre-established model is a Convolutional Neural Network (CNN) model.
- the preset model generation steps include:
- the area where the vehicle is located is marked in each of the collected photos; during labeling, the position of the vehicle can be marked with a rectangular frame and given a label. This can be done by crowdsourcing or by manual labeling within the company, and the marked vehicle area is the region of interest.
- the CNN model of the preset model structure is trained using a preset number of images in which the area where the vehicle is located has been labeled, to generate a CNN model that identifies the region of interest in an image.
- the purpose of training is to optimize the weight values within the CNN so that the network model as a whole is better suited to identifying the region of interest.
- the network model has seven layers in total: five convolutional layers, one downsampling layer, and one fully connected layer.
- each convolutional layer is formed of feature maps built from a plurality of feature vectors; the function of a feature map is to extract key features using a convolution filter.
- the function of the downsampling layer is to remove redundantly expressed feature points and reduce the number of extracted features by sampling, thereby improving the efficiency of data exchange between network layers.
- the available sampling methods include the max sampling method, the mean sampling method, and the random sampling method.
- the role of the fully connected layer is to connect the outputs of the preceding convolutional and downsampling layers and to compute the weight matrix used for the subsequent classification. After an image enters the model, each layer takes part in a forward pass and a backward pass; each iteration produces a probability distribution, the distributions from successive iterations are superimposed, and the system selects the category with the largest value in the combined distribution as the final classification result.
- step S12 the screening module 102 performs motion region screening on the region of interest by using an analysis rule to filter out the target region.
- the analysis rule is: analyze whether the extracted region of interest lies within a preset pixel area; for example, the preset pixel area comprises an abscissa range (X1, X2) and an ordinate range (Y1, Y2). If the extracted region of interest is within the preset pixel area, the region of interest is confirmed to be the target region.
- the principle of the analysis rule is that repair-shop monitoring is generally aimed at a repair station, so that the vehicle occupies the main area of the frame, that is, the middle area.
- the preset pixel area should cover the main area of the frame as far as possible;
- the range should be neither so large that several regions of interest fall within it, nor so small that the target region falls outside it; the abscissa range and the ordinate range can be verified manually and adjusted: reduced if too large, enlarged if too small.
- step S13 the screening module 102 determines whether at least one target region has been selected. When no target region is selected, the process returns to step S10 to re-execute the extraction of the first image. When a target region has been selected, the following step S14 is performed.
- step S14 the segmentation module 103 divides the selected target regions according to a segmentation rule to divide each target region into a plurality of sub-regions.
- the segmentation rule is: a uniform segmentation mode is adopted, that is, the segmented sub-regions are consistent in size and shape, and the target region is divided into N*N sub-regions, where N is a positive integer greater than 1, for example 8*8.
- once motion is detected in any sub-region, the frame can be saved, and the other sub-regions need not be examined. For example, with 8*8 sub-regions, if motion is detected in the first sub-region, the remaining 63 sub-regions need not be checked, improving efficiency by up to a factor of 64.
- step S15 the second image acquisition module 104 extracts the second image captured by the surveillance camera 3 every second preset time, such as 0.5 seconds.
- Step S16 the motion detection module 105 compares the image block in each sub-region of the second image with the image block in the same sub-region of the previously extracted second image to determine whether a motion event has occurred in each sub-region.
- the detailed implementation flow of step S16 is as described in the following FIG.
- step S17 the motion detection module 105 determines whether a motion event has occurred in each sub-area. When no motion event has occurred in any of the sub-areas, the process returns to the above-described step S15. When a motion event has occurred in any of the sub-areas, the following step S18 is performed.
- step S18 the video recording module 106 controls the surveillance camera 3 to perform video recording from the currently extracted second image.
- step S16 in FIG. 4 it is a detailed implementation flowchart of step S16 in FIG. 4, that is, whether or not a motion event has occurred in each sub-area.
- the video recording method in this embodiment is not limited to the steps shown in the flowchart. In addition, in the steps shown in the flowchart, some steps may be omitted, and the order between the steps may be changed.
- In step S160, the motion detection module 105 acquires the pixel values of the pixels of the image block of the currently extracted second image in one of the sub-regions.
- In step S161, the motion detection module 105 computes the difference between the pixel value of each pixel of the image block of the currently extracted second image in that sub-region and the pixel value of the corresponding pixel of the image block of the previously extracted second image in the same sub-region.
- In step S162, the motion detection module 105 sums all the differences corresponding to the image block in the sub-region and divides the calculated sum by the number of pixels of the image block, to obtain the average difference corresponding to the image block in the sub-region.
- In step S163, the motion detection module 105 determines whether the average difference corresponding to the image block in the sub-region is greater than a preset threshold. If it is greater than the preset threshold, step S164 is performed; otherwise, when it is not greater than the preset threshold, step S165 is performed.
- In step S164, the motion detection module 105 determines that a motion event has occurred in the sub-region.
- In step S165, the motion detection module 105 determines that no motion event has occurred in the sub-region, and the process returns to step S160, in which the motion detection module 105 acquires the pixel values of the pixels of the image block of the currently extracted second image in the next sub-region.
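Steps S160–S165 amount to thresholding the mean per-pixel difference between corresponding image blocks. A sketch, assuming grayscale blocks given as equal-size 2-D lists and using the absolute per-pixel difference (an assumption; the text speaks only of "the difference"). The threshold value is illustrative:

```python
def motion_in_subregion(curr_block, prev_block, threshold=15):
    """Average the per-pixel differences between the current and
    previous image blocks in one sub-region, and report a motion
    event when the average exceeds a preset threshold."""
    total = 0
    count = 0
    for row_curr, row_prev in zip(curr_block, prev_block):
        for p_curr, p_prev in zip(row_curr, row_prev):
            total += abs(p_curr - p_prev)  # absolute difference assumed
            count += 1
    return total / count > threshold
```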
- The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or may be a random access memory (RAM).
Claims (20)
- 1. A video recording method, characterized in that the method comprises: extracting, at every first preset time period, a first image captured by a surveillance camera; performing region detection on the extracted first image by using a pre-established model, to extract regions of interest containing part or all of a target object; performing motion region screening on the regions of interest by using an analysis rule, to screen out target regions; segmenting the screened-out target regions according to a segmentation rule, to divide each target region into a plurality of sub-regions; extracting, at every second preset time, a second image captured by the surveillance camera; comparing the image block of the second image in each sub-region with the image block of the previously extracted second image in the same sub-region, to determine whether a motion event has occurred in each sub-region; and when a motion event has occurred in a sub-region, controlling the surveillance camera to start video recording from the currently extracted second image.
- 2. The method according to claim 1, characterized in that the pre-established model is a convolutional neural network model.
- 3. The method according to claim 1, characterized in that the analysis rule is: analyzing whether any extracted region of interest falls within a preset pixel area range, the preset pixel area range comprising an abscissa range and an ordinate range, wherein the abscissa range is (X1, X2), the ordinate range is (Y1, Y2), X1 denotes the X1-th pixel column, X2 denotes the X2-th pixel column, X1 being less than X2, Y1 denotes the Y1-th pixel row, Y2 denotes the Y2-th pixel row, Y1 being less than Y2; and if an extracted region of interest falls within the preset pixel area range, confirming that region of interest as the target region.
- 4. The method according to claim 1, characterized in that the segmentation rule is: performing sub-region segmentation on the target region in a uniform segmentation mode; and dividing the target region into N*N sub-regions, wherein N is a positive integer greater than 1.
- 5. The method according to claim 1, characterized in that the step of determining whether a motion event has occurred in each sub-region comprises: computing the difference between the pixel value of each pixel of the image block of the currently extracted second image in each sub-region and the pixel value of the corresponding pixel of the image block of the previously extracted second image in the same sub-region; summing all the differences corresponding to the image block in each sub-region, and dividing the calculated sum by the number of pixels of the image block, to obtain an average difference corresponding to the image block in each sub-region; and if the average difference corresponding to the image block in a sub-region is greater than a preset threshold, determining that a motion event has occurred in that sub-region.
- 6. A server, characterized in that the server comprises a storage device and a processor, wherein: the storage device is configured to store a video recording system; and the processor is configured to execute the video recording system to perform the following steps: extracting, at every first preset time period, a first image captured by a surveillance camera; performing region detection on the extracted first image by using a pre-established model, to extract regions of interest containing part or all of a target object; performing motion region screening on the regions of interest by using an analysis rule, to screen out target regions; segmenting the screened-out target regions according to a segmentation rule, to divide each target region into a plurality of sub-regions; extracting, at every second preset time, a second image captured by the surveillance camera; comparing the image block of the second image in each sub-region with the image block of the previously extracted second image in the same sub-region, to determine whether a motion event has occurred in each sub-region; and when a motion event has occurred in a sub-region, controlling the surveillance camera to start video recording from the currently extracted second image.
- 7. The server according to claim 6, characterized in that the pre-established model is a convolutional neural network model.
- 8. The server according to claim 6, characterized in that the analysis rule is: analyzing whether any extracted region of interest falls within a preset pixel area range, the preset pixel area range comprising an abscissa range and an ordinate range, wherein the abscissa range is (X1, X2) and the ordinate range is (Y1, Y2); and if an extracted region of interest falls within the preset pixel area range, confirming that region of interest as the target region.
- 9. The server according to claim 6, characterized in that the segmentation rule is: performing sub-region segmentation on the target region in a uniform segmentation mode; and dividing the target region into N*N sub-regions, wherein N is a positive integer greater than 1.
- 10. The server according to claim 6, characterized in that the step of determining whether a motion event has occurred in each sub-region comprises: computing the difference between the pixel value of each pixel of the image block of the currently extracted second image in each sub-region and the pixel value of the corresponding pixel of the image block of the previously extracted second image in the same sub-region; summing all the differences corresponding to the image block in each sub-region, and dividing the calculated sum by the number of pixels of the image block, to obtain an average difference corresponding to the image block in each sub-region; and if the average difference corresponding to the image block in a sub-region is greater than a preset threshold, determining that a motion event has occurred in that sub-region.
- 11. A video recording system, characterized in that the system comprises: a first image acquisition module, configured to extract, at every first preset time period, a first image captured by a surveillance camera; a modeling module, configured to perform region detection on the extracted first image by using a pre-established model, to extract regions of interest containing part or all of a target object; a screening module, configured to perform motion region screening on the regions of interest by using an analysis rule, to screen out target regions; a segmentation module, configured to segment the screened-out target regions according to a segmentation rule, to divide each target region into a plurality of sub-regions; a second image acquisition module, configured to extract, at every second preset time, a second image captured by the surveillance camera; a motion detection module, configured to compare the image block of the second image in each sub-region with the image block of the previously extracted second image in the same sub-region, to determine whether a motion event has occurred in each sub-region; and a video recording module, configured to control, when a motion event has occurred in a sub-region, the surveillance camera to start video recording from the currently extracted second image.
- 12. The system according to claim 11, characterized in that the pre-established model is a convolutional neural network model.
- 13. The system according to claim 11, characterized in that the analysis rule is: analyzing whether any extracted region of interest falls within a preset pixel area range, the preset pixel area range comprising an abscissa range and an ordinate range, wherein the abscissa range is (X1, X2), the ordinate range is (Y1, Y2), X1 denotes the X1-th pixel column, X2 denotes the X2-th pixel column, X1 being less than X2, Y1 denotes the Y1-th pixel row, Y2 denotes the Y2-th pixel row, Y1 being less than Y2; and if an extracted region of interest falls within the preset pixel area range, confirming that region of interest as the target region.
- 14. The system according to claim 11, characterized in that the segmentation rule is: performing sub-region segmentation on the target region in a uniform segmentation mode; and dividing the target region into N*N sub-regions, wherein N is a positive integer greater than 1.
- 15. The system according to claim 11, characterized in that the motion detection module is configured to: compute the difference between the pixel value of each pixel of the image block of the currently extracted second image in each sub-region and the pixel value of the corresponding pixel of the image block of the previously extracted second image in the same sub-region; sum all the differences corresponding to the image block in each sub-region, and divide the calculated sum by the number of pixels of the image block, to obtain an average difference corresponding to the image block in each sub-region; and if the average difference corresponding to the image block in a sub-region is greater than a preset threshold, determine that a motion event has occurred in that sub-region.
- 16. A storage medium storing computer-readable instructions, the computer-readable instructions being executable by one or more processors to perform the following steps: extracting, at every first preset time period, a first image captured by a surveillance camera; performing region detection on the extracted first image by using a pre-established model, to extract regions of interest containing part or all of a target object; performing motion region screening on the regions of interest by using an analysis rule, to screen out target regions; segmenting the screened-out target regions according to a segmentation rule, to divide each target region into a plurality of sub-regions; extracting, at every second preset time, a second image captured by the surveillance camera; comparing the image block of the second image in each sub-region with the image block of the previously extracted second image in the same sub-region, to determine whether a motion event has occurred in each sub-region; and when a motion event has occurred in a sub-region, controlling the surveillance camera to start video recording from the currently extracted second image.
- 17. The storage medium according to claim 16, characterized in that the pre-established model is a convolutional neural network model.
- 18. The storage medium according to claim 16, characterized in that the analysis rule is: analyzing whether any extracted region of interest falls within a preset pixel area range, the preset pixel area range comprising an abscissa range and an ordinate range, wherein the abscissa range is (X1, X2), the ordinate range is (Y1, Y2), X1 denotes the X1-th pixel column, X2 denotes the X2-th pixel column, X1 being less than X2, Y1 denotes the Y1-th pixel row, Y2 denotes the Y2-th pixel row, Y1 being less than Y2; and if an extracted region of interest falls within the preset pixel area range, confirming that region of interest as the target region.
- 19. The storage medium according to claim 16, characterized in that the segmentation rule is: performing sub-region segmentation on the target region in a uniform segmentation mode; and dividing the target region into N*N sub-regions, wherein N is a positive integer greater than 1.
- 20. The storage medium according to claim 16, characterized in that the step of determining whether a motion event has occurred in each sub-region comprises: computing the difference between the pixel value of each pixel of the image block of the currently extracted second image in each sub-region and the pixel value of the corresponding pixel of the image block of the previously extracted second image in the same sub-region; summing all the differences corresponding to the image block in each sub-region, and dividing the calculated sum by the number of pixels of the image block, to obtain an average difference corresponding to the image block in each sub-region; and if the average difference corresponding to the image block in a sub-region is greater than a preset threshold, determining that a motion event has occurred in that sub-region.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11201800364YA SG11201800364YA (en) | 2016-04-14 | 2017-04-11 | Video recording system, server, system, and storage medium |
US15/737,323 US10349003B2 (en) | 2016-04-14 | 2017-04-11 | Video recording system, server, system, and storage medium |
KR1020187019521A KR102155182B1 (ko) | 2016-04-14 | 2017-04-11 | 비디오 리코딩 방법, 서버, 시스템 및 저장 매체 |
EP17781878.8A EP3445044B1 (en) | 2016-04-14 | 2017-04-11 | Video recording method, server, system, and storage medium |
AU2017250159A AU2017250159B2 (en) | 2016-04-14 | 2017-04-11 | Video recording method, server, system, and storage medium |
JP2018524835A JP6425856B1 (ja) | 2016-04-14 | 2017-04-11 | ビデオ録画方法、サーバー、システム及び記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610234956.7 | 2016-04-14 | ||
CN201610234956.7A CN106027931B (zh) | 2016-04-14 | 2016-04-14 | 视频录制方法及服务器 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017177902A1 true WO2017177902A1 (zh) | 2017-10-19 |
Family
ID=57081964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/080113 WO2017177902A1 (zh) | 2016-04-14 | 2017-04-11 | 视频录制方法、服务器、系统及存储介质 |
Country Status (8)
Country | Link |
---|---|
US (1) | US10349003B2 (zh) |
EP (1) | EP3445044B1 (zh) |
JP (1) | JP6425856B1 (zh) |
KR (1) | KR102155182B1 (zh) |
CN (1) | CN106027931B (zh) |
AU (1) | AU2017250159B2 (zh) |
SG (1) | SG11201800364YA (zh) |
WO (1) | WO2017177902A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110502986A (zh) * | 2019-07-12 | 2019-11-26 | 平安科技(深圳)有限公司 | 识别图像中人物位置方法、装置、计算机设备和存储介质 |
CN111339879A (zh) * | 2020-02-19 | 2020-06-26 | 安徽领云物联科技有限公司 | 一种兵器室单人入室监测方法及装置 |
CN111507896A (zh) * | 2020-04-27 | 2020-08-07 | 北京字节跳动网络技术有限公司 | 图像液化处理方法、装置、设备和存储介质 |
US10949952B2 (en) * | 2018-06-07 | 2021-03-16 | Beijing Kuangshi Technology Co., Ltd. | Performing detail enhancement on a target in a denoised image |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106027931B (zh) | 2016-04-14 | 2018-03-16 | 平安科技(深圳)有限公司 | 视频录制方法及服务器 |
CN107766829A (zh) * | 2017-10-27 | 2018-03-06 | 浙江大华技术股份有限公司 | 一种物品检测的方法和设备 |
GB2569555B (en) * | 2017-12-19 | 2022-01-12 | Canon Kk | Method and apparatus for detecting deviation from a motion pattern in a video |
GB2569557B (en) | 2017-12-19 | 2022-01-12 | Canon Kk | Method and apparatus for detecting motion deviation in a video |
GB2569556B (en) * | 2017-12-19 | 2022-01-12 | Canon Kk | Method and apparatus for detecting motion deviation in a video sequence |
CN109522814B (zh) * | 2018-10-25 | 2020-10-02 | 清华大学 | 一种基于视频数据的目标追踪方法及装置 |
CN112019868A (zh) * | 2019-05-31 | 2020-12-01 | 广州虎牙信息科技有限公司 | 人像分割方法、装置及电子设备 |
JP7294927B2 (ja) * | 2019-07-23 | 2023-06-20 | ファナック株式会社 | 相違点抽出装置 |
KR102090739B1 (ko) * | 2019-10-21 | 2020-03-18 | 주식회사 휴머놀러지 | 영상의 유사도 분석을 위해 영상 영역 격자 다분할을 이용하는 지능형 이동 감시 시스템 및 그 감시방법 |
CN111652128B (zh) * | 2020-06-02 | 2023-09-01 | 浙江大华技术股份有限公司 | 一种高空电力作业安全监测方法、系统和存储装置 |
CN112203054B (zh) * | 2020-10-09 | 2022-10-14 | 深圳赛安特技术服务有限公司 | 监控视频点位标注方法、装置、存储介质及电子设备 |
CN112601049B (zh) * | 2020-12-08 | 2023-07-25 | 北京精英路通科技有限公司 | 视频的监控方法、装置、计算机设备及存储介质 |
CN114295058B (zh) * | 2021-11-29 | 2023-01-17 | 清华大学 | 一种建筑结构的整面动位移的测量方法 |
CN114666497B (zh) * | 2022-02-28 | 2024-03-15 | 青岛海信移动通信技术有限公司 | 成像方法、终端设备及存储介质 |
CN115314717B (zh) * | 2022-10-12 | 2022-12-20 | 深流微智能科技(深圳)有限公司 | 视频传输方法、装置、电子设备和计算机可读存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101017573A (zh) * | 2007-02-09 | 2007-08-15 | 南京大学 | 一种基于视频监控的运动目标检测与识别方法 |
US20130201338A1 (en) * | 2012-02-07 | 2013-08-08 | Sensormatic Electronics, LLC | Method and System for Monitoring Portal to Detect Entry and Exit |
CN104270619A (zh) * | 2014-10-22 | 2015-01-07 | 中国建设银行股份有限公司 | 一种安全告警方法及装置 |
CN104601918A (zh) * | 2014-12-29 | 2015-05-06 | 小米科技有限责任公司 | 视频录制方法和装置 |
CN105279898A (zh) * | 2015-10-28 | 2016-01-27 | 小米科技有限责任公司 | 报警方法及装置 |
CN106027931A (zh) * | 2016-04-14 | 2016-10-12 | 平安科技(深圳)有限公司 | 视频录制方法及服务器 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8711217B2 (en) * | 2000-10-24 | 2014-04-29 | Objectvideo, Inc. | Video surveillance system employing video primitives |
CN100426837C (zh) * | 2005-05-10 | 2008-10-15 | 北京中星微电子有限公司 | 一种监控摄像方法及装置 |
JP5134556B2 (ja) * | 2009-01-08 | 2013-01-30 | 株式会社日立製作所 | 監視記録装置、監視システムおよび監視記録方法 |
JP2011035663A (ja) * | 2009-07-31 | 2011-02-17 | Panasonic Corp | 監視装置および監視方法 |
JP5358851B2 (ja) * | 2009-11-12 | 2013-12-04 | 将文 萩原 | 不審行動検知方法および不審行動検知装置 |
US8660368B2 (en) * | 2011-03-16 | 2014-02-25 | International Business Machines Corporation | Anomalous pattern discovery |
AU2012227263A1 (en) * | 2012-09-21 | 2014-04-10 | Canon Kabushiki Kaisha | Differentiating abandoned and removed object using temporal edge information |
JP5954106B2 (ja) * | 2012-10-22 | 2016-07-20 | ソニー株式会社 | 情報処理装置、情報処理方法、プログラム、及び情報処理システム |
KR102003671B1 (ko) * | 2012-10-29 | 2019-07-25 | 삼성전자주식회사 | 영상 처리 방법 및 장치 |
US9521377B2 (en) * | 2013-10-08 | 2016-12-13 | Sercomm Corporation | Motion detection method and device using the same |
KR102015954B1 (ko) * | 2014-03-21 | 2019-08-29 | 한화테크윈 주식회사 | 영상 처리 시스템 및 영상 처리 방법 |
KR101681233B1 (ko) * | 2014-05-28 | 2016-12-12 | 한국과학기술원 | 저 에너지/해상도 가지는 얼굴 검출 방법 및 장치 |
US20160042621A1 (en) * | 2014-06-13 | 2016-02-11 | William Daylesford Hogg | Video Motion Detection Method and Alert Management |
US10481696B2 (en) * | 2015-03-03 | 2019-11-19 | Nvidia Corporation | Radar based user interface |
US20170076195A1 (en) * | 2015-09-10 | 2017-03-16 | Intel Corporation | Distributed neural networks for scalable real-time analytics |
US10437878B2 (en) * | 2016-12-28 | 2019-10-08 | Shutterstock, Inc. | Identification of a salient portion of an image |
- 2016-04-14 CN CN201610234956.7A patent/CN106027931B/zh active Active
- 2017-04-11 US US15/737,323 patent/US10349003B2/en active Active
- 2017-04-11 WO PCT/CN2017/080113 patent/WO2017177902A1/zh active Application Filing
- 2017-04-11 AU AU2017250159A patent/AU2017250159B2/en active Active
- 2017-04-11 SG SG11201800364YA patent/SG11201800364YA/en unknown
- 2017-04-11 KR KR1020187019521A patent/KR102155182B1/ko active IP Right Grant
- 2017-04-11 EP EP17781878.8A patent/EP3445044B1/en active Active
- 2017-04-11 JP JP2018524835A patent/JP6425856B1/ja active Active
Also Published As
Publication number | Publication date |
---|---|
US10349003B2 (en) | 2019-07-09 |
JP6425856B1 (ja) | 2018-11-21 |
CN106027931B (zh) | 2018-03-16 |
EP3445044B1 (en) | 2020-07-29 |
KR102155182B1 (ko) | 2020-09-11 |
EP3445044A1 (en) | 2019-02-20 |
US20180227538A1 (en) | 2018-08-09 |
AU2017250159A1 (en) | 2017-11-23 |
JP2018535496A (ja) | 2018-11-29 |
EP3445044A4 (en) | 2019-09-18 |
CN106027931A (zh) | 2016-10-12 |
SG11201800364YA (en) | 2018-02-27 |
KR20180133379A (ko) | 2018-12-14 |
AU2017250159B2 (en) | 2018-07-26 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| ENP | Entry into the national phase | Ref document number: 2017250159; Country of ref document: AU; Date of ref document: 20170411; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 15737323; Country of ref document: US
| WWE | Wipo information: entry into national phase | Ref document number: 11201800364Y; Country of ref document: SG
| WWE | Wipo information: entry into national phase | Ref document number: 2018524835; Country of ref document: JP
| ENP | Entry into the national phase | Ref document number: 20187019521; Country of ref document: KR; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17781878; Country of ref document: EP; Kind code of ref document: A1