CN110378930B - Moving object extraction method and device, electronic equipment and readable storage medium - Google Patents

Moving object extraction method and device, electronic equipment and readable storage medium

Info

Publication number
CN110378930B
CN110378930B (application CN201910860408.9A)
Authority
CN
China
Prior art keywords
region
area
video frame
moving
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910860408.9A
Other languages
Chinese (zh)
Other versions
CN110378930A (en)
Inventor
谢昌颐 (Xie Changyi)
李建成 (Li Jiancheng)
陈一平 (Chen Yiping)
孙红 (Sun Hong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUNAN SUIFUYAN ELECTRONIC TECHNOLOGY Co.,Ltd.
Original Assignee
Hunan Deyakun Creative Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Deyakun Creative Technology Co Ltd
Priority to CN201910860408.9A
Publication of CN110378930A
Application granted
Publication of CN110378930B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation

Abstract

The application provides a moving object extraction method, which includes: extracting a fixed region and a moving region from preset video frame data using a deep neural network; obtaining a global motion vector according to an image block method and the fixed region; performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model; performing motion compensation on the initial background model according to a current video frame using the global motion vector to obtain a background model; and performing moving object extraction on the current video frame data through the background model to obtain a moving object region.

Description

Moving object extraction method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of moving object extraction technology, and in particular to a moving object extraction method, a moving object extraction apparatus, an electronic device, and a computer-readable storage medium.
Background
Intelligent video analysis is an important direction in the development of the security industry. It makes full use of the real-time and proactive nature of surveillance video, analyzing, tracking and evaluating monitored objects in real time and issuing corresponding alarm information, which can support the decisions and corrective actions of the relevant parties.
In the related art, moving targets are extracted by first establishing a background model and then extracting moving targets from it as Blobs, where a Blob refers to a connected region in an image that can represent an independent target; corresponding target information is then obtained from each Blob. However, moving objects present during initial background modeling, or a camera that is itself in motion, make the established background model inaccurate.
Therefore, how to solve the above technical problems is an issue that those skilled in the art need to address.
Disclosure of Invention
The purpose of this application is to provide a moving object extraction method, a moving object extraction apparatus, an electronic device and a computer-readable storage medium that can improve the accuracy of moving object extraction. The specific scheme is as follows:
The application discloses a moving object extraction method, including:
extracting a fixed area and a moving area from preset video frame data by using a deep neural network;
obtaining a global motion vector according to an image block method and the fixed area;
performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model;
performing motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model;
and performing moving target extraction on the current video frame data through the background model to obtain a moving target area.
Optionally, extracting the fixed region and the moving region from the preset video frame data by using a deep neural network, including:
extracting an identification area from the preset video frame data by using the deep neural network;
and performing region extraction on the identification region according to preset scene parameters to obtain the fixed region and the moving region.
Optionally, performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model, including:
and performing background modeling by using a ViBe algorithm according to the video frame data of the region outside the moving region to obtain the initial background model.
Optionally, obtaining a global motion vector according to the image block method and the fixed area includes:
performing area division on the video frame corresponding to the fixed area to obtain each rectangular subarea;
calculating a sub-region motion vector of the rectangular sub-region by using a three-step method;
and analyzing the sub-region motion vector to obtain the global motion vector.
Optionally, analyzing the sub-region motion vector to obtain the global motion vector includes:
and analyzing the sub-region motion vectors by utilizing K-means clustering to obtain the global motion vector.
Optionally, performing moving object extraction on the current video frame data through the background model to obtain a moving object region, including:
extracting a moving target of the current video frame data through the background model to obtain a motion area;
judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value or not;
and if so, determining that the motion area is the moving target area, and outputting the moving target area.
The application discloses a moving object extraction apparatus, including:
the region extraction module is used for extracting a fixed region and a mobile region from preset video frame data by using a deep neural network;
the global motion vector acquisition module is used for acquiring a global motion vector according to the image block method and the fixed area;
the initial background modeling module is used for carrying out background modeling according to the video frame data of the region outside the moving region to obtain an initial background model;
the compensation module is used for performing motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model;
and the extraction module is used for extracting a moving target of the current video frame data through the background model to obtain a moving target area.
Optionally, the extraction module comprises:
a motion region obtaining unit, configured to perform motion target extraction on the current video frame data through the background model to obtain a motion region;
the judging unit is used for judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value or not;
and the output unit is used for, if so, determining that the motion area is the moving target area and outputting the moving target area.
The application discloses an electronic device, including:
a memory for storing a computer program;
and a processor for implementing the steps of the moving object extraction method when executing the computer program.
The present application discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the moving object extraction method described above.
The application provides a moving object extraction method, which comprises: extracting a fixed region and a moving region from preset video frame data using a deep neural network; obtaining a global motion vector according to an image block method and the fixed region; performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model; performing motion compensation on the initial background model according to a current video frame using the global motion vector to obtain a background model; and performing moving object extraction on the current video frame data through the background model to obtain a moving object region.
By performing background modeling using only the video frame data of the region outside the moving region to obtain an initial background model, performing motion compensation on the initial background model according to the current video frame using the global motion vector to obtain the background model, and then extracting moving objects through the background model, the method improves the accuracy of background model establishment and, in turn, the accuracy and efficiency of moving object extraction, avoiding the inaccuracies in the related art caused by moving objects present during initial background modeling and by a camera that is in motion.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a moving object extraction method provided in an embodiment of the present application;
fig. 2 is a flowchart of another moving object extraction method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a moving object extraction apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the drawings. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application.
Based on the above technical problems, this embodiment provides a moving target extraction method that avoids the inaccuracies in the related art caused by moving targets present during initial background modeling and by a camera that is in motion, improving the accuracy of background model establishment and, in turn, the accuracy, efficiency and adaptivity of moving target extraction. Refer specifically to fig. 1, a flowchart of the moving target extraction method provided by this embodiment, which specifically includes:
s101, extracting a fixed area and a moving area from preset video frame data by using a deep neural network.
Extracting the fixed region with the deep neural network adapts well to different camera parameters and scenes and is not easily affected by camera motion. The moving target extraction provided by this method is suitable for all three scenarios, namely where the foreground motion vector is larger than, equal to, or smaller than the background motion vector, so its adaptivity is stronger. Besides the two identifiable region types extracted from the preset video frame data, the preset video frame data may of course contain regions that cannot be identified. The fixed region supports the subsequent extraction of the global motion vector, while the moving region is used to verify the subsequent moving target detection result. The preset video frame data serves as reference data so that the target region in the current video frame data can ultimately be identified; it may consist of one or more other video frames, and this embodiment does not limit which frames are used as the preset video frame data required by the current frame data.
Further, extracting the fixed region and the moving region from the preset video frame data by using the deep neural network includes: extracting an identification region from the preset video frame data by using the deep neural network, and performing region extraction on the identification region according to preset scene parameters to obtain the fixed region and the moving region.
All identifiable identification regions are extracted from the input preset video frame data, and the identification regions are divided into fixed regions and moving regions according to preset scene parameters and/or prior knowledge. Specifically, the input preset video frame data can be analyzed by an SSD algorithm based on deep neural network technology; the identifiable identification regions are extracted and then divided into fixed regions and moving regions. For example, regions containing human bodies, vehicles, animals and the like are marked as moving regions, and regions of other categories are marked as fixed regions. Therefore, in this embodiment, all recognizable identification regions are extracted from the input preset video frame data for region extraction, so that the fixed region and the moving region can be acquired accurately, improving the extraction accuracy.
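As an illustration of this division step, the following is a minimal Python sketch; the detection tuple format, class names and confidence threshold are assumptions for illustration, not specified by the patent:

```python
# Hypothetical detection format: (class_label, confidence, (x, y, w, h)).
MOVING_CLASSES = {"person", "car", "bus", "truck", "animal"}  # assumed class names

def split_regions(detections, min_confidence=0.5):
    """Divide identified regions into moving and fixed regions by class label."""
    moving, fixed = [], []
    for label, conf, box in detections:
        if conf < min_confidence:
            continue  # low-confidence regions are treated as unidentifiable
        (moving if label in MOVING_CLASSES else fixed).append(box)
    return fixed, moving
```

The fixed boxes then feed the global motion vector estimation of S102, while the moving boxes are excluded from initial background modeling in S103.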
And S102, obtaining a global motion vector according to the image block method and the fixed area.
The image block method is also called the block-matching motion estimation method. It divides an image frame into a number of non-overlapping blocks and, taking each block as a unit, searches a reference frame (the previous frame or another frame) for the block that best matches it; the relative displacement between the two blocks gives the block's motion vector.
Further, obtaining the global motion vector according to the image block method and the fixed region includes: performing region division on the video frame corresponding to the fixed region to obtain rectangular sub-regions, calculating the motion vector of each rectangular sub-region using a three-step method, and analyzing the sub-region motion vectors to obtain the global motion vector.
In one realizable mode, the preset video frame is divided into a plurality of non-overlapping regions. The region size may be 16 × 16, although other sizes are possible as long as the purpose of this embodiment is met; this yields the rectangular sub-regions of the fixed region in the preset video frame.
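A minimal NumPy sketch of the three-step search over such sub-regions follows; the SAD matching cost, the initial step size of 4 and the frame-border handling are conventional choices assumed here, not mandated by the patent:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between `block` and the same-sized
    patch of `ref` whose top-left corner is (y, x)."""
    h, w = block.shape
    patch = ref[y:y + h, x:x + w]
    if patch.shape != block.shape:  # patch falls outside the frame
        return np.inf
    return np.abs(block.astype(np.int32) - patch.astype(np.int32)).sum()

def three_step_search(block, ref, y0, x0, step=4):
    """Estimate the motion vector of `block`, located at (y0, x0) in the
    current frame, by a three-step search in reference frame `ref`."""
    cy, cx = y0, x0
    best_cost = sad(block, ref, cy, cx)
    while step >= 1:
        best_y, best_x = cy, cx
        # Evaluate the 8 points around the current centre at distance `step`.
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if y < 0 or x < 0 or (dy == 0 and dx == 0):
                    continue
                cost = sad(block, ref, y, x)
                if cost < best_cost:
                    best_cost, best_y, best_x = cost, y, x
        cy, cx = best_y, best_x
        step //= 2  # 4 -> 2 -> 1 -> stop: the classic three steps
    return cy - y0, cx - x0  # sub-region motion vector (dy, dx)
```

Running this for every 16 × 16 sub-region of the fixed region yields the set of sub-region motion vectors analyzed in the next step.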
Further, analyzing the sub-region motion vectors to obtain the global motion vector includes analyzing the sub-region motion vectors by utilizing K-means clustering to obtain the global motion vector.
The K-means clustering algorithm is a partition-based method. It has the advantages of simplicity and ease of use, a time complexity of O(n), and suitability for processing large-scale data. Therefore, by adopting the K-means clustering algorithm, the motion vector analysis process can be simplified and the efficiency of global motion vector acquisition improved.
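A minimal sketch of this clustering step is shown below, using scikit-learn's KMeans. Taking the centroid of the most populated cluster as the global motion vector is an assumption made here, on the premise that most fixed-region blocks move with the camera; the patent does not spell out how the clusters are reduced to a single vector:

```python
import numpy as np
from sklearn.cluster import KMeans

def global_motion_vector(sub_vectors, n_clusters=3):
    """Reduce the fixed-region sub-vectors to one global motion vector.

    sub_vectors: sequence of (dy, dx) motion vectors from the fixed region.
    Assumption: the centroid of the largest cluster represents the
    camera-induced global motion; outlier vectors fall into minor clusters.
    """
    vecs = np.asarray(sub_vectors, dtype=np.float64)  # shape (N, 2)
    k = min(n_clusters, len(vecs))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vecs)
    labels, counts = np.unique(km.labels_, return_counts=True)
    dominant = labels[np.argmax(counts)]
    return tuple(km.cluster_centers_[dominant])  # (dy, dx)
```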
S103, performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model.
The background modeling method is not limited in this embodiment; it may be Gaussian background modeling, ViBe background modeling, mixture-of-Gaussians background modeling, kernel density background modeling, codebook background modeling or super-pixel background modeling, as long as the purpose of this embodiment can be achieved. The initial background model is obtained by the background modeling method from the video frame data of the region outside the moving region, where the region outside the moving region comprises the fixed region and any unidentifiable regions. In the initial background modeling stage, only the region data outside all moving regions in the input video frames is used, which prevents information from moving objects that may initially be present in the scene from being introduced into the background model.
Further, performing background modeling according to the video frame data of the region outside the moving region to obtain the initial background model includes: performing background modeling by using the ViBe algorithm according to the video frame data of the region outside the moving region to obtain the initial background model.
In this embodiment, the ViBe algorithm performs background modeling according to the video frame data of the regions outside the moving region. ViBe randomly selects 20 neighborhood samples for each pixel to establish a sample-based background model; it has the advantages of fast initialization, low memory consumption and low resource occupation. Specifically, the initial background model obtained by ViBe is a background model based on a small number of samples, and the similarity-matching algorithm in the initial background model is optimized: once a sufficient number of matching samples is found, the calculation stops.
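The following is a minimal, vectorized NumPy sketch of a sample-based background model in the spirit of ViBe, assuming grayscale frames. The published ViBe defaults (20 samples per pixel, matching radius 20, 2 required matches, subsampling factor 16) are assumptions here, and the patent's early-stopping match optimization and ViBe's neighbor-diffusion update are omitted from this vectorized form:

```python
import numpy as np

class ViBe:
    """Minimal per-pixel sample-based background model (ViBe-style sketch)."""

    def __init__(self, first_frame, n_samples=20, radius=20,
                 min_matches=2, subsample=16):
        self.n, self.r, self.k, self.phi = n_samples, radius, min_matches, subsample
        h, w = first_frame.shape
        # Initialize each pixel's samples from its 8-neighborhood
        # (same random offset per sample plane: a simplification).
        pad = np.pad(first_frame, 1, mode='edge')
        self.samples = np.empty((self.n, h, w), dtype=np.int16)
        rng = np.random.default_rng(0)
        for i in range(self.n):
            dy, dx = rng.integers(0, 3, 2)
            self.samples[i] = pad[dy:dy + h, dx:dx + w]
        self.rng = rng

    def segment(self, frame):
        """Return a boolean foreground mask and update the model in place."""
        diff = np.abs(self.samples - frame.astype(np.int16))
        matches = (diff < self.r).sum(axis=0)
        fg = matches < self.k
        # Conservative update: each background pixel refreshes one random
        # sample with probability 1/phi.
        update = (~fg) & (self.rng.random(frame.shape) < 1.0 / self.phi)
        idx = self.rng.integers(0, self.n, frame.shape)
        ys, xs = np.nonzero(update)
        self.samples[idx[ys, xs], ys, xs] = frame[ys, xs]
        return fg
```

Typical usage: model = ViBe(first_gray_frame), then mask = model.segment(frame) for each subsequent frame; per S103, initialization would be driven only by the pixels outside the moving regions.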
And S104, performing motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model.
Specifically, if the camera moves during the initial modeling stage, motion compensation is performed on the partial background model data obtained so far according to the global motion vector, so that it has a spatial mapping relationship with the currently input frame; background modeling then continues with the current video frame data. This effectively reduces the influence of camera motion on the initial background modeling.
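As a sketch of this compensation step, the background model's sample planes can be realigned with the current frame as below; integer-pixel shifting with edge replication is an assumption, since the patent does not specify the interpolation or border policy:

```python
import numpy as np

def compensate_background(samples, gmv):
    """Shift a stack of background-model sample planes by the global motion
    vector so the model aligns spatially with the current input frame."""
    dy, dx = int(round(gmv[0])), int(round(gmv[1]))
    shifted = np.empty_like(samples)
    for i, plane in enumerate(samples):
        plane = np.roll(plane, (dy, dx), axis=(0, 1))
        # np.roll wraps around; overwrite the wrapped borders by replication.
        if dy > 0:
            plane[:dy] = plane[dy]
        elif dy < 0:
            plane[dy:] = plane[dy - 1]
        if dx > 0:
            plane[:, :dx] = plane[:, dx:dx + 1]
        elif dx < 0:
            plane[:, dx:] = plane[:, dx - 1:dx]
        shifted[i] = plane
    return shifted
```

Applied to the ViBe sketch above, this would be model.samples = compensate_background(model.samples, gmv) before segmenting the current frame.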
And S105, performing moving object extraction on the current video frame data through the background model to obtain a moving object area.
The purpose of this step is to detect the motion region and obtain the moving target region. It mainly uses the background model to perform region extraction, i.e. moving target extraction, on the input current video frame data, so as to obtain the moving target.
Based on the above technical scheme, this embodiment performs background modeling using the video frame data of the region outside the moving region to obtain an initial background model, performs motion compensation on the initial background model according to the current video frame using the global motion vector to obtain the background model, and extracts moving targets through the background model. This improves the accuracy of background model establishment and hence the accuracy and efficiency of moving target extraction, avoiding the inaccuracies in the related art caused by moving targets present during initial background modeling and by a camera that is in motion. In addition, the moving target extraction provided herein is suitable for the three scenarios in which the foreground motion vector is larger than, equal to, or smaller than the background motion vector, so its adaptivity is stronger.
Based on the foregoing embodiments, in order to improve the accuracy of moving target extraction, this embodiment provides a moving object extraction method that confirms the moving target region by judging whether the ratio of the area of the detected motion region to the area of the moving region is greater than a preset threshold. Refer specifically to fig. 2, a flowchart of another moving object extraction method provided by this embodiment, which includes:
s201, extracting a fixed area and a moving area from preset video frame data by using a deep neural network.
And S202, obtaining a global motion vector according to the image block method and the fixed area.
And S203, carrying out background modeling according to the video frame data of the region outside the moving region to obtain an initial background model.
And S204, performing motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model.
For S201 to S204, please refer to the above embodiment; they are not described in detail again in this embodiment.
S205, extracting a moving object of the current video frame data through the background model to obtain a motion area.
S206, judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value.
And S207, if so, the motion area is a motion target area, and the motion target area is output.
This embodiment does not limit the preset threshold; a user may set it according to actual requirements as long as the purpose of this embodiment can be achieved, for example to 50%, 60%, 70%, 80%, 90% or 95%, and of course other values are also possible.
Based on the above technical scheme, after the motion region is detected in this embodiment, the extraction result is verified by judging whether the ratio of the area of the motion region to the area of the moving region is greater than the preset threshold, thereby improving the accuracy of moving target extraction.
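A minimal sketch of this verification step (the boolean-mask representation and the 80% threshold are illustrative assumptions drawn from the example values above):

```python
import numpy as np

def verify_motion_region(motion_mask, moving_mask, threshold=0.8):
    """S206-S207: accept a detected motion region only if the ratio of its
    area to the area of the corresponding DNN-extracted moving region
    exceeds the preset threshold. Areas are foreground pixel counts."""
    moving_area = np.count_nonzero(moving_mask)
    if moving_area == 0:
        return False  # no moving region to verify against
    return np.count_nonzero(motion_mask) / moving_area > threshold
```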
Referring to fig. 3, the moving object extraction apparatus provided in this embodiment of the present application is described below. The apparatus described below and the moving object extraction method described above may be referred to correspondingly. Fig. 3 is a schematic structural diagram of the moving object extraction apparatus, which includes:
the region extraction module 301 is configured to extract a fixed region and a moving region from preset video frame data by using a deep neural network;
a global motion vector obtaining module 302, configured to obtain a global motion vector according to an image block method and a fixed area;
the initial background modeling module 303 is configured to perform background modeling according to video frame data of a region outside the moving region to obtain an initial background model;
a compensation module 304, configured to perform motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model;
the extracting module 305 is configured to perform moving object extraction on the current video frame data through the background model to obtain a moving object region.
In some specific embodiments, the region extraction module 301 comprises:
a first extraction unit for extracting an identification region from the preset video frame data by using a deep neural network;
and the second extraction unit is used for carrying out region extraction on the identification region according to the preset scene parameters to obtain a fixed region and a moving region.
In some specific embodiments, the initial background modeling module 303 includes:
and the initial background modeling unit is used for performing background modeling by using a ViBe algorithm according to the video frame data of the region outside the moving region to obtain an initial background model.
In some specific embodiments, the global motion vector obtaining module 302 includes:
the dividing unit is used for carrying out region division on the video frames corresponding to the fixed regions to obtain each rectangular subregion;
a calculation unit for calculating a sub-region motion vector of the rectangular sub-region using a three-step method;
and the acquisition unit is used for analyzing the sub-region motion vector to obtain a global motion vector.
In some specific embodiments, the obtaining unit includes:
an obtaining subunit for analyzing the sub-region motion vectors by utilizing K-means clustering to obtain the global motion vector.
In some specific embodiments, the extraction module includes:
the motion region acquisition unit is used for extracting a motion target of current video frame data through a background model to obtain a motion region;
the judging unit is used for judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value or not;
and the output unit is used for, if so, determining that the motion region is the moving target region and outputting the moving target region.
Since the embodiment of the moving object extracting apparatus portion and the embodiment of the moving object extracting method portion correspond to each other, please refer to the description of the embodiment of the moving object extracting method portion for the embodiment of the moving object extracting apparatus portion, which is not repeated here.
The electronic device provided in the embodiments of the present application is introduced below; the electronic device described below and the moving object extraction method described above may be referred to correspondingly.
The present embodiment provides an electronic device, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of the moving object extraction method when executing the computer program.
Since the embodiment of the electronic device portion corresponds to the embodiment of the moving object extraction method portion, please refer to the description of the embodiment of the moving object extraction method portion for the embodiment of the electronic device portion, which is not repeated here.
The computer-readable storage medium provided by the embodiments of the present application is introduced below; the computer-readable storage medium described below and the moving object extraction method described above may be referred to correspondingly.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the moving object extraction method described above.
Since the embodiment of the computer-readable storage medium portion corresponds to the embodiment of the moving object extraction method portion, please refer to the description of the embodiment of the moving object extraction method portion for the embodiment of the computer-readable storage medium portion, which is not repeated here.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those skilled in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of both; for clarity of explanation of this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The moving object extraction method, moving object extraction apparatus, electronic device and computer-readable storage medium provided by the present application have been described in detail herein. Specific examples have been used to illustrate the principles and embodiments of the present application; the above description is intended only to assist in understanding the method and its core idea.

Claims (9)

1. A moving object extraction method, characterized by comprising:
extracting a fixed area and a moving area from preset video frame data by using a deep neural network;
obtaining a global motion vector according to an image block method and the fixed area;
performing background modeling according to the video frame data of the region outside the moving region to obtain an initial background model;
performing motion compensation on the initial background model by using the global motion vector according to the current video frame data to obtain a background model;
performing moving target extraction on the current video frame data through the background model to obtain a moving target area;
obtaining a global motion vector according to the image block method and the fixed area, wherein the method comprises the following steps:
performing area division on the video frame corresponding to the fixed area to obtain each rectangular subarea;
calculating a sub-region motion vector of the rectangular sub-region by using a three-step method;
and analyzing the sub-region motion vector to obtain the global motion vector.
2. The moving object extracting method according to claim 1, wherein extracting the fixed region and the moving region from the preset video frame data using a deep neural network comprises:
extracting an identification area from the preset video frame data by using the deep neural network;
and performing region extraction on the identification region according to preset scene parameters to obtain the fixed region and the moving region.
3. The method for extracting a moving object according to claim 1, wherein performing background modeling according to video frame data of a region outside the moving region to obtain an initial background model comprises:
and performing background modeling by using a ViBe algorithm according to the video frame data of the region outside the moving region to obtain the initial background model.
4. The method according to claim 1, wherein analyzing the sub-region motion vector to obtain the global motion vector comprises:
and analyzing the sub-region motion vectors by utilizing K-means clustering to obtain the global motion vector.
5. The method of any of claims 1-4, wherein performing moving object extraction on the current video frame data through the background model to obtain a moving object region comprises:
extracting a moving target of the current video frame data through the background model to obtain a motion area;
judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value or not;
and if so, determining that the motion area is the moving target area, and outputting the moving target area.
6. A moving object extraction device, characterized by comprising:
the region extraction module is used for extracting a fixed region and a mobile region from preset video frame data by using a deep neural network;
the global motion vector acquisition module is used for acquiring a global motion vector according to the image block method and the fixed area;
the initial background modeling module is used for carrying out background modeling according to the video frame data of the region outside the moving region to obtain an initial background model;
the compensation module is used for performing motion compensation on the initial background model according to the current video frame by using the global motion vector to obtain a background model;
the extraction module is used for extracting a moving target of the current video frame data through the background model to obtain a moving target area;
wherein the global motion vector acquisition module includes:
the dividing unit is used for carrying out region division on the video frame corresponding to the fixed region to obtain each rectangular subregion;
a calculating unit, configured to calculate a sub-region motion vector of the rectangular sub-region by using a three-step method;
and the acquisition unit is used for analyzing the sub-region motion vector to obtain the global motion vector.
7. The moving object extracting apparatus according to claim 6, wherein the extracting module includes:
a motion region obtaining unit, configured to perform motion target extraction on the current video frame data through the background model to obtain a motion region;
the judging unit is used for judging whether the ratio of the area of the motion area to the area of the moving area is larger than a preset threshold value or not;
and the output unit is used for, if so, determining that the motion area is the moving target area and outputting the moving target area.
8. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the moving object extraction method of any of claims 1-5 when executing the computer program.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, carries out the steps of the moving object extraction method according to any one of claims 1 to 5.
CN201910860408.9A 2019-09-11 2019-09-11 Moving object extraction method and device, electronic equipment and readable storage medium Active CN110378930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910860408.9A CN110378930B (en) 2019-09-11 2019-09-11 Moving object extraction method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910860408.9A CN110378930B (en) 2019-09-11 2019-09-11 Moving object extraction method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110378930A CN110378930A (en) 2019-10-25
CN110378930B (en) 2020-01-31

Family

ID=68261517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910860408.9A Active CN110378930B (en) 2019-09-11 2019-09-11 Moving object extraction method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110378930B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379114A * 2022-07-19 2022-11-22 Alibaba (China) Co., Ltd. Panoramic video processing method and device and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1376471A1 * 2002-06-19 2004-01-02 STMicroelectronics S.r.l. Motion estimation for stabilization of an image sequence
CN101216941A * 2008-01-17 2008-07-09 Shanghai Jiao Tong University Motion estimation method under violent illumination variation based on corner matching and optical flow method
EP1984893A2 * 2006-02-13 2008-10-29 SNELL & WILCOX LIMITED Method and apparatus for modifying a moving image sequence
CN101489031A * 2009-01-16 2009-07-22 Xidian University Adaptive frame rate up-conversion method based on motion classification
CN101877790A * 2010-05-26 2010-11-03 Guangxi University Panoramic video coding-oriented quick global motion estimation method
CN101902609A * 2010-07-28 2010-12-01 Xi'an Jiaotong University Motion compensation frame frequency up-conversion method for processing flying caption
CN102930559A * 2012-10-23 2013-02-13 Huawei Technologies Co., Ltd. Image processing method and device
CN104463910A * 2014-12-08 2015-03-25 National University of Defense Technology High-speed motion target extraction method based on motion vector
CN108702512A * 2017-10-31 2018-10-23 SZ DJI Technology Co., Ltd. Motion estimation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552962B2 (en) * 2017-04-27 2020-02-04 Intel Corporation Fast motion based and color assisted segmentation of video into region layers


Also Published As

Publication number Publication date
CN110378930A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN109272509B (en) Target detection method, device and equipment for continuous images and storage medium
JP6230751B1 (en) Object detection apparatus and object detection method
KR101436369B1 (en) Apparatus and method for detecting multiple object using adaptive block partitioning
CN108734684B (en) Image background subtraction for dynamic illumination scene
CN111382637B (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN115457466A (en) Inspection video-based hidden danger detection method and system and electronic equipment
CN110378930B (en) Moving object extraction method and device, electronic equipment and readable storage medium
CN108960247B (en) Image significance detection method and device and electronic equipment
CN109711287B (en) Face acquisition method and related product
CN111723634A (en) Image detection method and device, electronic equipment and storage medium
KR101296318B1 (en) Apparatus and method for object tracking by adaptive block partitioning
CN110599514A (en) Image segmentation method and device, electronic equipment and storage medium
CN108509876B (en) Object detection method, device, apparatus, storage medium, and program for video
CN114359665A (en) Training method and device of full-task face recognition model and face recognition method
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3
CN110728316A (en) Classroom behavior detection method, system, device and storage medium
CN107818287B (en) Passenger flow statistics device and system
CN110580706A (en) Method and device for extracting video background model
CN115243073A (en) Video processing method, device, equipment and storage medium
CN111292374B (en) Method and equipment for automatically plugging and unplugging USB interface
CN111340677B (en) Video watermark detection method, apparatus, electronic device, and computer readable medium
CN111179343B (en) Target detection method, device, computer equipment and storage medium
CN112819859A (en) Multi-target tracking method and device applied to intelligent security
CN113033397A (en) Target tracking method, device, equipment, medium and program product
CN111985423A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210618

Address after: Room 503, building 3, Yijing building, Debang new village, 588 Deya Road, Sifangping street, Kaifu District, Changsha, Hunan 410000

Patentee after: HUNAN SUIFUYAN ELECTRONIC TECHNOLOGY Co.,Ltd.

Address before: Room 903, building B, Yongtong Jiayuan, 303 Sany Avenue, Sifangping street, Kaifu District, Changsha, Hunan 410000

Patentee before: HUNAN DEYA KUNCHUANG TECHNOLOGY Co.,Ltd.
