CN113129331B - Target movement track detection method, device, equipment and computer storage medium - Google Patents

Target movement track detection method, device, equipment and computer storage medium

Info

Publication number
CN113129331B
CN113129331B (Application No. CN201911401885.5A)
Authority
CN
China
Prior art keywords
background image
model
information
target
image model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911401885.5A
Other languages
Chinese (zh)
Other versions
CN113129331A (en)
Inventor
孙文超
何明
张李秋
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Chengdu ICT Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201911401885.5A priority Critical patent/CN113129331B/en
Publication of CN113129331A publication Critical patent/CN113129331A/en
Application granted granted Critical
Publication of CN113129331B publication Critical patent/CN113129331B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Abstract

The embodiment of the invention discloses a target movement track detection method, device, equipment, and computer storage medium. The target movement track detection method comprises the following steps: taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information; when non-model pixels are determined to exist in the video frame information, performing boundary extraction on the non-model pixels and determining a target model; comparing the associated frame data of the target model in the real-time video to obtain comparison data; and determining the movement track of the target model based on the comparison data. According to the embodiment of the invention, the target movement track can be detected more accurately.

Description

Target movement track detection method, device, equipment and computer storage medium
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a target movement track detection method and device, electronic equipment and a computer storage medium.
Background
Deep Learning (DL) is a research direction in the field of Machine Learning (ML); it was introduced to bring machine learning closer to its original goal, Artificial Intelligence (AI). Deep learning learns the inherent regularities and representation hierarchies of sample data, and the information obtained during such learning helps interpret data such as text, images, and sound. Its ultimate goal is to give machines analytical learning capabilities like those of a person, so that they can recognize text, image, and sound data. Deep learning is a complex machine learning approach that achieves far better results in speech and image recognition than earlier techniques.
With the comprehensive popularization and application of informatization and intelligence across industries, videos and images have become one of the most important data sources. Before image information extraction technology matured, video and picture data were mostly simply retained on storage media, and mature application scenarios were few. As image information extraction technology has matured, the wealth of information contained in images and videos has attracted attention. In particular, with the maturity of real-time image extraction technology, it has been commercially applied in business fields such as traffic violation snapshots, target monitoring, target tracking, and parking lot vehicle identification.
Existing image recognition technology, taking scenes such as traffic video information extraction and parking lot management as examples, mainly adopts formatted foreground object recognition. That is, the object of interest (to be identified) is treated as the target and is obtained from a photo containing background through a series of processing algorithms, such as edge extraction. In vehicle recognition scenes, a standardized license plate is often used for recognition and identification marking, so the requirements on the recognition accuracy and recognition mode of the vehicle itself are not high. If no such feature (license plate) exists in the video stream or image scene, the system fails to recognize the target.
Pattern recognition is another technical branch developed on the basis of target recognition, for example the extraction and recognition of a target's movement direction and steering trend. Recognizing such information is of great significance: if performance parameters such as recognition accuracy and real-time performance meet certain requirements, it can greatly promote hot fields such as artificial intelligence. For example, information of interest in an intelligent scene can be extracted quickly or in real time, reducing the cost of information acquisition, providing a large number of control signals of various types, and promoting the intelligent development of a system. In general, such information is obtained by coordinate comparison of a target across multiple frames of the video stream, which requires accurate and rapid recognition capability as the underlying image recognition technique. Moreover, since the essence of object recognition in an image is recognizing the pixel data that the object produces on the image, the relationship between the pixel information and the object is affected by many factors. For example, when a moving object is covered with a special optical material or a specular reflection material, image recognition is very prone to judgment errors; changes in the background scene, the illumination environment, and other factors also influence the recognition result. Thus, existing target recognition and pattern recognition techniques still face many problems in such scenarios, and are difficult to apply particularly in scenes where imaging is greatly affected by the environment.
Therefore, how to detect the target movement trajectory more accurately is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the invention provides a target movement track detection method, device, equipment, and computer storage medium, which can detect a target movement track more accurately.
In a first aspect, a method for detecting a target movement track is provided, including:
taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information;
when the non-model pixels are determined to exist in the video frame information, extracting boundaries of the non-model pixels, and determining a target model;
comparing the associated frame data of the target model in the real-time video to obtain comparison data;
and determining the moving track of the target model based on the comparison data.
Optionally, taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information includes:
collecting a background image;
constructing a background image model based on the background image;
and taking the background image model as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
Optionally, constructing the background image model based on the background image includes:
extracting image texture information of a background image by using a deep learning algorithm;
and constructing a background image model based on the image texture information.
Optionally, after constructing the background image model based on the background image, the method further comprises:
determining model replacement monitoring strategy information;
and replacing the background image model according to the model replacement monitoring strategy information.
Optionally, the background image model includes a temporary background image model, and replacing the background image model according to the model replacement monitoring strategy information includes:
collecting a current background image according to a preset data collection frequency;
constructing a current background image model based on the current background image;
determining the degree of difference between the current background image model and the temporary background image model;
and when the difference degree is determined to be greater than a difference degree threshold, replacing the temporary background image model with the current background image model according to the model replacement monitoring strategy information.
Optionally, taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information includes:
determining time information corresponding to the real-time video;
determining a background image model corresponding to the time information according to the time information;
and taking the background image model corresponding to the time information as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
In a second aspect, there is provided a target movement trajectory detection device, the device comprising:
the acquisition module is used for taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information;
the extraction module is used for extracting boundaries of the non-model pixels when the non-model pixels are determined to exist in the video frame information, and determining a target model;
the comparison module is used for comparing the associated frame data of the target model in the real-time video and obtaining comparison data;
and the determining module is used for determining the moving track of the target model based on the comparison data.
Optionally, an acquisition module is used for acquiring a background image; constructing a background image model based on the background image; and taking the background image model as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
Optionally, the acquiring module is used for extracting image texture information of the background image by using a deep learning algorithm; and constructing a background image model based on the image texture information.
Optionally, the target movement track detection device further includes:
the information determining module is used for determining model replacement monitoring strategy information;
and the replacement module is used for replacing the background image model according to the model replacement monitoring strategy information.
Optionally, the replacing module is used for collecting the current background image according to the preset data collection frequency; constructing a current background image model based on the current background image; determining the degree of difference between the current background image model and the temporary background image model; and, when the difference degree is determined to be greater than a difference degree threshold, replacing the temporary background image model with the current background image model according to the model replacement monitoring strategy information.
In a third aspect, there is provided an electronic device, the device comprising:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the target movement trajectory detection method according to any one of the first aspects.
In a fourth aspect, there is provided a computer storage medium storing computer program instructions which, when executed by a processor, implement the target movement trajectory detection method according to any one of the first aspects.
The target movement track detection method, device, equipment, and computer storage medium of the embodiments of the invention can detect the target movement track more accurately. The target movement track detection method takes a preset background image model as a comparison master plate, monitors and samples the acquired real-time video, and acquires video frame information; when non-model pixels are determined to exist in the video frame information, it performs boundary extraction on the non-model pixels and determines a target model; it compares the associated frame data of the target model in the real-time video to obtain comparison data; and it determines the movement track of the target model based on the comparison data. The method therefore does not need to identify the target itself: it only needs to use the background image model as a comparison master plate, and the movement track of the target model can be determined more accurately from the associated frame data of the target model in the real-time video.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings needed in the embodiments of the present invention are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a target movement track detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a background update flow provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of an intra-frequency frame sequence processing procedure according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an effect of extracting a target according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a background update procedure for periodic learning according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a target movement track detection device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is described in further detail with reference to the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the invention and are not intended to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the invention by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Existing target recognition and pattern recognition techniques have many problems when applied to actual scenes, mainly inaccurate detection of the target movement track, and they are particularly difficult to apply in scenes where imaging is greatly affected by the environment.
In order to solve the problems in the prior art, the embodiment of the invention provides a target movement track detection method, a target movement track detection device, target movement track detection equipment and a computer storage medium. The following first describes a target movement track detection method provided by the embodiment of the present invention.
Fig. 1 is a flowchart of a target movement track detection method according to an embodiment of the present invention. As shown in fig. 1, the target movement track detection method includes the steps of:
s101, taking a preset background image model as a comparison master plate, and monitoring and sampling the acquired real-time video to acquire video frame information.
In order to obtain more accurate video frame information, in one embodiment, a preset background image model is used as a comparison master, and the obtained real-time video is monitored and sampled to obtain the video frame information, which may generally include: collecting a background image; constructing a background image model based on the background image; and taking the background image model as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
To construct a more accurate background image model, in one embodiment, constructing a background image model based on the background image may generally include: extracting image texture information of a background image by using a deep learning algorithm; and constructing a background image model based on the image texture information.
To replace the background image model according to the requirements, in one embodiment, after constructing the background image model based on the background image, it may generally further include: determining model replacement monitoring strategy information; and replacing the background image model according to the model replacement monitoring strategy information.
In one embodiment, the background image model comprises a temporary background image model, and replacing the background image model according to the model replacement monitoring strategy information may generally include: collecting a current background image according to a preset data collection frequency; constructing a current background image model based on the current background image; determining the degree of difference between the current background image model and the temporary background image model; and, when the difference degree is determined to be greater than a difference degree threshold, replacing the temporary background image model with the current background image model according to the model replacement monitoring strategy information.
In one embodiment, taking a preset background image model as a comparison master, and performing monitoring sampling on the acquired real-time video to acquire video frame information, which may generally include: determining time information corresponding to the real-time video; determining a background image model corresponding to the time information according to the time information; and taking the background image model corresponding to the time information as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
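To make the time-based selection concrete, the short sketch below keeps one background image model per hour-of-day bucket and picks the comparison master plate that matches the time information of the live video. The hourly bucketing, the file names, and the dictionary name are assumptions introduced only for this illustration, not part of the claimed method.

```python
import datetime
import cv2  # OpenCV, assumed available for loading the stored model images

# Hypothetical store: one pre-built background image model per hour of the day.
# In the described method the model is built from learned texture features;
# a stored grayscale image stands in for it in this sketch.
background_models = {
    hour: cv2.imread(f"bg_model_{hour:02d}.png", cv2.IMREAD_GRAYSCALE)
    for hour in range(24)
}

def select_comparison_master(frame_time: datetime.datetime):
    """Return the background image model corresponding to the video's time information."""
    return background_models[frame_time.hour]

# Usage: master = select_comparison_master(datetime.datetime.now())
# The returned model is then used as the comparison master plate when the
# real-time video is monitored and sampled.
```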
S102, when the fact that the non-model pixels exist in the video frame information is determined, boundary extraction is conducted on the non-model pixels, and a target model is determined.
S103, comparing the associated frame data of the target model in the real-time video to obtain the comparison data.
S104, determining the moving track of the target model based on the comparison data.
The target moving track detection method takes a preset background image model as a comparison master plate, monitors and samples the acquired real-time video, and acquires video frame information; when the non-model pixels are determined to exist in the video frame information, extracting boundaries of the non-model pixels, and determining a target model; comparing the associated frame data of the target model in the real-time video to obtain comparison data; and determining the moving track of the target model based on the comparison data. Therefore, the method does not need to identify the target, and only needs to use the background image model as a comparison master plate, and the moving track of the target model can be more accurately determined according to the associated frame data of the comparison target model in the real-time video.
The above is described in the following by way of a specific example, which is as follows:
In one embodiment, it is considered that the image acquisition components (e.g., camera components) in most application scenarios are either fixed-mounted or move within a limited range along a defined trajectory. As a result, in both the fixed-mount mode and the limited-trajectory motion mode, a substantial proportion of the image or video frame data is background image data. On the one hand, in the spatial dimension, the target (or object of interest) is usually much smaller than the camera's field of view; on the other hand, in the time dimension, the target is usually present in the picture for much less time than the background. Therefore, if every calculation had to examine the background, a large amount of computing resources would likely be spent without producing any result.
As shown in fig. 2, in order to solve the above-mentioned problems, the embodiment of the present invention proposes a new scheme in terms of system and algorithm, specifically: and sampling and modeling the background image to construct a pure background image model.
The pure background image model differs from a foreground object model in how it is extracted: foreground objects are diverse, whereas the background is relatively uniform or limited.
Alternatively, the background image model may be formed by extracting and learning the background image texture with a deep learning algorithm. The pure background image model is then used as a comparison master plate to monitor and sample the real-time video. When an intrusion (or appearance) of non-model pixels is detected in the video frame information, boundary extraction is performed on the intruding non-model pixels to form a target model. Meanwhile, the target model is compared across the associated frame data of the video stream, and the comparison data records can be drawn into a target motion track.
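As a rough illustration of this flow only (not the patented algorithm itself), the sketch below differences each frame against a fixed background master, extracts the boundary of the non-model pixels with OpenCV contours, and chains the detections of associated frames into a trajectory. The threshold value and the morphological clean-up step are assumptions added for the example.

```python
import cv2
import numpy as np

def detect_trajectory(video_path: str, background: np.ndarray, diff_threshold: int = 30):
    """Track the centroid of non-model (non-background) pixels across associated frames."""
    trajectory = []                      # list of (frame_index, cx, cy)
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Pixels deviating from the background master are candidate "non-model" pixels.
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

        # Boundary extraction of the intruding pixels -> target model outline.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            target = max(contours, key=cv2.contourArea)
            m = cv2.moments(target)
            if m["m00"] > 0:
                trajectory.append((frame_index, m["m10"] / m["m00"], m["m01"] / m["m00"]))
        frame_index += 1
    capture.release()
    return trajectory
```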
In one embodiment, the background image typically varies with time, for example through light and shadow, weather, or semi-permanent obstructions entering or leaving the field of view. Such obstructions may be refuse, landscaping, cars parked temporarily or for a longer period, or items piled up temporarily or for a short time.
In one embodiment, so that the scheme can be popularized in the market, the embodiment of the invention provides an autonomous learning and replacement technique for the unit that builds the background image model, referred to as the model building unit. Specifically, a background model buffer space and a permanent storage space of preset capacities can be configured at the system level.
Optionally, the background model buffer space includes a plurality of model buffers, and the model registry may be used to store the background image model that is currently being changed. Meanwhile, the system can monitor changes of the background model at a certain frequency (for example, once every 2-5 minutes), using the target motion track as the judgment characteristic parameter.
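A minimal sketch of this storage layout is given below; the class and field names, the buffer size, and the 3-minute default are assumptions of this example, not values fixed by the method.

```python
import time
from collections import deque

class BackgroundModelStore:
    """Illustrative buffer / permanent spaces and the periodic monitoring trigger."""
    def __init__(self, buffer_size: int = 8, check_interval_s: float = 180.0):
        self.buffers = deque(maxlen=buffer_size)   # background model buffer space
        self.permanent = {}                        # permanent storage space, keyed by label
        self.temporary = None                      # model currently "in change"
        self.check_interval_s = check_interval_s   # e.g. one check every 2-5 minutes
        self._last_check = 0.0

    def due_for_check(self) -> bool:
        """True when the configured monitoring frequency says the model should be re-examined."""
        now = time.monotonic()
        if now - self._last_check >= self.check_interval_s:
            self._last_check = now
            return True
        return False
```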
For a frame of image G, each point (x, y) on the image is G(x, y); the background model is B, with each point B(x, y). For a multi-frame image sequence within a sampling period, the per-pixel difference is calculated as follows:
Diff_GB_i(x, y) = (G_i(x, y) - B(x, y))^2    (1)
where i = 1, 2, ..., n, and n is the number of images acquired within the sampling period.
The results for the most recent n frames of candidate target images are saved in a FIFO (First In First Out) channel. When a new frame arrives, formula (1) is computed against the background model and the result is added to the FIFO channel, while the candidate target image at the head of the channel is discarded; as shown in FIG. 3, when D_{k+1} is added, D_{k-n} is discarded.
The per-pixel mean and standard deviation over the FIFO window are then calculated, where the mean of the difference images is

Avg_diff(x, y) = ∑_{i=1}^{n} Diff_GB_i(x, y) / n    (2)

and the standard deviation is

s(x, y) = sqrt( ∑_{i=1}^{n} (Diff_GB_i(x, y) - Avg_diff(x, y))^2 / n )    (3)

If s(x, y) > T (T is a threshold constant), the point is considered a foreground point; otherwise it is a background point. The following describes the target tracking process for foreground points:
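Formulas (1)-(3) and the FIFO channel can be sketched as follows with NumPy; the window length n and the threshold T used here are placeholders, not values taken from the text.

```python
from collections import deque
import numpy as np

class ForegroundDetector:
    """Per-pixel squared difference (1), its mean (2) and standard deviation (3) over a FIFO window."""
    def __init__(self, background: np.ndarray, window: int = 10, threshold: float = 50.0):
        self.background = background.astype(np.float64)
        self.fifo = deque(maxlen=window)   # oldest Diff_GB image is dropped automatically
        self.threshold = threshold         # T in the text

    def update(self, frame: np.ndarray) -> np.ndarray:
        """Add a frame and return a boolean mask of foreground points (s(x, y) > T)."""
        diff = (frame.astype(np.float64) - self.background) ** 2      # formula (1)
        self.fifo.append(diff)
        stack = np.stack(self.fifo)                                   # n most recent Diff_GB images
        avg = stack.mean(axis=0)                                      # formula (2)
        s = np.sqrt(((stack - avg) ** 2).mean(axis=0))                # formula (3)
        return s > self.threshold
```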
In one embodiment, an eight-neighborhood seeded region growing algorithm may be used to connect adjacent points satisfying s(x, y) > T into region blocks Box_j, where j = 1, 2, ..., m and m is the total number of segments within a sampling period.
Alternatively, the centroid of a region block can be calculated using the following equation (4), i.e., as the mean of the coordinates of the points in the block:

Mass-center = ∑_{(x, y) ∈ Box} (x, y) / i    (4)

wherein Mass-center is the centroid of the region block, Box represents the region block, and i is the number of points in the block.
If the length |Vector_len| of the vector formed by the successive centroids of a region block is greater than Tm (Tm is a threshold constant), a true target is considered to be present within that sampling period.
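The region-block step can be sketched as below: OpenCV's 8-connected components stand in for the eight-neighborhood seeded region growing, the per-block centroid follows formula (4), and a true target is reported when the centroid displacement over a sampling period exceeds Tm. The minimum-area filter and the concrete thresholds are assumptions of this example.

```python
import cv2
import numpy as np

def region_blocks_and_centroids(mask: np.ndarray, min_area: int = 20):
    """8-connected region blocks Box_j and their centroids (Mass-center), as in formula (4)."""
    mask_u8 = mask.astype(np.uint8) * 255
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask_u8, connectivity=8)
    blocks = []
    for j in range(1, count):                       # label 0 is the background
        if stats[j, cv2.CC_STAT_AREA] >= min_area:  # ignore tiny speckles (assumed filter)
            blocks.append(tuple(centroids[j]))      # (x, y) centroid of Box_j
    return blocks

def has_true_target(centroid_track: list, tm: float = 15.0) -> bool:
    """A true target exists when the centroid displacement vector length exceeds Tm."""
    if len(centroid_track) < 2:
        return False
    start, end = np.array(centroid_track[0]), np.array(centroid_track[-1])
    return float(np.linalg.norm(end - start)) > tm
```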
In one embodiment, an effect diagram of the moving object extracted according to the above method is shown in fig. 4. Fig. 4 shows the 4th, 13th, 22nd, and 32nd infrared frame images and the corresponding detection images: the first four images are the infrared images of these frames, and the last four images are their detection images.
When the monitoring unit finds that the background model has changed, the target contour is extracted from the changed image to form a target, and the target is tracked. Two outcomes may then occur. One is that the target moves along a set movement track, and the movement process is extracted to complete identification. The other is that the system recognizes a target but does not obtain the set result; in this case the system starts the background image model learning unit under the judgment condition "set result not obtained", and the learning unit superimposes the "target" onto the "no-result" image according to the following formula (5) and stores the result in the model temporary storage area. The model replacement judgment and monitoring strategy is then executed within the next specified time.
The image after background superposition is given by formula (5), wherein m is the total number of segments within a sampling period, Box_j is a region block, B(x, y) is the original background model, and the weight p ranges over [0, 1].
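Formula (5) itself is not reproduced in the text above, so the sketch below only illustrates one plausible reading: inside the union of the region blocks the current frame is blended into the stored background with weight p, and the result is written to the model temporary storage area. This blending rule is an assumption made for illustration, not the formula of the patent.

```python
import numpy as np

def superimpose_background(frame: np.ndarray, background: np.ndarray,
                           block_mask: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Hypothetical reading of formula (5): blend frame and background inside the region blocks.

    block_mask is True inside the union of the Box_j region blocks; p is the weight in [0, 1].
    """
    result = background.astype(np.float64).copy()
    blended = p * frame.astype(np.float64) + (1.0 - p) * result
    result[block_mask] = blended[block_mask]
    return result.astype(background.dtype)
```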
In one embodiment, the model replacement monitoring strategy covers a variety of application scenarios and choices. Generally, the data sampling density of the monitoring process is increased (by a factor of 2-3) relative to the original monitoring process. Secondly, under the model replacement monitoring strategy, the comparison object for the image data sampled by the system is the temporary model stored in the model temporary storage area; when the temporary model differs from the sampled model, the system replaces the original temporary model with the sampled model, which becomes the new temporary model.
For the sampled model B_N(x, y) and the original temporary model B(x, y), the following calculation is performed:
Diff_B(x, y) = (B_N(x, y) - B(x, y))^2    (6)
wherein M and N are the number of rows and columns of the image matrix.
If Diff_B > Td (Td is a threshold constant), a difference is considered to be present; otherwise, no difference is present.
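The replacement check around formula (6) can be sketched as follows. Because the aggregation of the per-pixel differences over the M x N matrix is left implicit above, the mean over all pixels is used here as an assumed scalar summary; Td is likewise a placeholder value.

```python
import numpy as np

def should_replace_temporary(sampled_model: np.ndarray, temporary_model: np.ndarray,
                             td: float = 100.0) -> bool:
    """Compare a newly sampled model B_N with the stored temporary model B, per formula (6).

    The per-pixel squared difference is averaged over the M x N image (an assumption of
    this sketch) and compared against the threshold constant Td.
    """
    diff_b = np.mean((sampled_model.astype(np.float64) - temporary_model.astype(np.float64)) ** 2)
    return diff_b > td

# Usage: when should_replace_temporary(...) returns True, the sampled model becomes the
# new temporary model, as described in the monitoring strategy above.
```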
Meanwhile, the system presets a parameter customization interface that can be provided to a service user for different application scenarios; the interface can be used to set the execution time of the model replacement monitoring strategy. When the duration parameter reaches a preset value, the temporary model in the temporary storage area is compared with the background model in the permanent model library, and if the difference between them reaches a set threshold (which is adjustable), the original permanent model is replaced with the temporary model to form a new permanent model.
As shown in fig. 5, fig. 5 adds a period learning control module compared to fig. 2. In many application scenarios, the background image changes periodically, for example sunrise and sunset shadows, public transportation that runs on a fixed schedule and route, longer-term changes such as the four seasons, or active dynamic changes of the camera component following a given track or motion sampling mode. To avoid frequent model replacement caused by such periodic factors and the resulting computational burden on the system, the application designs a model period learning algorithm. The system-level change for this algorithm is the addition of a semi-permanent model storage area. The semi-permanent model storage area is used to store multiple sets of models undergoing change; that is, models that have been replaced are not discarded completely, but are sampled at a certain density and stored in the semi-permanent model storage area. The model period learning algorithm records the generation time of the model in each semi-permanent storage slot and compares the image sampled in real time with the models in the semi-permanent storage area one by one, using repetition or periodic repetition as the criterion for whether the background follows a periodic rule. When a model is found to occur periodically, the periodicity is recorded. The model and its likely time points are then extracted according to the periodic rule and fitted into a relation graph; the relation graph is used to predict the change rule of the background, and the likely model is swapped into the permanent model library in advance, realizing intelligent predictive matching. This further reduces the computational load of the system, effectively lowers the computing power requirement, and improves the scene adaptability of the system product.
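The period-learning idea can be sketched roughly as follows: every retired model is kept in the semi-permanent storage area together with the times at which it was seen, and when the intervals between matches are roughly equal, their mean is reported as the period. The matching metric (mean squared difference) and both tolerances are assumptions of this example.

```python
import numpy as np

class SemiPermanentStore:
    """Illustrative semi-permanent model area with simple period detection."""
    def __init__(self, match_threshold: float = 100.0, period_tolerance_s: float = 600.0):
        self.entries = []                    # list of (model, [timestamps when it matched])
        self.match_threshold = match_threshold
        self.period_tolerance_s = period_tolerance_s

    def observe(self, model: np.ndarray, timestamp: float):
        """Record a model sample; return a detected period in seconds, or None."""
        for stored, times in self.entries:
            diff = np.mean((stored.astype(np.float64) - model.astype(np.float64)) ** 2)
            if diff < self.match_threshold:
                times.append(timestamp)
                return self._period_of(times)
        self.entries.append((model, [timestamp]))
        return None

    def _period_of(self, times):
        """If the intervals between matches are roughly equal, report their mean as the period."""
        if len(times) < 3:
            return None
        gaps = np.diff(sorted(times))
        if np.ptp(gaps) <= self.period_tolerance_s:
            return float(np.mean(gaps))
        return None
```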
The following describes a target movement track detection apparatus, an electronic device, and a computer storage medium according to embodiments of the present invention, where the target movement track detection apparatus, the electronic device, and the computer storage medium described below may be referred to correspondingly with the target movement track detection method described above. Fig. 6 is a schematic structural diagram of a target movement track detection device according to an embodiment of the present invention, as shown in fig. 6, where the target movement track detection device includes:
the acquiring module 601 is configured to monitor and sample an acquired real-time video with a preset background image model as a comparison master, and acquire video frame information;
the extracting module 602 is configured to, when determining that the video frame information contains non-model pixels, perform boundary extraction on the non-model pixels, and determine a target model;
the comparison module 603 is configured to compare the associated frame data of the target model in the real-time video, and obtain comparison data;
a determining module 604, configured to determine a movement track of the target model based on the comparison data.
Optionally, in one embodiment, the acquiring module 601 is configured to acquire a background image; constructing a background image model based on the background image; and taking the background image model as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
Optionally, in one embodiment, the obtaining module 601 is configured to extract image texture information of the background image by using a deep learning algorithm; and constructing a background image model based on the image texture information.
Optionally, in one embodiment, the target movement track detection device further includes:
the information determining module is used for determining model replacement monitoring strategy information;
and the replacement module is used for replacing the background image model according to the model replacement monitoring strategy information.
Optionally, in an embodiment, the replacing module is configured to collect the current background image according to a preset data collection frequency; construct a current background image model based on the current background image; determine the degree of difference between the current background image model and the temporary background image model; and, when the difference degree is determined to be greater than a difference degree threshold, replace the temporary background image model with the current background image model according to the model replacement monitoring strategy information.
Optionally, in one embodiment, the obtaining module 601 is configured to determine time information corresponding to the real-time video; determining a background image model corresponding to the time information according to the time information; and taking the background image model corresponding to the time information as a comparison master plate, and monitoring and sampling the real-time video to obtain video frame information.
Each module in the target movement track detection device provided in fig. 6 has the function of implementing each step in the example shown in fig. 1 and achieves the same technical effects as the target movement track detection method shown in fig. 1; for brevity, a detailed description is omitted here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
The electronic device may include a processor 701 and a memory 702 storing computer program instructions.
In particular, the processor 701 may comprise a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 702 may include mass storage for data or instructions. By way of example, and not limitation, memory 702 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. The memory 702 may include removable or non-removable (or fixed) media, where appropriate. Memory 702 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 702 is a non-volatile solid state memory. In a particular embodiment, the memory 702 includes Read Only Memory (ROM). The ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor 701 reads and executes the computer program instructions stored in the memory 702 to implement the target movement trajectory detection method shown in fig. 1.
In one example, the electronic device may also include a communication interface 703 and a bus 710. As shown in fig. 7, the processor 701, the memory 702, and the communication interface 703 are connected by a bus 710 and perform communication with each other.
The communication interface 703 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiment of the present invention.
Bus 710 includes hardware, software, or both that couple the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus, or a combination of two or more of the above. Bus 710 may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
In addition, embodiments of the present invention may be implemented by providing a computer storage medium. The computer storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement the target movement trajectory detection method shown in fig. 1.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.

Claims (9)

1. A target movement trajectory detection method, characterized by comprising:
taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information;
when the non-model pixels are determined to exist in the video frame information, extracting boundaries of the non-model pixels, and determining a target model;
comparing the associated frame data of the target model in the real-time video to obtain comparison data;
determining a moving track of the target model based on the comparison data;
before taking the preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information, the method further comprises: determining model replacement monitoring strategy information; and replacing the background image model according to the model replacement monitoring strategy information;
the determined model replaces monitoring strategy information; according to the model replacement monitoring strategy information, replacing the background image model comprises the following steps: collecting a current background image according to a preset data collection frequency; constructing a current background image model based on the current background image; determining a degree of difference between the current background image model and a temporary background image model; and when the difference degree is determined to be larger than a difference degree threshold value, monitoring strategy information according to the model replacement, and replacing the temporary background image model by the current background image model.
2. The method for detecting a moving track of an object according to claim 1, wherein the monitoring and sampling the acquired real-time video by using the preset background image model as a comparison master, and acquiring video frame information, includes:
collecting a background image;
constructing the background image model based on the background image;
and taking the background image model as the comparison master plate, monitoring and sampling the real-time video, and obtaining the video frame information.
3. The target movement trajectory detection method according to claim 2, wherein the constructing the background image model based on the background image includes:
extracting image texture information of the background image by using a deep learning algorithm;
and constructing the background image model based on the image texture information.
4. The method for detecting a moving track of an object according to claim 1, wherein the monitoring and sampling the acquired real-time video by using the preset background image model as a comparison master, and acquiring video frame information, includes:
determining time information corresponding to the real-time video;
determining a background image model corresponding to the time information according to the time information;
and taking the background image model corresponding to the time information as the comparison master plate, and monitoring and sampling the real-time video to obtain the video frame information.
5. An object movement trajectory detection device, characterized by comprising:
the acquisition module is used for taking a preset background image model as a comparison master plate, monitoring and sampling the acquired real-time video, and acquiring video frame information;
the extraction module is used for extracting boundaries of the non-model pixels when the non-model pixels are determined to exist in the video frame information, and determining a target model;
the comparison module is used for comparing the associated frame data of the target model in the real-time video to obtain comparison data;
the determining module is used for determining the moving track of the target model based on the comparison data;
the information determining module is used for determining model replacement monitoring strategy information;
the replacement module is used for replacing the background image model according to the model replacement monitoring strategy information;
the replacing module is used for collecting the current background image according to the preset data collection frequency; constructing a current background image model based on the current background image; determining a degree of difference between the current background image model and a temporary background image model; and when the difference degree is determined to be greater than a difference degree threshold, replacing the temporary background image model with the current background image model according to the model replacement monitoring strategy information.
6. The object movement track detection device according to claim 5, wherein the acquisition module is configured to acquire a background image; constructing the background image model based on the background image; and taking the background image model as the comparison master plate, monitoring and sampling the real-time video, and obtaining the video frame information.
7. The object movement trajectory detection device according to claim 6, wherein the acquisition module is configured to extract image texture information of the background image using a deep learning algorithm; and constructing the background image model based on the image texture information.
8. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the target movement trajectory detection method according to any one of claims 1-4.
9. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the target movement trajectory detection method of any one of claims 1-4.
CN201911401885.5A 2019-12-31 2019-12-31 Target movement track detection method, device, equipment and computer storage medium Active CN113129331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401885.5A CN113129331B (en) 2019-12-31 2019-12-31 Target movement track detection method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401885.5A CN113129331B (en) 2019-12-31 2019-12-31 Target movement track detection method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113129331A CN113129331A (en) 2021-07-16
CN113129331B true CN113129331B (en) 2024-01-30

Family

ID=76768529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401885.5A Active CN113129331B (en) 2019-12-31 2019-12-31 Target movement track detection method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113129331B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015184773A (en) * 2014-03-20 2015-10-22 日本ユニシス株式会社 Image processor and three-dimensional object tracking method
CN105139426A (en) * 2015-09-10 2015-12-09 南京林业大学 Video moving object detection method based on non-down-sampling wavelet transformation and LBP
WO2017132506A1 (en) * 2016-01-29 2017-08-03 Jia Li Local augmented reality persistent sticker objects
CN110533687A (en) * 2018-05-11 2019-12-03 深眸科技(深圳)有限公司 Multiple target three-dimensional track tracking and device
CN108805897A (en) * 2018-05-22 2018-11-13 安徽大学 A kind of improved moving object detection VIBE algorithms
CN109859236A (en) * 2019-01-02 2019-06-07 广州大学 Mobile object detection method, calculates equipment and storage medium at system
CN109919008A (en) * 2019-01-23 2019-06-21 平安科技(深圳)有限公司 Moving target detecting method, device, computer equipment and storage medium
CN110060278A (en) * 2019-04-22 2019-07-26 新疆大学 The detection method and device of moving target based on background subtraction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A contour-based moving object detection and tracking;Masayuki Yokoyama等;《2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance》;271-276 *
Background subtraction algorithm based human motion detection;Rupali S.Rakibe等;《International Journal of scientific and research publications》;第3卷(第5期);2250-3153 *
Research on video-based human motion capture algorithms; Fu Na; China Masters' Theses Full-text Database, Information Science and Technology (No. 03); I138-7173 *

Also Published As

Publication number Publication date
CN113129331A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN109961106B (en) Training method and device of trajectory classification model and electronic equipment
CN109919008A (en) Moving target detecting method, device, computer equipment and storage medium
CN102902955A (en) Method and system for intelligently analyzing vehicle behaviour
CN104134222A (en) Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN110795595A (en) Video structured storage method, device, equipment and medium based on edge calculation
CN105654508A (en) Monitoring video moving target tracking method based on self-adaptive background segmentation and system thereof
US11107237B2 (en) Image foreground detection apparatus and method and electronic device
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN107248296B (en) Video traffic flow statistical method based on unmanned aerial vehicle and time sequence characteristics
CN103646257A (en) Video monitoring image-based pedestrian detecting and counting method
CN102902960A (en) Leave-behind object detection method based on Gaussian modelling and target contour
Yaghoobi Ershadi et al. Vehicle tracking and counting system in dusty weather with vibrating camera conditions
CN114724131A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN113256690A (en) Pedestrian multi-target tracking method based on video monitoring
CN103400395A (en) Light stream tracking method based on HAAR feature detection
CN113129331B (en) Target movement track detection method, device, equipment and computer storage medium
CN117152949A (en) Traffic event identification method and system based on unmanned aerial vehicle
CN110728229B (en) Image processing method, device, equipment and storage medium
CN109034171B (en) Method and device for detecting unlicensed vehicles in video stream
CN111079612A (en) Method and device for monitoring retention of invading object in power transmission line channel
CN116091964A (en) High-order video scene analysis method and system
Yao et al. Embedded technology and algorithm for video-based vehicle queue length detection
CN110858392A (en) Monitoring target positioning method based on fusion background model
CN110781797B (en) Labeling method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant