CN107680118B - Image identification tracking method - Google Patents

Image identification tracking method

Info

Publication number
CN107680118B
CN107680118B (application CN201710755102.8A)
Authority
CN
China
Prior art keywords
image
tracking
initial
region
initial image
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710755102.8A
Other languages
Chinese (zh)
Other versions
CN107680118A (en)
Inventor
郑智太
Current Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Original Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Priority date
Filing date
Publication date
Application filed by Qisda Optronics Suzhou Co Ltd, Qisda Corp filed Critical Qisda Optronics Suzhou Co Ltd
Priority to CN201710755102.8A
Publication of CN107680118A
Application granted
Publication of CN107680118B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image identification and tracking method comprising steps S1 to S5. Step S1: capture a plurality of consecutive initial images and analyze them to obtain foreground images. Step S2: perform blob detection on the first foreground image to determine a moving blob set and its range. Step S3: filter the moving blob set; the filtered region is defined as a first filtered region. Step S4: track the blob set of interest as a judgment basis for the second initial image. Step S5: identify the first filtered region. Step S4 comprises steps S41 and S42. Step S41: calculate a first predicted region from the first tracking input region according to a tracking algorithm. Step S42: calculate an overlap ratio; if the overlap ratio is greater than a first threshold, use the first filtered region as a second tracking input region; otherwise, use the first predicted region as the second tracking input region.

Description

Image identification tracking method
Technical Field
The present invention relates to an image identification and tracking method, and more particularly, to an image identification and tracking method with high identification speed and high tracking accuracy.
Background
In the prior art, most imaging devices that detect the presence of a human body adopt various human-shape templates or models and compare them with the captured images to identify the human body. The required image comparison, however, involves a huge amount of computation and consumes considerable time, so the image identification speed is slow.
In addition, target tracking is generally performed by selecting a block to be tracked in the current image as the tracking input block of the next image; the block output after the tracking operation on the next image then serves as the tracking input block of the image after that, so that tracking continues frame by frame. Because, depending on the tracking algorithm and the operating conditions, the error between the output block and the tracked object may keep growing, the tracking accuracy is low.
Disclosure of Invention
In order to solve the problems of slow image recognition speed and low tracking precision, the invention provides an image recognition and tracking method.
The image recognition and tracking method is used for recognizing and tracking a first object in an image, and comprises the following steps:
step S1: capturing a plurality of consecutive initial images and analyzing the plurality of initial images to obtain corresponding foreground images, wherein the plurality of initial images comprise a first initial image and a second initial image that are consecutive, and the foreground image corresponding to the first initial image is a first foreground image;
step S2: performing blob detection on the first foreground image to determine a set of moving blobs in the first foreground image and a first set of rectangular boundaries corresponding to the set of moving blobs, wherein each rectangular boundary in the first set of rectangular boundaries is a minimum rectangular boundary capable of covering the corresponding moving blob in the set of moving blobs;
step S3: filtering the moving blob set according to preset image-area and non-zero-pixel-density requirements to remove light-and-shadow regions and regions that do not belong to the first object, wherein the filtered moving blob set is defined as a blob set of interest, and the blob set of interest together with the region of the first initial image enclosed by its corresponding rectangular boundaries in the first rectangular boundary set is defined as a first filtered region;
step S4: tracking the blob set of interest of the first filtered region as a judgment basis for the second initial image, so as to prevent the blob set of interest from being mistakenly removed while the second initial image is being identified and tracked; and
step S5: identifying the first filtered region and judging whether the first object exists;
wherein, in step S4, tracking the blob set of interest comprises:
step S41: calculating a first predicted region, wherein a first tracking input region of the first initial image is processed according to a tracking algorithm to obtain the first predicted region, which predicts the region of the blob set of interest in the second initial image, and the first tracking input region is provided by the initial image preceding the first initial image; and
step S42: calculating an overlap ratio, wherein the overlap ratio is the ratio of a first overlapping region to the first filtered region, the first overlapping region being the overlap between the first predicted region and the first filtered region; if the overlap ratio is greater than a first threshold, the first filtered region is used as a second tracking input region of the second initial image; if not, the first predicted region is used as the second tracking input region of the second initial image, wherein the second tracking input region is used for tracking the blob set of interest in the second initial image.
As an optional technical solution, in the step S41, the tracking algorithm is a KCF algorithm.
As an alternative solution, the first threshold is in the range of 0.6 to 1.
As an optional technical solution, the predetermined image area and the non-zero pixel density requirement are both determined according to an installation position and a resolution of an image capturing device, and the image capturing device is configured to capture the first initial image and the second initial image.
As an optional technical solution, in step S1, the plurality of initial images are processed by background modeling and removal, or by subtraction of consecutive initial images, to obtain the foreground images.
As an optional technical solution, the first object is a human body.
As an optional technical solution, in step S2, after blob detection is performed on the first foreground image, it is determined whether the moving blob set exists in the first foreground image; if so, the method proceeds to step S3; if not, recognition is performed directly on the entire first initial image to determine whether it contains the first object.
As an optional technical solution, the first foreground image is stored as a binary image.
As an optional technical solution, in step S41, if the first initial image is the earliest of the plurality of initial images, the first filtered region, or another initial region, is defined as the first tracking input region.
As an optional technical solution, after step S4, the second initial image undergoes the same identification and tracking steps as the first initial image, and the second tracking input region so generated serves as the tracking input region of a third initial image, wherein the third initial image is the image following the second initial image.
Compared with the prior art, the image identification tracking method compares the first predicted region with the first filtered region at each tracking step to determine the first overlapping region, determines the overlap ratio from the ratio of the first overlapping region to the first filtered region, and selects the input region of the next tracking step according to the overlap ratio, so that the region used for the next tracking step does not drift away from the target to be tracked; with continuous updating, the tracking accuracy can therefore be greatly improved. Moreover, because only the first filtered region needs to be identified, the image identification speed can also be greatly improved.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a flowchart illustrating an image recognition and tracking method according to the present invention;
FIG. 2 is a flowchart illustrating the tracking of the blob set of interest in the image recognition and tracking method according to the present invention.
Detailed Description
FIG. 1 is a flowchart illustrating an image recognition and tracking method according to the present invention. Referring to FIG. 1, the image recognition and tracking method 100 is used for recognizing and tracking a first object in an image. In this embodiment, the first object may be a human, but in other embodiments the first object may also be another object to be recognized and tracked, such as a cat or a car.
The image recognition and tracking method 100 includes steps S1 to S5.
Step S1: capture a plurality of consecutive initial images and analyze them to obtain corresponding foreground images. The plurality of initial images include a first initial image and a second initial image that are consecutive. In the present embodiment, an image capturing device (e.g., a camera or a monitor) is used to capture the initial images. In practice, the first initial image and the second initial image are two consecutive images captured by the image capturing device; for example, the first initial image may be the first image captured by the device and the second initial image the second. The first initial image has a foreground image, defined as the first foreground image, which is stored as a binary image. In this embodiment, the first foreground image is obtained by background modeling and removal. Specifically, the background model is built by analyzing the consecutive initial images, for example by simply treating still or barely moving content as background (other algorithms may of course be used to set the background), and the background of the first initial image is then removed according to the background model to obtain the corresponding first foreground image. In other embodiments, the first foreground image may be obtained by other methods, for example by subtracting the first initial image from its adjacent initial image (temporal frame differencing) and then binarizing the result.
After step S1, the first foreground image is selected so as to narrow the range of the subsequent recognition, avoiding recognition of background content that is of no significance to the user.
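By way of illustration only, the following Python sketch (using the OpenCV library) shows one possible way to obtain such a binary foreground image, either by background modeling and removal or by subtraction of consecutive initial images; the function names, threshold values, and MOG2 parameters are assumptions made for this example and are not prescribed by the method.

```python
# Illustrative sketch only (assumes OpenCV 4.x); parameter values are arbitrary examples.
import cv2

# Option 1: background modeling and removal.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

def foreground_by_background_model(frame):
    """Return a binary foreground mask for one initial image."""
    mask = bg_subtractor.apply(frame)
    # Binarize: MOG2 marks shadows with value 127, so keep only confident foreground (255).
    _, binary = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    return binary

# Option 2: subtraction of consecutive initial images.
def foreground_by_frame_difference(prev_frame, curr_frame, diff_threshold=25):
    """Return a binary foreground mask from two consecutive initial images."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, binary = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return binary
```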
Step S2: perform blob detection on the first foreground image to determine a set of moving blobs in the first foreground image and a first set of rectangular boundaries corresponding to the set of moving blobs, where each rectangular boundary in the first set is the minimum rectangular boundary that covers the corresponding moving blob. After blob detection, the blobs of interest to the user are obtained, further narrowing the identification range.
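As one purely illustrative embodiment of step S2 (a sketch assuming an OpenCV 4.x build, in which findContours returns two values), the moving blobs in the binary foreground image can be extracted as contours, and the minimum upright rectangle covering each blob obtained with cv2.boundingRect; the helper name detect_moving_blobs is hypothetical.

```python
# Illustrative sketch only (assumes OpenCV 4.x).
import cv2

def detect_moving_blobs(binary_foreground):
    """Return a list of (contour, bounding_rect) pairs forming the moving blob set."""
    contours, _ = cv2.findContours(binary_foreground,
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)  # minimum upright rectangle covering the blob
        blobs.append((contour, (x, y, w, h)))
    return blobs
```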
Step S3: filter the moving blob set according to preset image-area and pixel-density requirements to remove light-and-shadow regions and regions that do not belong to the first object. The filtered moving blob set is defined as the blob set of interest, and the blob set of interest together with the region of the first initial image enclosed by its corresponding rectangular boundaries in the first rectangular boundary set forms the first filtered region. After the blob detection of step S2 yields a coarse region of interest, step S3 refines it further, for example by presetting reasonable image-area and non-zero-pixel-density requirements according to the installation position and resolution of the image capturing device, thereby locating the first filtered region within the coarser region; in this way, the range to be identified can be greatly reduced.
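A minimal sketch of step S3 follows; the area and density thresholds shown are hypothetical placeholders that would, in practice, be preset from the installation position and resolution of the image capturing device.

```python
# Illustrative sketch only; threshold values are hypothetical examples.
import cv2

def filter_blobs(blobs, binary_foreground,
                 min_area=800, max_area=100000, min_density=0.3):
    """Keep only blobs whose bounding-rectangle area and non-zero pixel density
    satisfy the preset requirements; the survivors form the blob set of interest."""
    blobs_of_interest = []
    for contour, (x, y, w, h) in blobs:
        rect_area = w * h
        if not (min_area <= rect_area <= max_area):
            continue  # too small (noise) or too large (e.g. a global lighting change)
        roi = binary_foreground[y:y + h, x:x + w]
        density = cv2.countNonZero(roi) / float(rect_area)
        if density < min_density:
            continue  # sparse blobs are typically light-and-shadow artifacts
        blobs_of_interest.append((contour, (x, y, w, h)))
    return blobs_of_interest
```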
Step S4: track the blob set of interest of the first filtered region and use the tracking result as a judgment basis for the second initial image, so as to prevent the blob set of interest from being mistakenly removed while the second initial image is being identified and tracked. The purpose of tracking here is to avoid losing the target to be tracked (such as the blob set representing the first object) from the foreground image and the blob set of interest when its motion becomes small or even stops, and to prevent inactive blobs from being eliminated during foreground extraction or blob detection; the integrity of the target to be tracked (such as the first object) can thereby be ensured. For example, while the second initial image is being identified, the tracking result obtained from the first initial image in step S4 can be used to judge whether the corresponding foreground image or identification result is accurate and reasonable, reducing misjudgments. In addition, the tracking result obtained in step S4 for the first initial image makes it possible to determine whether the moving object identified and tracked in the second initial image is the same as the one appearing in the first initial image, for example whether the person (if any) appearing in the second initial image is the person who appeared in the first initial image.
Step S5: identify the first filtered region and determine whether the first object exists. Since the image identification tracking method 100 only needs to identify the image within the first filtered region, the range of image identification is greatly reduced and the identification speed can be greatly improved.
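When the first object is a human body, one possible, purely illustrative embodiment of step S5 is to run a pedestrian detector only on the cropped first filtered region rather than on the whole image; the HOG-plus-SVM people detector below is an assumed choice of recognizer, not one prescribed by the method.

```python
# Illustrative sketch only; the HOG+SVM people detector is one possible recognizer.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def identify_first_object(initial_image, filtered_rect):
    """Run recognition only inside the first filtered region and report whether
    the first object (here assumed to be a human body) is present."""
    x, y, w, h = filtered_rect
    roi = initial_image[y:y + h, x:x + w]
    rects, _weights = hog.detectMultiScale(roi, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    return len(rects) > 0
```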
FIG. 2 is a flowchart illustrating the tracking of the blob set of interest in the image recognition and tracking method according to the present invention. Referring to FIG. 2, in step S4, tracking the blob set of interest includes steps S41 and S42.
Step S41: calculate a first predicted region. The first tracking input region of the first initial image is processed according to a tracking algorithm to obtain the first predicted region, which predicts the region of the blob set of interest in the second initial image; the first tracking input region is provided by the initial image preceding the first initial image. The tracking algorithm is, for example, the KCF (Kernelized Correlation Filters) algorithm.
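The following sketch shows one possible reading of step S41 using OpenCV's KCF tracker: the tracker is initialized with the first tracking input region on the first initial image and then updated on the second initial image to obtain the first predicted region. TrackerKCF_create requires an OpenCV build with the tracking (contrib) module and in some 4.x builds is exposed as cv2.legacy.TrackerKCF_create; the fallback on tracking failure is an assumption of this example, not part of the claimed method.

```python
# Illustrative sketch only; requires opencv-contrib-python (KCF tracker).
import cv2

def predict_region_with_kcf(first_initial_image, first_tracking_input_rect,
                            second_initial_image):
    """Initialize a KCF tracker on the first tracking input region of the first
    initial image and return the first predicted region in the second initial image."""
    tracker = cv2.TrackerKCF_create()
    tracker.init(first_initial_image, first_tracking_input_rect)  # rect: (x, y, w, h)
    ok, predicted_rect = tracker.update(second_initial_image)
    return predicted_rect if ok else first_tracking_input_rect  # fall back on failure
```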
Step S42: calculate the overlap ratio. The overlap ratio is the ratio of the first overlapping region, i.e., the overlap between the first predicted region and the first filtered region, to the first filtered region. If the overlap ratio is greater than a first threshold, the first filtered region is used as the second tracking input region of the second initial image; if not, the first predicted region is used as the second tracking input region, which is then used for tracking the blob set of interest in the second initial image. The value of the first threshold depends on the actual situation; in the present embodiment it may range from 0.6 to 1, and is preferably 0.8, meaning that the first predicted region overlaps more than 80% of the first filtered region.
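The overlap ratio and the choice of the second tracking input region can be sketched as follows; rectangles are assumed to be (x, y, w, h) tuples and the 0.8 default merely reflects the preferred threshold mentioned above.

```python
# Illustrative sketch only; rectangles are (x, y, w, h) tuples.
def overlap_ratio(predicted_rect, filtered_rect):
    """Ratio of the first overlapping region to the first filtered region."""
    px, py, pw, ph = predicted_rect
    fx, fy, fw, fh = filtered_rect
    ix = max(px, fx)
    iy = max(py, fy)
    iw = max(0, min(px + pw, fx + fw) - ix)
    ih = max(0, min(py + ph, fy + fh) - iy)
    return (iw * ih) / float(fw * fh)

def choose_second_tracking_input(predicted_rect, filtered_rect, first_threshold=0.8):
    """Step S42: keep the filtered region if the prediction overlaps it well enough,
    otherwise fall back to the predicted region."""
    if overlap_ratio(predicted_rect, filtered_rect) > first_threshold:
        return filtered_rect
    return predicted_rect
```

Note that the ratio is taken with respect to the first filtered region, not the union of the two rectangles, so a prediction that fully covers the filtered region yields a ratio of 1.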
At each tracking step, the image recognition tracking method 100 of the present invention compares the first predicted region with the first filtered region to determine the first overlapping region, determines the overlap ratio from the ratio of the first overlapping region to the first filtered region, and then selects the input region of the next tracking step according to the overlap ratio. This ensures that the region used for the next tracking step does not drift away from the target to be tracked, so the tracking accuracy can be greatly improved under continuous updating.
In this embodiment, in step S2, after blob detection is performed on the first foreground image, it is determined whether a moving blob set exists in the first foreground image; if so, the method proceeds to step S3; if not, recognition is performed directly on the entire first initial image to determine whether it contains the first object. Step S2 thus serves as a preliminary check for whether a blob set of interest can exist; if it cannot, the whole image is compared instead.
In this embodiment, in step S41, if the first initial image is the earliest of the plurality of initial images, the first filtered region, or another initial region, is defined as the first tracking input region.
In this embodiment, after step S4, the second initial image undergoes the same identification and tracking steps as the first initial image, generating a second tracking input region that serves as the tracking input region of a third initial image, where the third initial image is the image following the second initial image. In this way, each initial image has its tracking input region set by the previous initial image, so tracking accuracy can be maintained under continuous updating.
In summary, at each tracking step the image recognition tracking method of the present invention compares the first predicted region with the first filtered region to determine the first overlapping region, determines the overlap ratio from the ratio of the first overlapping region to the first filtered region, and selects the input region of the next tracking step according to the overlap ratio, ensuring that the region used for the next tracking step does not drift away from the target to be tracked; the tracking accuracy can therefore be greatly improved under continuous updating. Moreover, because only the first filtered region needs to be identified, the image identification speed can also be greatly improved.
The present invention is capable of other embodiments, and various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention.

Claims (10)

1. An image recognition and tracking method for recognizing and tracking a first object in an image, the image recognition and tracking method comprising:
step S1: capturing a plurality of consecutive initial images and analyzing the plurality of initial images to obtain corresponding foreground images, wherein the plurality of initial images comprise a first initial image and a second initial image that are consecutive, and the foreground image corresponding to the first initial image is a first foreground image;
step S2: performing blob detection on the first foreground image to determine a set of moving blobs in the first foreground image and a first set of rectangular boundaries corresponding to the set of moving blobs, wherein each rectangular boundary in the first set of rectangular boundaries is a minimum rectangular boundary capable of covering the corresponding moving blob in the set of moving blobs;
step S3: filtering the moving blob set according to preset image-area and non-zero-pixel-density requirements to remove light-and-shadow regions and regions that do not belong to the first object, wherein the filtered moving blob set is defined as a blob set of interest, and the blob set of interest together with the region of the first initial image enclosed by its corresponding rectangular boundaries in the first rectangular boundary set is defined as a first filtered region;
step S4: tracking the blob set of interest of the first filtered region as a judgment basis for a second tracking input region of the second initial image, so as to prevent the blob set of interest from being mistakenly removed while the second initial image is being identified and tracked; and
step S5: identifying the first filtered region and judging whether the first object exists;
wherein, in step S4, tracking the blob set of interest comprises:
step S41: calculating a first predicted region, wherein a first tracking input region of the first initial image is processed according to a tracking algorithm to obtain the first predicted region, which predicts the region of the blob set of interest in the second initial image, and the first tracking input region is provided by the initial image preceding the first initial image; and
step S42: calculating an overlap ratio, wherein the overlap ratio is the ratio of a first overlapping region to the first filtered region, the first overlapping region being the overlap between the first predicted region and the first filtered region; if the overlap ratio is greater than a first threshold, the first filtered region is used as the second tracking input region of the second initial image; if not, the first predicted region is used as the second tracking input region of the second initial image, wherein the second tracking input region is used for tracking the blob set of interest in the second initial image.
2. The image recognition tracking method of claim 1, wherein in the step S41, the tracking algorithm is a KCF algorithm.
3. The image recognition tracking method of claim 1, wherein the first threshold is in a range of 0.6 to 1.
4. The image recognition and tracking method of claim 1, wherein the predetermined image area and the non-zero pixel density requirement are both determined according to a mounting position and a resolution of an image capturing device, the image capturing device being configured to capture the first initial image and the second initial image.
5. The image recognition and tracking method of claim 1, wherein in step S1, the plurality of initial images are processed by background modeling and removal or by subtraction of consecutive initial images to obtain foreground images.
6. The image recognition and tracking method of claim 1, wherein the first object is a human body.
7. The image recognition and tracking method of claim 1, wherein in step S2, after blob detection is performed on the first foreground image, it is determined whether the moving blob set exists in the first foreground image; if so, step S3 is performed; if not, recognition is performed directly on the entire first initial image to determine whether it contains the first object.
8. The image recognition tracking method of claim 1, wherein the first foreground image is stored as a binary image.
9. The image recognition tracking method of claim 1, wherein in step S41, if the first initial image is the earliest of the plurality of initial images, the first filtered region, or another initial region of the first initial image, is defined as the first tracking input region.
10. The image recognition and tracking method of claim 1, wherein after step S4, the second initial image undergoes the same recognition and tracking steps as the first initial image to generate a second tracking input region as a tracking input region of a third initial image, wherein the third initial image is a next image of the second initial image.
CN201710755102.8A 2017-08-29 2017-08-29 Image identification tracking method Expired - Fee Related CN107680118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710755102.8A CN107680118B (en) 2017-08-29 2017-08-29 Image identification tracking method

Publications (2)

Publication Number Publication Date
CN107680118A CN107680118A (en) 2018-02-09
CN107680118B 2020-10-20

Family

ID=61134782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710755102.8A Expired - Fee Related CN107680118B (en) 2017-08-29 2017-08-29 Image identification tracking method

Country Status (1)

Country Link
CN (1) CN107680118B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860326B (en) * 2020-07-20 2023-09-26 品茗科技股份有限公司 Building site article movement detection method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5796924A (en) * 1996-03-19 1998-08-18 Motorola, Inc. Method and system for selecting pattern recognition training vectors
CN101303727A (en) * 2008-07-08 2008-11-12 北京中星微电子有限公司 Intelligent management method based on video human number Stat. and system thereof
CN101777114A (en) * 2009-01-08 2010-07-14 北京中星微电子有限公司 Intelligent analysis system and intelligent analysis method for video monitoring, and system and method for detecting and tracking head and shoulder
CN103577797A (en) * 2012-08-03 2014-02-12 华晶科技股份有限公司 Image identifying system and image identifying method thereof
CN104639835A (en) * 2014-02-14 2015-05-20 小绿草股份有限公司 Image tracking system and image tracking method
CN106023244A (en) * 2016-04-13 2016-10-12 南京邮电大学 Pedestrian tracking method based on least square locus prediction and intelligent obstacle avoidance model

Also Published As

Publication number Publication date
CN107680118A (en) 2018-02-09

Similar Documents

Publication Publication Date Title
US10192107B2 (en) Object detection method and object detection apparatus
Wu et al. Lane-mark extraction for automobiles under complex conditions
US9911055B2 (en) Method and system for detection and classification of license plates
US8902053B2 (en) Method and system for lane departure warning
EP3176751B1 (en) Information processing device, information processing method, computer-readable recording medium, and inspection system
CN109727275B (en) Object detection method, device, system and computer readable storage medium
CN109102026B (en) Vehicle image detection method, device and system
CN111382637B (en) Pedestrian detection tracking method, device, terminal equipment and medium
KR102474837B1 (en) Foreground area extracting method and apparatus
CN111783665A (en) Action recognition method and device, storage medium and electronic equipment
WO2023039781A1 (en) Method for detecting abandoned object, apparatus, electronic device, and storage medium
JP2014009975A (en) Stereo camera
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
US20190156138A1 (en) Method, system, and computer-readable recording medium for image-based object tracking
CN110866428A (en) Target tracking method and device, electronic equipment and storage medium
JP4292371B2 (en) Mobile monitoring device
CN107680118B (en) Image identification tracking method
JP6772059B2 (en) Electronic control devices, electronic control systems and electronic control methods
CN111062415B (en) Target object image extraction method and system based on contrast difference and storage medium
CN110363192B (en) Object image identification system and object image identification method
CN110796099A (en) Vehicle overrun detection method and device
CN110634124A (en) Method and equipment for area detection
CN103714552B (en) Motion shadow removing method and device and intelligent video analysis system
CN110766644B (en) Image down-sampling method and device
JP2020102212A (en) Smoke detection method and apparatus

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201020