CN115147751A - Method for counting station passenger flow in real time based on video image - Google Patents

Method for counting station passenger flow in real time based on video image

Info

Publication number
CN115147751A
CN115147751A (application number CN202210505988.1A)
Authority
CN
China
Prior art keywords
circle
counting
true
image
rho
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210505988.1A
Other languages
Chinese (zh)
Inventor
汪理
陶征勇
宋大治
张�浩
王弼宁
陈敦惠
范三龙
李佑文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Sac Rail Traffic Engineering Co ltd
Nanjing Metro Construction Co ltd
China Railway Siyuan Survey and Design Group Co Ltd
Original Assignee
Nanjing Sac Rail Traffic Engineering Co ltd
Nanjing Metro Construction Co ltd
China Railway Siyuan Survey and Design Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Sac Rail Traffic Engineering Co ltd, Nanjing Metro Construction Co ltd, China Railway Siyuan Survey and Design Group Co Ltd filed Critical Nanjing Sac Rail Traffic Engineering Co ltd
Priority to CN202210505988.1A priority Critical patent/CN115147751A/en
Publication of CN115147751A publication Critical patent/CN115147751A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for counting station passenger flow in real time based on video images. First, weighted-average grayscale processing and a three-frame difference operation are applied to the acquired video images to separate the background, and the segmented image is compensated and merged using a scan-line seed filling algorithm and a morphological closing operation. Then, exploiting the distinct quasi-circular appearance of the human head in the image, an improved Hough-transform circle detection method extracts the quasi-circular head regions accurately and quickly. Finally, tracking and counting are realized with a frame-difference distance tracking method and a counting-line statistical method. The method is mainly suited to passenger flow counting in a station environment: a true-circle radius limiting condition and a coincident-circle judging condition for head-region identification are introduced, the Hough transform algorithm is improved under these conditions, and real-time tracking with the frame-difference distance method effectively achieves accurate real-time counting.

Description

Method for counting station passenger flow in real time based on video image
Technical Field
The invention belongs to the technical field of image processing, and relates to a method for counting station passenger flow in real time based on video images.
Background
Accurately counting station passenger flow in real time is of crucial importance for public safety as well as commercial applications. However, passenger movement in a station is disorderly and complex, and illumination changes, shadows, and occlusion make in-station passenger flow counting difficult. Research on passenger flow counting is now fairly mature and counting methods are diverse, but most methods with high recognition accuracy are complex to implement and costly. The practical station counting scenario demands an algorithm that is simple to implement, highly accurate in recognition, and low in cost, and the present method is designed to meet these requirements.
Disclosure of Invention
The invention aims to provide a method, based on video images, for accurately extracting quasi-circular human-head feature regions and tracking and counting them in real time.
In order to solve the above problems, the present invention is realized by the following technical solutions: a method for counting station passenger flow in real time based on video images comprises the following steps:
step 1: acquiring a station video image in real time by using an IPC (Internet protocol camera);
step 2: performing weighted-average grayscale processing on the acquired images, then segmenting the target region with a three-frame difference method, compensating the segmented image with a scan-line seed filling algorithm, and merging the image with a morphological closing operation, so as to highlight the quasi-circular head features to be extracted;
step 3: introducing a true-circle radius limiting condition and a coincident-circle judging condition for head-region identification, and extracting the quasi-circular features of human head regions with an improved Hough transform algorithm under these limiting conditions;
step 4: tracking the extracted quasi-circular feature regions in real time with a frame-difference distance tracking method;
step 5: counting the moving targets with a conventional counting-line statistical method.
Further, the target region segmentation in step 2 is performed by first applying weighted-average grayscale processing to the acquired images. After the image sequence is grayed, let the gray component of each frame be g_i(x, y) (i = 1, 2, …, N), where (x, y) denotes the pixel position, i the frame number, and N the total number of frames. The target region is segmented by the three-frame difference method, whose change function is:
B_i(x, y) = 1, if d ≥ ψ;  B_i(x, y) = 0, if d < ψ,
wherein d = |g_{i+1}(x, y) − g_i(x, y)| ∩ |g_i(x, y) − g_{i−1}(x, y)| and ψ denotes the threshold for judging whether the target is moving: when d ≥ ψ the point is considered a moving target, otherwise background. The main idea of the scan-line seed filling algorithm in step 2 is to first fill the run on the current scan line that lies inside the given region, then determine whether the two adjacent lines contain new runs that need filling; if so, the new runs are stored in order, and the process repeats until all stored runs are filled. The morphological closing operation combines image dilation and erosion: the image is first dilated, filling small holes inside connected regions, expanding region boundaries and joining adjacent regions, and the boundary expansion and area growth caused by dilation are then reduced by the erosion operation.
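As a minimal illustration of the step-2 preprocessing, the grayscale weighting and the three-frame difference can be sketched in pure Python on list-based frames. Reading the patent's ∩ as a per-pixel minimum of the two absolute differences is an assumption:

```python
def weighted_gray(r, g, b):
    """Weighted-average grayscale of one pixel (ITU-R BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def three_frame_diff(g_prev, g_cur, g_next, psi):
    """Binary motion mask from three consecutive grayscale frames.

    d = |g_{i+1} - g_i| ∩ |g_i - g_{i-1}| is interpreted here as the
    per-pixel minimum of the two absolute differences (an assumption):
    a pixel is marked as moving only when BOTH differences reach psi.
    """
    h, w = len(g_cur), len(g_cur[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(g_next[y][x] - g_cur[y][x])
            d2 = abs(g_cur[y][x] - g_prev[y][x])
            if min(d1, d2) >= psi:  # d >= psi: moving target
                mask[y][x] = 1      # otherwise background (0)
    return mask
```

In practice the per-pixel loops would be replaced by array operations, but the threshold logic is the same.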
Further, the true-circle radius limiting condition and the coincident-circle judging condition in step 3 are defined by the following formulas. Let the actual size interval of a human head be [r_min, r_max], the camera's shooting range be r_w wide and r_h high, and the acquired image be r_ws wide and r_hs high. Then the radius interval of the human head region displayed in the image is approximately
[r_min · r_ws / r_w , r_max · r_ws / r_w].
This interval is taken as the true-circle radius limiting condition. For two circles ρ_1(a_1, b_1, r_1) and ρ_2(a_2, b_2, r_2), the coincident-circle judgment is
[coincident-circle judgment formula reproduced as an image in the original publication; it tests whether the centres and radii of ρ_1 and ρ_2 are sufficiently close]
If the condition is met, the two circles are approximately considered coincident, and only one of them is retained.
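The two limiting conditions above can be sketched as small helper functions. The coincidence thresholds eps_center and eps_radius are illustrative assumptions, since the exact coincidence formula appears only as an image in the original:

```python
import math

def image_radius_interval(r_min, r_max, r_w, r_ws):
    """Map the physical head-radius interval [r_min, r_max] into image
    pixels using the width scale r_ws / r_w between image and scene."""
    s = r_ws / r_w
    return (s * r_min, s * r_max)

def circles_coincide(c1, c2, eps_center, eps_radius):
    """Treat two circles (a, b, r) as coincident when their centres are
    within eps_center and their radii within eps_radius of each other
    (both thresholds are assumptions standing in for the image-only
    formula in the original text)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    close_centres = math.hypot(a1 - a2, b1 - b2) <= eps_center
    close_radii = abs(r1 - r2) <= eps_radius
    return close_centres and close_radii
```

For example, a 0.08 m to 0.12 m head radius seen in a 4 m wide scene imaged at 1920 px maps to roughly 38 px to 58 px.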
Further, the improved Hough transform algorithm described in step 3 specifically includes the following steps:
(1) apply Canny edge detection to the processed image to obtain the edge point set U(x_i, y_i), compute the gradient of each point in the set with the Sobel operator, and initialize the parameter unit set ρ(a, b, r) to empty, where r denotes the radius of a circle and (a, b) its centre;
(2) if the edge point set U is empty, execute step (10); otherwise execute step (3);
(3) take four points from U at random such that the distance between any two of them is less than 2 · r_max · r_ws / r_w (twice the maximum image head radius); otherwise reselect;
(4) construct the perpendicular bisectors of the segments between pairs of the four points; when three perpendicular bisectors intersect at one point, the four points are taken as lying on a common circle, recorded as ρ_k(a, b, r); then execute step (5), otherwise return to step (2);
(5) determine whether the parameter unit set ρ(a, b, r) contains some ρ_c(a, b, r) satisfying the candidate-circle judgment condition |ρ_k − ρ_c| ≤ ε (ε is an allowable error); if so, take ρ_k as the characteristic parameters of the candidate circle, introduce a square window centred at the candidate circle's centre with side length equal to its diameter, and then execute step (7); otherwise execute step (6);
(6) add the parameters of ρ_k to the parameter unit set ρ(a, b, r) and set the count value M_ρk = 1;
(7) if a − r < x_i < a + r and b − r < y_i < b + r hold simultaneously, i.e. the point (x_i, y_i) lies inside the square window, compute the distance from (x_i, y_i) to the candidate circle's centre; if the result is within the error range, increment the count value M_ρk by 1; after all points in the square window have been judged, the accumulated count value M_ρk is obtained;
(8) if M_ρk > M_min (M_min is the preset minimum count value for judging a true circle), execute step (9); otherwise the candidate circle is a false circle: remove the parameters of ρ_k from the parameter unit set ρ(a, b, r) and return to step (2);
(9) the candidate circle is a true circle: add it to the true-circle parameter unit set (i.e. the true-circle set) ρ_true(a, b, r), remove the points on the circle from the edge point set U(x_i, y_i), reset the parameter unit set ρ(a, b, r) to empty, and return to step (2);
(10) remove from ρ_true(a, b, r) the circles whose radius does not satisfy
[r_min · r_ws / r_w , r_max · r_ws / r_w],
then remove the duplicate circles in ρ_true(a, b, r) according to the coincident-circle judgment condition, finally obtaining the true-circle set ρ_true(a, b, r).
Further, the frame-difference distance tracking method in step 4 computes, for the true circles ρ_i(a_i, b_i, r_i) and ρ_{i+1}(a_{i+1}, b_{i+1}, r_{i+1}) extracted from two consecutive frames obtained by the above operations, the distance between the two circle centres
d = √((a_{i+1} − a_i)² + (b_{i+1} − b_i)²).
If d < ψ_d, where ψ_d is the minimum threshold for judging movement of the target region between the two frames, the two true-circle feature regions extracted from the two frames are the same target.
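A minimal sketch of this frame-difference distance tracking; extending the single-pair distance test to greedy nearest-centre matching across multiple targets is an assumption beyond the patent's description:

```python
import math

def match_targets(circles_prev, circles_cur, psi_d):
    """Match circles (a, b, r) across two consecutive frames.

    Each circle in the current frame is paired with the nearest unused
    circle of the previous frame whose centre distance is below psi_d
    (the patent's same-target criterion); returns (prev_idx, cur_idx)
    pairs. Greedy one-to-one matching is an implementation choice.
    """
    matches = []
    used = set()
    for j, (a2, b2, _) in enumerate(circles_cur):
        best, best_d = None, psi_d
        for i, (a1, b1, _) in enumerate(circles_prev):
            if i in used:
                continue
            d = math.hypot(a2 - a1, b2 - b1)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            matches.append((best, j))
            used.add(best)
    return matches
```

Unmatched current-frame circles would start new tracks; unmatched previous-frame circles end theirs.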
Furthermore, the conventional counting-line statistical method in step 5 sets counting-line positions where moving targets enter and leave the camera's range, based on the extracted quasi-circular human-head feature regions (i.e. the extracted true circles), and counts moving targets by judging the contact of the upper and lower edges of the tracked target's rectangular frame with the counting line.
The method can accurately highlight the target-region features to be extracted from the complex station background; the improved Hough transform algorithm with the true-circle limiting condition and coincident-circle judging condition can extract quasi-circular human-head feature regions effectively and accurately; and accurate tracking and counting are finally realized with the frame-difference distance tracking method and the counting-line statistical method.
Drawings
Fig. 1 is a schematic flow chart of a method for counting station passenger flow in real time based on video images.
Fig. 2 is a flow chart of a modified Hough transform algorithm.
Fig. 3 is a diagram illustrating the effect of extracting the quasi-circular features after the image background is processed.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
As shown in fig. 1, a method for counting passenger flow in a station in real time based on video images comprises the following steps:
step 1: mount the camera above the shooting region and adjust it so that the shooting region covers the whole range from entrance to exit; record the size of the actual shooting region, i.e. its width r_w and height r_h; acquire the station video images in real time and record the size of the displayed video image, i.e. its width r_ws and height r_hs.
step 2: apply weighted-average grayscale processing to each frame acquired in step 1, i.e. gray = 0.299R + 0.587G + 0.114B, where gray is the grayscale value after processing and R, G, B are the red, green, and blue component values of the original image. After the image sequence is grayed, let the gray component of each frame be g_i(x, y) (i = 1, 2, …, N), where (x, y) denotes the pixel position, i the frame number, and N the total number of frames. The change function obtained by the three-frame difference method is:
B_i(x, y) = 1, if d ≥ ψ;  B_i(x, y) = 0, if d < ψ,
wherein d = |g_{i+1}(x, y) − g_i(x, y)| ∩ |g_i(x, y) − g_{i−1}(x, y)| and ψ denotes the threshold for judging whether the target is moving: when d ≥ ψ the point is considered a moving target, otherwise background. To address the holes that may appear in the three-frame difference and the partial loss of the extracted region caused by small target motion, a scan-line seed filling algorithm is used for compensation: the run on the current scan line inside the given region is filled first, then the two adjacent lines are checked for new runs needing filling; if any exist they are stored in order, and the process repeats until all stored runs are filled. Merging of the image regions is then realized with a morphological closing operation.
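The scan-line seed filling described above can be sketched as follows, operating on a binary mask where 1 marks fillable pixels. The run-based stack strategy follows the description; encoding filled pixels as 2 is an implementation choice:

```python
def scanline_fill(mask, seed):
    """Scan-line seed filling on a 2-D list `mask` (1 = fillable).

    Fills the run containing `seed` (x, y), then pushes runs found on
    the two neighbouring scan lines, repeating until no stored runs
    remain. Filled pixels are marked 2 in place.
    """
    h, w = len(mask), len(mask[0])
    sx, sy = seed
    if mask[sy][sx] != 1:
        return
    stack = [(sx, sy)]
    while stack:
        x, y = stack.pop()
        if mask[y][x] != 1:
            continue  # already filled via an earlier run
        # expand to the full run on the current scan line
        lo = x
        while lo > 0 and mask[y][lo - 1] == 1:
            lo -= 1
        hi = x
        while hi < w - 1 and mask[y][hi + 1] == 1:
            hi += 1
        for cx in range(lo, hi + 1):
            mask[y][cx] = 2
        # store new runs on the two adjacent lines
        for ny in (y - 1, y + 1):
            if 0 <= ny < h:
                cx = lo
                while cx <= hi:
                    if mask[ny][cx] == 1:
                        stack.append((cx, ny))
                        while cx <= hi and mask[ny][cx] == 1:
                            cx += 1
                    cx += 1
```

In the compensation step this would be run on the holes left by the three-frame difference before the closing operation.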
step 3: as shown in fig. 2, extraction of the quasi-circular human-head feature regions is realized by applying the improved Hough transform algorithm to the image processed in step 2; the specific operations are as follows:
(1) apply Canny edge detection to the processed image to obtain the edge point set U(x_i, y_i), compute the gradient of each point in the set with the Sobel operator, and initialize the parameter unit set ρ(a, b, r) to empty, where r denotes the radius of a circle and (a, b) its centre;
(2) if the edge point set U is empty, execute step (10); otherwise execute step (3);
(3) take four points from U at random such that the distance between any two of them is less than 2 · r_max · r_ws / r_w (twice the maximum image head radius); otherwise reselect;
(4) construct the perpendicular bisectors of the segments between pairs of the four points; when three perpendicular bisectors intersect at one point, the four points are taken as lying on a common circle, recorded as ρ_k(a, b, r); then execute step (5), otherwise return to step (2);
(5) determine whether the parameter unit set ρ(a, b, r) contains some ρ_c(a, b, r) satisfying the candidate-circle judgment condition |ρ_k − ρ_c| ≤ ε (ε is an allowable error); if so, take ρ_k as the characteristic parameters of the candidate circle, introduce a square window centred at the candidate circle's centre with side length equal to its diameter, and then execute step (7); otherwise execute step (6);
(6) add the parameters of ρ_k to the parameter unit set ρ(a, b, r) and set the count value M_ρk = 1;
(7) if a − r < x_i < a + r and b − r < y_i < b + r hold simultaneously, i.e. the point (x_i, y_i) lies inside the square window, compute the distance from (x_i, y_i) to the candidate circle's centre; if the result is within the error range, increment the count value M_ρk by 1; after all points in the square window have been judged, the accumulated count value M_ρk is obtained;
(8) if M_ρk > M_min (M_min is the preset minimum count value for judging a true circle), execute step (9); otherwise the candidate circle is a false circle: remove the parameters of ρ_k from the parameter unit set ρ(a, b, r) and return to step (2);
(9) the candidate circle is a true circle: add it to the true-circle parameter unit set (i.e. the true-circle set) ρ_true(a, b, r), remove the points on the circle from the edge point set U(x_i, y_i), reset the parameter unit set ρ(a, b, r) to empty, and return to step (2);
(10) find the circles in ρ_true(a, b, r) whose radius does not fall within
[r_min · r_ws / r_w , r_max · r_ws / r_w]
and delete them; then, according to the coincident-circle judgment condition, when two circles satisfy the condition they are regarded as the same circle and the second of the two is removed, thereby deleting the duplicate circles in ρ_true(a, b, r) and finally obtaining the true-circle set ρ_true(a, b, r).
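This final screening of the true-circle set, a radius-interval filter followed by coincident-circle de-duplication, can be sketched as follows. The thresholds eps_center and eps_radius stand in for the coincidence formula, which appears only as an image in the original:

```python
import math

def filter_true_circles(circles, r_lo, r_hi, eps_center, eps_radius):
    """Post-filter a list of circles (a, b, r): drop circles whose
    radius falls outside [r_lo, r_hi], then drop any later circle that
    coincides with an earlier kept one (centres within eps_center and
    radii within eps_radius; both thresholds are assumptions)."""
    kept = []
    for a, b, r in circles:
        if not (r_lo <= r <= r_hi):
            continue  # violates the true-circle radius limiting condition
        if any(math.hypot(a - a0, b - b0) <= eps_center
               and abs(r - r0) <= eps_radius
               for a0, b0, r0 in kept):
            continue  # coincident with an already-kept circle
        kept.append((a, b, r))
    return kept
```

Keeping the first of each coincident pair matches the text's "the second of the two is removed".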
step 4: for the true circles ρ_i(a_i, b_i, r_i) and ρ_{i+1}(a_{i+1}, b_{i+1}, r_{i+1}) extracted from two consecutive frames obtained by the above operations, calculate the distance between the two circle centres
d = √((a_{i+1} − a_i)² + (b_{i+1} − b_i)²).
If d < ψ_d, where ψ_d is the minimum threshold for judging movement of the target region between the two frames, the two true-circle feature regions extracted from the two frames are the same target, and tracking of the target is realized.
step 5: as shown in fig. 3, a circumscribed rectangular frame is constructed from the true circle extracted in step 3, with width and height both larger than the radius of the true circle; counting-line positions are set where moving targets enter and leave the camera's range, and moving targets are counted by judging the contact of the upper and lower edges of the tracked target's rectangular frame with the counting line.
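The counting-line contact test of step 5 can be sketched with one function. Representing each tracked box by its top and bottom y-coordinates, and mapping downward and upward crossings to entry and exit, are assumptions about the camera geometry:

```python
def crossed_line(box_prev, box_cur, line_y):
    """Counting-line test for one tracked target between two frames.

    box_prev and box_cur are (top, bottom) y-coordinates of the
    target's rectangular frame (y grows downward). Returns +1 when the
    bottom edge crosses line_y moving down (e.g. entering), -1 when the
    top edge crosses it moving up (e.g. leaving), 0 otherwise.
    """
    top0, bot0 = box_prev
    top1, bot1 = box_cur
    if bot0 < line_y <= bot1:
        return +1
    if top0 > line_y >= top1:
        return -1
    return 0
```

Summing these return values over all tracked targets per frame pair yields the net passenger count across the line.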
Compared with the traditional Hough transform algorithm under otherwise identical operations, the embodiment based on the improved Hough transform algorithm has notable advantages. 300 sample pictures were randomly extracted from the counting results of the two methods, and statistical analysis and comparison yield the counting rate, recognition accuracy, and error-cause statistics of both.
Table 1 shows the comparison of the calculated rate and accuracy for the two methods.
[Table 1 reproduced as an image in the original publication; its data are not recoverable from this text]
Table 2 shows the error cause statistics.
[Table 2 reproduced as an image in the original publication; its data are not recoverable from this text]
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention, and the technology not related to the present invention can be realized by the prior art.

Claims (9)

1. A method for counting station passenger flow in real time based on video images is characterized by comprising the following steps:
step 1: acquiring a station video image in real time by using an IPC (Internet protocol camera);
step 2: performing weighted-average grayscale processing on the acquired images, then segmenting the target region, and applying compensation filling and merging to the segmented image so as to highlight the quasi-circular head features to be extracted;
step 3: introducing a true-circle radius limiting condition and a coincident-circle judging condition for head-region identification, and extracting the quasi-circular features of human head regions with an improved Hough transform algorithm under these limiting conditions;
step 4: tracking the extracted quasi-circular feature regions in real time with a frame-difference distance tracking method;
step 5: counting the moving targets with a counting-line statistical method.
2. The method for counting station passenger flow in real time based on video images as claimed in claim 1, wherein the target region segmentation in step 2 is: first performing weighted-average grayscale processing on the acquired image, then segmenting the target region by a three-frame difference method, specifically:
first perform weighted-average grayscale processing on the acquired image; after the image sequence is grayed, let the gray component of each frame be g_i(x, y) (i = 1, 2, …, N), where (x, y) denotes the pixel position, i the frame number, and N the total number of frames; the target region is segmented by the three-frame difference method, whose change function is:
B_i(x, y) = 1, if d ≥ ψ;  B_i(x, y) = 0, if d < ψ,
wherein d = |g_{i+1}(x, y) − g_i(x, y)| ∩ |g_i(x, y) − g_{i−1}(x, y)|, and ψ denotes the threshold for judging whether the target moves: if d ≥ ψ the point is considered a moving target, otherwise background.
3. The method for counting station passenger flow in real time based on video images as claimed in claim 1, wherein the compensation filling and merging operations in step 2 are: compensating the segmented image with a scan-line seed filling algorithm, to address the image holes that may appear in motion regions under the three-frame difference method; and merging the image regions with a morphological closing operation, to address the image splitting that may appear in motion regions under the three-frame difference method.
4. The method for counting station passenger flow in real time based on video images as claimed in claim 3, wherein the scan-line seed filling algorithm specifically is: first fill the run of the current scan line inside the given region, then determine whether the two adjacent lines contain new runs needing filling; if so, store them in order, and repeat the process until all stored runs are filled.
5. The method for counting station passenger flow in real time based on video images as claimed in claim 3, wherein the morphological closing operation specifically is: combining image dilation and erosion, the image is first dilated, filling small holes in connected regions, expanding region boundaries and joining adjacent regions; the boundary expansion and area growth caused by dilation are then reduced by the erosion operation.
6. The method for counting station passenger flow in real time based on video images as claimed in claim 1, wherein the true-circle radius limiting condition and the coincident-circle judging condition in step 3 are given by the following formulas: let the actual size interval of a human head be [r_min, r_max], the camera's shooting range be r_w wide and r_h high, and the acquired image be r_ws wide and r_hs high; then the radius interval of the human head region displayed in the image is approximately
[r_min · r_ws / r_w , r_max · r_ws / r_w].
This interval is taken as the true-circle radius limiting condition; for two circles ρ_1(a_1, b_1, r_1) and ρ_2(a_2, b_2, r_2), the coincident-circle judgment is
[coincident-circle judgment formula reproduced as an image in the original publication; it tests whether the centres and radii of ρ_1 and ρ_2 are sufficiently close]
If the condition is met, the two circles are approximately considered coincident, and only one of them is retained.
7. The method for counting station passenger flow in real time based on video images as claimed in claim 1, wherein the improved Hough transform algorithm in step 3 first obtains an edge point set from the processed image; four points are taken from the edge point set subject to the true-circle radius limiting condition; a candidate circle is obtained when three perpendicular bisectors intersect at one point; a square window is then introduced for the candidate circle, and for each point in the window it is judged whether its distance to the candidate circle's centre is within the error range, accumulating a count; whether the candidate circle is a true circle is decided by the final accumulated value; finally, all obtained true circles are screened by the coincident-circle condition and the true-circle radius limiting condition to obtain the true-circle set; the specific steps are:
step 3.1: apply Canny edge detection to the processed image to obtain the edge point set U(x_i, y_i), compute the gradient of each point in the set with the Sobel operator, and initialize the parameter unit set ρ(a, b, r) to empty, where r denotes the radius of a circle and (a, b) its centre;
step 3.2: if the edge point set U is empty, execute step 3.10; otherwise execute step 3.3;
step 3.3: take four points from the edge point set U at random such that the distance between any two of them is less than 2 · r_max · r_ws / r_w (twice the maximum image head radius); otherwise reselect;
step 3.4: construct the perpendicular bisectors of the segments between pairs of the four points; when three perpendicular bisectors intersect at one point, the four points are taken as lying on a common circle, recorded as ρ_k(a, b, r); then execute step 3.5, otherwise return to step 3.2;
step 3.5: determine whether the parameter unit set ρ(a, b, r) contains some ρ_c(a, b, r) satisfying the candidate-circle judgment condition |ρ_k − ρ_c| ≤ ε, where ε is an allowable error; if so, take ρ_k as the characteristic parameters of the candidate circle, introduce a square window centred at the candidate circle's centre with side length equal to its diameter, and then execute step 3.7; otherwise execute step 3.6;
step 3.6: add the parameters of ρ_k to the parameter unit set ρ(a, b, r) and set the count value M_ρk = 1;
step 3.7: if a − r < x_i < a + r and b − r < y_i < b + r hold simultaneously, i.e. the point (x_i, y_i) lies inside the square window, compute the distance from (x_i, y_i) to the candidate circle's centre; if the result is within the error range, increment the count value M_ρk by 1; after all points in the square window have been judged, the accumulated count value M_ρk is obtained;
step 3.8: if M_ρk > M_min, where M_min is the preset minimum count value for judging a true circle, execute step 3.9; otherwise the candidate circle is a false circle: remove the parameters of ρ_k from the parameter unit set ρ(a, b, r) and return to step 3.2;
step 3.9: the candidate circle is a true circle: add it to the true-circle parameter unit set, i.e. the true-circle set ρ_true(a, b, r), remove the points on the circle from the edge point set U(x_i, y_i), reset the parameter unit set ρ(a, b, r) to empty, and return to step 3.2;
step 3.10: remove from ρ_true(a, b, r) the circles whose radius does not satisfy
[r_min · r_ws / r_w , r_max · r_ws / r_w],
then remove the duplicate circles in ρ_true(a, b, r) according to the coincident-circle judgment condition, finally obtaining the true-circle set ρ_true(a, b, r).
8. The method for counting station passenger flow in real time based on video images as claimed in claim 7, wherein the frame-difference distance tracking method judges, for the true circles extracted from two adjacent frames, whether the Euclidean distance between the two centres is below a set threshold; if so, the circles belong to the same target, and target tracking is realized in this way; specifically: for the true circles ρ_i(a_i, b_i, r_i) and ρ_{i+1}(a_{i+1}, b_{i+1}, r_{i+1}) extracted from the two consecutive frames obtained in step 3, the distance between the two circle centres is
d = √((a_{i+1} − a_i)² + (b_{i+1} − b_i)²).
If d < ψ_d, where ψ_d is the minimum threshold for judging movement of the target region between the two frames, the two true-circle feature regions extracted from the two frames are the same target.
9. The method for counting station passenger flow in real time based on video images as claimed in claim 1, wherein the counting-line statistical method specifically is: based on the extracted quasi-circular human-head feature region, i.e. the extracted true circle, a circumscribed rectangular frame is constructed; counting-line positions are set where moving targets enter and leave the camera's range; and moving targets are counted by judging the contact of the upper and lower edges of the tracked target's rectangular frame with the counting line.
CN202210505988.1A 2022-05-10 2022-05-10 Method for counting station passenger flow in real time based on video image Pending CN115147751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210505988.1A CN115147751A (en) 2022-05-10 2022-05-10 Method for counting station passenger flow in real time based on video image


Publications (1)

Publication Number Publication Date
CN115147751A true CN115147751A (en) 2022-10-04

Family

ID=83406909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210505988.1A Pending CN115147751A (en) 2022-05-10 2022-05-10 Method for counting station passenger flow in real time based on video image

Country Status (1)

Country Link
CN (1) CN115147751A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109637A (en) * 2023-04-13 2023-05-12 杭州深度视觉科技有限公司 System and method for detecting appearance defects of turbocharger impeller based on vision
CN116109637B (en) * 2023-04-13 2023-12-26 杭州深度视觉科技有限公司 System and method for detecting appearance defects of turbocharger impeller based on vision

Similar Documents

Publication Publication Date Title
CN109961049B (en) Cigarette brand identification method under complex scene
CN107993245B (en) Aerospace background multi-target detection and tracking method
CN104978567B (en) Vehicle checking method based on scene classification
WO2022027931A1 (en) Video image-based foreground detection method for vehicle in motion
CN106709472A (en) Video target detecting and tracking method based on optical flow features
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
CN109559324B (en) Target contour detection method in linear array image
CN109685045B (en) Moving target video tracking method and system
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN109409208A (en) A kind of vehicle characteristics extraction and matching process based on video
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN112819094A (en) Target detection and identification method based on structural similarity measurement
CN107507222B (en) Anti-occlusion particle filter target tracking method based on integral histogram
CN101114337A (en) Ground buildings recognition positioning method
CN112580447B (en) Edge second-order statistics and fusion-based power line detection method
CN112489055B (en) Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics
CN110991398A (en) Gait recognition method and system based on improved gait energy map
CN110782487A (en) Target tracking method based on improved particle filter algorithm
EP3149707A1 (en) Method and apparatus for object tracking and segmentation via background tracking
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
CN115147751A (en) Method for counting station passenger flow in real time based on video image
Lee et al. Real-time automatic vehicle management system using vehicle tracking and car plate number identification
CN111539980A (en) Multi-target tracking method based on visible light
CN104200455B (en) A kind of key poses extracting method based on movement statistics signature analysis
CN106446832B (en) Video-based pedestrian real-time detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination