CN103345792B - Passenger flow statistics device and method based on sensor depth images - Google Patents

Passenger flow statistics device and method based on sensor depth images

Info

Publication number
CN103345792B
CN103345792B CN201310279375.1A CN201310279375A
Authority
CN
China
Prior art keywords
depth map
depth
coordinate
map
transducer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310279375.1A
Other languages
Chinese (zh)
Other versions
CN103345792A (en)
Inventor
顾国华
尹章芹
李娇
陈海欣
周玉蛟
钱惟贤
陈钱
路东明
任侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201310279375.1A priority Critical patent/CN103345792B/en
Publication of CN103345792A publication Critical patent/CN103345792A/en
Application granted granted Critical
Publication of CN103345792B publication Critical patent/CN103345792B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a passenger flow statistics device based on sensor depth images and a corresponding method. The device comprises a sensor base, a dual depth sensor, USB data cables, a data processing and control unit, a display, a mode selection switch and a power supply. The lenses of both depth sensors look down at the ground and their optical axes are perpendicular to the ground plane; the signal outputs of both sensors are connected to the data processing and control unit by USB data cables, the data processing and control unit is connected to the display by a video cable, the mode selection switch is connected to the data processing and control unit, and the output of the power supply feeds the power inputs of the other components. The method is as follows: transform the two depth images into the same physical coordinate system and stitch them; traverse the stitched depth image to find regions of locally minimal gray level and mark them; extract each marked region and determine the position of the head; build head motion tracks and count the numbers entering and leaving. The device has the advantages of low cost, high accuracy and strong stability, and has broad application prospects.

Description

Passenger flow statistics device and method based on sensor depth images
One, technical field
The invention belongs to the field of digital image processing and pattern recognition, and in particular relates to a passenger flow statistics device based on sensor depth images and a method thereof.
Two, background art
With the development of statistical analysis techniques and computer technology, instant and reliable passenger flow information has become possible. At the same time, the importance of passenger flow information grows day by day, particularly for shopping places such as department stores and supermarkets, public transportation systems such as railways, subways and buses, and public places with demanding safety requirements such as exhibition centers, stadiums, libraries and airports, where real-time and accurate passenger flow information is all the more important.
Traditional people counting methods rely on manual counting or counting triggered by electronic devices, which not only waste manpower and material resources but are also inefficient. The rapid development of computer vision and image processing techniques has made their application to people counting possible. In dense crowds during peak passenger flow, passengers occlude one another and the human body cannot be modeled effectively; only the passenger's head appears relatively complete in image processing, so machine-vision passenger flow statistics mainly locate and track passengers by extracting head features. Automatic passenger counting based on color video streams mainly uses feature recognition and pattern matching in images, but it is sensitive to illumination changes in the counting environment, cannot eliminate the interference of shadows or the mutual occlusion of people, is only suitable for scenes with a simple background, and requires a large amount of data to be processed, so the algorithms are subject to many restrictions and lack generality. Automatic passenger counting based on stereo vision mainly uses a dual sensor to extract the depth information of moving targets and detects targets according to the depth information; experimental results verify the validity of this method, but it requires synchronous acquisition by the dual sensor, the hardware implementation cost is too high, and it is not suitable for large-scale deployment.
Three, summary of the invention
The object of the present invention is to provide a passenger flow statistics device and method based on sensor depth images with high accuracy and strong stability, which solve the problems of crowding, occlusion and shadow in passenger flow statistics and perform real-time and efficient passenger flow counting.
The technical solution realizing the object of the invention is a passenger flow statistics device based on sensor depth images, comprising a sensor base, a first depth sensor, a second depth sensor, USB data cables, a data processing and control unit, a display, a mode selection switch and a power supply. The first depth sensor is identical to the second depth sensor and the two are mounted in parallel on the same sensor base; the lenses of the first depth sensor and the second depth sensor both look down at the ground and their optical axes are perpendicular to the ground plane; the signal outputs of the first depth sensor and the second depth sensor are both connected to the data processing and control unit by USB data cables; the signal output of the data processing and control unit is connected to the display by a video cable; the mode selection switch is connected to the control signal input of the data processing and control unit; and the output of the power supply is connected respectively to the power inputs of the first depth sensor, the second depth sensor, the data processing and control unit and the display.
After the power supply powers the device on, the depth images collected by the first depth sensor and the second depth sensor are both input to the data processing and control unit via the USB data cables; the data processing and control unit performs digital image processing on the collected depth images and outputs the processing result to the display via the video cable.
The passenger flow statistics method based on sensor depth images comprises the following steps:
Step 1: after the power supply powers the device on, the data processing and control unit obtains, via the USB data cables, the first depth map collected by the first depth sensor and the second depth map collected by the second depth sensor;
Step 2: according to the first frame of the first depth map or the first frame of the second depth map, calibrate the relation between the distance H from a target to the ground and the corresponding gray level G in the depth map;
Step 3: determine the operating mode with the mode selection switch: if the single depth sensor mode is selected, choose the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual depth sensor mode is selected, go to the next step;
Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system;
Step 5: stitch the first depth map A1 and the second depth map A2, transformed into the three-dimensional physical coordinate system, to obtain the third depth map A3;
Step 6: perform region segmentation on the third depth map A3, find the pixel coordinates corresponding to the local gray level minimum in each region, and mark the pixel coordinates found;
Step 7: according to the marks of the pixel coordinates after region segmentation, perform head recognition and obtain the head information;
Step 8: track and count the recognized head information;
Step 9: process each collected frame of depth map in turn according to the method of steps 4 to 8, and every minute output the entry/exit statistics to the display via the video cable in the form of a file.
Compared with the prior art, the present invention has the following remarkable advantages: (1) the system can use a single sensor to perform passenger flow statistics, or use multiple sensors to enlarge and extend the field of view and perform multi-view passenger flow statistics; (2) the video acquisition of the dual sensor is synchronous, which solves the sensor synchronization problem in stereo vision; the depth map output by a single sensor contains three-dimensional human body information, which solves the problems of crowding, occlusion and shadow in passenger flow statistics based on video streams; (3) the system can detect and count the passenger flow in real time, with strong stability and low cost; the frame rate is 30 frames/s and the accuracy is above 96%.
Four, description of the drawings
Fig. 1 is a structural schematic diagram of the passenger flow statistics device based on sensor depth maps according to the present invention.
Fig. 2 is a flow chart of the passenger flow statistics method based on sensor depth maps according to the present invention.
Fig. 3 shows the spatial physical coordinate relationship of the two depth sensors: (a) the first depth sensor coordinate system, (b) the second depth sensor coordinate system, (c) the actual physical coordinate system.
Fig. 4 is a schematic diagram of the nine-grid used in the region segmentation of the present invention.
Fig. 5 shows the original depth maps in the embodiment of the present invention, where (a) is the first depth map and (b) is the second depth map.
Fig. 6 is the third depth map formed after stitching the first depth map and the second depth map in the embodiment of the present invention.
Fig. 7 shows the head tracking and recognition result for the third depth map in the embodiment of the present invention.
Five, detailed description of the embodiments
The present invention is further described below with reference to the drawings and specific embodiments.
With reference to Fig. 1, the passenger flow statistics device based on sensor depth images of the present invention comprises a sensor base 1, a first depth sensor 2, a second depth sensor 3, USB data cables 7, a data processing and control unit 8, a display 9, a mode selection switch 10 and a power supply 11. The first depth sensor 2 is identical to the second depth sensor 3 and the two are mounted in parallel on the same sensor base 1; the lenses of the first depth sensor 2 and the second depth sensor 3 both look down at the ground and their optical axes are perpendicular to the ground plane; the signal outputs of the first depth sensor 2 and the second depth sensor 3 are both connected to the data processing and control unit 8 by the USB data cables 7; the signal output of the data processing and control unit 8 is connected to the display 9 by a video cable 12; the mode selection switch 10 is connected to the control signal input of the data processing and control unit 8; and the output of the power supply 11 is connected respectively to the power inputs of the first depth sensor 2, the second depth sensor 3, the data processing and control unit 8 and the display 9.
After the power supply 11 powers the device on, the operating mode is determined by the mode selection switch 10; the depth images collected by the first depth sensor 2 and the second depth sensor 3 are both input to the data processing and control unit 8 via the USB data cables 7; the data processing and control unit 8 performs digital image processing on the collected depth images and outputs the processing result to the display 9 via the video cable 12.
The horizontal spacing WIDTH between the first depth sensor 2 and the second depth sensor 3 satisfies the following condition:

$$\frac{H_0 - H_4}{\tfrac{1}{2}\,\mathrm{WIDTH}} = \frac{1}{\tan\frac{\nu}{2}}$$

where H_0 represents the distance from the first depth sensor 2 and the second depth sensor 3 to the ground, 3000 mm < H_0 < 3700 mm, ν is the transverse field angle of the first depth sensor 2 and the second depth sensor 3, and H_4 (2100 mm < H_4 < H_0) represents the distance from the tangent point of the two fields of view to the ground.
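For illustration only (not part of the patent text), the spacing condition above can be checked numerically. The Python sketch below assumes the reconstructed relation (H_0 − H_4) / (WIDTH/2) = 1/tan(ν/2); the function name and the example values are hypothetical, chosen inside the ranges stated above.

```python
import math

def required_sensor_spacing(h0_mm, h4_mm, nu_deg):
    """Spacing WIDTH (mm) at which the two fields of view just meet at height H4,
    from (H0 - H4) / (WIDTH / 2) = 1 / tan(nu / 2)."""
    return 2.0 * (h0_mm - h4_mm) * math.tan(math.radians(nu_deg) / 2.0)

# Example values inside the stated ranges: H0 = 3300 mm, H4 = 2200 mm, nu = 58 degrees.
print(required_sensor_spacing(3300, 2200, 58))  # roughly 1220 mm
```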
The first depth sensor 2 and the second depth sensor 3 both adopt the ASUS Xtion PRO, the Xtion sensor produced by ASUSTeK Computer Inc.; the gray-scale image output by this sensor is a depth map containing three-dimensional human body information. The mode selection switch 10 provides a single depth sensor operating mode and a dual depth sensor operating mode. The ground within the fields of view of the first depth sensor 2 and the second depth sensor 3 is also provided with an entry direction sign 4, an exit direction sign 5 and entry/exit judgment lines 6, which serve as the criterion for passengers entering and leaving.
With reference to Fig. 2, the passenger flow statistics method based on sensor depth images of the present invention comprises the following steps:
Step 1: after the power supply 11 powers the device on, the data processing and control unit 8 obtains, via the USB data cables 7, the first depth map collected by the first depth sensor 2 and the second depth map collected by the second depth sensor 3;
Step 2: according to the first frame of the first depth map or the first frame of the second depth map, calibrate the relation between the distance H from a target to the ground and the corresponding gray level G in the depth map;
A target at height H_1 above the ground corresponds to gray level G_1, a target at height H_2 corresponds to gray level G_2, and a target at height H_3 corresponds to gray level G_3, where 100 mm < H_1 < 200 mm, 500 mm < H_2 < 800 mm, 2300 mm < H_3 < H_0; H_0 represents the distance from the sensor to the ground and 3000 mm < H_0 < 3700 mm; G_0 represents the gray level corresponding to the ground in the depth map when nobody is within the field of view. Then:

$$\begin{cases} H_0 = \beta G_0 \\ H_0 - H_1 = \beta G_1 \\ H_0 - H_2 = \beta G_2 \\ H_0 - H_3 = \beta G_3 \end{cases}$$

In the formulas, the distance calibration is in millimeters, and β represents the proportionality factor between distance and gray level, with 10 < β < 20;
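As an illustrative sketch (not part of the patent text), the linear calibration H_0 − H = β·G can be estimated by least squares from the three calibration targets and the ground level; the gray-level readings below are hypothetical numbers consistent with 10 < β < 20.

```python
import numpy as np

def fit_beta(heights_mm, gray_levels, h0_mm):
    """Least-squares estimate of beta in H0 - H = beta * G from calibration pairs (H, G)."""
    h = np.asarray(heights_mm, dtype=float)
    g = np.asarray(gray_levels, dtype=float)
    return float(np.dot(h0_mm - h, g) / np.dot(g, g))

def height_from_gray(gray, beta, h0_mm):
    """Convert a depth-map gray level back into a height above the ground (mm)."""
    return h0_mm - beta * np.asarray(gray, dtype=float)

# Hypothetical calibration readings: ground (0 mm) and targets at H1, H2, H3.
beta = fit_beta([0, 150, 650, 2400], [206, 197, 166, 56], h0_mm=3300)
print(beta)                                  # about 16, inside 10 < beta < 20
print(height_from_gray(100, beta, 3300))     # about 1700 mm
```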
Step 3: determine the operating mode with the mode selection switch 10: if the single depth sensor mode is selected, choose the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual depth sensor mode is selected, go to the next step;
Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system;
The first depth map is projected from the two-dimensional plane coordinates (i_1, j_1) to the three-dimensional physical coordinates (x_1, y_1, z_1) to obtain the new first depth map A1, and the second depth map is projected from the two-dimensional plane coordinates (i_2, j_2) to the three-dimensional physical coordinates (x_2, y_2, z_2) to obtain the new second depth map A2. Let the pixel resolution of the two-dimensional depth map be I*J, the center coordinate of the ordinate be I_m, the center coordinate of the abscissa be J_m, μ be the longitudinal field angle of the sensor and ν be the transverse field angle of the sensor; then the projection relation is as follows:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} k_i & 0 & 0 \\ 0 & k_j & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} i \\ j \\ G(i,j) \end{bmatrix}$$

where k_i = sin θ_i · G(i, j), θ_i represents the field-of-view angle corresponding to the ordinate i relative to I_m in the two-dimensional plane coordinates (i, j); k_j = sin θ_j · G(i, j), θ_j represents the field-of-view angle corresponding to the abscissa j relative to the center coordinate J_m in the two-dimensional plane coordinates (i, j); and G(i, j) is the gray level corresponding to the two-dimensional plane coordinates (i, j);
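A minimal sketch (not part of the patent text) of the per-pixel projection of step 4 follows. It assumes the field-of-view angles θ_i and θ_j vary linearly with the row and column offsets from (I_m, J_m) across the angles μ and ν; that linear angle model, the function name and the toy input are assumptions.

```python
import numpy as np

def project_depth_map(G, mu_deg=45.0, nu_deg=58.0):
    """Project a 2-D depth map G(i, j) to 3-D coordinates via x = k_i * i, y = k_j * j,
    z = G(i, j), with k_i = sin(theta_i) * G(i, j) and k_j = sin(theta_j) * G(i, j)."""
    I, J = G.shape
    Im, Jm = I / 2.0, J / 2.0                    # centre coordinates of the image
    i = np.arange(I, dtype=float).reshape(-1, 1)
    j = np.arange(J, dtype=float).reshape(1, -1)
    theta_i = np.radians((i - Im) / I * mu_deg)  # assumed linear angle spread over rows
    theta_j = np.radians((j - Jm) / J * nu_deg)  # assumed linear angle spread over columns
    G = G.astype(float)
    x = np.sin(theta_i) * G * i
    y = np.sin(theta_j) * G * j
    return np.stack([x, y, G], axis=-1)          # shape (I, J, 3)

cloud = project_depth_map(np.full((480, 640), 180.0))  # toy constant-depth frame
print(cloud.shape)
```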
Step 5: stitch the first depth map A1 and the second depth map A2, which have been transformed into the three-dimensional physical coordinate system. With reference to Fig. 3, coordinate system (a) and coordinate system (b) represent the physical coordinate systems corresponding to the first depth map A1 and the second depth map A2, respectively; there is a displacement and rotation relation R|T between the two physical coordinate systems (a) and (b), and R|T is used to transfer both coordinate systems into the same coordinate system (c).
Step (5.1): use the method of multi-view measurement data to determine the transformation relation R|T between the coordinates (x_2, y_2, z_2) of the second depth map A2 and the coordinates (x_1, y_1, z_1) of the first depth map A1:

$$(R|T)\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}$$

Take the coordinates P_L^a of S points in the first depth map A1 and the coordinates P_R^a of the corresponding S points in the second depth map A2, with a = 0, 1, 2, ..., S-1 and S > 2, and determine the rotation and translation of the second depth map A2 relative to the first depth map A1:

$$P_L^a = R \times T \times P_R^a$$
R represents the rotation of the second depth map A2 relative to the first depth map A1, and T represents the displacement of the second depth map A2 relative to the first depth map A1; then:

$$R = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \Delta x_0 & \Delta y_0 & \Delta z_0 & 1 \end{bmatrix}$$

In the above formulas, θ represents the rotation angle of the second depth map A2 relative to the first depth map A1, and (Δx_0, Δy_0, Δz_0) represents the displacement difference of the coordinates (x_2, y_2, z_2) of the second depth map A2 relative to the coordinates (x_1, y_1, z_1) of the first depth map A1;
Step (5.2): create a third depth map A3 with pixel resolution C*D and initial value 0, where I < C < 2I and J < D < 2J; convert the coordinates (x_1, y_1, z_1) of the first depth map A1 into the coordinates (x_2, y_2, z_2) of the second depth map A2 and store them in the third depth map A3; after the first depth map A1 and the second depth map A2 are stitched, if the gray level G(x, y) > G_1, then G(x, y) = G_1; in the overlapping region of the first depth map A1 and the second depth map A2, the gray level takes the mean of the gray levels at the corresponding coordinates in the first depth map A1 and the second depth map A2;
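The following sketch (not part of the patent text) illustrates step 5: building the homogeneous R and T matrices of step (5.1) and accumulating two transformed point sets into one stitched image as in step (5.2). The row-vector convention for applying R and T, the pixel binning, and the gray-level cap value are assumptions made for the sketch.

```python
import numpy as np

def make_RT(theta_rad, dx, dy, dz):
    """Homogeneous rotation about z and translation, shaped as the R and T matrices of step (5.1)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    R = np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
    T = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [dx, dy, dz, 1]], dtype=float)
    return R, T

def transform_points(points, R, T):
    """Apply rotation then translation to N x 3 points (row-vector homogeneous convention assumed)."""
    hom = np.hstack([points, np.ones((len(points), 1))])
    return (hom @ R @ T)[:, :3]

def stitch(points1, gray1, points2, gray2, shape=(600, 1200), g1_cap=200):
    """Map both point sets into one image A3, cap gray levels at G1 and average the overlap."""
    A3 = np.zeros(shape)
    count = np.zeros(shape)
    for pts, gray in ((points1, gray1), (points2, gray2)):
        rows = np.clip(pts[:, 1].astype(int), 0, shape[0] - 1)
        cols = np.clip(pts[:, 0].astype(int), 0, shape[1] - 1)
        np.add.at(A3, (rows, cols), np.minimum(gray, g1_cap))
        np.add.at(count, (rows, cols), 1)
    return np.where(count > 0, A3 / np.maximum(count, 1), 0.0)
```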
Step 6: perform region segmentation on the third depth map A3, find the pixel coordinates corresponding to the local gray level minimum in each region, and mark the pixel coordinates found;
Step (6.1): with an A*A pixel window, 20 < A < 70, divide the third depth map A3 into (C/A)*(D/A) regions, where C/A and D/A are positive integers. Use a nine-grid to traverse all regions of the third depth map A3; Fig. 4 is a schematic diagram of the nine-grid, each window of the nine-grid corresponding to one region, with the middle cell as the center window. Find the nine-grid center window regions satisfying the following two conditions: (1) the center window region contains pixels whose gray level is less than G_2 and greater than G_3; (2) the gray mean of the center window region is less than the gray means of the other windows of the nine-grid. Mark the regions found as N_t, t = 1, 2, 3, ..., T with T < (C/A)*(D/A), and at the same time record the minimum gray level Min_t of each region;
Step (6.2): extend the four sides of region N_t outward to double the region, find all pixel coordinates in the extended region whose gray level satisfies G_t(x, y) - Min_t < λ (4 < λ < 15), and mark them as K_t; in total T markers are generated.
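A small sketch (not part of the patent text) of the nine-grid search of step 6 follows. The window size, the gray thresholds G_2 and G_3, λ, and the choice to extend each side of the marked window by half a window (so the region is doubled in each dimension) are assumptions for illustration.

```python
import numpy as np

def find_head_candidate_regions(A3, A=40, G2=166, G3=56, lam=6):
    """Locate centre windows whose mean is below all eight neighbours and that contain
    gray levels between G3 and G2, then mark pixels within lam of the window minimum."""
    C, D = A3.shape
    rows, cols = C // A, D // A
    block_mean = A3[:rows * A, :cols * A].reshape(rows, A, cols, A).mean(axis=(1, 3))
    markers = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            block = A3[r * A:(r + 1) * A, c * A:(c + 1) * A]
            has_head_levels = np.any((block > G3) & (block < G2))
            neighbours = block_mean[r - 1:r + 2, c - 1:c + 2].copy()
            neighbours[1, 1] = np.inf                      # exclude the centre window itself
            if has_head_levels and block_mean[r, c] < neighbours.min():
                r0, r1 = max(r * A - A // 2, 0), min((r + 1) * A + A // 2, C)
                c0, c1 = max(c * A - A // 2, 0), min((c + 1) * A + A // 2, D)
                region = A3[r0:r1, c0:c1]
                mask = (region - block.min()) < lam        # G_t(x, y) - Min_t < lambda
                markers.append(((r0, c0), mask))
    return markers

marks = find_head_candidate_regions(np.random.randint(0, 207, (600, 1200)).astype(float))
print(len(marks))
```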
Step 7: according to the marks K_t of the pixel coordinates after region segmentation, perform head recognition; the concrete steps are as follows:
Step (7.1): create T binary maps with initial value 0, numbered 1 to T, each binary map having pixel resolution C*D; extract all pixel coordinates corresponding to the marker K_t and set the gray value of the corresponding pixel coordinates in the t-th binary map to 255;
Step (7.2): determine the area Area_t of the connected domain in each binary map and the gray level mean Ḡ_t of the corresponding connected domain in the third depth map A3, and find the connected domains with the following characteristics: (1) the area Area_t is greater than Area, with Area ≥ 500; (2) the connected domain in the binary map is approximately elliptical; (3) the head width w′ and head height h′ obtained from the following scaling formulas are both less than 25 and greater than 15, the formulas being:

$$h' = \alpha (255.0 - \bar{G}_t) \, h$$
$$w' = \alpha (255.0 - \bar{G}_t) \, w$$

where h′ represents the actual height of the human head, w′ represents the actual width of the human head, h represents the height of the connected domain in the binary map, w represents the width of the connected domain in the binary map, and α is a proportionality factor with 0 < α < 1.
Record the starting coordinates, width, height and regional gray mean of all connected domains found; this is the head information;
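A sketch (not part of the patent text) of the connected-domain test of step 7, using scipy.ndimage.label for the labelling; the ellipticity check of condition (2) is omitted, and the parameter defaults mirror the ranges given above (α, the minimum area, and the 15–25 size band).

```python
import numpy as np
from scipy import ndimage

def identify_heads(A3, markers, alpha=0.007, min_area=500, size_lo=15, size_hi=25):
    """Keep connected domains whose area exceeds min_area and whose rescaled
    width w' and height h' both fall inside (size_lo, size_hi)."""
    heads = []
    for (r0, c0), mask in markers:
        binary = np.zeros(A3.shape, dtype=np.uint8)
        binary[r0:r0 + mask.shape[0], c0:c0 + mask.shape[1]][mask] = 255
        labels, n = ndimage.label(binary > 0)
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            if ys.size < min_area:
                continue
            h, w = ys.ptp() + 1, xs.ptp() + 1
            g_mean = A3[ys, xs].mean()
            h_real = alpha * (255.0 - g_mean) * h          # h' = alpha * (255 - mean gray) * h
            w_real = alpha * (255.0 - g_mean) * w
            if size_lo < h_real < size_hi and size_lo < w_real < size_hi:
                heads.append({"x": int(xs.min()), "y": int(ys.min()),
                              "w": int(w), "h": int(h), "gray_mean": float(g_mean)})
    return heads
```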
Step 8: track the recognized head information;
Step (8.1): within the field of view, take the center point of the head information as a plot; the gate is rectangular and its size is set to Q pixels, 10 < Q < 100; the spatial search region is centered on the predicted target plot value and determined by the plot (x, y) and the gate (Δx, Δy); the search region is as follows:

$$|x - \hat{x}| \le \Delta x$$
$$|y - \hat{y}| \le \Delta y$$
Step (8.2): first, track initiation: the head target plot is recorded for the first time in frame M, and the target velocity is estimated from the coordinates obtained in frame M+1; if the estimated velocity lies within the search region, a tentative track is generated. Next, the target position is predicted from the coordinates obtained in frame M+2 and a correlation region is determined centered on the predicted position; any plot falling within the correlation region extends the tentative track, the velocity estimate is updated, the position in the next frame is predicted from the velocity value and a correlation region is established, and any plot falling within the correlation region generates a new track. Finally, a quadratic curve is fitted to every generated track; if the error δ between the points on the track and the fitted curve satisfies δ < 0.1, the track is confirmed; otherwise the track is deleted;
Step (8.3): after the target motion track is determined, two judgment lines are drawn at rows l_1 and l_2 of the third depth map A3, where 10 < l_1 < l_2 < 470 pixels; if the row coordinate x of the current target plot is less than l_1 but the track contains a plot whose row coordinate is greater than l_2, the number of entries is incremented by 1; conversely, if the row coordinate x of the current target plot is greater than l_2 but the track contains a plot whose row coordinate is less than l_1, the number of exits is incremented by 1 and the track is deleted at the same time;
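To illustrate the gate association of step (8.1) and the two-line counting of step (8.3) (not part of the patent text), a simplified constant-velocity sketch follows; the full track initiation and quadratic-curve confirmation of step (8.2) are omitted, and the gate size and line positions are example values.

```python
import numpy as np

def associate(tracks, detections, gate=(40, 40)):
    """A detection extends a track when it falls inside the rectangular gate
    around the track's constant-velocity prediction; unmatched plots start new tracks."""
    dx, dy = gate
    for x, y in detections:
        matched = False
        for tr in tracks:
            pred = tr["points"][-1] + tr["velocity"]
            if abs(x - pred[0]) <= dx and abs(y - pred[1]) <= dy:
                tr["velocity"] = np.array([x, y], dtype=float) - tr["points"][-1]
                tr["points"].append(np.array([x, y], dtype=float))
                matched = True
                break
        if not matched:
            tracks.append({"points": [np.array([x, y], dtype=float)],
                           "velocity": np.zeros(2)})
    return tracks

def count_crossing(track, l1=300, l2=400):
    """Entry/exit decision against the judgment lines at rows l1 and l2 (row = first coordinate)."""
    rows = [p[0] for p in track["points"]]
    if rows[-1] < l1 and max(rows) > l2:
        return "in"
    if rows[-1] > l2 and min(rows) < l1:
        return "out"
    return None

tracks = associate([], [(450, 100)], gate=(100, 100))
tracks = associate(tracks, [(360, 102)], gate=(100, 100))
tracks = associate(tracks, [(270, 104)], gate=(100, 100))
print(count_crossing(tracks[0]))  # "in": the track moved from beyond l2 to within l1
```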
Step 9: process each collected frame of depth map in turn according to the method of steps 4 to 8, and every minute output the entry/exit statistics to the display 9 via the video cable 12 in the form of a file.
Embodiment 1
The specific parameters of the passenger flow statistics device based on sensor depth images of the present invention are as follows:
(1) The first depth sensor 2 and the second depth sensor 3 both adopt the ASUS Xtion PRO, the Xtion sensor produced by ASUSTeK Computer Inc.; the longitudinal field angle μ of the first depth sensor 2 and the second depth sensor 3 is 45°, the transverse field angle ν is 58°, and the pixel resolution I*J of the output two-dimensional depth map is 480*640, so the center coordinate I_m of the ordinate is 240 and the center coordinate J_m of the abscissa is 320;
(2) S = 6 points are taken in the first depth map A1 and the second depth map A2, respectively, and the rotation parameter R and displacement parameter T between the two depth maps A1 and A2 obtained after projection into the spatial physical coordinate system are determined; a third depth map A3 with pixel resolution C*D = 600*1200 and initial value 0 is created, and the stitched first depth map A1 and second depth map A2 are stored in the 600*1200 third depth map A3. In Fig. 5, (a) is the first depth map A1 and (b) is the second depth map A2; Fig. 6 is the third depth map A3 formed after stitching the first depth map and the second depth map; the stitched third depth map A3 is seamless and enlarges and extends the field of view;
(3) With an A*A = 40*40 pixel window, the 600*1200 third depth map A3 is divided into (C/A)*(D/A) = (600/40)*(1200/40) = 15*30 regions;
(4) The four sides of region N_t are extended outward to double the region, all pixel coordinates in the extended region whose gray level satisfies G_t(x, y) - Min_t < λ, with λ = 6, are found and marked as K_t, and in total T markers are generated;
(5) After the target motion track is determined, two judgment lines are drawn at row l_1 = 300 and row l_2 = 400 of the third depth map A3. The scaling formulas for head width w′ and head height h′ are:

$$h' = \alpha (255.0 - \bar{G}_t) \, h$$
$$w' = \alpha (255.0 - \bar{G}_t) \, w$$

where α is the proportionality factor and α = 0.007;
Fig. 7 shows the head tracking and recognition result for the third depth map A3: the target positions are marked with circles, the target motion tracks are established, and the two judgment lines are drawn in the depth map; the entry/exit statistics are displayed at the same time, which helps to monitor the passenger flow in real time through the picture;
(6) The statistical results are written by the system into a data text file and compared with the actual results, as shown in Table 1. It can be seen from the table that, compared with the actual numbers of people entering and leaving, the mean accuracy of the statistics of the present invention is above 96%.
Table 1
Time Entry/exit (statistics) Entry/exit (actual result) Precision
9:00-9:15 37/17 35/17 0.9714
11:30-12:00 110/93 117/92 0.9647
14:00-14:15 79/64 81/60 0.9543
17:00-17:15 80/39 84/41 0.9517
Mean value 0.9605
In summary, with the passenger flow statistics device and method based on sensor depth images of the present invention, a single sensor can be selected for passenger flow statistics, or multiple sensors can be used to enlarge and extend the field of view and perform multi-view passenger flow statistics; the video acquisition of the dual sensor is synchronous, which solves the sensor synchronization problem in stereo vision; the depth map output by a single sensor contains three-dimensional human body information, which solves the problems of crowding, occlusion and shadow in passenger flow statistics based on video streams; the system has strong stability and low cost, the frame rate is 30 frames/s, the accuracy is above 96%, and the invention has broad application prospects.

Claims (7)

1. A passenger flow statistics device based on sensor depth images, characterized by comprising a sensor base (1), a first depth sensor (2), a second depth sensor (3), USB data cables (7), a data processing and control unit (8), a display (9), a mode selection switch (10) and a power supply (11), wherein the first depth sensor (2) is identical to the second depth sensor (3) and the two are mounted in parallel on the same sensor base (1); the lenses of the first depth sensor (2) and the second depth sensor (3) both look down at the ground and their optical axes are perpendicular to the ground plane; the signal outputs of the first depth sensor (2) and the second depth sensor (3) are both connected to the data processing and control unit (8) by the USB data cables (7); the signal output of the data processing and control unit (8) is connected to the display (9) by a video cable (12); the mode selection switch (10) is connected to the control signal input of the data processing and control unit (8); and the output of the power supply (11) is connected respectively to the power inputs of the first depth sensor (2), the second depth sensor (3), the data processing and control unit (8) and the display (9);
After the power supply (11) powers the device on, the depth images collected by the first depth sensor (2) and the second depth sensor (3) are both input to the data processing and control unit (8) via the USB data cables (7); the data processing and control unit (8) performs digital image processing on the collected depth images and outputs the processing result to the display (9) via the video cable (12); the horizontal spacing WIDTH between said first depth sensor (2) and second depth sensor (3) satisfies the following condition:

$$\frac{H_0 - H_4}{\tfrac{1}{2}\,\mathrm{WIDTH}} = \frac{1}{\tan\frac{\nu}{2}}$$

where H_0 represents the distance from the first depth sensor (2) and the second depth sensor (3) to the ground, 3000 mm < H_0 < 3700 mm, ν is the transverse field angle of the first depth sensor (2) and the second depth sensor (3), and H_4 (2100 mm < H_4 < H_0) represents the distance from the tangent point of the two fields of view to the ground.
2. The passenger flow statistics device based on sensor depth images according to claim 1, characterized in that said first depth sensor (2) and second depth sensor (3) both adopt the ASUS Xtion PRO.
3. A passenger flow statistics method based on sensor depth images, characterized by the following steps:
Step 1: after the power supply (11) powers the device on, the data processing and control unit (8) obtains, via the USB data cables (7), the first depth map collected by the first depth sensor (2) and the second depth map collected by the second depth sensor (3);
Step 2: according to the first frame of the first depth map or the first frame of the second depth map, calibrate the relation between the distance H from a target to the ground and the corresponding gray level G in the depth map;
Step 3: determine the operating mode with the mode selection switch (10): if the single depth sensor mode is selected, choose the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual depth sensor mode is selected, go to the next step;
Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system;
Step 5: stitch the first depth map A1 and the second depth map A2, transformed into the three-dimensional physical coordinate system, to obtain the third depth map A3;
Step 6: perform region segmentation on the third depth map A3, find the pixel coordinates corresponding to the local gray level minimum in each region, and mark the pixel coordinates found;
Step 7: according to the marks of the pixel coordinates after region segmentation, perform head recognition and obtain the head information; the concrete steps of said head recognition are as follows:
Step (7.1): create T binary maps with initial value 0, numbered 1 to T, each binary map having pixel resolution C*D; extract all pixel coordinates corresponding to the marker K_t and set the gray value of the corresponding pixel coordinates in the t-th binary map to 255;
Step (7.2): determine the area Area_t of the connected domain in each binary map and the gray level mean Ḡ_t of the corresponding connected domain in the third depth map A3, and find the connected domains with the following characteristics: (1) the area Area_t is greater than Area, with Area ≥ 500; (2) the connected domain in the binary map is approximately elliptical; (3) the head width w′ and head height h′ obtained from the following scaling formulas are both less than 25 and greater than 15, the formulas being:

$$h' = \alpha (255.0 - \bar{G}_t) \, h, \qquad w' = \alpha (255.0 - \bar{G}_t) \, w$$

where h′ represents the actual height of the human head, w′ represents the actual width of the human head, h represents the height of the connected domain in the binary map, w represents the width of the connected domain in the binary map, and α is a proportionality factor with 0 < α < 1;
Record the starting coordinates, width, height and regional gray mean of all connected domains found; this is the head information;
Step 8: track and count the recognized head information; the detailed process of said head tracking and counting is as follows:
Step (8.1): within the field of view, take the center point of the head information as a plot; the gate is rectangular and its size is set to Q pixels, 10 < Q < 100; the spatial search region is centered on the predicted target plot value and determined by the plot (x, y) and the gate (Δx, Δy); the search region is as follows:

$$|x - \hat{x}| \le \Delta x, \qquad |y - \hat{y}| \le \Delta y$$

Step (8.2): first, track initiation: the head target plot is recorded for the first time in frame M, and the target velocity is estimated from the coordinates obtained in frame M+1; if the estimated velocity lies within the search region, a tentative track is generated; next, the target position is predicted from the coordinates obtained in frame M+2 and a correlation region is determined centered on the predicted position; any plot falling within the correlation region extends the tentative track, the velocity estimate is updated, the position in the next frame is predicted from the velocity value and a correlation region is established, and any plot falling within the correlation region generates a new track; finally, a quadratic curve is fitted to every generated track; if the error δ between the points on the track and the fitted curve satisfies δ < 0.1, the track is confirmed; otherwise the track is deleted;
Step (8.3): after the target motion track is determined, two judgment lines are drawn at rows l_1 and l_2 of the third depth map A3, where 10 < l_1 < l_2 < 470 pixels; if the row coordinate x of the current target plot is less than l_1 but the track contains a plot whose row coordinate is greater than l_2, the number of entries is incremented by 1; conversely, if the row coordinate x of the current target plot is greater than l_2 but the track contains a plot whose row coordinate is less than l_1, the number of exits is incremented by 1 and the track is deleted at the same time;
Step 9: process each collected frame of depth map in turn according to the method of steps 4 to 8, and every minute output the entry/exit statistics to the display (9) via the video cable (12) in the form of a file.
4. The passenger flow statistics method based on sensor depth images according to claim 3, characterized in that the calibration in step 2 of the relation between the distance H from a target to the ground and the corresponding gray level G in the depth map is as follows:
A target at height H_1 above the ground corresponds to gray level G_1, a target at height H_2 corresponds to gray level G_2, and a target at height H_3 corresponds to gray level G_3, where 100 mm < H_1 < 200 mm, 500 mm < H_2 < 800 mm, 2300 mm < H_3 < H_0; H_0 represents the distance from the sensor to the ground and 3000 mm < H_0 < 3700 mm; G_0 represents the gray level corresponding to the ground in the depth map when nobody is within the field of view; then:

$$H_0 = \beta G_0, \quad H_0 - H_1 = \beta G_1, \quad H_0 - H_2 = \beta G_2, \quad H_0 - H_3 = \beta G_3$$

In the formulas, the distance calibration is in millimeters, and β represents the proportionality factor between distance and gray level, with 10 < β < 20.
5. The passenger flow statistics method based on sensor depth images according to claim 3, characterized in that the projection of both the first depth map and the second depth map into the three-dimensional physical coordinate system in step 4 proceeds as follows:
The first depth map is projected from the two-dimensional plane coordinates (i_1, j_1) to the three-dimensional physical coordinates (x_1, y_1, z_1) to obtain the new first depth map A1, and the second depth map is projected from the two-dimensional plane coordinates (i_2, j_2) to the three-dimensional physical coordinates (x_2, y_2, z_2) to obtain the new second depth map A2; let the pixel resolution of the two-dimensional depth map be I*J, the center coordinate of the ordinate be I_m, the center coordinate of the abscissa be J_m, μ be the longitudinal field angle of the sensor and ν be the transverse field angle of the sensor; then the projection relation is as follows:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} k_i & 0 & 0 \\ 0 & k_j & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} i \\ j \\ G(i,j) \end{bmatrix}$$

where k_i = sin θ_i · G(i, j), θ_i represents the field-of-view angle corresponding to the ordinate i relative to I_m in the two-dimensional plane coordinates (i, j); k_j = sin θ_j · G(i, j), θ_j represents the field-of-view angle corresponding to the abscissa j relative to the center coordinate J_m in the two-dimensional plane coordinates (i, j); and G(i, j) is the gray level corresponding to the two-dimensional plane coordinates (i, j).
6. The passenger flow statistics method based on sensor depth images according to claim 3, characterized in that the stitching in step 5 of the first depth map A1 and the second depth map A2 transformed into the three-dimensional physical coordinate system is specifically:
Step (5.1): use the method of multi-view measurement data to determine the transformation relation R|T between the coordinates (x_2, y_2, z_2) of the second depth map A2 and the coordinates (x_1, y_1, z_1) of the first depth map A1:

$$(R|T)\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}$$

Take the coordinates P_L^a of S points in the first depth map A1 and the coordinates P_R^a of the corresponding S points in the second depth map A2, with a = 0, 1, 2, ..., S-1 and S > 2, and determine the rotation and translation of the second depth map A2 relative to the first depth map A1:

$$P_L^a = R \times T \times P_R^a$$

R represents the rotation of the second depth map A2 relative to the first depth map A1, and T represents the displacement of the second depth map A2 relative to the first depth map A1; then:

$$R = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \Delta x_0 & \Delta y_0 & \Delta z_0 & 1 \end{bmatrix}$$

In the above formulas, θ represents the rotation angle of the second depth map A2 relative to the first depth map A1, and (Δx_0, Δy_0, Δz_0) represents the displacement difference of the coordinates (x_2, y_2, z_2) of the second depth map A2 relative to the coordinates (x_1, y_1, z_1) of the first depth map A1;
Step (5.2): create a third depth map A3 with pixel resolution C*D and initial value 0, where I < C < 2I and J < D < 2J; convert the coordinates (x_1, y_1, z_1) of the first depth map A1 into the coordinates (x_2, y_2, z_2) of the second depth map A2 and store them in the third depth map A3; after the first depth map A1 and the second depth map A2 are stitched, if the gray level G(x, y) > G_1, then G(x, y) = G_1; in the overlapping region of the first depth map A1 and the second depth map A2, the gray level takes the mean of the gray levels at the corresponding coordinates in the first depth map A1 and the second depth map A2.
7. The passenger flow statistics method based on sensor depth images according to claim 3, characterized in that the pixel coordinate marking process in step 6 is as follows:
Step (6.1): with an A*A pixel window, 20 < A < 70, divide the third depth map A3 into (C/A)*(D/A) regions, where C/A and D/A are positive integers; use a nine-grid to traverse all regions of the third depth map A3 and find the nine-grid center window regions satisfying the following two conditions: (1) the center window region contains pixels whose gray level is less than G_2 and greater than G_3; (2) the gray mean of the center window region is less than the gray means of the other windows of the nine-grid; mark the regions found as N_t, t = 1, 2, 3, ..., T with T < (C/A)*(D/A), and at the same time record the minimum gray level Min_t of each region;
Step (6.2): extend the four sides of region N_t outward to double the region, find all pixel coordinates in the extended region whose gray level satisfies G_t(x, y) - Min_t < λ, with 4 < λ < 15, and mark them as K_t; in total T markers are generated.
CN201310279375.1A 2013-07-04 2013-07-04 Passenger flow statistics device and method based on sensor depth images Expired - Fee Related CN103345792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310279375.1A CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow statistics device and method based on sensor depth images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310279375.1A CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow statistics device and method based on sensor depth images

Publications (2)

Publication Number Publication Date
CN103345792A CN103345792A (en) 2013-10-09
CN103345792B true CN103345792B (en) 2016-03-02

Family

ID=49280585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310279375.1A Expired - Fee Related CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow statistics device and method based on sensor depth images

Country Status (1)

Country Link
CN (1) CN103345792B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123776B (en) * 2014-07-31 2017-03-01 上海汇纳信息科技股份有限公司 A kind of object statistical method based on image and system
CN104268506B (en) * 2014-09-15 2017-12-15 郑州天迈科技股份有限公司 Passenger flow counting detection method based on depth image
CN104809787B (en) * 2015-04-23 2017-11-17 中电科安(北京)科技股份有限公司 A kind of intelligent passenger volume statistic device based on camera
CN106548451A (en) * 2016-10-14 2017-03-29 青岛海信网络科技股份有限公司 A kind of car passenger flow crowding computational methods and device
CN113536985B (en) * 2021-06-29 2024-05-31 中国铁道科学研究院集团有限公司电子计算技术研究所 Passenger flow distribution statistical method and device based on depth-of-field attention network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731456A (en) * 2005-08-04 2006-02-08 浙江大学 Bus passenger traffic statistical method based on stereoscopic vision and system therefor
CN1950722A (en) * 2004-07-30 2007-04-18 松下电工株式会社 Individual detector and accompanying detection device
CN101587605A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Passenger-flow data management system
CN201845367U (en) * 2010-08-04 2011-05-25 杨克虎 Passenger flow volume statistic device based on distance measurement sensor
CN102890791A (en) * 2012-08-31 2013-01-23 浙江捷尚视觉科技有限公司 Depth information clustering-based complex scene people counting method
CN203503023U (en) * 2013-07-04 2014-03-26 南京理工大学 Passenger flow statistics device based on depth-of-field images of sensor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5065744B2 (en) * 2007-04-20 2012-11-07 パナソニック株式会社 Individual detector

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1950722A (en) * 2004-07-30 2007-04-18 松下电工株式会社 Individual detector and accompanying detection device
CN1731456A (en) * 2005-08-04 2006-02-08 浙江大学 Bus passenger traffic statistical method based on stereoscopic vision and system therefor
CN101587605A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Passenger-flow data management system
CN201845367U (en) * 2010-08-04 2011-05-25 杨克虎 Passenger flow volume statistic device based on distance measurement sensor
CN102890791A (en) * 2012-08-31 2013-01-23 浙江捷尚视觉科技有限公司 Depth information clustering-based complex scene people counting method
CN203503023U (en) * 2013-07-04 2014-03-26 南京理工大学 Passenger flow statistics device based on depth-of-field images of sensor

Also Published As

Publication number Publication date
CN103345792A (en) 2013-10-09

Similar Documents

Publication Publication Date Title
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
Wu et al. Rapid localization and extraction of street light poles in mobile LiDAR point clouds: A supervoxel-based approach
Atev et al. A vision-based approach to collision prediction at traffic intersections
Serna et al. Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning
US12002225B2 (en) System and method for transforming video data into directional object count
Dai et al. Residential building facade segmentation in the urban environment
Wang et al. Review on vehicle detection based on video for traffic surveillance
Li et al. Video‐based traffic data collection system for multiple vehicle types
Liu et al. A survey of vision-based vehicle detection and tracking techniques in ITS
CN103345792B (en) Passenger flow statistics device and method based on sensor depth images
Luvizon et al. Vehicle speed estimation by license plate detection and tracking
Wang et al. Window detection from mobile LiDAR data
Jodoin et al. Tracking all road users at multimodal urban traffic intersections
Yang et al. Semantics-guided reconstruction of indoor navigation elements from 3D colorized points
He et al. Urban rail transit obstacle detection based on Improved R-CNN
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
CN105761507A (en) Vehicle counting method based on three-dimensional trajectory clustering
Zhang et al. Longitudinal-scanline-based arterial traffic video analytics with coordinate transformation assisted by 3D infrastructure data
CN203503023U (en) Passenger flow statistics device based on depth-of-field images of sensor
Azevedo et al. Vehicle tracking using the k-shortest paths algorithm and dual graphs
Imad et al. Navigation system for autonomous vehicle: A survey
CN114820931B (en) Virtual reality-based CIM (common information model) visual real-time imaging method for smart city
Song et al. An accurate vehicle counting approach based on block background modeling and updating
JP2019174910A (en) Information acquisition device and information aggregation system and information aggregation device
Naresh et al. Real Time Vehicle Tracking using YOLO Algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160302

Termination date: 20180704