WO2004011314A1 - Security monitor device at station platform (駅ホームにおける安全監視装置) - Google Patents
Security monitor device at station platform
- Publication number
- WO2004011314A1 (PCT/JP2003/009378)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- information
- platform
- distance information
- image
- Prior art date
Classifications
- B — PERFORMING OPERATIONS; TRANSPORTING
- B61 — RAILWAYS
- B61L — GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
- B61L23/00 — Control, warning or like safety means along the route or between vehicles or trains
- B61L23/04 — Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
- B61L23/041 — Obstacle detection
Definitions
- the present invention relates to a safety monitoring device at a station platform, and more particularly to a safety monitoring device at the trackside platform end based on distance information and image (texture) information.
- a camera system for monitoring the platform end of a station as shown in FIG. 2 is known.
- the cameras are installed at an angle close to horizontal so that a long stretch, for example 40 m, can be covered in the horizontal direction with a few cameras, and the several camera images are combined into one screen image for a human observer to watch.
- because the image area being viewed is long and deep, when many passengers enter and exit, passengers are hidden behind other passengers and not all of them can be seen.
- moreover, since the cameras are installed at a nearly horizontal angle, the view is susceptible to the effects of sunrise, sunset, and other light reflections.
- a fall detection mat as shown in Fig. 3 detects a fall of a person by detecting the pressure when the person falls onto the track.
- another known system calculates the difference between an image without obstacles and the current image, and detects the presence of an obstacle when a difference is output.
- a configuration for detecting a motion vector of an object for the same purpose is disclosed in Japanese Patent Application Laid-Open No. Hei 10-31147.
- the detection of these obstacles often suffers malfunctions due to changes in light and shadow, making such systems inadequate for surveillance.
Disclosure of the Invention
- An object of the present invention is to provide a safety monitoring device for a station platform that can reliably detect a person falling onto the track at the trackside platform end, and that can identify a plurality of persons and acquire all of their action logs.
- the platform edge is photographed with a plurality of stereo cameras, the persons at the platform edge are identified from distance information and texture information, and their positions on the platform edge are specified. At the same time, the system reliably detects when a person falls onto the track, automatically sends out a stop signal and the like, and simultaneously transmits the corresponding camera image. It also records all actions of all persons moving at the platform edge.
- a means is provided for registering in advance, from the position, movement, and the like of persons at the platform edge, the situations that call for attention, the corresponding announcements, and the situations in which video is to be transferred.
- the announcement corresponding to the situation is delivered to passengers on a per-camera basis using synthesized voice registered in advance.
- the safety monitoring device for a station platform of the present invention captures images of the platform end with a plurality of stereo cameras at the trackside platform end of the station, and converts, for each stereo camera, the captured field of view into the coordinate system of the platform.
- the above configuration is characterized in that a means for acquiring and storing a log of the flow lines of persons in the platform space is further provided.
- a recognition target is extracted based on the image information from each of the stereo cameras, and a means is provided for performing recognition using higher-order local autocorrelation features.
- the means for recognizing the target from both the distance information and the image information discriminates a person from other objects based on the center-of-gravity information on a plurality of masks of different heights.
- the means for confirming safety acquires the distance information and the image information at the platform end, detects image information above the track range, and, from the distance information of that image information, identifies a fall of a person or a person protruding beyond the platform and issues an alarm.
- time-series distance information before and after the time at which the higher-order local autocorrelation feature exists at a predetermined location within a predetermined range is used to identify the same person.
- the predetermined range is divided into a plurality of blocks, and the search for the next distance information in the time series is performed by calculating autocorrelation features with the plurality of blocks as a unit.
- FIG. 1 is a conceptual diagram of the safety monitoring device of the present invention.
- FIG. 2 is a diagram showing a conventional arrangement of surveillance cameras.
- FIG. 3 is an explanatory view of a conventional fall detection mat.
- FIG. 4 is an overall flowchart of the present invention.
- FIG. 5 is an explanatory diagram of the person counting algorithm of the present invention.
- FIG. 6 is a flowchart of the person identification / counting process of the present invention.
- FIG. 7 is a diagram showing an example of a binary image sliced from a distance image.
- FIG. 8 is a diagram showing the labeling result of FIG. 7.
- FIG. 9 is an explanatory diagram of the center of gravity calculation.
- FIG. 10 is a flowchart of the flow-line tracking process of the present invention.
- FIG. 11 is an explanatory diagram of a high-order local autocorrelation feature that is invariant to translation.
- FIG. 12 is a diagram showing an example of an approximated vector.
- FIG. 13 is a diagram showing an example of the same face image whose cutout is shifted.
- FIG. 14 is an explanatory diagram of the translation-invariant and rotation-invariant higher-order local autocorrelation features used in the present invention.
- FIG. 15 is a flowchart of a search range dynamic determination process according to the present invention.
- FIG. 16 is a diagram showing a congestion status map of the present invention.
- FIG. 17 is a flowchart of a search process using texture according to the present invention.
- FIG. 18 is an explanatory diagram of the dynamic search area determination algorithm of the present invention.
- FIG. 19 is a diagram showing changes in the dynamic search area according to the degree of congestion according to the present invention.
- FIG. 20 is an explanatory diagram of a high-speed search algorithm using higher-order local autocorrelation features used in the present invention.
- FIG. 21 is a diagram showing an overall flow line management algorithm of the present invention.
- FIG. 22 is a flowchart of the area monitoring / warning process of the present invention.
BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a diagram schematically illustrating the system configuration of an embodiment of the present invention.
- FIG. 4 is a diagram illustrating the overall flowchart of the information integrated recognition device shown in FIG. 1.
- a plurality of stereo cameras 1-1 to 1-n photograph the platform edge so that there is no blind spot, and passengers 2 moving at the platform edge are monitored.
- in each stereo camera, the image pickup devices of two or more cameras are fixed in parallel, and the imaging outputs of the stereo cameras 1-1 to 1-n are fed to the image processing device in each camera.
- the stereo camera itself is already known; for example, a digital camera such as the Digiclops of Point Grey Research is used.
- according to the present invention, it is possible to reliably detect a person falling onto the track at the trackside platform end, and to identify a plurality of persons and acquire all of their action logs.
- the action logs are acquired to manage traffic flow, improve the premises, and guide passengers more safely.
- the positions on the platform edge are specified while the persons at the platform edge are identified by the distance information and the image (texture) information (hereinafter simply referred to as texture).
- the system reliably detects when a person falls onto the track, automatically sends out a stop signal and the like, and simultaneously transmits the corresponding camera image; it also records all actions of all persons moving at the platform edge.
- the process of identifying and counting the persons present is performed as the person identification / counting process 21, and the process of connecting the points of presence of these persons in time series to generate flow lines is performed as the flow-line tracking process 22.
- FIG. 5 shows a conceptual diagram of the person counting algorithm used in the present invention.
- Fig. 6 shows the flow of the person counting algorithm.
- [1] since the camera that captures the images is a stereo camera, distance information can also be obtained, so binary images can be created from the distance information. That is, if the three masks in FIG. 5 are masks 5, 6, and 7 in order from the top, mask 5 detects a height of, for example, 150 to 160 cm, mask 6 a height of 120 to 130 cm, and mask 7 a height of 80 to 90 cm from the distance information, and a binary image is created for each.
- the black parts (value 1) in the masks in Fig. 5 mean that something exists at that position, and the white parts (value 0) mean nothing does.
- regions 10, 11, 12 and 13, 14, 12 in these masks indicate the presence of persons.
- region 10 corresponds to a head, and image data 11 and 12 exist on the masks below it at the same x-y coordinates.
- likewise, region 13 corresponds to a head, and image data 14 and 12 exist on the masks below it at the same x-y coordinates.
- region 15 is, for example, luggage and is not recognized as a person; dogs and pigeons are also eliminated because they do not appear on multiple masks.
- regions 17 and 16 are recognized as a short child. Eventually it is recognized that there are three persons, including the child, on the masks in Fig. 5, and the following processing is performed.
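As a rough illustration of this height-slicing step, the following sketch (Python with NumPy; the array names and height bands are illustrative assumptions, not taken from the patent) builds one binary mask per height band from a per-pixel height map recovered from the stereo distance information:

```python
import numpy as np

def height_slices(height_map, bands):
    """Create one binary mask per height band from a per-pixel height map.

    height_map: 2-D array of estimated heights above the platform (cm),
                derived from the stereo distance information.
    bands:      (low, high) ranges, one per mask, e.g. masks 5, 6, 7 in Fig. 5.
    """
    return [((height_map >= lo) & (height_map < hi)).astype(np.uint8)
            for lo, hi in bands]

# Illustrative bands matching the description of masks 5, 6 and 7.
heights = np.random.uniform(0, 180, size=(240, 320))   # stand-in depth data
mask5, mask6, mask7 = height_slices(heights, [(150, 160), (120, 130), (80, 90)])
```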
- [2] Perform morphological processing on each mask according to the noise characteristics of each camera (32 in Fig. 6).
- morphological processing is a kind of image processing on binary images based on mathematical morphology; it is well known and does not directly relate to the present invention, so a detailed description is omitted.
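The patent leaves this step unspecified; a minimal sketch, assuming a simple opening-then-closing with SciPy (the 3 × 3 structuring element is an illustrative choice, to be tuned per camera as noted above), might look like:

```python
import numpy as np
from scipy import ndimage

mask = np.random.randint(0, 2, size=(240, 320)).astype(bool)  # a sliced mask

# Opening removes isolated noise pixels; closing fills small holes.
se = np.ones((3, 3), dtype=bool)
clean = ndimage.binary_closing(ndimage.binary_opening(mask, structure=se),
                               structure=se)
```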
- [3] Label the topmost mask 5 (33 in Fig. 6) and determine the center of gravity of each region (5 in Fig. 6). Similarly, determine the centers of gravity down to the lowest mask 7. At this time, a region that contains a center of gravity already determined at a higher stage is treated as already counted, and no center of gravity is calculated for it. In this example, two persons are found at level n (mask 5), one at level 2 (mask 6), and zero at level 1 (mask 7), for a total of three persons.
- the labeling and the center-of-gravity calculation are as follows. As shown in Fig. 5, multiple slices in the height direction are created from the distance information and converted into binary images. Each binary image is labeled (separated into regions) and the center of gravity of each region is calculated. Labeling is a common image-processing method for counting the number of connected clumps; the center of gravity is then calculated for each clump. The specific procedure for labeling and deriving the centers of gravity is described with reference to FIGS. 7 to 9.
- FIGS. 7 and 8 are explanatory diagrams of the labeling process. As shown in Fig. 7, a binary image is first created at each stage (level) sliced from the distance image, and connected components of the binary figure are labeled as single regions.
- the labeling method scans all pixels from the lower left to the upper right. When the scan encounters a 1-pixel, as shown in Fig. 8, the first label is attached to that pixel. Scanning continues, and if a subsequent 1-pixel is connected to the first label, the first label is attached to it as well. If a pixel is 1 but belongs to a different region from the previous one, a new label is attached. In Fig. 7 the binary image was only divided into 1 and 0 areas, but after labeling, as shown in Fig. 8, the background 0 area and each clump carry their own labels, so the individual clumps can be distinguished.
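The raster-scan labeling described above is standard connected-component labeling; a hedged sketch using scipy.ndimage (4-connectivity, matching the region-growing behaviour described, with a toy mask):

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:3] = 1   # first clump
mask[5:7, 4:7] = 1   # second clump

# Each connected clump of 1-pixels receives its own integer label;
# the background keeps label 0, as in Fig. 8.
labels, n_clumps = ndimage.label(mask)
print(n_clumps)      # -> 2
```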
- Fig. 9 is an explanatory diagram of the center-of-gravity calculation.
- the center of gravity is calculated for each region (clump) obtained by labeling. As shown in Fig. 9, the x and y coordinates of all pixels in the region are summed and divided by the number of pixels (the area).
- the resulting average coordinates are the barycentric coordinates of the clump.
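A sketch of the center-of-gravity calculation of Fig. 9, both by hand and with the equivalent scipy.ndimage helper (the toy mask is an assumption for illustration):

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:3] = 1
labels, _ = ndimage.label(mask)

# Centroid = (sum of coordinates) / (number of pixels, i.e. the area).
by_hand = np.argwhere(labels == 1).mean(axis=0)    # -> array([1.5, 1.5])
built_in = ndimage.center_of_mass(mask, labels, 1) # same result
```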
- FIG. 10 shows the flow of the flow-line tracking process.
- a person is recognized from the center-of-gravity information (distance information).
- with the center-of-gravity information alone, however, it is not possible to connect flow lines accurately, i.e., to determine whether one point and the next belong to the same person. (However, comparing the previous frame and the next frame, if there is only one person in either movable search range, both points can be tied into one flow line.)
- therefore, the identity of a person is determined using the higher-order local autocorrelation features (texture information) described later.
- [9] Determine the search area based on the "length of one side of the search range" and the "traveling direction". (If the "number of frames since appearance" is 1, determine it from the "length of one side of the search range" only.)
- the criteria for judging a person are: (A) the level difference from the "height level at the end of the flow line" is within one level.
- if the flow line is longer than a certain length and its end is not at the edge of the screen, it is complemented using texture.
- the search area is divided into small areas, and a local feature vector is derived from the texture of each area. The distance between each of these and the "translation-invariant and rotation-invariant local feature vector derived from the texture around the end of the flow line" is measured, and among the areas whose distance is below the criterion, the center of the area with the closest distance is connected to the flow line [11]. If no area has a distance below the criterion, no connection is made.
- the "length of one side of the search range" is determined, in principle, from the number of people in the surrounding areas on the congestion status map (92 to 94 in Fig. 16). In other words, discriminability decreases when the platform is congested, so the next search range is reduced accordingly.
- the congestion status map is basically built from the number of persons counted in each area.
- higher-order local autocorrelation features have the properties of translation invariance and additivity because they are local features, as described later. In addition, they are used here in a form that is also invariant to rotation. In other words, even if the same person changes walking direction (rotates when viewed from above), the higher-order local autocorrelation features do not change, and the person can be recognized as the same person.
- to exploit the additivity for high-speed calculation, the higher-order local autocorrelation features are calculated and retained for each block.
- target features are extracted from image (texture) information.
- the higher-order local autocorrelation functions used here are defined as follows. Let the target image in the screen be $f(r)$. For displacement directions $(a_1, a_2, \dots, a_N)$, the $N$th-order autocorrelation function is
$$x^N(a_1, \dots, a_N) = \int f(r)\, f(r + a_1) \cdots f(r + a_N)\, dr.$$
- in the present invention, the order $N$ of the higher-order autocorrelation features is limited to 2.
- the displacement directions are limited to a local 3 × 3 pixel area around the reference point $r$. Excluding features that are equivalent under translation, the total number of features for a binary image is 25 (left side of Fig. 11). Each feature is computed by taking, at every pixel, the product of the values of the pixels marked in the local pattern and summing over all pixels, yielding one feature amount for the image.
- this feature has the great advantage of being invariant to translation of the pattern.
- the method of extracting only the target area using the distance information from the stereo camera, used here as preprocessing, can reliably extract the target but has the disadvantage that the extracted area is unstable. By using the translation invariance of this feature for recognition, robustness to changes in the clipping was secured. In other words, the advantage of translation invariance is exploited to the maximum for variations of the target position within a small area.
- the center of each 3 × 3 mask indicates the reference point $r$. Pixels marked "1" enter the product, and unmarked pixels do not.
- since the order is limited to 2, the 25 patterns shown on the left side of the figure are created, but the ranges of the product sums differ greatly between the 0th- and 1st-order patterns and the higher ones.
- therefore, patterns that multiply the same point repeatedly are added for the 0th and 1st orders, giving a total of 35 patterns.
- these features are invariant to translation but not to rotation. Therefore, as shown in Fig. 14, patterns that are equivalent under rotation were combined by addition into single elements. As a result, a vector of 11 elements was used. Where four patterns were combined into one element, the value was divided by 4 for normalization.
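The grouping of rotation-equivalent patterns can be computed mechanically. A minimal sketch (the tap set shown is a small illustrative subset of the 35 patterns, not the full list from the patent):

```python
# Each HLAC pattern is a set of (dy, dx, power) taps inside the 3x3 window.
patterns = [
    frozenset({(0, 0, 1)}),             # 0th order
    frozenset({(0, 0, 1), (0, 1, 1)}),  # 1st order: centre + right neighbour
    frozenset({(0, 0, 1), (1, 0, 1)}),  # centre + lower neighbour
    frozenset({(0, 0, 1), (0, -1, 1)}), # centre + left neighbour
    frozenset({(0, 0, 1), (-1, 0, 1)}), # centre + upper neighbour
]

def rot90(p):
    """Rotate a pattern a quarter turn: (dy, dx) -> (dx, -dy)."""
    return frozenset((dx, -dy, k) for dy, dx, k in p)

def canonical(p):
    """Smallest representative over the four rotations of a pattern."""
    reps, q = [], p
    for _ in range(4):
        reps.append(tuple(sorted(q)))
        q = rot90(q)
    return min(reps)

groups = {}
for i, p in enumerate(patterns):
    groups.setdefault(canonical(p), []).append(i)

# The four neighbour patterns form one orbit: their raw features are summed
# into a single element and divided by 4, as described above.
print([len(g) for g in groups.values()])   # -> [1, 4]
```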
- the 3 × 3 mask is shifted over the target image one pixel at a time, scanning the entire image.
- at every pixel position, the product of the values of the marked pixels is computed, and these products are accumulated as the mask moves pixel by pixel; that is, a product sum is obtained.
- a mark of "2" means that the pixel value is multiplied twice (squared),
- and a mark of "3" means that it is multiplied three times (cubed).
- in this way, an image carrying (8 bits) × (x pixels) × (y pixels) of information is converted into a single one-dimensional vector.
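A sketch of this product-sum scan for one mask over a gray image (NumPy; the three masks listed at the end are illustrative members of the 35-pattern set, not the patent's enumeration):

```python
import numpy as np

def hlac(img, taps):
    """Product-sum of one HLAC mask over every 3x3 window of a gray image.

    taps: (dy, dx, power) entries; (0, 0, 2) squares the centre pixel and
    (0, 0, 3) cubes it, matching the "2" and "3" marks described above.
    """
    H, W = img.shape
    acc = np.ones((H - 2, W - 2))
    for dy, dx, k in taps:
        acc = acc * img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] ** k
    return float(acc.sum())

img = np.random.rand(120, 160)   # stand-in for a clipped gray image
# One product-sum per mask turns the whole image into a short vector.
vec = [hlac(img, t) for t in (((0, 0, 1),),             # 0th order
                              ((0, 0, 2),),             # same-point, squared
                              ((0, 0, 1), (0, 1, 1)))]  # 1st order
```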
- because these features are accumulated from local regions, they are invariant to translation and, with the grouping above, to rotation. Therefore, although the clipping from the stereo camera is unstable, the features of each dimension remain similar even if the clipped region shifts relative to the target.
- the images in FIG. 12 and the table in FIG. 13 give an example, showing the upper two digits of the 25-dimensional vector elements for a gray image. Although the cut-out of the face is shifted in each of the three images, the upper two digits of each vector shown in the table are nearly identical.
- ordinarily, displacement of the cutout based on the distance information would have a decisive effect on the recognition rate, but this feature is robust to such clipping inaccuracies. This is the greatest advantage of combining higher-order local autocorrelation features with clipping by a stereo camera.
- the pixel values are basically those of an 8-bit gray image, but when color images are used, the three channel values such as RGB (or YIQ) can be characterized individually, and accuracy can be improved further by using a feature vector three times as long as in the one-channel case.
- the dynamic determination of the search range will now be described with reference to FIGS. 15, 16, and 18 to 20.
- the area in which distance can be measured accurately is divided into multiple areas (51 in Fig. 15 and 81 in Fig. 16).
- the search range is divided into 24 blocks, and the higher-order local autocorrelation features are calculated and retained for each block.
- the area where the target person was in the previous frame is held in units of four blocks (73).
- using these four blocks as one unit, the higher-order local autocorrelation features are compared to search for the person's next location.
- the size of the four blocks is chosen so that one person fits inside; it is therefore unlikely that more than one person occupies the four blocks, and even when center-of-gravity information for several people is present, the correct one can be selected from the similarity of the features.
- the features at the 15 candidate points [1] to [15] within the search range of the current frame in FIG. 20 are calculated, and the point with the closest feature is determined to be the new area of the same person. As shown at 72 in Fig. 20, the features are calculated in advance for the 24 blocks (a, b, ..., x); this is a device for keeping the amount of calculation per block small.
- the flow line is first obtained from the distance information, and the higher-order local autocorrelation features are used when the person cannot be found from the distance information within the search range.
- the higher-order local autocorrelation features themselves are calculated once per frame for the 24 blocks of the search range and stored block by block.
- the feature value at each candidate location can then be obtained at high speed by adding the features of four blocks.
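A sketch of this block-wise additivity trick (the 4 × 6 block layout and 11-element vectors follow the description above; the data is random stand-in):

```python
import numpy as np

# Per-block HLAC vectors for the search range: 4 x 6 = 24 blocks, each with
# an 11-element rotation-invariant feature vector (random stand-in data).
blocks = np.random.rand(4, 6, 11)
target = np.random.rand(11)   # features of the person in the previous frame

def window_feature(r, c):
    # Additivity: the feature of a four-block (2 x 2) window is obtained by
    # simply adding the four stored block features, with no rescanning.
    return blocks[r:r + 2, c:c + 2].sum(axis=(0, 1))

# Slide the four-block window over the search range; nearest feature wins.
best = min(((r, c) for r in range(blocks.shape[0] - 1)
                   for c in range(blocks.shape[1] - 1)),
           key=lambda rc: np.linalg.norm(window_feature(*rc) - target))
```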
- the above-mentioned Euclidean distance will now be described.
- the local features obtained from the area where the person was immediately before (hereafter, "higher-order local autocorrelation features" is abbreviated to "local features") are compared with the candidate areas in the current frame to which the person appears to have moved.
- first, candidates are connected to the nearer one based on the two-dimensional x-y coordinates of the person on the platform obtained from the distance image; up to this point, this is distance in ordinary two-dimensional coordinates.
- when the candidates to be connected are at the same distance on the platform, or when the choice is ambiguous, reliability is improved by a calculation using the vectors of local features obtained from the texture. From this point on, the local features are used to determine whether the obtained areas are the same object (pattern); this distance is in feature space and is entirely different from the coordinates on the platform.
- for two local-feature vectors $a = (a_1, a_2, \dots, a_n)$ and $b = (b_1, b_2, \dots, b_n)$, the Euclidean distance is
$$d(a, b) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2}.$$
- if the textures are exactly the same, the distance is 0.
- the basis of the calculation is the same as the ordinary straight-line distance in up to three dimensions, extended to $n$ dimensions.
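In code, the comparison reduces to a few lines (NumPy; the vectors are toy values):

```python
import numpy as np

a = np.array([3.0, 1.0, 4.0])   # local features of the previous area
b = np.array([3.0, 1.0, 4.0])   # local features of a candidate area

# Root of the summed squared differences; identical textures give 0.
d = np.sqrt(np.sum((a - b) ** 2))   # equivalent: np.linalg.norm(a - b)
print(d)                            # -> 0.0
```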
- FIG. 21 shows a specific example of the whole flow line management algorithm.
- first, the flow lines of persons are specified for each camera.
- the cameras are time-synchronized, and adjacent cameras are arranged with a common (overlapping) area so that a continuous two-dimensional coordinate system can be set up. By integrating the flow-line information from each camera, flow lines covering the entire camera range can be created on the overall management map.
- each camera alone identifies persons and connects their flow lines.
- in the example shown, the sixth point of camera 1 and the first point of camera 2 coincide in time and in two-dimensional coordinates, so they are managed as one continuous flow line on the overall flow-line management map. In this way, all flow lines can be managed in the common two-dimensional coordinates created by the multiple cameras.
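A sketch of this stitching rule, assuming each flow line is a list of (t, x, y) samples already mapped into the common platform coordinates (track values, tolerance, and names are illustrative assumptions):

```python
from math import hypot

# Flow lines from two time-synchronised cameras, in shared coordinates.
track_cam1 = [(0, 1.0, 2.0), (1, 1.5, 2.1), (2, 2.0, 2.2)]
track_cam2 = [(2, 2.0, 2.2), (3, 2.4, 2.3), (4, 2.8, 2.4)]

def stitch(a, b, tol=0.3):
    """Join b onto a when a's last sample and b's first sample coincide in
    time and lie within tol metres of each other in the overlap area."""
    (ta, xa, ya), (tb, xb, yb) = a[-1], b[0]
    if ta == tb and hypot(xa - xb, ya - yb) <= tol:
        return a + b[1:]
    return None

merged = stitch(track_cam1, track_cam2)   # one continuous flow line
```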
- FIG. 22 shows an area monitoring and warning processing flow.
- the area monitoring and warning processing flow (the algorithm for fall judgment and the like) shown in Fig. 22 is as follows.
- the system of the present invention provides a means for registering in advance, from the position, movement, and the like of persons at the platform edge, the situations that call for attention, the corresponding announcements, and the situations in which video is to be transferred. Furthermore, by adding a voice synthesis function to the camera device, announcements corresponding to the situation are delivered to passengers on a per-camera basis using pre-registered synthesized voice.
- automatic fall detection judges the distance information by looking at both still images and dynamic changes.
- the information used here is only the time-series distance information obtained from the gray images.
- textures are also tracked with the higher-order local autocorrelation features, which are robust to position and rotation, so using both distance and texture gives higher accuracy.
- as described above, a plurality of stereo cameras capture images of the platform end at the trackside platform end of the station, and the persons at the platform end are identified based on the distance information and the texture information.
- it is therefore possible to provide a more reliable safety monitoring device for station platforms that reliably detects a person falling onto the track at the trackside platform end, identifies multiple persons, and obtains all of their action logs.
- a means for acquiring and storing a log of the flow lines of persons in the platform space is provided, and a means for extracting a recognition target based on the image information from each of the stereo cameras is provided.
- the means for recognizing the target from both the distance information and the image information distinguishes a person from other objects based on the center-of-gravity information on a plurality of masks of different heights.
- the distance information and the image information at the platform end are acquired, and from the detection of image information above the track range and its distance information, the fall of a person or the protrusion of a person or the like beyond the platform is identified and an alarm is issued.
- an inexpensive and reliable safety monitoring device for station platforms can thus be provided.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2003281690A AU2003281690A1 (en) | 2002-07-25 | 2003-07-24 | Security monitor device at station platform |
US10/522,164 US7460686B2 (en) | 2002-07-25 | 2003-07-24 | Security monitor device at station platform |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002217222A JP3785456B2 (ja) | 2002-07-25 | 2002-07-25 | 駅ホームにおける安全監視装置 |
JP2002-217222 | 2002-07-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004011314A1 true WO2004011314A1 (ja) | 2004-02-05 |
Family
ID=31184602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/009378 WO2004011314A1 (ja) | 2002-07-25 | 2003-07-24 | 駅ホームにおける安全監視装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US7460686B2 (ja) |
JP (1) | JP3785456B2 (ja) |
AU (1) | AU2003281690A1 (ja) |
WO (1) | WO2004011314A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8107679B2 (en) | 2005-09-29 | 2012-01-31 | Yamaguchi Cinema Co., Ltd. | Horse position information analyzing and displaying method |
CN107144887A (zh) * | 2017-03-14 | 2017-09-08 | 浙江大学 | 一种基于机器视觉的轨道异物入侵监测方法 |
Families Citing this family (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3785456B2 (ja) * | 2002-07-25 | 2006-06-14 | 独立行政法人産業技術総合研究所 | 駅ホームにおける安全監視装置 |
JP4574307B2 (ja) * | 2004-09-27 | 2010-11-04 | 三菱電機株式会社 | 可動式ホーム柵システム |
JP4606891B2 (ja) * | 2005-01-31 | 2011-01-05 | 三菱電機株式会社 | プラットホームドア状態監視システム |
US7613324B2 (en) * | 2005-06-24 | 2009-11-03 | ObjectVideo, Inc | Detection of change in posture in video |
ITRM20050381A1 (it) * | 2005-07-18 | 2007-01-19 | Consiglio Nazionale Ricerche | Metodo e sistema automatico di ispezione visuale di una infrastruttura. |
JP4691708B2 (ja) * | 2006-03-30 | 2011-06-01 | 独立行政法人産業技術総合研究所 | ステレオカメラを用いた白杖使用者検出システム |
JP4706535B2 (ja) * | 2006-03-30 | 2011-06-22 | 株式会社日立製作所 | 複数カメラを用いた移動物体監視装置 |
US8189962B2 (en) * | 2006-12-19 | 2012-05-29 | Hitachi Kokusai Electric Inc. | Image processing apparatus |
US20080273754A1 (en) * | 2007-05-04 | 2008-11-06 | Leviton Manufacturing Co., Inc. | Apparatus and method for defining an area of interest for image sensing |
US7929804B2 (en) * | 2007-10-03 | 2011-04-19 | Mitsubishi Electric Research Laboratories, Inc. | System and method for tracking objects with a synthetic aperture |
JP2009211311A (ja) * | 2008-03-03 | 2009-09-17 | Canon Inc | 画像処理装置及び方法 |
KR100998339B1 (ko) | 2009-06-30 | 2010-12-03 | (주)에이알텍 | 선로 감시 시스템 |
DE102009057583A1 (de) * | 2009-09-04 | 2011-03-10 | Siemens Aktiengesellschaft | Vorrichtung und Verfahren zur Erzeugung einer zielgerichteten realitätsnahen Bewegung von Teilchen entlang kürzester Wege bezüglich beliebiger Abstandsgewichtungen für Personen- und Objektstromsimulationen |
JP4975835B2 (ja) * | 2010-02-17 | 2012-07-11 | 東芝テック株式会社 | 動線連結装置及び動線連結プログラム |
JP2011170564A (ja) * | 2010-02-17 | 2011-09-01 | Toshiba Tec Corp | 動線連結方法,装置及び動線連結プログラム |
JP5037643B2 (ja) * | 2010-03-23 | 2012-10-03 | 東芝テック株式会社 | 動線認識システム |
JP5508963B2 (ja) * | 2010-07-05 | 2014-06-04 | サクサ株式会社 | 駅ホームの監視カメラシステム |
JP5825641B2 (ja) | 2010-07-23 | 2015-12-02 | 国立研究開発法人産業技術総合研究所 | 病理組織画像の特徴抽出システム及び病理組織画像の特徴抽出方法 |
JP5647458B2 (ja) * | 2010-08-06 | 2014-12-24 | 日本信号株式会社 | ホーム転落検知システム |
JP5597057B2 (ja) * | 2010-08-06 | 2014-10-01 | 日本信号株式会社 | ホームでの旅客引きずり検知システム |
JP5631120B2 (ja) * | 2010-08-26 | 2014-11-26 | 東海旅客鉄道株式会社 | 物体検知システム及び方法 |
US9204823B2 (en) | 2010-09-23 | 2015-12-08 | Stryker Corporation | Video monitoring system |
EP2541506A1 (en) * | 2011-06-27 | 2013-01-02 | Siemens S.A.S. | Method and system for managing a flow of passengers on a platform |
US9600744B2 (en) * | 2012-04-24 | 2017-03-21 | Stmicroelectronics S.R.L. | Adaptive interest rate control for visual search |
CN103871042B (zh) * | 2012-12-12 | 2016-12-07 | 株式会社理光 | 基于视差图的视差方向上连续型物体检测方法和装置 |
JP6476945B2 (ja) * | 2015-02-09 | 2019-03-06 | サクサ株式会社 | 画像処理装置 |
JP6471541B2 (ja) * | 2015-03-05 | 2019-02-20 | サクサ株式会社 | 画像処理装置 |
JP6598480B2 (ja) * | 2015-03-24 | 2019-10-30 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP6624816B2 (ja) * | 2015-06-08 | 2019-12-25 | 京王電鉄株式会社 | ホーム安全柵組み込み式監視台装置 |
DE102016216320A1 (de) * | 2016-08-30 | 2018-03-01 | Siemens Aktiengesellschaft | Überwachung von streckengebundenen Transportsystemen |
EP3523176A1 (de) * | 2016-12-07 | 2019-08-14 | Siemens Mobility GmbH | Verfahren, vorrichtung und bahnfahrzeug, insbesondere schienenfahrzeug, zur gefahrensituationserkennung im bahnverkehr, insbesondere im schienenverkehr |
JP6829165B2 (ja) * | 2017-08-24 | 2021-02-10 | 株式会社日立国際電気 | 監視システム及び監視方法 |
WO2019077750A1 (ja) * | 2017-10-20 | 2019-04-25 | 三菱電機株式会社 | データ処理装置、プログラマブル表示器およびデータ処理方法 |
JP2019217902A (ja) * | 2018-06-20 | 2019-12-26 | 株式会社東芝 | 通知制御装置 |
TWI684960B (zh) * | 2018-12-27 | 2020-02-11 | 高雄捷運股份有限公司 | 月台軌道區侵入警報系統 |
JP7368092B2 (ja) * | 2019-03-19 | 2023-10-24 | パナソニックホールディングス株式会社 | 事故検出装置、及び、事故検出方法 |
WO2021059385A1 (ja) * | 2019-09-25 | 2021-04-01 | 株式会社日立国際電気 | 空間検知システム、および空間検知方法 |
DE102020201309A1 (de) * | 2020-02-04 | 2021-08-05 | Siemens Mobility GmbH | Verfahren und System zur Überwachung eines Transportmittelumfelds |
CN114842560B (zh) * | 2022-07-04 | 2022-09-20 | 广东瑞恩科技有限公司 | 基于计算机视觉的建筑工地人员危险行为识别方法 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4695959A (en) * | 1984-04-06 | 1987-09-22 | Honeywell Inc. | Passive range measurement apparatus and method |
US4924506A (en) * | 1986-07-22 | 1990-05-08 | Schlumberger Systems & Services, Inc. | Method for directly measuring area and volume using binocular stereo vision |
US4893183A (en) * | 1988-08-11 | 1990-01-09 | Carnegie-Mellon University | Robotic vision system |
US5176082A (en) * | 1991-04-18 | 1993-01-05 | Chun Joong H | Subway passenger loading control system |
EP0532052B1 (en) * | 1991-09-12 | 2008-02-13 | FUJIFILM Corporation | Method for extracting object images and method for detecting movements thereof |
US5592228A (en) * | 1993-03-04 | 1997-01-07 | Kabushiki Kaisha Toshiba | Video encoder using global motion estimation and polygonal patch motion estimation |
US6167143A (en) * | 1993-05-03 | 2000-12-26 | U.S. Philips Corporation | Monitoring system |
JPH07228250A (ja) * | 1994-02-21 | 1995-08-29 | Teito Kousokudo Kotsu Eidan | 軌道内監視装置及びプラットホーム監視装置 |
JPH0993565A (ja) * | 1995-09-20 | 1997-04-04 | Fujitsu General Ltd | 乗降客の安全監視装置 |
US5933082A (en) * | 1995-09-26 | 1999-08-03 | The Johns Hopkins University | Passive alarm system for blind and visually impaired individuals |
JPH09193803A (ja) * | 1996-01-19 | 1997-07-29 | Furukawa Electric Co Ltd:The | プラットホーム付近の安全監視方法 |
US5838238A (en) * | 1996-03-13 | 1998-11-17 | The Johns Hopkins University | Alarm system for blind and visually impaired individuals |
US6064749A (en) * | 1996-08-02 | 2000-05-16 | Hirota; Gentaro | Hybrid tracking for augmented reality using both camera motion detection and landmark tracking |
US6188777B1 (en) * | 1997-08-01 | 2001-02-13 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
JP3506934B2 (ja) * | 1998-12-11 | 2004-03-15 | 株式会社メガチップス | 監視装置及び監視システム |
JP4861574B2 (ja) * | 2001-03-28 | 2012-01-25 | パナソニック株式会社 | 運転支援装置 |
JP3785456B2 (ja) * | 2002-07-25 | 2006-06-14 | 独立行政法人産業技術総合研究所 | 駅ホームにおける安全監視装置 |
- 2002-07-25: JP JP2002217222A patent/JP3785456B2/ja not_active Expired - Lifetime
- 2003-07-24: US US10/522,164 patent/US7460686B2/en not_active Expired - Fee Related
- 2003-07-24: WO PCT/JP2003/009378 patent/WO2004011314A1/ja active Application Filing
- 2003-07-24: AU AU2003281690A patent/AU2003281690A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0773388A (ja) * | 1993-05-03 | 1995-03-17 | Philips Electron Nv | 監視システム及び監視システム用回路装置 |
JPH0997337A (ja) * | 1995-09-29 | 1997-04-08 | Fuji Heavy Ind Ltd | 侵入物監視装置 |
JPH10304346A (ja) * | 1997-04-24 | 1998-11-13 | East Japan Railway Co | 安全確認用itvシステム |
JPH10341427A (ja) * | 1997-06-05 | 1998-12-22 | Sanyo Electric Co Ltd | 自動警報システム |
JP2001134761A (ja) * | 1999-11-04 | 2001-05-18 | Nippon Telegr & Teleph Corp <Ntt> | 動画像内被写体の関連アクション提供方法および装置並びにこの方法のプログラムを記録した記録媒体 |
JP2001143184A (ja) * | 1999-11-09 | 2001-05-25 | Katsuyoshi Hirano | 移動体の移動履歴を解析集計するシステム |
JP2003246268A (ja) * | 2002-02-22 | 2003-09-02 | East Japan Railway Co | ホーム転落者検知方法及び装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8107679B2 (en) | 2005-09-29 | 2012-01-31 | Yamaguchi Cinema Co., Ltd. | Horse position information analyzing and displaying method |
CN107144887A (zh) * | 2017-03-14 | 2017-09-08 | 浙江大学 | 一种基于机器视觉的轨道异物入侵监测方法 |
CN107144887B (zh) * | 2017-03-14 | 2018-12-25 | 浙江大学 | 一种基于机器视觉的轨道异物入侵监测方法 |
Also Published As
Publication number | Publication date |
---|---|
US20060056654A1 (en) | 2006-03-16 |
JP2004058737A (ja) | 2004-02-26 |
US7460686B2 (en) | 2008-12-02 |
AU2003281690A1 (en) | 2004-02-16 |
JP3785456B2 (ja) | 2006-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004011314A1 (ja) | 駅ホームにおける安全監視装置 | |
JP4970195B2 (ja) | 人物追跡システム、人物追跡装置および人物追跡プログラム | |
US8655078B2 (en) | Situation determining apparatus, situation determining method, situation determining program, abnormality determining apparatus, abnormality determining method, abnormality determining program, and congestion estimating apparatus | |
Navarro-Serment et al. | Pedestrian detection and tracking using three-dimensional ladar data | |
KR101839827B1 (ko) | 원거리 동적 객체에 대한 얼굴 특징정보(연령, 성별, 착용된 도구, 얼굴안면식별)의 인식 기법이 적용된 지능형 감시시스템 | |
US5757287A (en) | Object recognition system and abnormality detection system using image processing | |
KR101788269B1 (ko) | 이상 상황을 감지하는 장치 및 방법 | |
JP2002024986A (ja) | 歩行者検出装置 | |
Volkhardt et al. | Fallen person detection for mobile robots using 3D depth data | |
CN111814635A (zh) | 基于深度学习的烟火识别模型建立方法和烟火识别方法 | |
Snidaro et al. | Automatic camera selection and fusion for outdoor surveillance under changing weather conditions | |
CN115346256A (zh) | 机器人寻人方法及系统 | |
Jalalat et al. | Vehicle detection and speed estimation using cascade classifier and sub-pixel stereo matching | |
JP4056813B2 (ja) | 障害物検知装置 | |
JPH11257931A (ja) | 物体認識装置 | |
JPH10255057A (ja) | 移動物体抽出装置 | |
KR101560810B1 (ko) | 템플릿 영상을 이용한 공간 관리 방법 및 장치 | |
Lee et al. | independent object detection based on two-dimensional contours and three-dimensional sizes | |
Ling et al. | A multi-pedestrian detection and counting system using fusion of stereo camera and laser scanner | |
JP6851246B2 (ja) | 物体検出装置 | |
KR101355206B1 (ko) | 영상분석을 이용한 출입 계수시스템 및 그 방법 | |
JP2003162710A (ja) | 移動物体の認識装置及び認識方法 | |
CN103442218B (zh) | 一种多模式行为识别与描述的视频信号预处理方法 | |
KR102525153B1 (ko) | Cctv 카메라 영상을 활용한 주차현황분석 시스템 및 그 방법 | |
KR102161342B1 (ko) | 스트림 리즈닝 그룹이탈 감시 시스템 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| 122 | Ep: pct application non-entry in european phase | |
| ENP | Entry into the national phase | Ref document number: 2006056654; Country of ref document: US; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 10522164; Country of ref document: US |
| WWP | Wipo information: published in national office | Ref document number: 10522164; Country of ref document: US |