CN112585957A - Station monitoring system and station monitoring method


Info

Publication number
CN112585957A
Authority
CN
China
Prior art keywords: image, transmission, unit, captured image, transmission data
Prior art date
Legal status
Pending
Application number
CN201980054342.2A
Other languages
Chinese (zh)
Inventor
向谷实
Current Assignee
Ongakukan Co., Ltd.
Original Assignee
Ongakukan Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Ongakukan Co., Ltd.
Publication of CN112585957A


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B61: RAILWAYS
    • B61L: GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00: Control, warning, or like safety means along the route or between vehicles or vehicle trains
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention provides a station monitoring system and a station monitoring method capable of reducing the network load caused by transmission while ensuring the accuracy of analysis of images captured of a station platform. The station monitoring system includes a determination section (82) that determines, based on a captured image of a station, whether a given object appears in the captured image; a transmission data generation unit (84) that generates, based on the captured image and the result of the determination, transmission data indicating both a low-resolution image, in which the resolution of the captured image is reduced, and the result of the determination; and a transmission data transmission unit (86) that transmits the transmission data to a central monitoring device (10). The central monitoring device includes a transmission data receiving unit (60) that receives a plurality of transmission data transmitted from respectively different transmission devices; a synthesized video generation unit (68) that generates a synthesized video in which the low-resolution videos represented by the plurality of transmission data are synthesized, the synthesized video representing the result of the determination indicated by at least one of the plurality of transmission data; and a display control unit (74) that displays the synthesized video.

Description

Station monitoring system and station monitoring method
Technical Field
The invention relates to a station monitoring system and a station monitoring method.
Background
Patent document 1 describes a technique in which a monitoring device receives images of a station platform captured by a plurality of image transmission devices, and displays an image obtained by synthesizing the received images.
Further, there is also known an image analysis system that performs image analysis such as determination of whether or not a specific object appears in an image, specification of an area in which the specific object appears in the image, and the like.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open Publication No. 2014-192844
Disclosure of Invention
Problems to be solved by the invention
The higher the resolution of an image captured of a station platform, the more accurately that image can be analyzed. In particular, detailed analysis, such as determining whether a person with a crutch, an assistance dog, or the like is present on the platform and determining the area of the image in which such a person appears, requires a high-resolution image.
On the other hand, transmitting a high-resolution video increases the network load.
The present invention has been made in view of the above problems, and an object thereof is to provide a station monitoring system and a station monitoring method capable of reducing the load on the network due to transmission while ensuring the accuracy of analysis of an image of a platform of a station taken.
Means for solving the problems
The station monitoring system of the present invention is a station monitoring system that includes a monitoring device and a plurality of transmission devices and that monitors a platform of a station. Each of the plurality of transmission devices includes: a captured image acquisition unit that acquires a captured image of the station at which the transmission device is installed; a determination unit that determines, based on the captured image, whether a given object appears in the captured image; a transmission data generation unit that generates, based on the captured image and the result of the determination, transmission data indicating a low-resolution image, in which the resolution of the captured image is reduced, and the result of the determination; and a transmission unit that transmits the transmission data to the monitoring device. The monitoring device includes: a reception unit that receives a plurality of the transmission data transmitted from mutually different transmission devices; a synthesized video generation unit that generates a synthesized video in which the low-resolution images represented by the respective transmission data are synthesized, the synthesized video representing the result of the determination indicated by at least one of the plurality of transmission data; and a display control unit that displays the synthesized video.
In one aspect of the present invention, the monitoring apparatus further includes a transmission request unit that transmits a transmission request of the photographed image to the transmission apparatus, the transmission unit of the transmission apparatus transmits the photographed image to the monitoring apparatus in response to reception of the transmission request, and the display control unit of the monitoring apparatus displays the photographed image in response to reception of the photographed image.
Alternatively, the monitoring apparatus further includes a transmission request unit that transmits a transmission request for the captured image to the transmission apparatus; the transmission unit of the transmission apparatus transmits, in response to reception of the transmission request, substitute transmission data showing the captured image and the result of the determination to the monitoring apparatus; and the display control unit of the monitoring apparatus, in response to reception of the substitute transmission data, displays the captured image indicated by the substitute transmission data together with the determination result indicated by the substitute transmission data.
In one aspect of the present invention, the synthetic image generating unit of the monitoring device generates the synthetic image in which the determination result indicated by each transmission data is shown in a manner corresponding to the status of a vehicle entering the line corresponding to the transmission device that transmitted that transmission data.
In one aspect of the present invention, the determination unit of the transmission device includes an image recognition unit that specifies an area in which the object appears in a frame image included in the captured video by performing image recognition processing on the frame image, and a tracking unit that specifies an area in which the object appears in a frame image included in the captured video by tracking, from a frame image captured before that frame image, the area specified by the image recognition unit.
In one aspect of the present invention, the transmission device further includes a reference image acquisition unit that acquires, in response to detection of the object, a reference captured image in which the position where the object was detected is captured, and the determination unit of the transmission device determines whether or not the object appears in the captured image based on the reference captured image and the captured image.
In this aspect, it is preferable that the reference image acquisition unit acquires, in response to detection of the object, a plurality of reference captured images captured from mutually different directions, and the determination unit of the transmission device determines whether or not the object appears in the captured image based on the plurality of reference captured images and the captured image.
Further, in this aspect, it is preferable that the determination unit of the transmission device further includes an image recognition unit that determines the area in which the object appears in a frame image included in the captured video by performing image recognition processing on that frame image using the area in which the object appears in the reference captured image, the latter area being determined by performing image recognition processing on the reference captured image.
Alternatively, the determination unit of the transmission device may further include an image recognition unit that determines the area in which the object appears in the reference captured image by performing image recognition processing on the reference captured image, and a tracking unit that determines the area in which the object appears in a frame image included in the captured video by tracking, starting from the reference captured image, the area determined by the image recognition unit.
In addition, the station monitoring method of the present invention is a station monitoring method for monitoring a platform of a station, including the steps of: an acquisition step in which a plurality of transmission devices each acquire a captured image of the station at which the transmission device is installed; a determination step in which the plurality of transmission devices determine, based on the captured image, whether a given object appears in the captured image; a generation step in which the plurality of transmission devices generate, from the captured image and the result of the determination, transmission data indicating a low-resolution image, in which the resolution of the captured image is reduced, and the result of the determination; a transmission step in which the plurality of transmission devices each transmit the transmission data to a monitoring device; a reception step in which the monitoring device receives a plurality of the transmission data transmitted from mutually different transmission devices; a synthesized video generation step in which the monitoring device generates a synthesized video in which the low-resolution images represented by the respective transmission data are synthesized, the synthesized video representing the result of the determination indicated by at least one of the plurality of transmission data; and a display step in which the monitoring device displays the synthesized video.
Drawings
Fig. 1 is a diagram showing an example of the overall configuration of a station monitoring system according to an embodiment of the present invention.
Fig. 2 is a diagram showing an example of the configuration of an image transmission system according to an embodiment of the present invention.
Fig. 3 is a diagram showing one example of a captured image.
Fig. 4 is a diagram showing one example of a captured image.
Fig. 5 is a diagram showing one example of the region determination result image.
Fig. 6 is a diagram showing one example of a captured image.
Fig. 7 is a diagram showing an example of the region determination result image.
Fig. 8 is a diagram schematically showing an example of the relationship between the capturing of a captured image, the image recognition of the captured image, and the tracking of a target area.
Fig. 9 is a diagram showing one example of a synthesized image.
Fig. 10 is a diagram showing an example of a synthesized image.
Fig. 11 is a diagram schematically showing an example of the relationship between a station and a vehicle.
Fig. 12 is a diagram showing an example of a synthesized image.
Fig. 13 is a diagram schematically showing an example of the relationship between a station and a vehicle.
Fig. 14 is a functional block diagram showing an example of functions of a station monitoring system according to an embodiment of the present invention.
Fig. 15 is a flowchart showing an example of a flow of processing performed in the video transmission system according to the embodiment of the present invention.
Fig. 16 is a flowchart showing an example of the flow of processing performed by the central monitoring apparatus according to an embodiment of the present invention.
Fig. 17 is a diagram showing one example of a reference captured image.
Fig. 18 is a diagram showing one example of a reference captured image.
Fig. 19 is a diagram showing one example of the region determination result image.
Fig. 20 is a diagram showing one example of the region determination result image.
Detailed Description
Hereinafter, one embodiment of the present invention will be described in detail with reference to the drawings.
Fig. 1 is a diagram showing an example of the overall structure of a station monitoring system 1 according to an embodiment of the present invention. As shown in fig. 1, the station monitoring system 1 of the present embodiment includes a central monitoring device 10 and a plurality of video transmission systems 12.
The central monitoring apparatus 10 and the plurality of image transmission systems 12 are connected to a computer network 14 such as the internet. Thus, the central monitoring apparatus 10 and the image transmission system 12 can communicate with each other through the computer network 14.
The central monitoring apparatus 10 is a computer such as a personal computer. The central monitoring apparatus 10 shown in fig. 1 includes a processor 10a, a storage unit 10b, a communication unit 10c, an overall monitoring monitor 10d, and an individual monitoring monitor 10e.
The processor 10a is a program control device such as a CPU that operates according to a program installed in the central monitoring apparatus 10. The storage unit 10b is a storage element such as a ROM or RAM, a hard disk drive, or the like, and stores the programs executed by the processor 10a. The communication unit 10c is a communication interface, such as a network board, for transmitting and receiving data to and from the video transmission systems 12; the central monitoring apparatus 10 exchanges information with the video transmission systems 12 through the communication unit 10c. The overall monitor 10d and the individual monitor 10e are display devices such as liquid crystal displays, and display various images according to instructions from the processor 10a.
Fig. 2 is a diagram showing an example of the configuration of the video transmission system 12 according to the present embodiment. As shown in fig. 2, the image transmission system 12 of the present embodiment includes a camera 20, an image analysis device 22, a down converter 24, a platform monitor 26, and a station building monitor 28. The image analysis device 22 of the present embodiment is connected to the camera 20, the down converter 24, and the station building monitor 28. The camera 20 of the present embodiment is connected to a platform monitor 26.
The camera 20 is an imaging device such as a digital camera. The platform monitor 26 and the station building monitor 28 are display devices such as liquid crystal displays. The down converter 24 is a device that outputs a received image with reduced resolution, for example.
The video analysis device 22 is a computer such as a personal computer. As shown in fig. 2, the video analysis device 22 of the present embodiment includes a processor 22a, a storage unit 22b, and a communication unit 22c. The processor 22a is a program control device such as a CPU that operates according to a program installed in the video analysis device 22. The storage unit 22b is a storage element such as a ROM or RAM, a hard disk drive, or the like, and stores the programs executed by the processor 22a. The communication unit 22c is a communication interface, such as a network board, for transmitting and receiving data to and from the central monitoring apparatus 10; the video analysis device 22 exchanges information with the central monitoring apparatus 10 through the communication unit 22c.
Hereinafter, a case where the station monitoring system 1 includes 9 video transmission systems 12 (12a to 12i) will be described as an example. The number of video transmission systems 12 included in the station monitoring system 1 is, as a matter of course, not limited to 9.
Fig. 3 is a diagram showing an example of one frame image included in a captured video, generated by the camera 20 of the video transmission system 12e according to the present embodiment, in which a monitoring target such as a platform 30 of a station is captured. Hereinafter, a frame image included in the captured video is referred to as a captured image 32. Fig. 4 is a diagram showing an example of another captured image 32 included in the captured video. In the present embodiment, for example, the camera 20 generates the captured images 32 at a predetermined frame rate. Here, for example, the captured image 32 shown in fig. 4 is generated in the frame subsequent to the frame in which the captured image 32 shown in fig. 3 is generated.
In the present embodiment, for example, in each of 9 video transmission systems 12(12a to 12i), a captured video is generated by the camera 20 included in the video transmission system 12. The captured image generated by the camera 20 may be displayed on a platform monitor 26 provided at a platform 30 monitored by the camera 20.
In the following description, the 9 video transmission systems 12 (12a to 12i) are disposed at mutually different stations, and at each station the platform 30 is monitored by one camera 20.
In the present embodiment, for example, the video analysis device 22 performs image recognition processing on the captured image 32 using a known image recognition technique to determine whether or not a given object appears in the captured image 32. The image recognition processing may be performed using a trained machine learning model. Hereinafter, the given object is referred to as the target object 34. Examples of the target object 34 include objects that require attention, such as a person holding a crutch and assistance dogs such as guide dogs and hearing dogs. The target object 34 need not be a single kind of object, and may be a plurality of kinds of objects.
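The disclosure leaves the concrete recognition model open. As a minimal sketch only, the per-frame image recognition step could look as follows; the detector interface, class names, and confidence threshold are illustrative assumptions, not part of the patent.

```python
# Sketch only: one possible shape of the image recognition step.
# "detector" stands for any trained model; its call signature is assumed.

CONF_THRESHOLD = 0.5  # assumed confidence cutoff for accepting a detection

def detect_target_objects(captured_image, detector, target_classes):
    """Run a trained detector on one full-resolution captured image 32 and
    return the target areas where a target object 34 (e.g. a person with a
    crutch, or an assistance dog) appears."""
    target_areas = []
    for class_name, confidence, bbox in detector(captured_image):
        if class_name in target_classes and confidence >= CONF_THRESHOLD:
            target_areas.append(bbox)  # (x, y, width, height)
    return target_areas
```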
Here, the target object 34 does not appear in the captured image 32 shown in fig. 3. On the other hand, a target object 34 appears in the captured image 32 shown in fig. 4. Therefore, it is determined that the target object 34 does not appear in the captured image 32 shown in fig. 3, and it is determined that the target object 34 appears in the captured image 32 shown in fig. 4.
In the present embodiment, for example, the area in which the target object 34 appears is specified in the captured image 32 in which the target object 34 is determined to appear. Hereinafter, the region in which the target object 34 is present is referred to as a target region.
In the present embodiment, for example, target area information showing a target area is generated. Fig. 5 is a diagram showing one example of the area determination result image 36 as one example of the target area information. In the area specification result image 36 shown in fig. 5, a frame-shaped target area image 38 surrounding the target object 34 is superimposed on the captured image 32 shown in fig. 4.
Fig. 6 is a diagram showing one example of the captured image 32 generated by the camera 20 in a frame subsequent to the frame in which the captured image 32 shown in fig. 4 is generated. The target object 34 also appears in the captured image 32 shown in fig. 6.
Here, for example, the video analysis device 22 may perform known tracking processing to track the target region specified by the image recognition processing for the captured image 32 shown in fig. 4, thereby specifying the target region in the captured image 32 shown in fig. 6. An area determination result image 36 showing the target area determined in this way, as illustrated in fig. 7, may also be generated. In the area determination result image 36 shown in fig. 7, a frame-shaped target area image 38 surrounding the target object 34 is superimposed on the captured image 32 shown in fig. 6.
Fig. 8 is a diagram schematically showing an example of the relationship between the shooting of the shot image 32 of 16 frames, the image recognition of the shot image 32, and the tracking of the target area in the present embodiment. Hereinafter, the 16 frames are referred to as 1 st to 16 th frames.
In the present embodiment, the above-described image recognition processing is realized using, for example, a machine learning model, and takes time to execute. The tracking processing, on the other hand, is realized by a pattern matching technique or the like using feature amounts such as the distribution of pixel values in the target region, so its execution time for one image is shorter than that of the image recognition processing. In the following description, as an example, the image recognition processing for one captured image 32 requires a time equivalent to 3 frames, whereas the tracking processing for one captured image 32 requires less than the time equivalent to 1 frame.
In fig. 8, the captured image 32 captured in the n-th frame is represented as p(n) (n is an integer from 1 to 16), and the result of image recognition for the captured image 32 p(n) is denoted r(p(n)). For example, the region determination result image 36 illustrated in fig. 5 corresponds to an example of an image recognition result r(p(n)).
As shown in fig. 8, the result r(p(1)) of the image recognition of the captured image 32 p(1) of the 1st frame becomes available at the capturing timing of the captured image 32 p(4) of the 4th frame.
Then, in the present embodiment, image recognition is performed on the captured image 32 p(4) of the 4th frame, and its result r(p(4)) becomes available at the capturing timing of the captured image 32 p(7) of the 7th frame.
Likewise, the image recognition result r(p(10)) for the captured image 32 p(10) of the 10th frame becomes available at the capturing timing of the 13th frame, and the result r(p(13)) for the captured image 32 p(13) of the 13th frame becomes available at the capturing timing of the 16th frame.
In the present embodiment, the target area in a captured image 32 is specified by tracking into it the target area specified by the latest image recognition result available at the time that image was captured. For example, the target region specified by the result r(p(1)) for the 1st frame is tracked into the captured images 32 p(4) to p(6) of the 4th to 6th frames. In fig. 8, the results of this tracking processing are represented as t(p(4), r(p(1))), t(p(5), r(p(1))), and t(p(6), r(p(1))). The area determination result image 36 illustrated in fig. 7 corresponds to an example of a tracking result t.
Similarly, the target region specified by the result r(p(4)) for the 4th frame is tracked into the captured images 32 p(7) to p(9) of the 7th to 9th frames, giving t(p(7), r(p(4))), t(p(8), r(p(4))), and t(p(9), r(p(4))). The same tracking processing is performed on the captured images 32 of the 10th and subsequent frames.
As described above, because the target area in each captured image 32 is specified by tracking from the latest image recognition result available at the time of capture, the target region can be specified within a short time of the image being captured.
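As a rough sketch of the schedule in fig. 8, assuming an image recognition step that takes three frame periods and hypothetical recognize/track helpers, the per-frame logic could be simulated as follows:

```python
# Sketch of the fig. 8 timing: recognition of p(n) finishes at frame n+3,
# and every frame is tracked against the newest finished recognition result.
# recognize() and track() are hypothetical helpers, not from the patent.

RECOGNITION_LATENCY = 3          # frame periods one recognition pass takes

pending = {}                     # start frame index -> frame being recognized
latest_result = None             # target areas from the newest finished pass

def on_frame(n, frame):
    """Process the captured image 32 of the n-th frame, p(n)."""
    global latest_result
    done = n - RECOGNITION_LATENCY
    if done in pending:
        # r(p(done)) becomes available exactly now; start the next pass.
        latest_result = recognize(pending.pop(done))
        pending[n] = frame
    elif not pending:
        pending[n] = frame       # very first frame: start recognition
    if latest_result is None:
        return []                # no recognition result available yet
    # t(p(n), r(p(done))): track the known target areas into this frame.
    return track(frame, latest_result)
```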
In the present embodiment, for example, transmission data is generated that indicates a low-resolution video, in which the resolution of the captured video is reduced, and the result of the determination as to whether the target object 34 appears in the captured video.
Here, for example, the frame data of the low-resolution video may be generated by the down converter 24 reducing the resolution of the area specification result image 36 shown in fig. 7, in which the target area image 38 is superimposed on the captured image 32. In this case, each low-resolution image is a frame image of the low-resolution video obtained by reducing the resolution of the captured video, and is at the same time an image showing the result of the determination as to whether the target object 34 appears in the captured image 32.
Alternatively, for example, the transmission data may be generated so as to include a low-resolution video in which the resolution of the captured video is reduced, together with a flag in a header or the like indicating, for each captured image 32 included in the captured video, the determination result of whether the target object 34 appears. In this case, the low-resolution video includes, as its frame images, low-resolution images in which the resolution of the captured images 32 is reduced, and the flag included in the transmission data indicates the result of the determination as to whether the target object 34 appears in the captured video.
As a further example, transmission data including a low-resolution video, in which the resolution of the captured video is reduced, and target area information may be generated. The target area information may, for example, indicate the position and shape of the target area within a low-resolution image (a frame image of the low-resolution video) in which the target object 34 appears, such as the coordinate values of the 4 corners of the frame surrounding the target area in the low-resolution image. The target area information may also take the form of an image such as the target area image 38. In this case, the target area information included in the transmission data shows the result of the determination as to whether the target object 34 appears in the captured video.
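A minimal sketch of one of the transmission-data layouts just described (a downscaled frame, a header-style flag, and corner coordinates of the target area); the dictionary layout and scale factor are assumptions:

```python
import cv2  # OpenCV is used here only to illustrate the downscaling

def build_transmission_data(captured_image, target_areas, scale=0.25):
    """Build one frame of transmission data: a low-resolution image plus the
    determination result, combining the flag and target-area-info variants."""
    low_res = cv2.resize(captured_image, None, fx=scale, fy=scale)
    return {
        "frame": low_res,
        # Flag: whether the target object 34 appears in this captured image.
        "object_present": bool(target_areas),
        # Target area info: box corners rescaled to the low-resolution image.
        "target_areas": [
            (int(x * scale), int(y * scale), int(w * scale), int(h * scale))
            for (x, y, w, h) in target_areas
        ],
    }
```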
Hereinafter, a low-resolution image obtained by reducing the resolution of either the captured image 32 itself or of the region specification result image 36, in which the target region image 38 is superimposed on the captured image 32, is referred to as the low-resolution image corresponding to that captured image 32.
Then, in the present embodiment, the transmission data generated in this way is transmitted to the central monitoring apparatus 10. Similarly, the transmission data generated by each of the other video transmission systems 12 (12a to 12d and 12f to 12i) may be transmitted to the central monitoring apparatus 10.
In the present embodiment, for example, the captured image or the low-resolution image may be displayed on the station building monitor 28 installed in the station building. Here, the target area may be shown in a frame image included in the video image displayed on the station building monitor 28. For example, the station building monitor 28 may display a video including a series of area specifying result images 36 as frame images.
In the present embodiment, for example, the central monitoring apparatus 10 receives the transmission data transmitted from each of the plurality of video transmission systems 12(12a to 12 i).
Then, the central monitoring apparatus 10 generates, based on the plurality of transmission data, a composite video in which the low-resolution videos indicated by the plurality of transmission data are combined and which indicates the determination result indicated by at least one of the plurality of transmission data. The generated composite video is displayed on the overall monitor 10d. Fig. 9 and fig. 10 each show an example of a composite image 40, which is one frame image included in the composite video.
Fig. 9 shows an example of the composite image 40 when none of the received transmission data indicates, as the above-described determination result, that the target object 34 appears. Fig. 10 shows an example of the composite image 40 when two of the received transmission data indicate that the target object 34 appears.
The composite image 40 includes, for example, a plurality of individual image areas 42 corresponding to mutually different video transmission systems 12. The composite image 40 illustrated in fig. 9 and fig. 10 includes 9 individual image areas 42 (42a to 42i) corresponding to the 9 video transmission systems 12 (12a to 12i), respectively. Here, for example, the video transmission system 12e, which includes the camera 20 that captured the captured images 32 shown in figs. 3, 4, and 6, corresponds to the individual image area 42e.
In the present embodiment, for example, a low-resolution image, which is a frame image included in a low-resolution video image indicated by transmission data received from the video transmission system 12, is arranged in the individual image area 42 corresponding to the video transmission system 12.
Then, in the present embodiment, for example, for each of the plurality of individual image areas 42, the central monitoring apparatus 10 identifies the determination result of whether or not the target object 34 appears in the captured image 32 corresponding to the low-resolution image arranged in that individual image area 42. The result may be identified, for example, from a flag included in the transmission data. Alternatively, when the transmission data includes target area information indicating the position and shape of the target area in the low-resolution image, it may be determined that the target object 34 appears in the captured image 32. Further, for example, processing to detect the target area image 38 in the low-resolution video indicated by the transmission data may be performed, and when the target area image 38 is detected, it may be determined that the target object 34 appears in the captured image 32 from which the low-resolution image was generated.
Then, in the present embodiment, for example, the composite image 40 showing the determination result is generated. Here, for example, the composite image 40 may be generated such that a frame-shaped determination recognition image 44 surrounds each individual image area 42 in which the arranged low-resolution image corresponds to a captured image 32 in which the target object 34 was determined to appear. As an example, fig. 10 shows a determination recognition image 44a surrounding the individual image area 42e and a determination recognition image 44b surrounding the individual image area 42g.
In the present embodiment, for example, as shown in fig. 10, the target region in which the target object 34 appears in a low-resolution image is shown in the individual image region 42 in which that low-resolution image is arranged. Here, for example, as shown in fig. 10, an area determination result image 46 may be shown in the individual image area 42 as information representing the target area. The area determination result image 46 may be, for example, an image in which the resolution of the area determination result image 36 is reduced, or a frame-shaped image whose corners are at the positions indicated by the coordinate values in the target area information.
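As an illustrative sketch, the composite image 40 could be assembled by tiling the nine low-resolution frames into a 3x3 grid, drawing the area determination result image 46 inside each tile and a frame-shaped determination recognition image 44 around any tile whose transmission data indicates the target object. Tile size and colors are assumptions:

```python
import cv2
import numpy as np

TILE_W, TILE_H = 320, 180  # assumed size of one individual image area 42

def build_composite(transmission_data_list):
    """Tile up to nine low-resolution frames into a composite image 40."""
    canvas = np.zeros((TILE_H * 3, TILE_W * 3, 3), dtype=np.uint8)
    for i, data in enumerate(transmission_data_list[:9]):
        row, col = divmod(i, 3)
        x, y = col * TILE_W, row * TILE_H
        tile = data["frame"].copy()
        # Area determination result image 46: mark the target area in the tile.
        for (tx, ty, tw, th) in data.get("target_areas", []):
            cv2.rectangle(tile, (tx, ty), (tx + tw, ty + th), (0, 255, 255), 2)
        canvas[y:y + TILE_H, x:x + TILE_W] = cv2.resize(tile, (TILE_W, TILE_H))
        if data.get("object_present"):
            # Determination recognition image 44: frame the whole area.
            cv2.rectangle(canvas, (x, y), (x + TILE_W - 1, y + TILE_H - 1),
                          (0, 0, 255), 3)
    return canvas
```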
In the present embodiment, the monitoring priority of a video may be determined based on, for example, the status of a vehicle entering the line corresponding to the transmission data. Here, the line corresponding to the transmission data refers to, for example, a line at the station where the video transmission system 12 that transmitted the transmission data is disposed, such as the line adjacent to the platform 30 monitored by the camera 20 of that video transmission system 12. One camera 20 may correspond to one line, or to a plurality of lines. Further, the monitoring priority may be determined only for the individual image areas 42 for which the target object 34 was determined to appear in the captured image 32.
An example of determination of the monitoring priority of the video is described below.
Fig. 11 schematically shows an example of the relationship between the stations where the 9 video transmission systems 12 (12a to 12i) are respectively disposed and the vehicles running between these stations. Fig. 11 shows 9 station objects 50 (50a to 50i) corresponding to the stations where the 9 video transmission systems 12 (12a to 12i) are disposed, and 4 vehicle objects 52 (52a to 52d) corresponding to running vehicles, with the traveling direction of each vehicle indicated by an arrow. Here, the positions of the vehicle objects 52 represent the positions of the vehicles at the capture time of the captured images 32 corresponding to the composite image 40 illustrated in fig. 10.
Here, for example, the length between the station object 50e and the vehicle object 52c is shorter than the length between the station object 50g and the vehicle object 52 d. This means that the distance between the station at which the video transmission system 12e is disposed and the vehicle near the station is shorter than the distance between the station at which the video transmission system 12g is disposed and the vehicle near the station. For example, in this case, it is determined that the monitoring priority of the video captured by the camera 20 included in the video transmission system 12e is higher than the monitoring priority of the video captured by the camera 20 included in the video transmission system 12 g.
In fig. 10, determination recognition images 44 in states corresponding to the monitoring priorities determined as described above are arranged in the composite image 40. In the present embodiment, it is assumed that a display state corresponding to each monitoring priority is set in advance: for example, the highest priority is shown by a thick frame, as with the determination recognition image 44a, and the next highest priority by a hatched frame, as with the determination recognition image 44b. Therefore, in the example of fig. 10, the determination recognition image 44a is arranged so as to surround the individual image area 42e, and the determination recognition image 44b so as to surround the individual image area 42g.
In the above example, the monitoring priority of the video is determined according to the distance between the vehicle and the station having the line corresponding to the transmission data. Alternatively, for example, the monitoring priority may be determined based on the estimated time until the vehicle next arrives at the line corresponding to the transmission data; for example, the shorter the estimated time, the higher the priority.
Further, for example, the priority of monitoring the video may be determined based on the speed of the vehicle when passing through the route corresponding to the transmission data. For example, it may be determined that the higher the speed, the higher the priority.
Further, for example, the monitoring priority of the video may be determined based on attributes of the passengers of a vehicle entering the line corresponding to the transmission data. Examples of such attributes include using a wheelchair and being accompanied by an assistance dog. For example, when the vehicle entering the line next carries a passenger using a wheelchair or a passenger with an assistance dog, the priority may be determined to be higher than when no such passenger is present.
The state corresponding to the monitoring priority of the video is not limited to the cases shown in fig. 10. For example, a determination recognition image 44 of a color corresponding to the monitoring priority may be arranged in the composite image 40. Alternatively, the individual image areas 42 may be arranged in the composite image 40 with sizes corresponding to the monitoring priorities; for example, an individual image area 42 with a higher priority may be arranged larger.
The above-described vehicle status may be managed by, for example, a known operation management system. The central monitoring device 10 may determine the vehicle status, such as the distance, estimated time, speed, and passenger attributes, from data acquired from the operation management system. In the present embodiment, the monitoring priority of the video may also be determined based on a combination of two or more of the distance, the estimated time, the speed, and the passenger attributes.
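A hedged sketch of how the criteria above (distance, estimated arrival time, passing speed, passenger attributes) might be folded into a single monitoring-priority score; the field names and weights are assumptions, with the raw values assumed to come from the operation management system:

```python
def monitoring_priority(vehicle_status):
    """Return a score for one line's video; larger means monitor sooner.
    Field names and weights are illustrative assumptions only."""
    score = 0.0
    # A vehicle closer to the station raises the priority.
    score += 1.0 / max(vehicle_status["distance_m"], 1.0)
    # A sooner estimated arrival at the line raises the priority.
    score += 1.0 / max(vehicle_status["eta_s"], 1.0)
    # A faster passage through the line raises the priority.
    score += 0.01 * vehicle_status["speed_kmh"]
    # Passenger attributes such as wheelchair use or an assistance dog.
    if vehicle_status.get("needs_assistance"):
        score += 1.0
    return score

# The individual image areas 42 can then be ranked by this score, e.g.:
#   ordered = sorted(statuses, key=monitoring_priority, reverse=True)
```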
In the present embodiment, the central monitoring apparatus 10 may transmit a request for transmitting a captured image to the image transmission system 12. For example, in accordance with an operation of designating the individual image area 42 by a monitor (a user of the central monitoring apparatus 10) monitoring the composite image 40, a transmission request of a captured video is transmitted to the video transmission system 12 as a transmission source of transmission data corresponding to the individual image area 42. Further, for example, a request for transmission of a captured image may be transmitted to the image transmission system 12 determined according to the monitoring priority of the image. For example, the transmission request of the captured image may be transmitted to the image transmission system 12 including the camera 20 that captures the monitored image with the highest priority.
Then, the video transmission system 12 that has received the transmission request of the captured video may transmit the captured video to the central monitoring apparatus 10 in accordance with the transmission request. Then, the central monitoring apparatus 10 may display the received photographed image on the individual monitoring monitor 10 e. In this manner, the monitor can monitor the high-resolution photographed image in detail in the central monitoring apparatus 10.
Here, for example, the video transmission system 12 may transmit substitute transmission data indicating the captured video and the determination result to the central monitoring apparatus 10 in response to the reception of the transmission request. Then, the central monitoring apparatus 10 may display the photographed image indicated by the substitute transmission data, which indicates the above determination result indicated by the substitute transmission data, on the individual monitoring monitor 10 e. Here, for example, a video including the area specifying result image 36 generated from the captured image 32 included in the captured video as a frame image may be transmitted from the video transmission system 12 to the central monitoring apparatus 10 as substitute transmission data. The image may be displayed on the individual monitor 10 e.
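A minimal sketch of the request handling on the video transmission system side: on receiving a transmission request, it replies with either the full-resolution captured video or the substitute transmission data described above. The handler interface and field names are assumptions:

```python
def handle_transmission_request(request, state):
    """Answer a transmission request from the central monitoring device 10.
    'state' is assumed to hold the latest full-resolution captured frames
    and the matching area determination result images 36."""
    if request.get("substitute"):
        # Substitute transmission data: frames with the target area image 38
        # already superimposed, so the determination result is shown too.
        return {"video": state["area_result_images"]}
    # Otherwise return the high-resolution captured video as-is.
    return {"video": state["captured_frames"]}
```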
The higher the resolution of the captured video, the higher the accuracy of its analysis. For example, detailed analysis such as determining whether a person with a crutch, an assistance dog, or the like is present on the platform 30, and determining the area in which such a person appears in the captured image, requires a high-resolution image. On the other hand, when a high-resolution captured video is transmitted from the video transmission system 12 to the central monitoring apparatus 10, the load on the computer network 14 increases.
In the present embodiment, since the analysis of the video is performed on the captured video with high resolution, the accuracy of the analysis of the video is ensured. Further, since the transmitted video is a low-resolution video, the load on the computer network 14 can be reduced. Thus, according to the present embodiment, it is possible to reduce the load on the computer network 14 due to transmission while ensuring the analysis accuracy of the image taken of the platform 30 of the station.
In the above description, the monitoring priority of the video is determined only for the individual image areas 42 for which the target object 34 was determined to appear in the captured image 32. However, the monitoring priority may instead be determined for all the individual image areas 42, regardless of whether the target object 34 was determined to appear.
Fig. 12 is a diagram showing another example of the synthetic image 40. Fig. 13 is another example schematically showing a relationship between stations where 9 video transmission systems 12(12a to 12i) are respectively arranged and vehicles running between the stations. In fig. 13, the traveling direction of the vehicle is indicated by an arrow. Fig. 13 shows the position of the vehicle at the time of capturing the captured image 32 corresponding to the composite image 40 illustrated in fig. 12 as the position of the vehicle object 52.
Here, as shown in fig. 13, the length between the station object 50g and the vehicle object 52d is shorter than the length between the station object 50c and the vehicle object 52b. Further, the length between the station object 50f and the vehicle object 52c is shorter than the length between the station object 50g and the vehicle object 52d, and the length between the station object 50b and the vehicle object 52a is shorter than the length between the station object 50f and the vehicle object 52c. In this case, for example, the monitoring priorities may be determined to be, in descending order, the individual image area 42b, the individual image area 42f, the individual image area 42g, and the individual image area 42c.
Fig. 12 shows determination recognition images 44 in states corresponding to the monitoring priorities determined as described above. In the present embodiment, it is assumed that a display state corresponding to each monitoring priority is set in advance: for example, the highest priority is shown by a thick frame, as with the determination recognition image 44a, the next highest by a hatched frame, as with the determination recognition image 44b, and the third by a plain frame, as with the determination recognition image 44c. No determination recognition image 44 is arranged for the individual image areas 42 whose priority is fourth or lower. Therefore, in the example of fig. 12, the determination recognition image 44a is arranged to surround the individual image area 42b, the determination recognition image 44b to surround the individual image area 42f, and the determination recognition image 44c to surround the individual image area 42g.
In the above example, the monitoring priority of the video is determined according to the distance between the vehicle and the station having the line corresponding to the transmission data. As described above, however, the monitoring priority may instead be determined based on the estimated time until the vehicle next arrives at the line, the speed of the vehicle when passing through the line, the attributes of the passengers of a vehicle entering the line, and so on. Also as described above, the state corresponding to the monitoring priority of the video is not limited to the cases shown in fig. 12.
The monitoring priorities of the videos shown in the composite image differ from one another, and when these priorities are not visible in the composite image it is not easy for the observer to identify the portions that should be monitored most closely.
According to the present embodiment, as described above, the observer can easily recognize, from the displayed composite image, the portions to be monitored with emphasis.
The functions of the station monitoring system 1 according to the present embodiment and the processes executed by the station monitoring system 1 according to the present embodiment will be described in detail below.
Fig. 14 is a functional block diagram showing an example of functions implemented by the central monitoring device 10 and the video transmission system 12, which are included in the station monitoring system 1 of the present embodiment for monitoring the platform 30 of the station. The central monitoring apparatus 10 and the video transmission system 12 according to the present embodiment do not need to realize all the functions shown in fig. 14, and may realize functions other than the functions shown in fig. 14.
As shown in fig. 14, the central monitoring device 10 according to the present embodiment functionally includes, for example, a transmission data receiving unit 60, a video acquiring unit 62, a vehicle status data acquiring unit 64, a monitoring priority determining unit 66, a synthetic video generating unit 68, a captured video transmission requesting unit 70, a captured video receiving unit 72, and a display control unit 74. The transmission data receiving unit 60, the captured video transmission requesting unit 70, and the captured video receiving unit 72 are realized mainly by the communication unit 10c. The video acquiring unit 62, the vehicle status data acquiring unit 64, the monitoring priority determining unit 66, and the synthetic video generating unit 68 are realized mainly by the processor 10a. The display control unit 74 is realized mainly by the processor 10a, the overall monitor 10d, and the individual monitor 10e. The central monitoring device 10 functions as a station monitoring device that monitors the platform 30 of the station according to the present embodiment.
The above functions may be realized by the processor 10a executing a program, installed in the central monitoring apparatus 10 as a computer, containing instructions corresponding to the above functions. The program may be provided to the central monitoring apparatus 10 via a computer-readable information storage medium such as an optical disk, a magnetic tape, a magneto-optical disk, or a flash memory, or via the internet.
As shown in fig. 14, the video transmission system 12 of the present embodiment functionally includes, for example, a captured video acquisition unit 80, a determination unit 82, a transmission data generation unit 84, a transmission data transmission unit 86, a transmission request reception unit 88, and a captured video transmission unit 90. The determination unit 82 includes an image recognition unit 82a and a tracking unit 82b. The captured video acquisition unit 80 is realized mainly by the camera 20 and the processor 22a of the video analysis device 22. The determination unit 82 is realized mainly by the processor 22a of the video analysis device 22. The transmission data generation unit 84 is realized mainly by the processor 22a and the communication unit 22c of the video analysis device 22 and by the down converter 24. The transmission data transmission unit 86, the transmission request reception unit 88, and the captured video transmission unit 90 are realized mainly by the communication unit 22c of the video analysis device 22. The video analysis device 22 of the present embodiment serves as the transmission device that transmits transmission data to the central monitoring device 10.
The above-described functions may be realized by the processor 22a executing a program including instructions corresponding to the above-described functions, the program being installed in the image analysis device 22 as a computer. The program may be provided to the image analysis device 22 via a computer-readable information storage medium such as an optical disk, a magnetic tape, a magneto-optical disk, or a flash memory, or via the internet.
The transmission data generation unit 84 may be realized by hardware such as the down converter 24, by software running on the video analysis device 22, or by a combination of the two.
In the present embodiment, for example, the transmission data reception unit 60 receives a plurality of transmission data items transmitted from different video transmission systems 12. Here, as described above, the transmission data shows the low-resolution video in which the resolution of the captured video is lowered and the determination result of whether or not the target object 34 is present in the captured video.
In the present embodiment, for example, the image acquisition unit 62 acquires a plurality of images indicating the situation of each of the different monitoring targets. Here, the video acquisition unit 62 may acquire a low-resolution video image indicated by each of the plurality of transmission data received by the transmission data reception unit 60.
In the above description, one platform 30 is photographed by one camera 20. However, for example, different portions of one platform 30 may be photographed by different cameras 20; in that case, the portion of the platform 30 photographed by one camera 20 and another portion photographed by another camera 20 are mutually different monitoring targets. Conversely, a plurality of platforms 30 may be photographed by one camera 20; in that case, the plurality of platforms 30 corresponds to one monitoring target.
In the present embodiment, for example, the vehicle condition data acquisition unit 64 acquires vehicle condition data showing a condition of a vehicle entering a line corresponding to a video indicating a condition of a monitoring target. As described above, the vehicle state data acquiring unit 64 may acquire vehicle state data from a known operation management system.
In the present embodiment, for example, the monitoring priority determination unit 66 determines the monitoring priority of the video based on the status of the vehicle entering the line corresponding to the video indicating the situation of the monitoring target. Here, for example, the monitoring priority of the video may be determined based on vehicle status data acquired by the vehicle status data acquiring unit 64.
As described above, the monitoring priority determination unit 66 may determine the monitoring priority of the video corresponding to the route based on the distance between the vehicle and the station having the route corresponding to the video indicating the situation of the monitoring target.
The monitoring priority determining unit 66 may determine the monitoring priority of the video corresponding to the route based on the estimated time taken for the vehicle to reach the route corresponding to the video indicating the situation of the monitored object.
The monitoring priority determining unit 66 may determine the monitoring priority of the video corresponding to the route based on the speed of the vehicle when the vehicle passes through the route corresponding to the video indicating the situation of the monitored object.
The monitoring priority determining unit 66 may determine the monitoring priority of the video corresponding to the route based on the attribute of the passenger of the vehicle of the route corresponding to the video indicating the situation of the monitoring target.
The monitoring priority determining unit 66 may determine the monitoring priority of the video image based on a combination of a plurality of the distance, the estimated time, the speed, and the attribute of the passenger.
In the present embodiment, for example, the synthetic image generating unit 68 generates a synthetic image in which a plurality of images representing different monitoring objects are synthesized. Here, the synthetic image generating unit 68 may generate a synthetic image indicating the monitoring priority of the image.
The synthetic video generating unit 68 may generate a synthetic video in which the low-resolution videos indicated by the plurality of transmission data are synthesized. Here, the synthetic video generating unit 68 may generate a synthetic video in which the videos acquired by the video acquiring unit 62 are synthesized, and may generate a synthetic video representing the determination result indicated by at least one of the plurality of transmission data. The synthetic video generating unit 68 may also generate a synthetic video in which the determination result indicated by each transmission data is shown in a manner corresponding to the status of a vehicle entering the line corresponding to the video transmission system 12 that transmitted that transmission data.
In the present embodiment, for example, the captured video transmission requesting unit 70 transmits, to a video transmission system 12, a request for transmission of the captured video of the platform 30 of the station at which that video transmission system 12 is installed. Here, for example, the transmission request may be sent to a video transmission system 12 designated by a user such as an observer, or to a video transmission system 12 determined according to the monitoring priority of the videos, for example the video transmission system 12 including the camera 20 that captures the video with the highest monitoring priority.
In the present embodiment, for example, the photographed image receiving unit 72 receives the photographed image transmitted by the image transmission system 12 in response to the reception of the transmission request of the photographed image.
In the present embodiment, for example, the display control unit 74 displays the synthetic image generated by the synthetic image generation unit 68. In the present embodiment, for example, the display control unit 74 displays the captured image received by the captured image receiving unit 72. In the above example, the display control unit 74 displays the synthesized video on the entire monitor 10d and the photographed video on the individual monitor 10 e.
The display control unit 74 may switch the display of a plurality of captured images. For example, a plurality of captured images having a high priority for monitoring may be automatically and repeatedly switched and displayed at predetermined time intervals. The photographed image transmission request unit 70 may change the video transmission system 12 to which the photographed image transmission request is transmitted, according to the display switching timing.
In the present embodiment, for example, the captured video acquiring unit 80 acquires the captured video of a monitoring target, such as the video captured at the platform 30 of the station where the video transmission system 12 is installed.
In the present embodiment, for example, the determining unit 82 determines, based on the captured video acquired by the captured video acquiring unit 80, whether or not a given object appears in it. Here, the target object 34 corresponds to the given object.
In the present embodiment, the image recognition unit 82a determines the region in which the target object 34 appears within a captured image 32 by, for example, performing image recognition processing on that captured image 32.
In the present embodiment, the tracking unit 82b determines the region in which the target object 34 appears within a captured image 32 by, for example, tracking into it the region that the image recognition unit 82a determined in a captured image 32 captured earlier.
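The division of labour between a slow detector and a fast tracker can be sketched as follows with OpenCV (the contrib build is assumed for the CSRT tracker). detect_target() is a placeholder for any object detector and is not the method of this disclosure; here the detector is simply pretended to be free every tenth frame.

```python
import cv2

def detect_target(frame):
    """Placeholder for the image recognition processing of unit 82a; a real
    detector would return an (x, y, w, h) box for the target object 34."""
    return None

cap = cv2.VideoCapture("platform.mp4")  # assumed source of the captured video
tracker = None
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = detect_target(frame) if frame_no % 10 == 0 else None  # detector runs when free
    if box is not None:
        # Fresh detection: (re)initialise the tracker, as in S103 below.
        tracker = cv2.TrackerCSRT_create()  # requires opencv-contrib; may live under cv2.legacy
        tracker.init(frame, box)
    elif tracker is not None:
        # No fresh detection: propagate the last region, as unit 82b does.
        ok, box = tracker.update(frame)
        if not ok:
            box = None  # target lost in this frame
    frame_no += 1
```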
In the present embodiment, for example, the transmission data generating unit 84 generates, based on the captured image and the result of the determination, transmission data indicating a low-resolution image in which the resolution of the captured image is reduced and the result of the determination.
In the present embodiment, for example, the transmission data transmitting unit 86 transmits the transmission data generated by the transmission data generating unit 84 to the central monitoring apparatus 10.
In the present embodiment, for example, the transmission request receiving unit 88 receives the captured video transmission request transmitted from the captured video transmission requesting unit 70 of the central monitoring apparatus 10.
In the present embodiment, for example, the captured video transmitting unit 90 transmits the captured video to the central monitoring apparatus 10 in response to the transmission request receiving unit 88 receiving the transmission request. Here, the captured video transmitting unit 90 may transmit the captured video acquired by the captured video acquiring unit 80.
Alternatively, in response to the transmission request, the captured video transmitting unit 90 may transmit to the central monitoring apparatus 10 substitute transmission data indicating both the captured video and the determination result. As described above, a video whose frame images are the area specification result images 36 generated from the captured images 32 of the captured video may be transmitted as the substitute transmission data, for example.
In this case, upon receiving the substitute transmission data, the display control unit 74 of the central monitoring apparatus 10 may display the captured video indicated by the substitute transmission data together with the determination result it indicates; for example, it may display the video whose frame images are the area specification result images 36.
An example of the flow of processing repeatedly executed at a predetermined frame rate in the video transmission system 12 according to the present embodiment will be described below with reference to the flowchart illustrated in fig. 15.
First, the captured video acquiring unit 80 acquires a captured image 32 (S101). Here, the captured image 32 is the frame image of the current frame of the captured video.
Next, the captured video acquiring unit 80 determines whether or not the image recognition unit 82a can perform image recognition (S102). For example, when the image recognition unit 82a is still processing a captured image 32 acquired in an earlier frame, it is determined that image recognition is not possible; when the image recognition unit 82a is not performing image recognition on any captured image 32, it is determined that image recognition is possible.
When it is determined that image recognition is possible (S102: Y), the image recognition unit 82a starts image recognition processing on the captured image 32 acquired in S101 (S103). This processing determines the target region in which the target object 34 appears in the captured image 32. Here, the area specification result image 36 may be generated, and target region information indicating the position and shape of the target region may also be generated.
When it is determined that image recognition is not possible (S102: N), the tracking unit 82b checks whether a usable target region determination result from the image recognition processing of S103 is available (S104). The same check is also performed after the processing of S103 has been started.
When a usable determination result is confirmed to exist (S104: Y), the tracking unit 82b executes tracking processing (S105). Here, as described with reference to fig. 8, the latest usable determination result of the image recognition processing of S103 is identified, and the target region in which the target object 34 appears in the captured image 32 acquired in S101 is determined by tracking into it the target region indicated by that result. The area specification result image 36 may be generated here as well, and target region information indicating the position and shape of the target region may also be generated.
When no usable determination result exists (S104: N), or when the processing of S105 has finished, the tracking unit 82b determines whether or not the target object 34 appears in the captured image 32 acquired in S101 (S106). For example, when a target region was determined in S105, it is determined that the target object 34 appears in the captured image 32; when no target region was determined in S105, it is determined that the target object 34 does not appear. When no usable determination result was confirmed in S104, it is likewise determined that the given object does not appear in the captured image 32.
Next, the transmission data generating unit 84 generates a low-resolution image by reducing the resolution of the captured image 32 acquired in S101 (S107). As described above, the low-resolution image may instead be generated by reducing the resolution of the area specification result image 36.
The transmission data generating unit 84 then generates transmission data from the determination result of S106 and the low-resolution image generated in S107 (S108). For example, the transmission data may include the reduced-resolution area specification result image 36 generated in S107; it may carry a flag indicating the determination result of S106 in a header or the like; or it may include a low-resolution image of the captured image 32 together with target region information indicating the position and shape of the target region within that low-resolution image.
Finally, the transmission data transmitting unit 86 transmits the transmission data generated in S108 to the central monitoring apparatus 10 (S109), and the processing returns to S101.
In this processing example, the processing of S101 to S109 is repeated at a predetermined frame rate, so transmission data is transmitted frame by frame. Alternatively, the processing of S101 to S108 may be repeated several times and the accumulated pieces of transmission data then transmitted to the central monitoring apparatus 10 together.
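One possible wire format for a single piece of transmission data is sketched below: a JSON header carrying the frame number, the S106 flag, and the target region information, followed by a JPEG-encoded low-resolution frame. The disclosure only requires that the transmission data convey the low-resolution video and the determination result; this concrete layout is an assumption.

```python
import json
import cv2

def build_transmission_data(frame, frame_no, target_box):
    # S107: reduce the resolution of the captured image 32 (here to 1/4 size).
    small = cv2.resize(frame, (frame.shape[1] // 4, frame.shape[0] // 4))
    _, jpeg = cv2.imencode(".jpg", small)
    # S108: the header carries the frame number, the determination result of
    # S106 as a flag, and the position/shape of the target region, if any.
    header = {
        "frame": frame_no,
        "detected": target_box is not None,
        "target_box": target_box,  # e.g. [x, y, w, h] in low-resolution coordinates
    }
    return json.dumps(header).encode() + b"\n" + jpeg.tobytes()
```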
An example of the flow of processing repeatedly executed at a predetermined frame rate in the central monitoring apparatus 10 according to the present embodiment will be described below with reference to the flowchart illustrated in fig. 16. In this processing example, the transmission data receiving unit 60 buffers the transmission data received from each of the plurality of video transmission systems 12.
First, the video acquiring unit 62 acquires the pieces of transmission data for the current frame from the buffer of the transmission data receiving unit 60 (S201). These pieces of transmission data were transmitted by the respective video transmission systems 12. In this processing example, each piece of transmission data includes a frame number, which allows the video acquiring unit 62 to identify the pieces of transmission data belonging to the current frame.
Next, for each piece of transmission data acquired in S201, the synthesized video generating unit 68 determines the result of the determination as to whether or not the target object 34 appears in the corresponding captured image 32 (S202). As described above, this result can be read off from, for example, the presence of the target area image 38 in the low-resolution video indicated by the transmission data, a flag included in the transmission data, or target region information included in the transmission data.
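Under the wire-format assumption sketched above for the sending side, reading off the S202 inputs is straightforward:

```python
import json

def parse_transmission_data(blob: bytes):
    """Recover the S202 inputs from one piece of transmission data, assuming
    the JSON-header-plus-JPEG layout sketched for the sending side above."""
    header_bytes, jpeg = blob.split(b"\n", 1)
    header = json.loads(header_bytes)
    return header["frame"], header["detected"], header["target_box"], jpeg
```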
The vehicle status data acquiring unit 64 then acquires vehicle status data indicating the situation of the vehicles in the current frame (S203).
Next, based on the vehicle status data acquired in S203, the monitoring priority determining unit 66 determines the monitoring priority of the video for each piece of transmission data acquired in S201 (S204). Here, the monitoring priority may be determined only for the pieces of transmission data for which it was determined in S202 that the target object 34 appears in the captured image 32.
The synthesized video generating unit 68 then generates the composite image 40 for the current frame (S205), based on, for example, the low-resolution videos included in the transmission data acquired in S201, the determination results of S202, and the priorities determined in S204.
Finally, the display control unit 74 displays the composite image 40 generated in S205 (S206), and the processing returns to S201.
In this processing example, the processing shown in S201 to S206 is repeatedly executed at a predetermined frame rate.
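Putting the central-side pieces together, one pass of S201 to S205 might look as follows, reusing parse_transmission_data() and monitoring_priority() from the earlier sketches; vehicle_status is assumed to map each video transmission system to the VehicleStatus of its track.

```python
def central_frame_step(blobs, vehicle_status):
    """One pass of S201-S205; returns the feed indices in display order."""
    parsed = [parse_transmission_data(b) for b in blobs]      # S201/S202
    order = sorted(
        range(len(parsed)),
        key=lambda i: monitoring_priority(vehicle_status[i])  # S203/S204
        if parsed[i][1] else 0.0,                             # score only detections
        reverse=True,
    )
    # S205: the caller decodes each JPEG payload and hands the tiles, in this
    # order, to compose_frame() together with the detection flags.
    return order
```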
The present invention is not limited to the above embodiments.
For example, as shown in fig. 17 and 18, an image of a specific position such as a ticket gate may be captured when the target object 34 is detected at that position. Hereinafter, such an image, captured in response to the detection of the target object 34 and showing the position where it was detected, is referred to as a reference captured image 92. The reference captured image 92 is captured by a camera 20 different from the camera 20 that captures the monitoring target, such as the platform 30 of the station.
Fig. 17 shows an example of a reference captured image 92a in which the target object 34 is captured from the front at a ticket gate, and fig. 18 shows an example of a reference captured image 92b in which the target object 34 is captured from behind. In this way, a plurality of reference captured images 92 may be captured from mutually different directions in response to a single detection.
Here, for example, a sensor may be provided at a specific position such as a ticket gate to detect an IC tag embedded in a crutch, an IC card carried by the person holding the crutch, an IC tag attached to an assistance dog, or the like. One or more reference captured images 92 may then be captured in response to the detection of the IC tag or IC card by the sensor. The target object 34 may also be detected by a method other than detecting an IC tag or IC card.
The position at which the target object 34 is detected is not limited to a ticket gate. For example, the sensor may be provided at a platform door or the like, and a reference captured image 92 of the platform door and its surroundings may be captured when the sensor detects the target object 34.
The captured video acquiring unit 80 may then acquire the reference captured images 92 captured in this manner, including a plurality of reference captured images 92 captured from mutually different directions.
The image recognition unit 82a may then execute image recognition processing on a reference captured image 92 using a known image recognition technique to determine the target region in which the target object 34 appears within it, and may generate target region information indicating that target region.
Fig. 19 shows an example of an area specification result image 94a according to the target region information generated from the reference captured image 92a shown in fig. 17. Fig. 20 shows an example of an area specification result image 94b according to the target region information generated from the reference captured image 92b shown in fig. 18.
In the area specification result image 94a shown in fig. 19, a frame-shaped target area image 96a surrounding the target object 34 is superimposed on the reference captured image 92a shown in fig. 17. In the area specification result image 94b shown in fig. 20, a frame-shaped target area image 96b surrounding the target object 34 is superimposed on the reference captured image 92b shown in fig. 18.
The determining unit 82 may determine whether or not the target object 34 appears in the captured video of the monitoring target, such as the platform 30 of the station, based on both the reference captured image 92 and that captured video. Here, the determination may be based on a plurality of reference captured images 92 captured from mutually different directions together with the captured video.
For example, the image recognition unit 82a may determine the region in which the target object 34 appears in a captured image 32 by performing image recognition processing on the captured image 32 using the target region determined from the reference captured image 92.
Alternatively, the tracking unit 82b may determine the region in which the target object 34 appears in a captured image 32 by tracking into it the target region that the image recognition unit 82a determined in the reference captured image 92.
Since the reference captured image 92 is captured at the moment the target object 34 is detected, the target object 34 is likely to appear clearly in it. The target object 34 can therefore be detected more accurately in the captured images 32 of the monitoring target, such as the platform 30 of the station.
Furthermore, by capturing a plurality of reference captured images 92 from mutually different directions, the image recognition processing and the tracking processing on the captured images 32 of the monitoring target can draw on more information, which improves the detection accuracy further.
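As one concrete (assumed) way to exploit the reference captured image 92, the region it yields can be cropped and searched for in the platform video by template matching; this disclosure does not prescribe template matching, which is chosen here only for illustration.

```python
import cv2

def find_target_via_reference(platform_frame, reference_image, ref_box):
    """Search the platform frame for the region cropped from the reference
    captured image 92 by normalised cross-correlation template matching."""
    x, y, w, h = ref_box
    template = reference_image[y:y + h, x:x + w]
    result = cv2.matchTemplate(platform_frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    # 0.6 is an arbitrary acceptance threshold for this sketch.
    return (top_left[0], top_left[1], w, h) if score > 0.6 else None
```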
The division of roles between the central monitoring apparatus 10 and the video transmission systems 12 is also not limited to the above. For example, a video transmission system 12 may itself determine the situation of a vehicle entering the track corresponding to a video showing the situation of a monitoring target, and transmit vehicle status data indicating that situation to the central monitoring apparatus 10. The central monitoring apparatus 10 may then determine the monitoring priority of the video based on the vehicle status data received from that video transmission system 12.
A single video transmission system 12 may also monitor the monitoring targets of a plurality of stations. In this case, the video transmission system 12 includes a plurality of cameras 20 installed at mutually different stations and can generate transmission data for the video captured by each of those cameras 20.
Some or all of the functions of the central monitoring apparatus 10 may also be implemented in a single video transmission system 12. Furthermore, a single video transmission system 12 may monitor the platforms 30 of a plurality of stations, or the plurality of platforms 30 of the station at which it is installed. The present invention is thus applicable not only to the monitoring of a plurality of stations by the central monitoring apparatus 10 but also to the monitoring of the plurality of platforms 30 of a single station.
The specific character strings and numerical values given above and in the drawings are examples, and the present invention is not limited to them.

Claims (10)

1. A station monitoring system, characterized by comprising a monitoring device that monitors platforms of stations and a plurality of transmission devices, wherein each of the plurality of transmission devices comprises:
a captured image acquisition unit that acquires a captured image of the station at which the transmission device is installed;
a determination unit that determines, based on the captured image, whether a given object appears in the captured image;
a transmission data generation unit that generates, based on the captured image and the result of the determination, transmission data indicating a low-resolution image in which the resolution of the captured image is reduced and the result of the determination; and
a transmission unit that transmits the transmission data to the monitoring device, and
the monitoring device includes:
a reception unit that receives a plurality of pieces of the transmission data transmitted from the mutually different transmission devices;
a synthesized video generation unit that generates a synthesized video in which the low-resolution videos represented by the respective pieces of transmission data are synthesized, the synthesized video representing a result of the determination indicated by at least one of the pieces of transmission data; and
a display control unit that displays the synthesized video.
2. The station monitoring system according to claim 1, wherein
the monitoring device further comprises a transmission request unit that transmits a transmission request for the captured image to the transmission device,
the transmission unit of the transmission device transmits the captured image to the monitoring device in response to receiving the transmission request, and
the display control unit of the monitoring device displays the captured image in response to receiving the captured image.
3. The station monitoring system according to claim 1, wherein
the monitoring device further comprises a transmission request unit that transmits a transmission request for the captured image to the transmission device,
the transmission unit of the transmission device transmits, in response to receiving the transmission request, substitute transmission data indicating the captured image and the result of the determination to the monitoring device, and
the display control unit of the monitoring device, in response to receiving the substitute transmission data, displays the captured image indicated by the substitute transmission data together with the result of the determination indicated by the substitute transmission data.
4. The station monitoring system according to any one of claims 1 to 3, wherein
the synthesized video generation unit of the monitoring device generates the synthesized video in which the result of the determination indicated by the transmission data is presented in a manner corresponding to the situation of a vehicle entering the track corresponding to the transmission device that transmitted the transmission data.
5. The station monitoring system according to any one of claims 1 to 4, wherein
the determination unit of the transmission device comprises:
an image recognition unit that determines a region in which the object appears within a frame image included in the captured image by performing image recognition processing on the frame image; and
a tracking unit that determines a region in which the object appears within a frame image included in the captured image by tracking the region that the image recognition unit determined in a frame image captured before that frame image.
6. The station monitoring system according to any one of claims 1 to 4, wherein
the transmission device further comprises a reference captured image acquisition unit that acquires a reference captured image, which is an image of the position where the object was detected, captured in response to the detection of the object, and
the determination unit of the transmission device determines whether the object appears in the captured image based on the reference captured image and the captured image.
7. The station monitoring system according to claim 6, wherein
the reference captured image acquisition unit acquires a plurality of reference captured images captured from mutually different directions in response to the detection of the object, and
the determination unit of the transmission device determines whether the object appears in the captured image based on the plurality of reference captured images and the captured image.
8. The station monitoring system according to claim 6 or 7, wherein
the determination unit of the transmission device comprises an image recognition unit that determines a region in which the object appears within a frame image included in the captured image by performing image recognition processing on the frame image using the region in which the object appears within the reference captured image, that region having been determined by performing image recognition processing on the reference captured image.
9. The station monitoring system according to claim 6 or 7, wherein
the determination unit of the transmission device comprises:
an image recognition unit that determines a region in which the object appears within the reference captured image by performing image recognition processing on the reference captured image; and
a tracking unit that determines a region in which the object appears within a frame image included in the captured image by tracking into the frame image the region determined by the image recognition unit in the reference captured image.
10. A station monitoring method for monitoring platforms of stations, characterized by comprising:
an acquisition step in which each of a plurality of transmission devices acquires a captured image of the station at which the transmission device is installed;
a determination step in which each of the plurality of transmission devices determines, based on the captured image, whether a given object appears in the captured image;
a generation step in which each of the plurality of transmission devices generates, based on the captured image and the result of the determination, transmission data indicating a low-resolution image in which the resolution of the captured image is reduced and the result of the determination;
a transmission step in which each of the plurality of transmission devices transmits the transmission data to a monitoring device;
a reception step in which the monitoring device receives the plurality of pieces of transmission data transmitted from the mutually different transmission devices;
a synthesized video generation step in which the monitoring device generates a synthesized video in which the low-resolution video represented by each of the plurality of pieces of transmission data is synthesized, the synthesized video representing a result of the determination indicated by at least one of the plurality of pieces of transmission data; and
a display step in which the monitoring device displays the synthesized video.
CN201980054342.2A 2018-08-20 2019-08-02 Station monitoring system and station monitoring method Pending CN112585957A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018153982 2018-08-20
JP2018-153982 2018-08-20
PCT/JP2019/030535 WO2020039897A1 (en) 2018-08-20 2019-08-02 Station monitoring system and station monitoring method

Publications (1)

Publication Number Publication Date
CN112585957A 2021-03-30

Family

ID=69593049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980054342.2A Pending CN112585957A (en) 2018-08-20 2019-08-02 Station monitoring system and station monitoring method

Country Status (3)

Country Link
JP (1) JP7107596B2 (en)
CN (1) CN112585957A (en)
WO (1) WO2020039897A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002104189A (en) 2000-09-28 2002-04-10 Matsushita Electric Ind Co Ltd Train operation support system, ground apparatus for train operation support, and vehicle apparatus for train operation support
JP3858018B2 (en) 2003-12-02 2006-12-13 中央電子株式会社 Video surveillance system
JP4685561B2 (en) * 2005-09-12 2011-05-18 株式会社日立国際電気 Display method of camera system and camera system
JP2014127847A (en) 2012-12-26 2014-07-07 Panasonic Corp Image monitoring system
JP6808358B2 (en) 2016-05-27 2021-01-06 キヤノン株式会社 Image processing equipment, image processing methods and programs
JP6385419B2 (en) 2016-12-22 2018-09-05 セコム株式会社 Object detection device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050226463A1 (en) * 2004-03-31 2005-10-13 Fujitsu Limited Imaging data server and imaging data transmission system
JP2006093839A (en) * 2004-09-21 2006-04-06 Mitsubishi Electric Corp Monitoring terminal unit and monitoring system
CN101099240A (en) * 2005-02-09 2008-01-02 松下电器产业株式会社 Monitoring camera device, monitoring system using the same, and monitoring image transmission method
CN101895727A (en) * 2009-05-21 2010-11-24 索尼公司 Monitoring system, image capturing apparatus, analysis apparatus, and monitoring method
JP2015016704A (en) * 2013-07-08 2015-01-29 株式会社日立ビルシステム Video monitoring system for station
JP2016059014A (en) * 2014-09-12 2016-04-21 沖電気工業株式会社 Monitoring system, video analyzer, video analyzing method, and program
CN106878666A (en) * 2015-12-10 2017-06-20 杭州海康威视数字技术股份有限公司 The methods, devices and systems of destination object are searched based on CCTV camera

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114228794A (en) * 2021-12-17 2022-03-25 神思电子技术股份有限公司 Automatic monitoring method and equipment for CTC scheduling
CN114228794B (en) * 2021-12-17 2023-09-22 神思电子技术股份有限公司 Automatic monitoring method and equipment for CTC scheduling

Also Published As

Publication number Publication date
WO2020039897A1 (en) 2020-02-27
JPWO2020039897A1 (en) 2021-05-13
JP7107596B2 (en) 2022-07-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210330